
Posted by simedw 22 hours ago

Show HN: Spegel, a Terminal Browser That Uses LLMs to Rewrite Webpages (simedw.com)
383 points | 166 comments
qsort 22 hours ago|
This is actually very cool. Not really replacing a browser, but it could enable an alternative way of browsing the web with a combination of deterministic search and prompts. It would probably work even better as a command line tool.

A natural next step could be doing things with multiple "tabs" at once, e.g: tab 1 contains news outlet A's coverage of a story, tab 2 has outlet B's coverage, tab 3 has Wikipedia; summarize and provide references. I guess the problem at that point is whether the underlying model can support this type of workflow, which doesn't really seem to be the case even with SOTA models.

hliyan 4 hours ago||
For me, a natural next step would be to turn this into a service -- rather than doing it in the browser, this acts as a proxy, strips away all the crud and serves your browser clean text. No need to install a new browser, just point the browser to the URL via the service.
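That proxy idea can be sketched in a few lines of stdlib Python. This is illustrative only: `strip_to_text` is a hypothetical stand-in for the real cleanup step (a Readability pass, or the LLM rewrite Spegel does), and the `/?url=` interface is made up for the example.

```python
import re
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

def strip_to_text(html: str) -> str:
    """Stand-in for the real cleanup step (Readability pass or LLM rewrite)."""
    html = re.sub(r"(?is)<(script|style).*?</\1>", "", html)    # drop junk subtrees
    return " ".join(re.sub(r"(?s)<[^>]+>", " ", html).split())  # strip tags, squash whitespace

class CleanProxy(BaseHTTPRequestHandler):
    """GET /?url=https://... fetches the target page and serves it as plain text."""
    def do_GET(self):
        target = parse_qs(urlparse(self.path).query).get("url", [None])[0]
        if not target:
            self.send_error(400, "missing ?url= parameter")
            return
        with urllib.request.urlopen(target) as resp:
            body = strip_to_text(resp.read().decode("utf-8", "replace"))
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))

# To run: HTTPServer(("", 8080), CleanProxy).serve_forever()
# then point any browser at http://localhost:8080/?url=https://example.com
```

Any existing browser then gets the cleaned view with nothing to install client-side.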

But if we do it, we have to admit something hilarious: we will soon be using AI to convert text provided by the website creator into elaborate web experiences, which end users will strip away before consuming it in a form very close to what the creator wrote down in the first place (this is already happening with beautifully worded emails that start with "I hope this email finds you well").

TeMPOraL 17 hours ago|||
> tab 1 contains news outlet A's coverage of a story, tab 2 has outlet B's coverage, tab 3 has Wikipedia; summarize and provide references.

I think this is basically what https://ground.news/ does.

(I'm not affiliated with them; just saw them in the sponsorship section of a Kurzgesagt video the other day and figured they're doing the thing you described +/- UI differences.)

doctoboggan 14 hours ago||
I am a ground news subscriber (joined with a Kurzgesagt ref link) and it does work that way (minus the Wikipedia summary). It's pretty good and I particularly like their "blindspot" section showing news that is generally missing from a specific partisan news bubble.
simedw 22 hours ago|||
Thank you.

I was thinking of showing multiple tabs/views at the same time, but only from the same source.

Maybe we could have one tab with the original content optimised for cli viewing, and another tab just doing fact checking (can ground it with google search or brave). Would be a fun experiment.

myfonj 20 hours ago|||
Interestingly, the original idea of what we call a "browser" nowadays – the "user agent" – was built on the premise that each user has specific needs and preferences. The user agent was designed to act on their behalf, negotiating data transfers and resolving conflicts between content author and user (content consumer) preferences according to "strengths" and various reconciliation mechanisms.

(The fact that browsers nowadays are usually expected to represent something "pixel-perfect" to everyone with similar devices is utterly against the original intention.)

Yet the original idea was (due to the state of technical possibilities) primarily about design and interactivity. The fact that we now have tools to extend this concept to core language and content processing is… huge.

It seems we're approaching the moment when our individual personal agent, when asked about a new page, will tell us:

    Well, there's nothing new of interest for you, frankly:
    All information presented there was present on pages visited recently.
    -- or --
    You've already learned everything mentioned there. (*)
    Here's a brief summary: …
    (Do you want to dig deeper, see the content verbatim, or anything else?)
Because its "browsing history" will also contain a notion of what we "know" from chats or what we had previously marked as "known".
bee_rider 18 hours ago|||
It would have to have a pretty good model of my brain to help me make these decisions. Just as a random example, it will have to understand that an equation is a sort of thing that I’m likely to look up even if I understand the meaning of it, just to double check and get the particulars right. That’s an obvious example, I think there must be other examples that are less obvious.

Or that I’m looking up a data point that I already actually know, just because I want to provide a citation.

But, it could be interesting.

dotancohen 6 hours ago|||

  > Or that I’m looking up a data point that I already actually know, just because I want to provide a citation.
Or what we know has changed.

When I was a child we knew that the North Star consisted of five suns. Now we know that it is only three suns, and through them we can see another two background stars that are not gravitationally bound to the three suns of the Polaris system.

Maybe in my grandchildren's lifetimes we'll know something else about the system.

myfonj 17 hours ago|||
Well, we should first establish some sort of contract for how to convey "I feel that I actually understand this particular piece of information, so when confronted with it in the future, you can mark it as such". My lines of thought were more about a tutorial page that presents the same techniques as a course you finished a week prior, or a news page reporting on an event you just read about on a different news site a minute before … stuff like this … so you would potentially save the time spent skimming/reading/understanding only to realise there was no added value for you in that particular moment. Or, while scrolling through a comment section, hide comment parts repeating the same remark or joke.

Or (and this is actually doable absolutely without any "AI" at all):

    What the bloody hell actually newly appeared on this particular URL since my last visit?
(There is one page nearby that would be quite unusable for me, had I not a crude userscript aid for this particular purpose. But I can imagine having a digest about "What's new here?" / "Noteworthy responses?" would be way better.)
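The no-AI version of "what's new on this URL?" really is just a snapshot diff. A stdlib sketch of the idea (function names are made up; a real userscript or tool would persist the previous snapshot between visits):

```python
import difflib
import re

def visible_text(html: str) -> list[str]:
    """Crude tag-stripper: drop script/style blocks, then all tags."""
    html = re.sub(r"(?is)<(script|style).*?</\1>", "", html)
    text = re.sub(r"(?s)<[^>]+>", "\n", html)
    return [line.strip() for line in text.splitlines() if line.strip()]

def whats_new(old_html: str, new_html: str) -> list[str]:
    """Lines present in the new snapshot but absent from the old one."""
    diff = difflib.unified_diff(visible_text(old_html), visible_text(new_html), lineterm="")
    return [line[1:] for line in diff if line.startswith("+") and not line.startswith("+++")]

old = "<html><body><p>Release 1.0 is out.</p></body></html>"
new = "<html><body><p>Release 1.0 is out.</p><p>Hotfix 1.0.1 shipped.</p></body></html>"
print(whats_new(old, new))  # -> ['Hotfix 1.0.1 shipped.']
```

An LLM digest ("What's new here? Noteworthy responses?") would then only need to summarise the added lines, not the whole page.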

For the "I need to cite this source", naturally, you would want the "verbatim" view without any amendments anyway. Also probably before sharing / directing someone to the resource, looking at the "true form" would be still pretty necessary.

idiotsecant 15 hours ago||||
I can definitely see a future in which we each have our own personal memetic firewall, keeping us safe and cozy in our personal little worldview bubbles.
aspenmayer 8 hours ago||
Some people think the sunglasses in They Live let you see through the propaganda, others think that the sunglasses themselves are just a different kind of psyop.

So, you gonna “put on those sunglasses, or start chewing on that trashcan?” It’s a distinction without a difference!

https://www.youtube.com/watch?v=1Rr4mQiwxpA

ffsm8 19 hours ago|||
> Well, there's nothing new of interest for you, frankly

For this to work like a user would want, the model would have to be sentient.

But you could try to get there with current models, it'd just be very untrustworthy to the point of being pointless beyond a novelty

myfonj 19 hours ago||
No more "sentient" than existing LLMs already are, even within the limited span of a chat context.

Naturally, »nothing new of interest for you« here is indeed just a proxy for »does not involve any significant concept that you haven't previously expressed knowledge about« (or however you put it), which seems pretty doable, provided that a contract for "expressing knowledge about something" had been made beforehand.

Let's say that you have really grokked all the pages you have ever bookmarked (yes, a stretch, no "read it later" here) - then your personal model would be able to (again, figuratively) "make a qualified guess" about your knowledge. Or some kind of tag that you could add to any browsing history entry, or fragment, indicating "I understand this". Or set the agent up to quiz you when leaving a page (that would be brutal). Or … I think you get the gist now.

nextaccountic 21 hours ago||||
In your cleanup step, after cleaning obvious junk, I think you should do whatever Firefox's reader mode does to further clean up, and if that fails bail out to the current output. That should reduce the number of tokens you send to the LLM even more

You should also have some way for the LLM to indicate there is no useful output because perhaps the page is supposed to be a SPA. This would force you to execute Javascript to render that particular page though
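The reader-mode-style pre-clean can be approximated without pulling in Readability itself. A crude stdlib sketch of the idea (the `SKIP` tag list and class names are illustrative, not what Firefox actually does):

```python
from html.parser import HTMLParser

# Tags whose entire subtree is almost always boilerplate in a text-only rendering.
SKIP = {"script", "style", "nav", "footer", "aside", "noscript", "svg", "form"}

class TextExtractor(HTMLParser):
    """Collect visible text while skipping boilerplate subtrees."""
    def __init__(self):
        super().__init__()
        self.depth = 0      # > 0 while inside a skipped subtree
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in SKIP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in SKIP and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth == 0 and data.strip():
            self.chunks.append(data.strip())

def clean(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.chunks)

html = "<body><nav>Home | About</nav><p>The actual article.</p><script>track()</script></body>"
print(clean(html))  # -> The actual article.
```

Anything surviving this pass goes to the LLM; an empty result is a decent signal the page is a JS-rendered SPA and needs the bail-out path.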

simedw 20 hours ago||
Just had a look and there is quite a lot going into Firefox's reader mode.

https://github.com/mozilla/readability

dotancohen 5 hours ago||
For the vast majority of pages you'd actually want to read, isProbablyReaderable() will quickly return a fair boolean guess as to whether the page can be parsed.
phatskat 11 hours ago||||
> I was thinking of showing multiple tabs/views at the same time, but only from the same source.

I think the primary reason I use multiple tabs, but _especially_ multiple splits, is to show content from various sources. Obviously this is different than a terminal context, as I usually have Figma or API docs in one split and the dev server in the other.

Still, being able to have textual content from multiple sources visible or quickly accessible would probably be helpful for a number of users

wrsh07 21 hours ago||||
Would really love to see more functionality built into this. Handling POST requests, enabling scripting, etc. could all be super powerful
baq 19 hours ago|||
wonder if you can work on the DOM instead of HTML...

almost unrelated, but you can also compare spegel to https://www.brow.sh/

andrepd 18 hours ago||
LLMs to generate SEO slop of the most utterly piss-poor quality, then another LLM to lossily "summarise" it back. Brave new world?
bubblyworld 22 hours ago||
Classic that the first example is for parsing the goddamn recipe from the goddamn recipe site. Instant thumbs up from me haha, looks like a neat little project.
lpribis 13 hours ago||
Another great example of LLM hype train re-inventing something that already existed [1] (and was actually thought out) but making it worse and non-deterministic in the worst ways possible.

https://schema.org/Recipe

bubblyworld 5 hours ago|||
Can we stop with the unprovoked dissing of anyone using LLMs for anything? Or at least start your own thread for it. It's an unpleasant, incredibly boring/predictable standard for discourse (more so than the LLMs themselves lol).
alt187 4 hours ago||
It's in fact very provoked. The LLM just changes the instructions of the recipe and creates new ones. That's an unpleasant standard of user experience.
bubblyworld 1 hour ago||
That is a terrible reason to be a dick to someone. Especially someone who has created free software that you have no obligation to use.
VMG 5 hours ago||||
The LLM thing actually works. Who cares if it's deterministic. Maybe the same people who come up with arcane schemas that nobody ever uses?
soap- 6 hours ago||||
And that would be great, if anyone used it.

LLMs are specifically good at a task like this because they can extract content from any webpage, regardless of whether it supports some standard that no one implements

komali2 5 hours ago|||
That's a cool schema, but the LLM solution is necessary because recipe website makers will never use the schema because they want you to have to read through garbage, with some misguided belief that this helps their SEO or something. Or maybe they get more money if you scroll through more ads?
bubblyworld 5 hours ago||
I'm genuinely a bit confused by the recipe blog business model. Like there's got to be one, right? People don't usually spew the same story about their grandma hundreds of times on a real blog.

Just hitting keywords for search? Many of them don't even have ads so I feel like that can't be it. Maybe referrals?

Revisional_Sin 4 hours ago||
SEO. Longer articles get ranked higher.
bubblyworld 1 hour ago||
Makes sense, thanks, but how do you actually make money from that without tons of ads? I realise this is a super naive question haha
gpm 19 minutes ago|||
> without tons of ads

This is a requirement? I literally only browse the web with an ad blocker but I always assumed those sites had tons of ads.

RobertBobert 40 minutes ago|||
[dead]
andrepd 18 hours ago|||
Which it apparently does by completely changing the recipe in random places including ingredients and amounts thereof. It is _indeed_ a very good microcosm of what LLMs are, just not in the way these comments think.
simedw 18 hours ago|||
It was actually a bit worse than that: the LLM never got the full recipe due to some truncation logic I had added. So it regurgitated the recipe from training, and apparently it couldn't do both that and convert units at the same time with the lite model (it worked with just flash).

I should have caught that, and there are probably other bugs too waiting to be found. That said, it's still a great recipe.

andrepd 16 hours ago||
You're missing the point, but okay.
0x696C6961 12 hours ago|||
What is the point?
plonq 9 hours ago||
I’m someone else but for me the point is that a serious bug resulted in _incorrect data_, making it impossible to trust the output.
bubblyworld 5 hours ago||
Assuming you are responding in good faith - the author politely acknowledged the bug (despite the snark in the comment they responded to), explained what happened and fixed it. I'm not sure what more I could expect here? Bugs are inevitable, I think it's how they are handled that drives trust for me.
throwawayoldie 18 hours ago||||
The output was then posted to the Internet for everyone to see, without the minimal amount of proofreading that would be necessary to catch that, which gives us a good microcosm of how LLMs are used.

On a more pleasant topic the original recipe sounds delicious, I may give it a try when the weather cools off a little.

bubblyworld 18 hours ago|||
What do you mean? The recipes in the screenshot look more or less the same, the formatting has just changed in the Spegel one (which is what was asked for, so no surprises there).

Edit: just saw the author's comment, I think I'm looking at the fixed page

IncreasePosts 17 hours ago||
There are extensions that do that for you, in a deterministic way and not relying on LLMs. For example, Recipe Filter for chrome. It just shows a pop up over the page when it loads if it detects a recipe
bubblyworld 16 hours ago||
Thanks, I actually already use that plugin; I just found the problem amusingly familiar. Recipe sites are the original AI slop =P
ghm2180 53 minutes ago||
This is great! Another useful amendment that would make me use it: add a Chrome browser tool to allow access to pages that need authn and then scrape them for you.

My #1 usecase is fetching wikis on my hard drive and letting a local coding agent use it for creating plans.

gvison 25 minutes ago||
Great project, much less memory than opening a web page in a browser.
leroman 9 hours ago||
Cool idea! But kind of wasteful... it just feels wrong to waste energy. At least you could first turn it into markdown with a library that preserves semantic web structures (I authored this: https://github.com/romansky/dom-to-semantic-markdown), saving many tokens = much less energy used.
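For a sense of what such a conversion does, here is a toy stdlib HTML-to-markdown pass (nowhere near as complete as dom-to-semantic-markdown; the tag coverage is deliberately minimal and the class name is made up):

```python
from html.parser import HTMLParser

class MarkdownLite(HTMLParser):
    """Tiny HTML -> markdown converter: headings, links, and list items only."""
    def __init__(self):
        super().__init__()
        self.out = []
        self.href = None

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            self.out.append("\n" + "#" * int(tag[1]) + " ")   # <h2> -> "## "
        elif tag == "li":
            self.out.append("\n- ")
        elif tag == "a":
            self.href = dict(attrs).get("href")
            self.out.append("[")
        elif tag == "p":
            self.out.append("\n")

    def handle_endtag(self, tag):
        if tag == "a":
            self.out.append(f"]({self.href})")

    def handle_data(self, data):
        if data.strip():
            self.out.append(data.strip())

def to_markdown(html: str) -> str:
    parser = MarkdownLite()
    parser.feed(html)
    return "".join(parser.out).strip()

html = '<h2>Links</h2><ul><li><a href="https://example.com">Example</a></li></ul>'
print(to_markdown(html))
# ## Links
# - [Example](https://example.com)
```

The markdown keeps the document's structure while shedding the tag soup, which is where the token savings come from.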
otabdeveloper4 1 hour ago|
This is exactly the sort of thing that should be running on a local LLM.

Using a big cloud provider for this is madness.

mromanuk 20 hours ago||
I definitely like the LLM in the middle, it’s a nice way to circumvent the SEO machine and how Google has optimized writing in recent years. Removing all the cruft from a recipe is a brilliant case for an LLM. And I suspect more of this is coming: LLMs to filter. I mean, it would be nice to just read the recipe from HTML, but SEO has turned everything into an arms race.
tines 15 hours ago||
> Removing all the cruft from a recipe is a brilliant case for an LLM

Is it though, when the LLM might mutate the recipe unpredictably? I can't believe people trust probabilistic software for cases that cannot tolerate error.

joshvm 13 hours ago|||
There is a well-defined solution to this. Provide your recipes as a Recipe schema: https://schema.org/Recipe

Seems like most of the usual food blog plugins use it, because it allows search engines to report calories and star ratings without having to rely on a fuzzy parser. So while the experience sucks for users, search engines use the structured data to show carousels with overviews, calorie totals and stuff like that.

https://recipecard.io/blog/how-to-add-recipe-structured-data...

https://developers.google.com/search/docs/guides/intro-struc...

EDIT: Sure enough, if you look at the OP's recipe example, the schema is in the source. So for certain pages, you would probably be better off having the LLM identify that it's a recipe website (or other semantic content), extract the schema from the header, and then parse/render it deterministically. This seems like one of those context-dependent things: getting an LLM to turn a bunch of JSON into markdown is fairly reliable, while getting it to extract that from an entire HTML page risks cluttering the context. But you could separate the two and have one agent summarise any steps in the blog post that might be pertinent.

    {"@context":"https://schema.org/","@type":"Recipe","name":"Slowly Braised Lamb Ragu ...
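A sketch of that deterministic extraction path, stdlib only (`extract_recipe` is a made-up name; real pages may also put `@type` in a list or nest objects more deeply than this handles):

```python
import json
import re

def extract_recipe(html: str):
    """Return the first schema.org Recipe object embedded as JSON-LD, or None."""
    pattern = r'(?is)<script[^>]*application/ld\+json[^>]*>(.*?)</script>'
    for m in re.finditer(pattern, html):
        try:
            data = json.loads(m.group(1))
        except json.JSONDecodeError:
            continue
        if isinstance(data, dict):
            candidates = data.get("@graph", [data])  # some sites wrap objects in @graph
        elif isinstance(data, list):
            candidates = data
        else:
            continue
        for obj in candidates:
            if isinstance(obj, dict) and obj.get("@type") == "Recipe":
                return obj
    return None

page = '''<html><head><script type="application/ld+json">
{"@context": "https://schema.org/", "@type": "Recipe",
 "name": "Slowly Braised Lamb Ragu",
 "recipeIngredient": ["1 kg lamb shoulder", "2 tbsp tomato paste"]}
</script></head><body>...three paragraphs about grandma...</body></html>'''

recipe = extract_recipe(page)
print(recipe["name"])              # -> Slowly Braised Lamb Ragu
print(recipe["recipeIngredient"])  # exact ingredient strings, untouched by any model
```

When the schema is present, no model ever touches the ingredient amounts; the LLM only needs to handle pages where it is missing.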
kccqzy 15 hours ago|||
I agree with you in general, but recipes are not a case where precision matters. I sometimes ask LLMs to give me a recipe and if it hallucinates something it will simply taste bad. Not much different from a human-written recipe where the human has drastically different tastes than I do. Also you basically never apply the recipe blindly; you have intuition from years of cooking to know you need to adjust recipes to taste.
Uehreka 14 hours ago|||
Hard disagree. I don’t have “years of cooking” experience to draw from necessarily. If I’m looking up a recipe it’s because I’m out of my comfort zone, and if the LLM version of the recipe says to add 1/2 cup of paprika I’m not gonna intuitively know that the right amount was actually 1 teaspoon. Well, at least until I eat the dish and realize it’s total garbage.

Also like, forget amounts, cook times are super important and not always intuitive. If you screw them up you have to throw out all your work and order take out.

kccqzy 13 hours ago||
All I'm arguing is that you should have the intuition to know the difference between 1/2 cup of paprika and a teaspoon. Okay maybe if you just graduated from college and haven't cooked much you could make such a mistake but realistically outside the tech bubble of HN you won't find people confusing 1/2 cup with a teaspoon. It's just intuitively wrong. An entire bottle of paprika I recently bought has only 60 grams.

And yes cook times are important but no, even for a human-written recipe you need the intuition to apply adjustments. A recipe might be written presuming a powerful gas burner but you have a cheap underpowered electric. Or the recipe asks for a convection oven but your oven doesn't have the feature. Or the recipe presumes a 1100W microwave but you have a 1600W one. You stand by the food while it cooks. You use a food thermometer if needed.

tines 14 hours ago||||
Huh? You don't care if an LLM switches pounds to kilograms because... recipes might taste bad anyway????
kccqzy 13 hours ago||
Switching pounds with kilograms is off by a factor of two. Most people capable of cooking should have the intuition to know something is awfully wrong if you are off by a factor of two, especially since pounds and kilograms are fairly large units when it comes to cooking.
whatevertrevor 13 hours ago|||
Not really an apt comparison.

For one an AI generated recipe could be something that no human could possibly like, whereas the human recipe comes with at least one recommendation (assuming good faith on the source, which you're doing anyway LLM or not).

Also an LLM may generate things that are downright inedible or even toxic, though the latter is probably unlikely even if possible.

I personally would never want to spend roughly an hour or so making bad food from a hallucinated recipe wasting my ingredients in the process, when I could have spent at most 2 extra minutes scrolling down to find the recommended recipe to avoid those issues. But to each their own I guess.

visarga 16 hours ago|||
I foresaw this a couple of years ago. We already have web search tools in LLMs, and they are amazing when they chain multiple searches. But Spegel is a completely different take.

I think the ad blocker of the future will be a local LLM, small and efficient. Want to sort your timeline chronologically? Or want a different UI? Want some things removed, and others promoted? Hide low quality comments in a thread? All are possible with LLM in the middle, in either agent or proxy mode.

I bet this will be unpleasant for advertisers.

yellow_lead 20 hours ago|||
LLM adds cruft, LLM removes cruft, never a miscommunication
hirako2000 20 hours ago||
Do you also like what it costs you to browse the web via an LLM potentially swallowing millions of tokens per minute?
prophesi 20 hours ago||
This seems like a suitable job for a small language model. Bit biased since I just read this paper[0]

[0] https://research.nvidia.com/labs/lpr/slm-agents/

hambes 3 hours ago||
I've thought about getting a web browser to work on the terminal for a while now. This is an idea that hadn't occurred to me yet and I'm intrigued.

But I feel it doesn't solve the main issue of terminal-based web browsing. Displaying HTML in the terminal is often kind of ugly and CSS-based fanciness does not work at all, but that can usually just be ignored. The main problem is JavaScript and dynamic content, which this approach just ignores.

So no real step forward for cli web browsing, imo.

treyd 21 hours ago||
I wonder if you could use a less sophisticated model (maybe even something based on LSTMs) to walk over the DOM and extract just the chunks that should be emitted and collected into the browsable data structure, but doing it all locally. I feel like it'd be straightforward to generate training data for this by directly using an LLM-based toolchain like the one the author wrote.
askonomm 19 hours ago|
Unfortunately in the modern web simply walking the DOM doesn't cut it if the website's content loads in with JS. You could only walk the DOM once the JS has loaded, and all the requests it makes have finished, and at that point you're already using a whole browser renderer anyway.
kccqzy 14 hours ago||
Yeah but this project doesn't use JS anyway.
Jotalea 5 hours ago||
Insanely resource expensive, but still a very interesting "why not?" idea. I think a fitting use case would be adapting newer websites for them to work on older hardware. That is, assuming the new technologies used are not vital to the functionality of the website (ex. Spotify, YouTube, WhatsApp) and can be adapted to older technologies (ex. Google Search, from all the styles that it has, to a simple input and a button).

In theory this could be used for ad blocking; though more expensive and less efficient, but the idea is there.

So, it is a very curious idea, but we still have to find an appropriate use case.

robbles 8 hours ago|
I'm curious whether anyone has run into hallucinations with this kind of use of an LLM.

They are pretty great at converting data between formats, but I always worry there's a small chance it changes the actual data in the output in some small but misleading way.
