Posted by simedw 7/1/2025
In theory this could be used for ad blocking; though it would be more expensive and less efficient, the idea is there.
So, it is a very curious idea, but we still have to find an appropriate use case.
- Pounds of lamb become kilograms (more than doubling the quantity of meat)
- a medium onion turns large
- one celery stalk becomes two
- six cloves of garlic turn into four
- tomato paste vanishes
- we lose nearly half a cup of wine
- beef stock gets an extra ¾ cup
- rosemary is replaced with oregano
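The "more than doubling" falls straight out of the lb→kg unit swap: keeping the number but relabelling the unit inflates the quantity by a fixed factor, whatever the amount. A quick sanity check (the calculation is mine, not from the recipe):

```python
# Sanity check on the "pounds became kilograms" swap.
LB_TO_KG = 0.45359237  # exact definition of the international pound

def unit_swap_ratio() -> float:
    """How much a quantity grows if a number in lb is relabelled as kg."""
    return 1 / LB_TO_KG

print(f"{unit_swap_ratio():.2f}x")  # prints "2.20x"
```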
The recipe site was so long that it got truncated before being sent to the LLM. Gemini then hallucinated the rest of the recipe from the first 8,000 characters alone; it was definitely in its training set.
I have fixed it and pushed a new version of the project. Thanks again, it really highlights how we can never fully trust models.
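A minimal sketch of the kind of guard that avoids this failure mode: truncate explicitly and say so, rather than cutting silently. The function name, the limit, and the marker string are my assumptions, not the project's actual code:

```python
MAX_CHARS = 8000  # assumed cutoff; matches the 8,000 characters mentioned above

def prepare_for_llm(html: str) -> tuple[str, bool]:
    """Truncate oversized pages and flag it, instead of cutting silently."""
    if len(html) <= MAX_CHARS:
        return html, False
    # An explicit marker discourages the model from "continuing" the page
    # out of its training data.
    return html[:MAX_CHARS] + "\n[TRUNCATED: page continues]", True

text, truncated = prepare_for_llm("<p>" + "x" * 10_000)
print(truncated)  # prints "True"
```

The flag also lets the caller warn the user that the rendered page is incomplete, which is arguably the real fix.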
It's beyond parody at this point. Shit just doesn't work, but this fundamental flaw of LLMs is just waved away or simply not acknowledged at all!
You have an algorithm that rewrites textA to textB (so nice), where textB potentially has no relation to textA (oh no). Were it anything else, this would mean "you don't have an algorithm to rewrite textA to textB", but for gen AI? Apparently this is not a fatal flaw; it's not even a flaw at all!
I should also note that there is no indication that this fundamental flaw can be corrected.
"Theoretical"? I think you misspelled "ubiquitous".
> Sometimes you don't want to read through someone's life story just to get to a recipe... That said, this is a great recipe
I compared the list of ingredients to the screenshot, did a couple of unit conversions, and these are the discrepancies I saw.
But I feel it doesn't solve the main issue of terminal-based web browsing. Displaying HTML in the terminal is often kind of ugly, and CSS-based fanciness does not work at all, but that can usually just be ignored. The main problem is JavaScript and dynamic content, which this approach simply ignores.
So no real step forward for CLI web browsing, imo.