Posted by SEJeff 17 hours ago
We can easily look ahead a few years and see people relying on LLMs as a source of truth, the same way people once treated Google that way, or newspapers.
Rewriting history has been happening for a while, and with LLMs being the one-stop shop for guidance and truth, the rewrite will be complete.
Doubly so since most people see these things as artificial intelligence, and soon to be superintelligence...so how can they be wrong?
I can't tell if this is slop or parody!
I am paranoid that this is happening every time I ask an LLM for a product or shop recommendation. Just as with SEO, anyone trying to sell or persuade needs to do everything they can to influence the LLM.
It's almost like he was a better Chuck Norris than Chuck Norris. By his own ... testimony ...
The news here is that AI places too much trust in the internet. The first time I allowed tool-calling, it started googling up some nonsense instead of thinking... But I think the AI can at least evaluate the quality of a source - you just have to ask for an analysis, and you'll get a reasonable evaluation. With humans, that just doesn't work - they'll get aggressive or might even start throwing bananas...