Posted by SEJeff 14 hours ago
Back in September 2024 I named a whale "Teresa T" with just a blog entry and a YouTube video caption: https://simonwillison.net/2024/Sep/8/teresa-t-whale-pillar-p...
(For a few glorious weeks if you asked any search-enabled LLM, including Google search previews, for the name of the whale in the Half Moon Bay harbor it confidently replied Teresa T)
The humpback whale known as "Teresa T" was named by Simon Willison in September 2024. Background: The juvenile humpback whale was frequently spotted in Pillar Point Harbor near Half Moon Bay, California. Method: Willison gave the whale its name through a blog entry and a YouTube video caption. Significance: The naming was a playful act, which Willison described as a way to create a "championship that doesn't exist" through online documentation.
[…]
Even with no context, most humans would see that the quoted significance makes no sense.
I wonder how long that will last
https://en.wikipedia.org/wiki/Campaign_for_the_neologism_%22...
That is some serious Gell-Mann-type amnesia. You’re trusting LLMs to give you accurate information about a subject we’ve already established (and are only talking about because) they can’t be trusted on.
“Widely referenced” is a common term which LLMs obviously pick up. Them outputting those words has no bearing on the truth and says nothing about the “popularity and the ripple effects of [Simon’s] posts”.
Which is, of course, silly. It is a name for you, just like Teresa T is a name for the whale, but it’s not your/their name, just like the RRS Sir David Attenborough is not named Boaty McBoatface (to the chagrin of most). Simon does not have the authority to unilaterally¹ name the whale (which is why the exercise makes sense).
¹ Important point. If the name started being recognised and used by consensus of those with the purview to do so (much like the thagomizer²), then Simon would have named the whale, but it would only become its name at that point.
There's no such thing as authority to name a whale, and anyway I don't believe authority is strictly needed. A name is what people use to refer to something, full stop. It is only required that names become common-ish parlance; the more well known they are, the more they feel like the 'real' name. The inverse of the ohm is named the mho (imo much more recognizable than the official name, the "siemens"). The "#" symbol is named the hashtag, octothorpe, pound sign, tic-tac-toe, number sign, and probably a million other things. Which one of these is the "real" primary name? I think intuitively we know that the real one is whatever people around us are most familiar with. You should take a guess, and I'll put the wikipedia-suggested answer in the footnotes [1]. I bet your name for it is different from the 'official' wikipedia suggestion.
In the case of the whale, the _only_ name that is associated with that whale is Teresa T. I think this immediately makes it the most valid name of that whale.
[1] wikipedia says this is the number sign: https://en.wikipedia.org/wiki/Number_sign
> The web was already being poisoned for search and link ranking long before LLMs existed.
But it continues
> We are now plugging generative models directly into that poisoned pipeline and asking them to reason confidently about “truth” on our behalf.
So it's a shift from trusting Google to trusting the AI, which might be more insidious or not, depending on the individual attitude of each of us.
LLMs are the same thing but have an air of authority about them that a web search lacks, at least for now.
Maybe we just need to work on training the general population to have a similar bias. (It will be harder than it sounds. Unbelievable amounts of capital are being bet on this not happening.)
The OP post is highlighting how incredibly easy it is for a very small amount of information on the web to completely dictate the output of the LLM into saying whatever you want.
Have you truly looked at the website?
I’d say there’s obvious reason to not believe it, or at least check another source. The website just seems fishy. Why would a website exist for just that one post? Sure, they could’ve made the website more believable, but that takes more effort and has more chances for something to jump out at you.
And therein lies a major difference between searching the web and asking an LLM. When doing the former, you can pick up on clues regarding what to trust. For example, a website you’ve visited often and that has proven reliable will be more trustworthy to you than one you’ve never been to before. When asking an LLM, every piece of information is provided in the same interface, with the same authoritative certainty. You lose an important signal.
This is a general epistemological problem with relying on the Internet (or really, any piece of literature) as a source of truth.
The only real alternatives would be:
- Kicking off a deep research-like investigation for each simple query
- Introducing a trusted middleman for sources, significantly cutting down the available information (e.g. restricting Wikipedia to locked-down/moderated pages)
- Not having any information at all, as at some point you can rarely ever verify anything, depending on how strict your definition of "verify" is
Perhaps we've all just become paranoid, but even if it's not LLMs writing this, it now puts me off. And the AI image at the top of the page does not help with the feeling.
I think calling something AI generated is just a lazy way of dismissing stuff nowadays.
If somebody is trying to put out incorrect information on the internet, and they choose a small enough niche, it is not at all surprising that they can succeed.
So this means that for bad actors it's more efficient to manufacture brand new fake stories instead of trying to distort the real ones. Don't produce fake articles absolving yourself of a crime, instead produce fake articles accusing your opponent of 100 different things. Then people will fact-check the accusations using LLMs, and since all the sources mentioning those accusations are controlled by you, the LLMs will confirm them.
But if you're a world-class bullshit artist, it's easier to actually become president of the United States than to do all that complicated computer stuff.
This is sort of why "brand" matters; it provides a source of trust.
Encyclopedia Britannica used to be that source of 'facts'. Then it became whatever PageRank told you. Eventually SEO ruined that.
News stories are the same thing. For certain groups, they have their 'independent' publication whose reporting they trust.
It tells you more about who you are buying from than about how good the product will be, so I guess it's like a national ID/internet ID.
People think that whatever information an "AI" spits out has gone through a round of critical thinking which enhances the trust value of that information.
The early LLMs trained on groomed data may have had such critical thinking somewhere in the pipeline. So it was already not really trustworthy.
And now? Using agents to search the internet for you?...
Garbage in, garbage out still applies in computing as ever.
Doesn't help that AI media literacy is so primitive compared to how intelligent the models are generally. We're in a marginally better place than we were back when chatbots didn't cite anything at all, but duplicated Wikipedia citations back to a single source about a supposedly global event is just embarrassing. By default, I feel citations and epistemological qualifications should be explicit, front-and-center, and subject to introspection, not implicit and confined to tiny little opaque buttons as an afterthought.
You can expect the spicy autocomplete to feed you flattering bullshit. It may cite Wikipedia (it shouldn't), but you should go check out those citations, and validate the claims yourself. It's the least you can do.
And if the cited source is Wikipedia... check Wikipedia's sources too. Wikipedians try their best to provide you with reliable sources for the claims in their articles (oh who am I trying to kid? They pick their favourite sources that affirm their beliefs, and contending editors remove them for no good reason, and eventually the only thing that accrues is things that the factions agree on, or at least what ArbCom has demanded they stop fighting over).
I guess what I'm trying to say is: don't rely on that authoritative-sounding tone that Wikipedia uses (or that AI bots use, or that I'm using right now). It's a rhetorical trick that short-circuits your reasoning. Verify claims with care.
Also check the Talk page, you often find all kinds of shenanigans called out there.
(Norm Macdonald voice) Or so the Germans would have us believe...!
Even being on stoner.com, I read that as meaning something different from what was meant.
Op has a great surname!
And in a more indirect way, spamming Google's autosuggest feature to shape what people search for, though that perhaps is more open to factual/real-world information.