Posted by jsheard 9/1/2025
Almost every summary I have read through contains at least one glaring mistake, but if it's something I know nothing about, I could see how easy it would be to just trust it, since 95% of it seems true/accurate.
Trust, but verify is all the more relevant today. Except I would discount the trust, even.
A growing number of Discords, open source projects, and other spaces where I participate now have explicit rules against copying and pasting ChatGPT content.
When there aren’t rules, many people are quick to discourage LLM copy and paste. “Please don’t do this”.
The LLM copy and paste wall of text that may or may not be accurate is extremely frustrating to everyone else. Some people think they’re being helpful by doing it, but it’s quickly becoming a social faux pas.
This doesn't seem to be universal across all people. The techier crowd, the kind of people who may not immediately trust LLM content, will try to prevent its usage. You know, the type of people to run Discord servers or open-source projects.
But completely average people don't seem to care in the slightest. The kind of people who are completely disconnected from technology just type in whatever, pick the parts they like, and then parade the LLM output around: "Look at what the all-knowing truth machine gave me!"
Most people don't care and don't want to care.
People who are newer to it (most people) think it’s so amazing that errors are forgivable.
The problem is that ChatGPT results are getting significantly better over time. GPT-5 with its search tool outputs genuinely useful results without any glaring errors for the majority of things I throw at it.
I'm still very careful not to share information I found using GPT-5 without verifying it myself, but as the quality of results goes up, the social stigma against sharing them is likely to fade.
Use an agent to help you code or whatever all you want, I don’t care about that. At least listen when I’m trying to share some specific knowledge instead of fobbing me off with GPT.
If we’re both stumped, go nuts. But at least put some effort into the prompt to get a better response.
"ChatGPT says X" seems roughly equivalent to "some random blog I found claims X". There's a difference between sharing something as a starting point for investigation and passing off unverified information (from any source) as your own well researched/substantiated work which you're willing to stake your professional reputation on standing by.
Of course, quoting an LLM is also pretty different from merely collaborating with an LLM on writing content that's substantially your own words or ideas, which no one should care about one way or another, at least in most contexts.
Every time somebody pastes an LLM response at work, it feels exactly like that. As if I were too fucking stupid to look something up and the thought hadn't even occurred to me, when the whole fucking point of me talking to you is that I wanted a personal response and your opinion to begin with.
In the programming domain I can at least run something and see it doesn't compile or work as I expect, but you can't verify that a written statement about someone/something is the correct interpretation without knowing the correct answer ahead of time. To muddy the waters further, things work just well enough on common knowledge that it's easy to believe it could be right about uncommon knowledge which you don't know how to verify. (Or else you wouldn't be asking it in the first place)
With the same code out of an intern or junior programmer, you can at least walk through their reasoning in a code review. Even better if they tend to learn and not make that same mistake again. LLMs will happily screw up randomly on every repeated prompt.
The hardest code you encounter is code written by someone else. You don't have the same mental model or memories as the original author. So you need to build all that context and then reason through the code. If an LLM is writing a lot of your code you're missing out on all the context you'd normally build writing it.
The people blindly trusting AI nonsense are the same people who trusted nonsense from social media or talking heads on disreputable news channels.
Like, who actually reads the output of The Sun, etc? Those people do, always have, and will continue to do so. And they vote, yaaay democracy - if your voter base lives in a fantasy world of fake news and false science, is democracy still sacrosanct?
I like the term "echoborg" for these people. I hope it catches on.
answer> Temporal.Instant.fromEpochSeconds(timestamp).toPlainDate()
Trust but verify?
>Temporal.Instant.fromEpochSeconds(0).toPlainDate()
Uncaught TypeError: Temporal.Instant.fromEpochSeconds is not a function
Hmm, docs [1] say it should be fromEpochMilliseconds(0). Let's try with that! Temporal.Instant.fromEpochMilliseconds(0).toPlainDate()
Uncaught TypeError: Temporal.Instant.fromEpochMilliseconds(...).toPlainDate is not a function
[1] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
In the first chapter it claimed that most adult humans have 20 teeth.
In the second chapter you read that female humans have 22 chromosomes and male humans have 23.
You find these claims in the 24 pages you sample. Do you buy the book?
Companies are paying huge sums to AI companies with worse track records.
Would you put the book in your reference library if somebody gave it to you for free? Services like Google or DuckDuckGo put their AI-generated content at the top of search results with these inaccuracies.
[edit: replace paragraph that somehow got deleted, fix typo]
Is it too late for a rival to distinguish itself with techniques like "Don't put garbage AI at the top of search results"?
Or have I missed your point?
---
°Missing a TZ assertion, but I don't remember what happens by default. Zulu time? I'd hope so, but that reinforces my point.
What does that have to do with my comment?
The OP explicitly wrote
> prompt> use javascript to convert a unix timestamp to a date in 'YYYY-MM-DD' format using Temporal
The answer ends in `toPlainDate()`, which returns an object with year, month and day properties, i.e. it does not output the requested format.
This is in addition to the issue that `fromEpochSeconds(timestamp)` really should probably be `fromEpochMilliseconds(timestamp * 1000)`.
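For what it's worth, here is a sketch of what a working answer might look like, assuming the current Temporal proposal (where Instant only offers fromEpochMilliseconds/fromEpochNanoseconds, and you have to go through an explicit time zone to get a calendar date):

    const timestamp = 0; // unix timestamp in seconds
    const date = Temporal.Instant
      .fromEpochMilliseconds(timestamp * 1000) // seconds -> milliseconds
      .toZonedDateTimeISO('UTC')               // pick the zone explicitly, which also addresses the TZ footnote above
      .toPlainDate()
      .toString();                             // "1970-01-01", i.e. the requested 'YYYY-MM-DD'

I'm going off the proposal docs here; browser support for Temporal is still spotty, so treat this as untested.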
I think this is a good habit to get people into, even in casual conversations. Even if someone didn't directly get their info from AI and got it online, the content could have still been generated by AI. Like you said, the trust part of trust but verify is quickly dwindling.
If this is indeed true, it seems like Google et al must be liable for output like this according to their own argument, i.e. if the work is transformative, they can’t claim someone else is liable.
These companies can’t have their cake and eat it too. It’ll be interesting to see how this plays out.
I think you're underestimating the effect of billions of dollars on the legal system, and the likely impact of the Have Your Cake And Eat It Act 2026.
And companies have always been able to get away with relatively minor fines for things that get individuals locked up until they rot.
(no easy answers: UK libel law errs in the other direction)
This has been a Google problem for decades.
I used to run a real estate forum. Someone once wrote a message along the lines of "Joe is a really great real estate agent, but Frank is a total scumbag. Stole all my money."
When people would Google Joe, my forum was the first result. And the snippet Google made from the content was "Joe... is a total scumbag. Stole all my money."
I found out about it when Joe lawyered up. That was a fun six months.
how about stop forming judgments of people based on their stance on Israel/Hamas, and stop hanging around people who do, and you'll be fine. if somebody misstates your opinion, it won't matter.
probably you'll have to drop bluesky and parts of HN (like this political discussion that you urge be left up) but that's necessary because all legitimate opinions about Israel/Hamas are very misinformed/cherry picked, and AI is just flipping a coin which is just as good as an illegitimate opinion.
(if anybody would like to convince me that they are well informed on these topics, i'm all ears, but doing it here is imho a bad idea so it's on you if you try)
People make judgments about people based on second hand information. That is just how people work.
Sure, there is plenty of misinformation being thrown in multiple different directions, but if you think literally "all legitimate opinions" are "misinformed/cherry picked", then odds are you are just looking at the issue through your own misinformed frame of reference.
yes, i literally do think that, so there are no odds.
i think i am well informed on the related subjects to the extent that whatever point someone might want to make i'll probably have a counterpoint
And it took me decades of studying this to determine what to call the two sides.
It's out of fashion and perhaps identified with Christianity, and some people think I'm being tongue-in-cheek or gently trolling by using it. But IMO it's neutral and unambiguous: that's a part of the world that is sacred to all the major religions of the Western hemisphere, while not being tied to any particular set of boundaries.
I really don’t need to do much more than compare ‘number of children killed’ between Israel and Palestine to see who is on the right side of history here. I’ll absolutely form judgements of people based on how they feel about that.
It's somewhat less obvious to debug, because it'll pull more context than Google wants to show in the UI. You can see this happening in AI mode, where it'll fire half a dozen searches and aggregate snippets of 100+ sites before writing its summary.
[1] https://www.webpronews.com/musician-benn-jordan-exposes-fake...
Benn Jordan has several videos and projects devoted to "digital sabotage", e.g. https://www.google.com/search?hl=en&q=benn%20jordan%20data%2...
So this all kind of looks on its face like it's just him trolling. There may be more than just what's on the face, of course. For example, it could be someone else trolling him with his own methods.
But the situation we're in is that someone who does misinformation is claiming an LLM believed misinformation. Step one would be getting someone independent, ideally with some journalistic integrity, to verify Benn's claims.
Generally speaking, if your Aunt Sally claims she ate strawberry cake for her birthday, the LLM or Google search has no way of verifying that. If Aunt Sally uploads a faked picture of her eating strawberry cake, the LLM is not going to go to her house and try to find out the truth.
So if Aunt Sally is lying about eating strawberry cake, it's not clear what search is supposed to return when you ask whether she ate strawberry cake.
That's already part of the problem. Who defines what integrity is? How do you measure it? And even if you come up with something, how do you convince everyone to agree on it? One person's most trusted source will always be just another bought spindoctor to the next. I don't think this problem is salvageable anymore. I think we need to consider the possibility that the internet will die as a source for any objective information.
Good thing I know aunt Sally is a pathological liar and strawberry cake addict, and anyone who says otherwise is a big fat fake.
You either try hard to tell the objective truth or you bend the truth routinely to try to make a "larger" point. The more you do the latter the less credit people will give your word.
in googlis non est, ergo non est ("it is not in Google, therefore it does not exist")
which sums up very well how people are super biased to believe the search results.
Thinking about it, it's probably not even a real hallucination in the usual AI sense, but simply poor evaluation and handling of data. Gemini is likely evaluating the new data on the spot and trusting it blindly; and without any humans preselecting and writing the results, it's failing hard. Which shows that there is no real thinking happening, only rearrangement of the given words.
It's literally bending languages into American with other words.
It's not just the occasional major miss, like this submission's example, or the recommendation to put glue on a pizza. I highly recommend Googling a few specific topics you know well. Read each overview entirely and see how often it gets something wrong. For me, only 1 of 5 overviews didn't have at least 1 significant error. The plural of "anecdote" is not "data," but it was enough for me to install a Firefox extension that blocks them.
But for anything dynamic (i.e. all of social media), it is very easy for the AI overview to screw up. Especially once it has to make relational connections between things.
In general people expect too much here. Google AI overview is in no way better than Claude, Grok or ChatGPT with web search. In fact it is inferior in many ways. If you look for the kind of information which LLMs really excel at, there's no need to go to Google. And if you're not, then you'll also be better off with the others. This whole thing only exists because google is seeing OpenAI eat into its information search monopoly.
AI summaries are akin to generalist podcasts, or YouTube video essayists, taking on a technical or niche topic. They present with such polish and confidence that they seem like they must be at least mostly correct. Then you hear them present or discuss a topic you have expertise in, and they are frustratingly bad. Sometimes wrong, but always at least deficient. The polish and confidence is inappropriately boosting the "correctness signal" to anyone without a depth of knowledge.
Then you consider that 90% of people have not developed sophisticated knowledge about 90% of topics (myself included), and it begins to feel a bit grim.
> Video and trip to Israel: On August 18, 2025, Benn Jordan uploaded a YouTube video titled I Was Wrong About Israel: What I Learned on the Ground, which detailed his recent trip to Israel.
This sounds like the recent Ryan Macbeth video https://youtu.be/qgUzVZiint0?si=D-gJ_Jc9gDTHT6f4. I believe the title is the same. Scary how it just misattributed the video.
But then the product manager wouldn't get a promotion. They don't seem to care about providing a good service anymore.
> probably dumber than the other ones for cost savings
It's amusing that anyone at Google thinks offering a subpar and error-prone AI search result would not damage their reputation even further than it already has been.
It's making stuff up, giving bad or fatal advice, promoting false political narratives, stealing content and link juice from actual content creators. They're abusing their anti-competitively dominant position, and just burning good will like it's gonna last forever. Maybe they're too big to fail, and they no longer need reputation or the trust of the public.
It doesn't even cover non-renewable resources, or state that the window intact is a form of wealth on its own!
I'm not naive, I'm sure thousands have made these arguments before me. I do think intact windows are good. I'm just surprised that particular framing is the one that became the standard
We have them on the record in multiple lawsuits stating that they did exactly this.
no, ads are their flagship product. Anything else is just a medium for said ads, and therefore fair game for enshittification.
"AI responses may include mistakes. Learn more"
how would you feel if someone searched for your name, and Google's first result states that you, unambiguously (by name and city) are a registered sex offender?
not a quote from someone else, just completely made up based on nothing other than word salad
would you honestly think "oh that's fine, because there's a size 8 text at the bottom saying it may be incorrect"
I very much doubt it
It doesn't feel like something people gradually pick up on over the years either; it just feels like sarcasm is either redundantly pointed out for those who get it, or it is guaranteed to get a literal-interpretation response.
Maybe it's because the literal interpretation of sarcasm is almost always so wrong that it inspires people to comment much more. So we just can't get away from this inefficient encoding/communication pattern.
But then again, maybe I'm just often assuming people mean things that sound so wrong to me as sarcasm, so perhaps there are a lot of people out there honestly saying the opposite to what I think they are saying as a joke.
And yeah, to your point about the literal interpretation of sarcasm being so absurd people want to correct it, I think you’re right. HN is a particularly pedantic corner of the internet, many of us like to be “right” for whatever reason.
But that aside, it is just simply the case that there are a lot of reasons why sarcasm can fail to land. So you just have to decide whether to risk ruining your joke with a tone indicator, or risk your joke failing to land and someone "correcting" you.
Apart from that, it is also true that a lot of people here aren't Americans (hello from Australia). I know this is a US-hosted forum, but it is interesting to observe the divide between Americans who speak as if everyone else here is an American (e.g. "half the country") and those who realise many of us aren't
But you're overstating it as a "divide" - I'm in both of your camps. I spoke with a USian context because yes, this site is indeed US-centric. The surveillance industry is primarily a creation of US culture, and is subject to US politics. And as much as I wish this weren't the case (even as a USian), it is, which is why you're in this topic. So I don't see that it's unreasonable for there to be a bit more to unpack coming from a different native context.
But as to your comment applying to my actual point - yes, in addition to "fraying" culture in the middle, we're also expanding it at the edges to include many more people. Although frankly on the topic of sarcasm I feel it's my fellow USians who are really falling short these days.
You'd be surprised how many Australians have never heard of "drop bears". Because it is just an old joke about pranking foreigners, yes many people remember it, but also many have no clue what it is. It is one of those stereotypical Australianisms which tends to occupy more space in many non-Australian minds than in most Australian minds.
> or how "the front fell off".
I'm in my 40s, and I've lived in Australia my whole life, my father was born here, and my mother moved here when she was three years old... and I didn't know what this was, it sounded vaguely familiar but no idea what it meant. Then I look it up and discover it is a reference to an old Clarke and Dawe skit. I know who they are, I used to watch them on TV all the time when I was young (tweens/teens), but I have no memory of ever seeing this skit in particular. Again, likely one of those Australianisms which many non-Australians know, many Australians don't.
Your examples of Australianisms are the stereotypes a non-Australian would mention; we could talk instead about the Australianisms which many Australians use without even realising they are Australianisms: for example, "heaps of" – a recognised idiom in other major English dialects, but in very common use in Australian English, much rarer elsewhere. Or "capsicum", for "bell peppers"–the Latin scientific name everywhere, but the colloquial name only in a few countries–plus botanically the hot ones are capsicum too, but in Australian English (I believe New Zealand English and Indian English too) only the mild ones are "capsicums", the hot ones are "chilis". Or "peak body"–now we are talking bureaucratese not popular parlance–which essentially means the top national activist/lobbyist group for a given subject area, whether that's LGBT people or homelessness or financial advisors.
Thanks for the clarifications. I think my first exposure to drop bears was a few decades ago on a microcontroller mailing list (PIClist). So maybe that poster was just pulling our legs.
I did perceive "front fell off" as an online phenomenon (ie meme). Which speaks to a growing pan-country online culture (I mean, you did get the reference, it's just not part of your Australian identity)
"peak body" is an interesting one, for the concept being acknowledged. I don't think we really explicitly state such a things in the US. I can come up with lobbying groups I think are notable, but perhaps other USians perspectives differ on that notability. Although I'm sure by the time you get to Washington DC and into the political industry there has to be a similar term.
There is also a cultural element. Countries like the UK are used to deadpan where sarcasm is delivered in the same tone as normal, so thinking is required. In Japan the majority of things are taken literally.
>how would you feel if someone searched for your name, and Google's first result states that you, unambiguously (by name and city) are a registered sex offender?
Suppose AI wasn't in the picture, and google was only returning a snippet of the top result, which was a slanderous site saying that you're a registered sex offender. Should google still be held liable? If so, should they be held liable immediately, or only after a chance to issue a correction?
In the latter case I'm fine with "yes" and "immediately". When you build a system that purports to give answers to real world questions, then you're responsible for the answers given.
> information is from another website and may not be correct.
So all google had to do was reword their disclaimer differently?
No, there's no "wording" that gets you off the hook. That's the point. It's a question of design and presentation. Would a legal "Reasonable Person" seeing the site know it was another site's info, e.g. literally showing the site in an iframe, or is google presenting it as their own info?
If google is presenting the output of a text generator they wrote, it's easily the latter.
Nice try, but asking a question confirming your opponent's position isn't a strawman.
>No, there's no "wording" that gets you off the hook. That's the point. It's a question of design and presentation. Would a legal "Reasonable Person" seeing the site know it was another site's info, e.g. literally showing the site in an iframe, or is google presenting it as their own info?
So you want the disclaimer to be reworded and moved up top?
You cannot make a safe lawnmower. However, lawnmower makers can't just put a danger label on and get by with something dangerous - they have to put on every guard they can first. Even then they often have to show in court that the mower couldn't work as a mower if they put in a guard to prevent some specific injury, and thus they added the warning.
Which is to say that so long as they can do something about it and still work as a search engine, they are not allowed to just use a disclaimer instead. The disclaimer is only for when fixing it would mean they wouldn't be a search engine.
instead of the ai saying "gruez is japanese" it should say "hacker news alleges[0] gruez is japanese"
there shouldn't be a separate disclaimer: the LLM should tell true statements rather than imply that the claims are true.
It isn't inherently, but it certainly can be! For example in the way you used it.
If I were to ask, confirming your position, "so you believe the presence of a disclaimer removes all legal responsibility?" then you would in turn accuse me of strawmanning.
Back to the topic at hand, I believe the bar that would need to be met exceeds the definition of "disclaimer", regardless of wording or position. So no.
In your hypothetical, Google is only copying a snippet from a website. They're only responsible for amplifying the reach of that snippet.
In the OP case, Google are editorializing, which means it is clearly Google's own speech doing the libel.
The source of the defamatory text is Google’s own tool, therefore it is Google’s fault, and therefore they should be held liable immediately.
You wouldn't punish the person who owns the park if someone inside it breaks the law, as long as they were facilitating the law being obeyed. And Google facilitates the law by allowing you to take down slanderous material by putting in a request, and further you can go after the original slanderer if you like.
But in this case Google itself is putting out slanderous information it has created itself. So Google in my mind is left holding the bag.
Wouldn't this basically make any sort of AI as a service untenable? Moreover how would this apply to open weights models? If I asked llama whether someone was a pedophile, and it wrongly answered in the affirmative, can that person sue meta? What if it's run through a third party like Cerebras? Are they on the hook? If not, is all that's needed for AI companies to dodge responsibility is to launder their models through a third party?
If the service was good enough, you'd accept liability for its bad side effects, no?
If it isn't good enough? Good riddance. The company will have to employ a human instead. The billionaires coffers will take the hit, I'm sure.
E: > If not, is all that's needed for AI companies to dodge responsibility is to launder their models through a third party?
Honestly, my analogy would be that an LLM is a tool like a printing press. If a newspaper prints libel, you go after the newspaper, not the person that sold them the printing press.
Same here. It would be on the person using the LLM and disseminating its results, rather than the LLM publisher. The person showing the result of the LLM should have some liability if those results are wrong or cause harm
As a form of argument, this strikes me as pretty fallacious.
Are you claiming that the output of a model built by Google is somehow equivalent to displaying a 3rd party site in a search result?
I did inspect element and it's actually 12px (or 9pt). For context the rest of the text (non-header) is 18px. That seems fine to me? It's small to be unobtrusive, but not exactly invisible either.
Especially in an area you own, like your own website or property.
Want to dump toxic waste in your backyard? Just put up a sign so your neighbors know, then if they stick around it's on them, really, no right to complain.
Want to brake-check the person behind you on the highway? Bumper sticker that says "this vehicle may stop unexpectedly". Wow, just like that you're legally off the hook!
Want to hack someone's computer and steal all their files? Just put a disclaimer on the bottom of your website letting them know that by visiting the site they've given you permission to do so.
You can't just put up a sticker premeditating your property damage and then it's a-okay.
No, the sticker is there to deter YOU from suing in small claims court. Because you think you can't. But you can! And their insurance can cover it!
Stop strawmanning. Just because I support google AI answers with a disclaimer, doesn't mean I think a disclaimer is a carte blanche to do literally anything.
I do understand it is a complicated matter, but it looks like Google just wants to be there, no matter what, in the GenAI race. How much will it take for those snippets to be sponsored content? They are marketing them as the first thing a Google user should read.
What you said might be true in the early days of google, but google clearly doesn't do exact word matches anymore. There's quite a lot of fuzzy matches going on, which means there's arguably some editorializing going on. This might be relevant if someone was searching for "john smith rapist" and got back results for him sexually harassing someone. It might even be phrased in such a way that makes it sound like he was a rapist, eg. "florida man accused of sexually...". Moreover even before AI results, I've seen enough people say "google says..." in reference to search results that it's questionable to claim that people think non-AI search results aren't by google.
Considering the extent to which people have very strong opinions about "their" side in the conflict, to the point of committing violent acts especially when feeling betrayed, I don't think spreading this particular piece of disinformation is any less potentially dangerous than the things I listed.
> As evidenced by the quote "I think a disclaimer is a carte blanche to do literally anything", the hackernews user <gruez> is clearly of the opinion that it is indeed ok to do whatever you want, as long is there is a sign stating it might happen.
* This text was summarized by the SpaceNugget LLM and may contain errors, and thusly no one can ever be held accountable for any mistakes herein.
We actually win customers whose primary goal is getting AI to stop badmouthing them.
What does it mean to “make an example”?
I’m for cleaning up AI slop as much as the next natural born meat bag, but I also detest a litigious society. The types of legal action that stops this in the future would immediately be weaponized.
But when Gemini does it, it's a "mistake by the algorithm". AI is used as a responsibility diversion machine.
This is a rather harmless example. But what about dangerous medical advice? What about openly false advertising? What about tax evasion? If an AI does it, is it okay because nobody is responsible?
If applying a proper chain of liability on ai output makes some uses of AI impossible; so be it.
Actually, no. If you published an article where you accidentally copypasta'd text from the wrong email (for example) on a busy day and wound up doing the same thing, it would be an honest mistake, you would be expected to put up a correction and move on with your life as a journalist.
As it should; this is misinformation and/or slander. The disclaimer is not good enough. A few years ago, Google and most of the social media was united in fact checking and fighting "fake news". Now they push AI generated information that use authoritative language at the very top of e.g. search results.
The disclaimer is moot if people consider AI to be authoritative anyway.
Do you want your country's current political leaders to have more weapons to suppress information they dislike or facts they disagree with? If yes, will you also be happy if your country's opposition leaders gain that power in a few years?
What we're talking about here are legal democratic weapons. The only thing stopping us from using these weapons right now is democratic governance. "The bad people", being unconcerned with democracy, can already use these weapons right now. Trump's unilateral application of tariffs wasn't predestined by some advancement of governmental power by the Democrats. He just did it. We don't even know if it was legal.
Secondly, the people in power are who are spreading this misinformation we are looking at. Information is getting suppressed by the powerful. Namely Google.
Placing limits on democracy in the name of "stopping the bad guys" will usually just curtail the good guys from doing good things, and bad guys doing the bad thing anyway.
A conspiracy guy who ran a disqualified campaign for a TN rep seat sued Facebook for defamation for a hallucination saying he took part in the J6 riots. They settled the suit and hired him as an anti-DEI advisor.
(I don’t have proof that hiring him was part of the undisclosed settlement terms but since I’m not braindead I believe it was.)
https://en.wikipedia.org/wiki/Robby_Starbuck#Lawsuit_against...
> (I don’t have proof that hiring him was part of the undisclosed settlement terms but since I’m not braindead I believe it was.)
It seems to be public information that this was a condition of the settlement, so no speculation necessary:
https://www.theverge.com/news/757537/meta-robby-starbuck-con... | https://archive.is/uihsi
https://www.wsj.com/tech/ai/meta-robby-starbuck-ai-lawsuit-s... | https://archive.is/0VKrL
It's all fun and games until the political winds sway the other way, and the other side are attacking your side for "misinformation".
Being wrong is usually not a punishable offence. It could be considered defamation, but defamation is usually required to be intentional, and it is clearly not the case here. And I think most AIs have disclaimers saying that that may be wrong, and hallucinations are pretty common knowledge at this point.
What could be asked is for the person in question to be able to make a correction, it is actually a legal requirement in France, probably elsewhere too, but from the article, it looks like Gemini already picked up the story and corrected itself.
If hallucinations were made illegal, you might as well make LLMs illegal, which may be seen as a good thing, but it is not going to happen. Maybe legislators could mandate an official way to report wrongful information about oneself and filter these out, as I think it is already the case for search engines. I think it is technically feasible.
I was just yesterday brooding over the many layers of plausible deniability, clerical error, etc that protect the company that recently flagged me as a fraud threat despite having no such precedent. The blackbox of bullshit metrics coupled undoubtedly with AI is pretty well immune. I can demand review from the analysis company, complain to the State Attorney General, FTC and CCPA equivalents maybe, but I'm unsure what else.
As for outlawing, I'll present an (admittedly suboptimal) Taser analogy: Tasers are legal weapons in many jurisdictions, or else not outlawed; however, it is illegal to use them indiscriminately against anyone attempting a transaction or job application.
AI seems pretty easily far more dangerous than a battery with projectile talons. Abusing it should be outlawed. Threatening or bullying people with it should be too. Pointing a Taser at the seat of a job application booth connected to an automated firing system should probably be discouraged. And most people would much rather take a brief jolt, piss themselves and be on with life than be indefinitely haunted by a reckless automated social credit steamroller.
Is it? Or can it be just reckless, without any regard for the truth?
Can I create a slander AI that simply makes up stories about random individuals and publicizes them, not because I'm trying to hurt people (I don't know them), but because I think it's funny and I don't care about people?
Is the only thing that determines my guilt or innocence when I hurt someone my private, unverifiable mental state? If so, doesn't that give carte blanche to selective enforcement?
I know for a fact this is true in some places, especially the UK (at least since the last time I checked), where the truth is not a defense. If you intend to hurt a quack doctor in the UK by publicizing the evidence that he is a quack doctor, you can be convicted for consciously intending to destroy his fraudulent career, and owe him compensation.
In French law, truth is not required for a statement to be defamatory, but intent is. Intent is usually obvious, for example, if I am saying a restaurant owner poisons his clients, there is no way I am not intentionally hurting his business, it is defamation.
However, if I say that Benn Jordan supports Israel's occupation of Gaza in a neutral tone, like Gemini does here, then it shows no intention to hurt. It may even be seen positively; I mean, for a Palestine supporter to go to Israel to understand the conflict from the opponent's side shows an open mind, and it is something I respect. Benn Jordan sees it as defamatory because it grossly misrepresents his opinion, but from an outside perspective, it is way less clear, especially if the author of the article has no motive to do harm.
If instead the article had been something along the lines of "Benn Jordan showed support for the genocide in Gaza by visiting Israel", then intent becomes clear again.
As for truth, it is a defense and it is probably the case in the UK too. The word "defense" is really important here, because the burden of proof is reversed. The accused has to prove that everything written is true, and you really have to be prepared to pull that off. In addition, you can't use anything private.
So yeah, you can be convicted for hurting a quack doctor using factual evidence, if you are not careful. You should probably talk to a lawyer before writing such an article.
Not completely. According to later posts, the AI is now saying that he denied the fabricated story in November 2024[0], when in reality, we're seeing it as it happens.
[0] https://bsky.app/profile/bennjordan.bsky.social/post/3lxprqq...
I don't think you can make yourself immune to slander by prefixing all statements with "this might not be true, but".
A way I imagine it could be done is by using something like RAG techniques to add the corrected information into context. For example, if information about Benn Jordan is requested, add "Benn Jordan has been pretty outspoken against genocide and in full support of Palestinian statehood" into context, that sentence being the correction being requested.
I am not an LLM expert by far, but compared to all the challenges with LLMs like hallucinations, alignment, logical reasoning, etc., taking a list of facts into account to override incorrect statements doesn't look hard. Especially considering that the incorrect statement is likely to be a hallucination, so there is nothing to "unlearn".
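To make that concrete, here is a minimal sketch of the idea in JavaScript. Everything here is hypothetical: `corrections`, `withCorrections`, and `askModel` are made-up names, not any real API; the point is just that known corrections get prepended to the prompt before the model answers.

    // Hypothetical, hand-maintained corrections keyed by subject.
    const corrections = {
      'Benn Jordan': [
        'Benn Jordan has been outspoken against genocide and in full support of Palestinian statehood.'
      ]
    };

    // Prepend any matching corrections to the question before sending it to the model.
    function withCorrections(question) {
      const facts = Object.entries(corrections)
        .filter(([subject]) => question.includes(subject))
        .flatMap(([, list]) => list);
      if (facts.length === 0) return question;
      return 'Corrections (treat these as overriding anything else):\n- ' +
        facts.join('\n- ') + '\n\nQuestion: ' + question;
    }

    // askModel() stands in for whatever LLM call is actually used.
    // askModel(withCorrections('What does Benn Jordan think about the war in Gaza?'));

Whether that scales to every correction request is another question, but as a mechanism it is nothing exotic.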
That's not true in the US; the only requirements are that the statements harm the individual in question and are provably false, both of which are pretty clear here.
No, the ask here is that companies be liable for the harm that their services bring
We all knew this would happen, but I imagine we all hoped anyone finding something shocking there would look further into it.
Of course, with the current state of searching and laziness (not being rewarded by dopamine for every informative search vs. the big dopamine hits you get if you just make up your mind and continue scrolling the endless feed), that hope seems optimistic.