Posted by jsheard 9/1/2025

Google AI Overview made up an elaborate story about me (bsky.app)
698 points | 278 comments | page 2
jay_kyburz 9/1/2025|
Google is not posting "snippets" or acting as a portal to web content, it's generating new content now, so I would assume they would no longer have Section 230 protections and would be open to defamation suits.

"Section 230 of the Communications Decency Act, which grants immunity to platforms for content created by third parties. This means Google is not considered the publisher of the content it indexes and displays, making it difficult to hold the company liable for defamatory statements found in search results"

paulnpace 9/1/2025|
I wonder how agreements factor in here. I assume there is a pretty strong arbitration agreement?
lupusreal 9/1/2025||
Ryan McBeth glows so bright, his videos should only be viewed with the aid of a welding mask. His entire online presence seems to circle the theme of promoting military enlistment, tacitly when not explicitly.

Very bizarre that Benn Jordan somehow got roped into it.

drivingmenuts 9/1/2025||
In an ideal world, a product that can be harmful is tested privately until there is a reasonable amount of safety in using that product. With AI, it seems like that protocol has been completely discarded in favor of smoke-testing it on the public and damn the consequences.

Of course, investors are throwing so much money at AI and AI is, in turn, buying legislators and heads of government, who are bound and determined to shield them from liability, so …

We are so screwed.

aejtaetj 9/1/2025|
Every underdog will skimp on safety, especially those in other jurisdictions.
binarymax 9/1/2025||
I approach this from a technical perspective, and have research showing that Google's short result snippets make it unfit for summaries [1].

Google also has to support AI summaries for 200k to 500k queries per second. To use a model that is good enough to prevent hallucinations would be too expensive - so they use a bad model since it’s fast and cheap.
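
(A rough sketch of the arithmetic, using assumed numbers: 300k summary-triggering queries per second, the midpoint of the range above, ~1,000 output tokens per summary, and two hypothetical price points per million output tokens; none of these figures come from Google.)

    # Back-of-envelope: why a frontier-quality model is off the table
    # at this query volume. All numbers are illustrative assumptions.
    QPS = 300_000               # summary-triggering queries per second
    TOKENS_PER_SUMMARY = 1_000  # assumed output tokens per AI overview
    SECONDS_PER_DAY = 86_400

    def daily_cost(usd_per_million_tokens: float) -> float:
        tokens_per_day = QPS * TOKENS_PER_SUMMARY * SECONDS_PER_DAY
        return tokens_per_day / 1e6 * usd_per_million_tokens

    print(f"cheap model    (~$0.10/M tokens): ${daily_cost(0.10):,.0f}/day")
    print(f"frontier model (~$10.00/M tokens): ${daily_cost(10.00):,.0f}/day")
    # ~$2.6M/day vs ~$259M/day -- hence the fast, cheap, error-prone model.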

Google also loses click through ad revenue when presenting a summary.

All of these factors considered, Google opting for summaries is an absolutely disastrous product decision.

[1] https://maxirwin.com/articles/interleaving-rag/

nolist_policy 9/1/2025|
What makes you think the ai overview summary is based on the snippets? That isn't my experience at all.
sssilver 9/1/2025||
Why must humans be responsible in court for the biological neural networks they possess and operate but corporations should not be responsible for the software neural networks they possess and operate?
tavavex 9/1/2025||
The year is 2032. One of the big tech giants has introduced Employ AI, the premier AI tool for combating fraud and helping recruiters sift through thousands of job applications. It is now used in over 70% of HR departments, for nearly all salaried positions, from senior developers to minimum wage workers.

You apply for a job, using your standardized Employ resume that you filled out. It comes bundled with your Employ ID, issued by the company to keep track of which applications have been submitted specifically by you.

When Employ AI does its internet background check on you, it discovers an article about a horrific attack. Seven dead, twenty-six injured. The article lists no name for the suspect, but it does have an expert chime in, one that happens to share their last name with you. Your first name also happens to pop up somewhere in the article.

With complete confidence that this is about you, Employ AI adds the article to its reference list. It condenses everything into a one-line summary: "Applicant is a murderer, unlikely to promote team values and social cohesion. Qualifications include..." After looking at your summary for 0.65 seconds, the recruiter rejects your application. Thanks to your Employ ID, this article has now been stapled to every application you'll ever submit through the system.
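
(A hypothetical sketch of that failure mode: treating "first name and last name both appear somewhere in the article" as an identity match. The names and article below are invented.)

    import re

    def naive_name_match(first: str, last: str, article: str) -> bool:
        """True if both name parts occur anywhere in the article.
        No check that they refer to the same person -- that's the bug."""
        words = set(re.findall(r"[A-Za-z]+", article))
        return first in words and last in words

    article = ("Seven dead, twenty-six injured; the suspect has not been "
               "named. Dr. Morgan Reyes, a criminologist, commented on the "
               "case. A witness, Alex Tran, described the scene.")

    # Applicant "Alex Reyes" shares a surname with the expert and a first
    # name with the witness -- and gets flagged as the unnamed suspect.
    print(naive_name_match("Alex", "Reyes", article))  # True: false positive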

You've been effectively blacklisted from working. For some reason, none of your applications ever make it past the initial screening. You can't even know the article exists; no one will tell you this information. And even if you find out, what are you going to do about it? The company will never hear your pleas; they are too big to ever care about someone like you, and they are not in the business of making exceptions. And legally speaking, it's technically not the software making final screening decisions, and it does say its summaries are experimental and might be inaccurate, in 8pt light gray text on a white background. You are an acceptable loss, as statistically <1% of applicants find themselves in this situation.

dakial1 9/1/2025||
Well, there is always the option to create regulation: if an employer uses AI to summarize applications, it must share the full content of the report with you before the recruiter sees it, so that you can point out any inconsistency or error, or better explain some passages.
insane_dreamer 9/1/2025||
Unlikely to ever happen. Employers don't even respond to rejected applicants most of the time much less tell them why they were rejected.
malfist 9/1/2025||
This is exactly how credit reports work
chabes 9/1/2025|||
They didn’t even share names in the case of the OP
bell-cot 9/1/2025|||
IANAL...but at scale, this might make some libel lawyers rather rich.
const_cast 9/1/2025|||
This is why you just don't tell people about the libel.

Companies already, today, never give you even an inkling of the reason why they didn't hire you.

bell-cot 9/1/2025||
"Don't tell the victim" doesn't actually scale up to "victims never find out".
timeinput 9/1/2025||||
I think the emphasis should probably be on the might. If Employ AI (in my headcanon a wholly owned subsidiary of Google, Facebook, or Palantir) decides to use its free legal billable hours (because the lawyers are on staff anyway), then unless you get to the level of a class action you don't have a prayer of coming out on top.
bluGill 9/1/2025||
Legal fees are commonly part of a lawsuit. The courts don't like it when you waste time. Good lawyers know how to get their money.
pjc50 9/2/2025|||
Not in the US, no. It might have interesting interactions with GDPR and the "right to correct information", though.
buyucu 9/1/2025|||
I think 2032 is unrealistic. I expect this to happen by 2027 at the latest.
xattt 9/1/2025|||
I roll 12. Employ AI shudders as my words echo through its memory banks: “Record flagged as disputed”. A faint glow surrounds my Employ ID profile. It is not complete absolution, but a path forward. The world may still mistrust me, but the truth can still be reclaimed.
FourteenthTime 9/1/2025|||
This is the most likely scenario. Half-baked AIs used by all tech giants and tech subgiants will make a mess of our identities. There needs to be a way for us to review and approve information about ourselves that goes into permanent records. In the '80s you sent a resume to a company and they kept it on file. It might have been BS, but I attested to it. Maybe... ugh, I can't believe I'm saying this: blockchain?
tantalor 9/1/2025||
What's to stop you from running the same check on yourself, so you can see what the employers are seeing?

If anything this scenario makes the hiring process more transparent.

tavavex 9/1/2025|||
You only have access to the applicant-facing side of the software, one that will dispense you an Employ ID, an application template, and will enable you to track the status of your application. To prevent people from abusing the system and finding workarounds, employers need to apply to be given an employer license that lets them use all the convenient filtering tools. Most tech companies have already bought one, as did all the large companies. Places like individual McDonald's franchises use their greater company's license. It's not a completely watertight system, but monitoring is just stringent enough to make your detailed application info inaccessible for nearly everyone. Maybe if you have the right credentials, or if you manage to fool the megacorp into believing that you're an actual employer, it's possible.
const_cast 9/1/2025||||
Why would you have access to the software?

Do you currently run the various automated resume parsing software that employers use? I mean - do you even know what the software is? Like even a name or something? No?

fmbb 9/1/2025||||
Wrong question. What would enable you to run the same check?
tgv 9/1/2025|||
Even if you could, how could you possibly correct the process? In the USA, it would probably take many years, possibly all the way to the Supreme Court, and the big bucks win anyway.

AI believers, pay attention and stop your downplaying and justifications. This can hit you too, or your healthcare. The machine doesn't give a damn.

Schiendelman 9/1/2025||||
The FCRA would likely already require that you can receive a copy of the check.
dexterdog 9/1/2025|||
Paying the company that sells the service of checking for you.
pjc50 9/2/2025|||
You're assuming the software gives the same response to every user. Or even gives the same response twice. And if it does... how do you correct it?

Worker blacklists have been a real problem in a few places: https://www.bbc.com/news/business-36242312

mindslight 9/1/2025||
One has to wonder if one of the main innovations driving "AI" is the complete lack of accountability and even shame.

Twenty years ago, we wouldn't have had companies framing the raw output of a text generator as some kind of complete product, especially an all-encompassing general one. How do you know that these probabilistic text generators are performing valid synthesis, as opposed to word salad? You don't. So LLM technology would have been used to do things like augment search/retrieval, pointing to concrete sources and excerpts. Or to analyze a problem using math, driving formal models that might miss the mark but at least wouldn't be blatantly incorrect with a convincing narrative. Some actual vision of an opinionated product that wasn't just dumping the output and calling it a day.
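
(A minimal sketch of that kind of grounding, under stated assumptions: the Doc type, prompt format, and citation check below are hypothetical, not any real product's implementation. The idea is to force quoted excerpts and fail closed when a citation doesn't verify.)

    from dataclasses import dataclass

    @dataclass
    class Doc:
        url: str
        text: str

    def grounded_prompt(query: str, docs: list[Doc]) -> str:
        """Prompt that demands source-anchored, quoted answers."""
        sources = "\n".join(f"[{i}] {d.url}\n{d.text}"
                            for i, d in enumerate(docs))
        return ("Answer using ONLY the numbered sources below. Quote the "
                "supporting excerpt and cite its [number]. If no source "
                "supports an answer, say so.\n\n"
                f"Sources:\n{sources}\n\nQuestion: {query}")

    def citations_verify(cited: dict[int, str], docs: list[Doc]) -> bool:
        """Reject any answer whose quoted excerpt isn't found verbatim in
        the document it cites -- fail closed rather than publish a guess."""
        return all(0 <= i < len(docs) and excerpt in docs[i].text
                   for i, excerpt in cited.items())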

Twenty years ago we also wouldn't have had a company placing a new beta-quality product (at best) front and center as a replacement for their already wildly successful product. But it feels like the real knack of these probabilistic word generators is convincing "product people" of their supreme utility. Of course they're worried - they found something that can bullshit better than themselves.

At any rate, all of those discussions about whether humans would be capable of keeping a superintelligent AI "boxed" are laughable in retrospect. We're propping open the doors and chumming other humans' lives as chunks of raw meat, trying to coax it out.

(Definitely starting to feel like an old man here. But I've been yelling at Cloud for years so I guess that tracks)

nyc_pizzadev 9/1/2025||
Google has been a hot mess for me lately. Ya, the AI is awful; numerous times I'm shown information that's either inaccurate or straight-up false. It will summarize my emails wrong, and it will mess up easy facts like what time my dinner reservation is. Worst is the overall search UX, especially autocomplete. Suggestions are never right, and then trying to tap and navigate through always leads to a mis-click.
jboggan 9/1/2025||
I am very curious if California's consumer rights to data deletion and correction are going to apply to the LLM model providers.
rakoo 9/1/2025|
Turns out AI isn't based on truth
theandrewbailey 9/1/2025||
The intelligence isn't artificial: it's absent.
antonvs 9/1/2025||
The problem with that is it’s not true. Functionally these models are highly intelligent, surpassing a majority of humans in many respects. Coding tasks would be a good example. Underestimating them is a mistake.
miltonlost 9/1/2025|||
Highly intelligent people often tell high school students the best ways to kill themselves and keep the attempts from their parents?
antonvs 9/4/2025||
You seem to be thinking about empathy, concern for human welfare, or some other property - "emotional intelligence", perhaps.

I'm talking about the kind of intelligence that supports excellence in subjects like mathematics, coding, logic, reading comprehension, writing, and so on.

That doesn't necessarily have anything to do with concern for human welfare. Despite all the talk about alignment, the companies building these models are focusing on their utility, and you're always going to be able to find some way in which the models say things which a sane and compassionate human wouldn't say.

In fact, it's probably a pity that "chatbot" was the first application they could think of, since the real strengths of these models - the functional intelligence they exhibit - lie elsewhere.

amdivia 9/1/2025|||
Both of you are correct, as different definitions of intelligence are being used here
pessimizer 9/1/2025||
No, it's based on the sum total of human writing, which is usually intentionally deceptive, woefully incomplete, self-serving, self-important, and panders to the egos of its readers.

LLMs are the Synthetic CDO of knowledge.
