Posted by jsheard 9/1/2025
"Section 230 of the Communications Decency Act, which grants immunity to platforms for content created by third parties. This means Google is not considered the publisher of the content it indexes and displays, making it difficult to hold the company liable for defamatory statements found in search results"
Very bizarre that Benn Jordan somehow got roped into it.
Of course, investors are throwing so much money at AI and AI is, in turn, buying legislators and heads of government, who are bound and determined to shield them from liability, so …
We are so screwed.
Google also has to support AI summaries for 200k to 500k queries per second. To use a model that is good enough to prevent hallucinations would be too expensive - so they use a bad model since it’s fast and cheap.
Google also loses click-through ad revenue when presenting a summary.
With all of these factors considered, Google opting for summaries looks like an absolutely disastrous product decision.
You apply for a job, using your standardized Employ resume that you filled out. It comes bundled with your Employ ID, issued by the company to keep track of which applications have been submitted by specifically you.
When Employ AI does its internet background check on you, it discovers an article about a horrific attack. Seven dead, twenty-six injured. The article lists no name for the suspect, but it does have an expert chime in, one that happens to share their last name with you. Your first name also happens to pop up somewhere in the article.
With complete confidence that this is about you, Employ AI adds the article to its reference list. It condenses everything into a one-line summary: "Applicant is a murderer, unlikely to promote team values and social cohesion. Qualifications include..." After looking at your summary for 0.65 seconds, the recruiter rejects your application. Thanks to your Employ ID, this article has now been stapled to every application you'll ever submit through the system.
You've been effectively blacklisted from working. For some reason, none of your applications ever make it past the initial screening. You can't even know the article exists; no one will tell you. And even if you find out, what are you going to do about it? The company will never hear your pleas; they are too big to ever care about someone like you, and they are not in the business of making exceptions. And legally speaking, it's technically not the software making final screening decisions, and it does say its summaries are experimental and might be inaccurate, in 8pt light gray text on a white background. You are an acceptable loss, as statistically <1% of applicants find themselves in this situation.
Companies already, today, never give you even an inkling of the reason why they didn't hire you.
If anything this scenario makes the hiring process more transparent.
Do you currently run the various automated resume parsing software that employers use? I mean - do you even know what the software is? Like even a name or something? No?
AI believers, pay attention and stop your downplaying and justifications. This can hit you too, or your healthcare. The machine doesn't give a damn.
Worker blacklists have been a real problem in a few places: https://www.bbc.com/news/business-36242312
Twenty years ago, we wouldn't have had companies framing the raw output of a text generator as some kind of complete product, especially an all-encompassing general one. How do you know that these probabilistic text generators are performing valid synthesis, as opposed to word salad? You don't. So LLM technology would have been used to do things like augment search/retrieval, pointing to concrete sources and excerpts (sketched below). Or to analyze a problem using math, driving formal models that might miss the mark but at least wouldn't be blatantly incorrect while spinning a convincing narrative. Some actual vision of an opinionated product that wasn't just dumping the output and calling it a day.
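To make "augment, don't generate" concrete, here's a toy sketch of what I mean. Everything in it (Source, retrieve, answer_with_citations, the example URLs) is made up for illustration, not any real product's API; the point is just that the system only ranks and quotes existing sources, so every claim in the output points back to a checkable excerpt.

    # Hypothetical sketch: the model (or any ranker) selects and quotes
    # existing sources; it never asserts facts of its own.
    from dataclasses import dataclass

    @dataclass
    class Source:
        url: str
        text: str

    def retrieve(query: str, corpus: list[Source], k: int = 3) -> list[Source]:
        # Trivial keyword overlap stands in for whatever ranking model you'd use.
        terms = set(query.lower().split())
        scored = sorted(corpus, key=lambda s: -len(terms & set(s.text.lower().split())))
        return scored[:k]

    def answer_with_citations(query: str, corpus: list[Source]) -> str:
        # The "answer" is nothing but verbatim excerpts plus their origins,
        # so every line is attributable and checkable.
        hits = retrieve(query, corpus)
        return "\n".join(f'"{s.text[:120]}" -- {s.url}' for s in hits)

    corpus = [
        Source("https://example.com/a", "Section 230 grants platforms immunity for third-party content."),
        Source("https://example.com/b", "AI summaries appear above the organic search results."),
    ]
    print(answer_with_citations("section 230 immunity", corpus))

A system shaped like this can still rank badly, but it can't invent a murder accusation out of a shared last name, because it has nowhere to put text that isn't an excerpt.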
Also, twenty years ago we wouldn't have had a company placing a new beta-quality product (at best) front and center as a replacement for their already wildly successful product. But it feels like the real knack of these probabilistic word generators is convincing "product people" of their supreme utility. Of course they're worried - they found something that can bullshit better than they can.
At any rate, all of those discussions about whether humans would be capable of keeping a superintelligent AI "boxed" are laughable in retrospect. We're propping open the doors and chumming other humans' lives as chunks of raw meat, trying to coax it out.
(Definitely starting to feel like an old man here. But I've been yelling at Cloud for years so I guess that tracks)
I'm talking about the kind of intelligence that supports excellence in subjects like mathematics, coding, logic, reading comprehension, writing, and so on.
That doesn't necessarily have anything to do with concern for human welfare. Despite all the talk about alignment, the companies building these models are focusing on their utility, and you're always going to be able to find some way in which the models say things which a sane and compassionate human wouldn't say.
In fact, it's probably a pity that "chatbot" was the first application they could think of, since the real strengths of these models - the functional intelligence they exhibit - lie elsewhere.
LLMs are the Synthetic CDO of knowledge.