
Posted by blenderob 20 hours ago

Three Inverse Laws of AI (susam.net)
453 points | 315 comments
davebranton 13 hours ago|
This phrase always fascinates me: "AI-generated content must not be treated as authoritative without independent verification appropriate to its context."

I've heard the same thing expressed somewhat more concisely as "Never ask AI a question to which you don't already know the answer".

Which raises a question, and I do think it's an important one: given that this is true, what function does AI answering a question actually serve? You can't rely on its output, so you have to go and check anyway. You could achieve precisely the same outcome by using search engines and normal research.

This, among many other reasons, is exactly why I never ask it anything.

nijave 9 hours ago||
>You could achieve precisely the same outcome by using search engines and normal research

When it comes to software engineering (as a software engineer myself), the AI is generally a lot quicker than me researching "the old-fashioned way".

I can fumble around and say "list free software that does X" without knowing that I'm looking for, say, a CRM, and then spend a couple of minutes looking over the results. With the "manual" method I would have spent 10-30 minutes just figuring out that I was looking for "CRMs".

I like to think of these as a sort of "pseudo NP-hard" class: questions that are slow to answer but quick to validate.
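
Something like this asymmetry, in code; a minimal sketch where the subset-sum framing and the function names are my own illustration, not anything from the article:

    from itertools import combinations
    from collections import Counter

    # Slow to answer: brute-force search for a subset summing to the target.
    def find_subset(nums, target):
        for r in range(1, len(nums) + 1):
            for combo in combinations(nums, r):
                if sum(combo) == target:
                    return combo  # exponential work in the worst case
        return None

    # Quick to validate: checking a proposed answer is a single pass.
    def validate(combo, nums, target):
        return not (Counter(combo) - Counter(nums)) and sum(combo) == target

Finding an answer is exponential in the worst case; checking a candidate is linear. That's the sense in which an LLM's suggestion can be cheap to verify even when it would have been slow to produce yourself.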

poszlem 13 hours ago||
"Give me the answer for: [x]. Provide sources".
teiferer 15 hours ago||
> An AI system is a tool and like any other tool, responsibility for its use rests with the people who decide to rely on it

Doesn't that argument backfire though? If I use a chainsaw then to a certain extent I need to rely on it not blowing up in my face or cutting my throat. If I drive a car I need to rely on the brakes working and the engine not suddenly exploding. If a pilot flies an airplane that suddenly has a technical issue, and they crash-land and heroically save half the souls on board, then the pilot isn't criminally responsible for manslaughter of the other half.

Unless there is gross negligence, in any of the above cases, just like with AI, how can you make somebody responsible for a tool failure?

jpitz 15 hours ago||
I'm gonna push the responsibility up a rung on the ladder:

A competent adult using a tool ought to understand the inherent pitfalls of using that tool.

Chainsaws are dangerous, in obvious and non-obvious ways. The tool can operate as designed and still amputate your foot.

namenotrequired 6 hours ago||
Not OP, but I think their point was the corollary of that.

Yes, obviously bad use of a good tool is dangerous. But correct use of a malfunctioning tool is also dangerous.

Millions of people understand when they get in their car that there’s a tiny chance the car will crash/explode that day through no fault of the driver. Most do not have the knowledge and competence (or even the time) to thoroughly check the engine every day to guarantee that that won’t happen. They get in anyway.

At some point you have to trust in something.

technotarek 18 hours ago||
“Humans must not blindly trust the output of AI systems. AI-generated content must not be treated as authoritative without independent verification appropriate to its context.”

I’m lost: how do individuals actually do this in our current world? Is each person expected to keep a “white list” of reliable sources of truth in their head? Please don’t confuse what I’m saying with a suggestion that there is no truth. It just seems like there are far more sources of mis- or half-truths, and it’s increasingly difficult for people to identify them.

ericmcer 18 hours ago||
I... am not sure. Computers are machines that create order (like db tables) from the chaos of reality. Now we have LLMs that make computers spit out chaos as well.

They don't have to though, we can still leverage LLMs to organize chaos, which is what I hope they ultimately end up doing.

For example an AI therapist is a nightmare: people putting the chaos of their mental state into a machine that spits dangerous chaos back out. An AI tool that parsed responses for hard data (e.g. rate 0-9 how happy the person was) and then returned that as ordered data (how happy was I each day for the last month) that an actual therapist and patient could review is the correct use of AI and could be highly trusted. The raw token output from LLMs should just be used for thinking steps that lead to a parseable, hard-data answer that can be high-trust.

Of course that isn't going to happen, but I can see some extremely cool and high trust products being built using LLMs once we stop treating them like miracle machines.
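
For what it's worth, a minimal sketch of that parse-for-hard-data pattern; ask_llm is a hypothetical stub standing in for whatever model call you use, not any particular vendor's API:

    import re

    def ask_llm(prompt: str) -> str:
        """Hypothetical stub for whatever LLM call is in use."""
        raise NotImplementedError

    def happiness_score(journal_entry: str) -> int | None:
        """Reduce free-form text to one validated integer, or refuse."""
        raw = ask_llm(
            "Rate 0-9 how happy this person sounds. "
            "Reply with a single digit only.\n\n" + journal_entry
        )
        match = re.fullmatch(r"\s*([0-9])\s*", raw)
        return int(match.group(1)) if match else None  # reject anything unparseable

Everything downstream (the month of scores a therapist and patient review) only ever sees the validated integer, never the raw tokens.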

3form 18 hours ago|||
Did AI change anything in that regard? I believe that, same as before, you can't trust everything you see, and the research effort was always more than keeping a white list; the means vary, case by case.

And the same is true now. It's a change in quantity, but not in quality.

jimbokun 16 hours ago|||
Humanity has spent millennia creating and evolving institutions to address exactly this problem, and has recently decided to essentially throw out the whole lot and replace it with nothing.
soks86 18 hours ago||
Checking AI citations and reading.

Critical thinking and reading comprehension are the primary tools for determining truth, AFAIK. Knowing facts beforehand helps too, but a trustworthy source can provide false information just as an untrustworthy source can provide true information.

This has always been an issue, and in the past it was a more difficult one because your sources of knowledge were more limited. Nowadays it's mostly about choosing the right source(s) rather than having to go out of your way to find them (like traveling to a regional/university library).

heresie-dabord 8 hours ago||
> Non-Abdication of Responsibility

Previously stated as

“A computer can never be held accountable, therefore a computer must never make a management decision.”

– IBM Training Manual, 1979

Ifkaluva 18 hours ago||
The thing that I find difficult about adjusting to AI tools is the roulette-like nature.

When they produce correct output, they produce it much faster than I could have, and I show up to meetings with huge amounts of results. When the AI tool fails and I have to dig in to fix it, I show up to the next meeting with minimal output. It makes me seem like I took an easy week or something.

taeshdas 19 hours ago||
“Don’t anthropomorphise” is fighting the wrong layer. The entire product design of chat interfaces is built to encourage anthropomorphism because it increases engagement. Expecting users to resist that is like asking people not to click notifications. If this is a real concern, it has to be solved at the product level, not via user discipline.
layer8 18 hours ago|
The article does propose changes at the product level.
pbw 18 hours ago||
Rather than “the book explains how bread is made” say “the sheets of paper which make up the book have ink in the shape of letterforms which correlate with information about how bread is made”.
j2kun 17 hours ago|
Rather than "the book explains how bread is made" say "the book has a recipe for baking bread" and do not say, "the book is my soul mate"
AdamH12113 18 hours ago||
Anthropomorphizing LLMs is something that happens in the design stage, when they're given human names and trained to emit first-person sentences. If AI companies and developers stop anthropomorphizing them, users won't be misled in the first place.
eranation 9 hours ago||
Two of these laws I see being violated repeatedly, but it’s not always as obvious as one would hope.

Claude Code, Cursor, Codex, etc. impersonate your GitHub user, whether via the CLI, MCP, or your git credentials. It’s perfectly plausible for a piece of code to make it to production without a single human ever looking at it: Alice writes it with AI; Bob “reviews it” with AI, including posting PR comments as Bob; Alice “addresses” those comments (fixes or pushes back); and back and forth they go, using the PR as an inefficient yet deceptive mechanism for the AI to have a conversation with itself while adding a false sense of process. Eventually Bob will prompt “is it prod ready” and ship it, with 100% unit test coverage and zero understanding of what was implemented. Now this may sound like an imaginary scenario, but if it could happen, it will happen, and it probably already happens.

Cloud agents are nice enough to set the bot as the author and you as a co-author, but the GitHub MCP or CLI will still use your OAuth identity.

I don’t have a clear answer for how to solve it; maybe force a shadow identity for each human so it’s clear the AI is the one who commented. But it’s easy to bypass. I’m worried that more people aren’t worried about it.
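
Git itself can at least record the split when the tooling cooperates; a sketch of what a shadow identity could look like (the agent name and addresses are placeholders, and --trailer needs Git 2.32+):

    # Commit as a distinct agent identity instead of impersonating the human:
    git -c user.name="alice-agent[bot]" \
        -c user.email="alice-agent@example.invalid" \
        commit -m "Implement feature X" \
        --trailer "Co-authored-by: Alice <alice@example.com>"

But this is purely cooperative: the token still authenticates as the human, so a misbehaving tool bypasses it entirely, which is the problem.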

polynomial 9 hours ago|
Not everybody isn't worried: https://ctolunchnyc.substack.com/p/cto-lunch-nyc-spring-2026...
dormento 17 hours ago|
To note:

> - Humans must not anthropomorphise AI systems.

> - Humans must not blindly trust the output of AI systems.

> - Humans must remain fully responsible and accountable for consequences arising from the use of AI systems.

My take: humans should never depend on AI for anything serious.

My boss' take: Cool. I'm gonna ask Gemini about it, he's such a smart guy. I know I can trust him, and in case it goes bad I can always throw him under the bus.

goatlover 17 hours ago|
Interesting that Frank Herbert thought this was the direction humanity was headed when writing Dune in the 60s, way before AI was prevalent.

Granted, that was over ten thousand years before his story is set, but subsequent Dune novels (or at least God Emperor) explain that his warning was about over-reliance on technology to do our thinking for us, not that such technology should never be developed (given the prohibition in the Dune universe and how it's skirted in Frank's later novels).
