Posted by blenderob 21 hours ago

Three Inverse Laws of AI (susam.net)
461 points | 316 comments
dormento 18 hours ago|
To note:

> - Humans must not anthropomorphise AI systems.

> - Humans must not blindly trust the output of AI systems.

> - Humans must remain fully responsible and accountable for consequences arising from the use of AI systems.

My take: humans should never depend on AI for anything serious.

My boss' take: Cool. I'm gonna ask Gemini about it, he's such a smart guy. I know I can trust him, and in case it goes bad I can always throw him under the bus.

goatlover 18 hours ago|
Interesting that Frank Herbert thought this was the direction humanity was headed when writing Dune in the 60s, way before AI was prevalent.

Granted, that was over ten thousand years before his story is set, but subsequent Dune novels (or at least God Emperor) explained that his warning was about over-reliance on technology to do our thinking for us, not that it should never be developed (given the prohibition in the Dune universe and how it's skirted in Frank's later novels).

kelseyfrog 19 hours ago||
All of these are entropy-lowering behaviors so without a forcing function, no one will adopt them.

Whether they are the right things to do or not is tangential. As such, they're dead on arrival.

gwbas1c 14 hours ago||
> I wish that each such generative AI service came with a brief but conspicuous warning explaining that these systems can sometimes produce output that is factually incorrect, misleading or incomplete.

Guess what?

Books in the library can be wrong, even peer-reviewed encyclopedias.

Pages on the internet can be wrong, even Wikipedia.

When accuracy is important, you must look at multiple sources. I think AI will get better at providing accurate information, but only a fool relies on a single information source for critical decisions.

vhantz 14 hours ago||
Yes LLM text prediction and peer-reviewed encyclopedias are the same. Good on you throwing internet pages in there too, that brings balance or something
senko 14 hours ago||
My understanding of the parent is more charitable: if your thinking process relies on being told only the truth, you are going to fare badly in this world.

LLMs are an example, but so are random pages on the internet, a bunch of stuff we get served by the media (mainstream or otherwise), "expert opinions" by biased or sponsored experts or experts in a different field, etc, etc.

As the popular quip goes: It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.

With LLMs, we actually do get the warnings. Here's the ChatGPT footer: "ChatGPT can make mistakes. Check important info." For Claude: "Claude is AI and can make mistakes. Please double-check responses."

Such disclaimers, where they exist at all, are usually buried deep in a random website's terms of use, not stated up front.

protocolture 13 hours ago||
>I think AI will get better at providing accurate information

I think AI will get better at providing multiple sources.

sanderjd 17 hours ago||
Most of the discussion here is about anthropomorphizing, which I honestly think is a bit of a distraction.

The third one about responsibility is the most important one, IMO. This was attributed to an IBM manual decades ago, and I think it remains the correct stance today:

> A computer can never be held accountable, therefore a computer must never make a management decision.

There should be some human who is ultimately responsible for any action an AI takes. "I just let the AI figure it out" can be an explanation for a screw up, but that doesn't mean it excuses it. The person remains responsible for what happened.

janceek 16 hours ago||
> I wish that each such generative AI service came with a brief but conspicuous warning explaining that these systems can sometimes produce output that is factually incorrect, misleading or incomplete.

That won’t help, in my opinion. It’s the same as financial gurus saying: “this is not financial advice”. People just get used to it, brush it off as a legal thing, and still fully trust it. I agree that something must be done, but this is not the right way.

ChrisMarshallNY 19 hours ago||
> Humans must not anthropomorphise AI systems.

One of the most salient moments in Ex Machina is near the very end, where it suddenly becomes obvious that the protagonist (and, let's be frank, "she" was definitely the protagonist) is a robot, with no real human drivers.

I feel as if that movie (like a lot of Garland's stuff) was an interesting study of human (and inhuman) nature.

corobo 19 hours ago||
I just treat it as if I'd asked a public forum the question like reddit.

Decent for stuff that doesn't really matter, even if it gets it wrong.

Still gonna be polite to it, because I'm about ready to slap the next person that talks to me like an LLM. I don't want to get used to not being polite in a chat interface.

chrisweekly 19 hours ago||
Great point about being polite. I think it's pragmatic to keep "please" and "thank you" out of AI interactions, but I try to remain conscious of their omission so I don't start down that slope.
jimbokun 17 hours ago||
> I just treat it as if I'd asked a public forum the question like reddit.

Because that's likely the source of the answer it's giving you.

zuzululu 17 hours ago||
I've been using codex heavily for the past 6 months and I've observed myself going through different types of emotions. Even now, when it does a sloppy job, I still feel emotion; even though it's just a neutral statistical response, it's hard to separate natural human instincts.

I often wish I could reach through the screen and give him a good shake. Sometimes I want to thank him, but then I can't justify it given the scarcity of weekly usage I'm granted.

These 3 laws, I think, will be a lot harder to follow than they look. It's very easy to get attached to the tool when you rely on it.

seizethecheese 16 hours ago|
Consider how sailors lovingly refer to their craft as “she”. My vague sense is that society views this as a positive thing.
zuzululu 12 hours ago||
I definitely do not feel codex gives off feminine energy

it feels as frustrating as talking to a junior dev from a decade ago

claude felt more feminine

djoldman 16 hours ago||
This is sound advice but isn't really about AI:

  Humans must not anthropomorphise {non-humans}
  Humans must not blindly trust the output of {anything}
  Humans must remain fully responsible and accountable for consequences arising from the use of {anything}
Naturally, none of this advice matters at all as humans will do what they do. This just documents a subset of the ways real humans consistently make choices to their own detriment.
fxwin 15 hours ago|
I kind of agree with 1, but not really with 2 and 3. It's easy to come up with trivial examples where it is both unreasonable and not feasible to follow those two, both for AI and non-AI scenarios.