> - Humans must not anthropomorphise AI systems.
> - Humans must not blindly trust the output of AI systems.
> - Humans must remain fully responsible and accountable for consequences arising from the use of AI systems.
My take: humans should never depend on AI for anything serious.
My boss' take: Cool. I'm gonna ask Gemini about it, he's such a smart guy. I know I can trust him, and in case it goes bad I can always throw him under the bus.
Granted, that was over ten thousand years before his story is set, but subsequent Dune novels (or at least God Emperor) frame his warning as being about over-reliance on technology to do our thinking for us, not as a claim that such technology should never be developed (given the prohibition in the Dune universe and how it's skirted in Frank's later novels).
Whether they are the right things to do or not is tangential. As such, they're dead on arrival.
Guess what?
Books in the library can be wrong, even peer-reviewed encyclopedias.
Pages on the internet can be wrong, even Wikipedia.
When accuracy is important, you must look at multiple sources. I think AI will get better at providing accurate information, but only a fool relies on a single information source for critical decisions.
LLMs are an example, but so are random pages on the internet, a bunch of stuff we get served by the media (mainstream or otherwise), "expert opinions" from biased or sponsored experts, or from experts in a different field, etc.
As the popular quip goes: It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.
With LLMs, we actually do get the warnings. Here's the ChatGPT footer: "ChatGPT can make mistakes. Check important info." And Claude's: "Claude is AI and can make mistakes. Please double-check responses."
Elsewhere, such disclaimers, if they exist at all, are usually buried deep in a random website's terms of use, not stated up front.
I think AI will get better at providing multiple sources.
The third one about responsibility is the most important one, IMO. This was attributed to an IBM manual decades ago, and I think it remains the correct stance today:
> A computer can never be held accountable, therefore a computer must never make a management decision.
There should be some human who is ultimately responsible for any action an AI takes. "I just let the AI figure it out" can be an explanation for a screw-up, but that doesn't mean it excuses it. The person remains responsible for what happened.
That won't help, in my opinion. It's the same as financial gurus saying: "this is not financial advice." People just get used to it, brush it off as a legal thing, and still fully trust it. I agree that something must be done, but this is not the right way.
One of the most salient moments in Ex Machina is near the very end, where it suddenly becomes obvious that the protagonist (and, let's be frank, "she" was definitely the protagonist) is a robot, with no real human drivers.
I feel as if that movie (like a lot of Garland's stuff) was an interesting study of human (and inhuman) nature.
Decent for stuff that doesn't really matter, even if it gets it wrong.
Still gonna be polite to it. I'm about ready to slap the next person who talks to me like an LLM, and I don't want to get used to being impolite in a chat interface.
Because that's likely the source of the answer it's giving you.
I often wish I could reach through the screen and give him a good shake. Sometimes I want to thank him, but then I can't spare it, given the limited weekly usage I'm granted.
These three laws, I think, will be a lot harder to follow than they look. It's very easy to get attached to the tool when you rely on it.
It feels as frustrating as talking to a junior dev from a decade ago.
Claude felt more feminine.
Humans must not anthropomorphise {non-humans}
Humans must not blindly trust the output of {anything}
Humans must remain fully responsible and accountable for consequences arising from the use of {anything}
Naturally, none of this advice matters at all as humans will do what they do. This just documents a subset of the ways real humans consistently make choices to their own detriment.