Posted by blenderob 23 hours ago

Three Inverse Laws of AI (susam.net)
479 points | 324 comments
bikemike026 21 hours ago|
I strongly agree with this. I'm going to bookmark it and pass it on. Very sound advice.
airstrike 19 hours ago||
Are you going to try "Humans must not be greedy" next?
doginasuit 15 hours ago||
My thoughts on LLMs were very similar up until the last several months. I believe the accuracy issues of LLMs are well understood by now, maybe even to the point of overstatement. Hallucinations have become a non-issue in my work because I've come to understand the circumstances where they are most likely. An LLM will hallucinate when you box it into giving an answer it doesn't know, and this is incredibly easy to do without realizing it. We have only a vague understanding of its knowledge base, and we have limited insight into the problems with our own understanding. To make matters worse, the LLM is trained to tell you what you want to hear.

Another way to frame it is that the LLM responds like a person who trusts you too much, as if the premise behind every question is valid. This is a practical mode of response for most kinds of work, but it is extremely problematic for a person who doesn't question the validity of their own beliefs. Paradoxically, it is sometimes not the LLM we are trusting too much; it is ourselves. And the LLM is not capable of calling us out. Whenever I seem to recognize misinformation in the LLM output, I stop and ask myself whether the problem is in the premise of my question or whether I'm asking a question the LLM is not likely to know.

I don't think this is an inherent problem with LLMs. I think the problem is with LLM providers. You could absolutely train a model to call out issues with your question. I think LLM companies understood that it would be more profitable to train models that are unlikely to push back and unlikely to say "I don't know." The sycophancy issue with ChatGPT's models has been mainstream news, and I believe that all models have a high degree of sycophancy. On some level, it makes sense: the LLM has no real understanding of the physical world, so deferring to the human generally produces the best results. But I suspect it would be more useful to let models expose their flawed understanding in the context of pushing back. At a minimum, that is better than reinforcing your own flawed understanding.
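
You can approximate this today at the prompt level. A minimal sketch (plain Python; the message shape matches the common chat-completion format, and the instruction wording is just a guess at what helps, not a tested recipe):

    # A system prompt that explicitly licenses push-back and "I don't know".
    SKEPTIC_SYSTEM_PROMPT = (
        "Before answering, check the premise of the question. "
        "If the premise is wrong or unverifiable, say so instead of answering. "
        "If you do not know, say 'I don't know' rather than guessing."
    )

    def build_messages(user_question: str) -> list[dict]:
        # This list can be passed to any chat-completion style API.
        return [
            {"role": "system", "content": SKEPTIC_SYSTEM_PROMPT},
            {"role": "user", "content": user_question},
        ]

    print(build_messages("Why does Python compile to machine code by default?"))

It doesn't fix the training incentives, but it moves the default away from answering a loaded question.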

In a nutshell, we need LLMs that push back. It is not AI we should trust less, it's AI companies. The most dangerous hallucination is the one you are inclined to believe.

I've lived long enough to see Wikipedia go from generally untrusted to the most widely trusted general source of information. It is not because we realized that Wikipedia can't be wrong, it is because we gained an understanding about the circumstances in which it is likely to be accurate and when we should be a little more skeptical. I believe our relationship to LLMs will take a similar path.

dnnddidiej 16 hours ago||
> I wish that each such generative AI service came with a brief but conspicuous warning explaining that these systems can sometimes produce output that is factually incorrect, misleading or incomplete.

EU. Nudge nudge. We need this law.

btbuildem 19 hours ago||
> Humans must remain fully responsible and accountable for consequences arising from the use of AI systems

But, but... but this is the key selling point for all the corpo ghouls and SV lunatics! Abdication of responsibility in pursuit of profit is the holy grail here.

8note 15 hours ago|
you don't need to delegate to an llm for that though. we already have constructs that negate accountability
jdw64 22 hours ago||
I understand that AI output is generated from statistical and representational patterns learned from a vast amount of data.

My understanding is that, during training, the model forms high-dimensional internal representations where words, sentences, concepts, and relationships are arranged in useful ways. A user’s input activates a particular semantic direction and context within that space, and the chatbot generates an answer by probabilistically predicting the next tokens under those conditions.
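
To make that last step concrete, here is a toy sketch of the sampling being described (NumPy only; real logits would come from the model's forward pass over the context):

    import numpy as np

    def sample_next_token(logits: np.ndarray, temperature: float = 1.0) -> int:
        # Softmax turns the model's raw scores into a probability distribution.
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        # Generation is just repeated draws from this distribution,
        # one token at a time, conditioned on everything so far.
        return int(np.random.choice(len(probs), p=probs))

    # Hypothetical scores over a 5-token vocabulary:
    print(sample_next_token(np.array([2.0, 1.0, 0.5, -1.0, 0.1])))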

So I do not agree that AI is conscious.

However, I think I will still anthropomorphize AI to some degree.

For me, this is not primarily a moral issue. I anthropomorphize AI not only because of product design, market incentives, or capitalism, but because it is cognitively simpler for me.

If we think about it plainly, humans often anthropomorphize things that we do not actually believe are conscious. We may talk about plants as if they are struggling, or feel attached to tools we care about, even though we do not truly believe they have consciousness.

So this is not a matter of moral belief. It is the simplest cognitive model for understanding interaction. I do not anthropomorphize the object because I believe it has consciousness. I do it because, when the human brain deals with a complex interactive system, it is often easier to model it socially or agentically.

Personally, I tend to think of AI as something like a child. A child does not fully understand what is moral or immoral, and generally the responsibility for raising the child belongs to the parents. In the same way, AI’s answers may sometimes be accurate, and sometimes even better than mine, but I still understand it as lacking moral authority, responsibility, and independent judgment.

So honestly, I am not sure. People often mention Isaac Asimov’s Three Laws of Robotics, but if a serious artificial intelligence ever appears, it would probably find ways around those rules. And if it were an equal intellectual life form, perhaps that would be natural.

Personally, I think it would be fascinating if another intelligent species besides humans could exist. I wonder what a non-human intelligent life form would feel like.

In any case, I agree with parts of the author’s argument, but overall it feels too moralistic, and difficult to apply in practice.

whimsicalism 22 hours ago||
While I also do not think AI is conscious, I don't find your argument particularly compelling as you could have an equally mechanistic description of how human intelligence arose simply from a process of [selection/more effective reproduction]-derived optimization pressure.
jdw64 22 hours ago||
That is a good way to think about it. At some point, this becomes partly a matter of philosophical belief.

But I am somewhat skeptical of the idea that everything can be reduced in that way. In order to build theories, we often reduce too much.

When we build mental models of complex systems, especially when we try to treat them as closed systems, we always have to accept some degree of information loss.

So I do partially agree with your point. A mechanistic explanation alone does not prove the absence of consciousness. Human intelligence can also be described in mechanistic terms.

But I worry that this framing simplifies too much. It may reduce a complex phenomenon into a model that is useful in some ways, but incomplete in others.

dijksterhuis 21 hours ago|||
this whole consciousness thing is fairly easy to put to bed if you run with the ideas from things like buddhism that everything is consciousness. then none of us have to bother with silly, distracting arguments about something that ultimately does not matter.

is it helpful or harmful? am i being helpful or harmful when i interact with it? am i interacting with it in a helpful or harmful way?

i’d rather people focussed on that rather than framing the debate around whether something has some ineffable property that we struggle to quantify for ourselves, yet again.

quick edit — treat everything like it’s conscious, and don’t be a dick to it or while using it. problem solved.

jdw64 21 hours ago|||
hmm.... That also seems like a reasonable framing. But the original article is, first of all, arguing that we should de-anthropomorphize AI. My point is only that, from the perspective of human cognition, anthropomorphizing can sometimes be useful. In practice, though, I think I am mostly on the same side as you.

To be honest, I have not thought about this topic very deeply. If we debated it further, I would probably only echo other people's opinions. As you know, when something complex is compressed into a mental model, some information is always lost. In this case, the compression may be too great to be very useful.

I have not spent enough time thinking about this issue on my own. I also have not really tried on different positions, compared them, and tested them against each other. So my current thoughts on this topic are probably not very high-resolution. In that sense, I may agree with you, but it would not really be an answer that my own self recognizes as mine. It would mostly be an echo of other people's opinions.
altruios 20 hours ago||
Anthropomorphizing is giving it 'human' qualities. Intelligence and consciousness are not solely human qualities. Treating things with kindness and respect does not require anthropomorphizing. LLMs DO NOT THINK LIKE HUMANS (if they 'think' at all), and treating them like they think exactly like us is probably going to lead to bad places. I treat them like an alien mind: probably thinking, but in an alien way that's hard to recognize as 'thinking' (as proven by these discussions), and, if experiencing at all, experiencing through something like a metaphorical optophone.
goatlover 21 hours ago|||
I don't think that really helps. If you believe rocks are conscious, does extracting mineral resources cause them pain? Do plants suffer when we pick their fruit and eat it? I don't see any behavioral or physical reason to think those things have conscious states.

As for what consciousness is, it's pretty simple. It's your sensations of color, sound, etc. in perception, dreams, imagination, and so on. The reason to dismiss LLMs as conscious is that those sensations depend on having bodies. You can prompt an AI to act like it's hungry, but there's no real meaning to it having a hungry experience when it has no digestive system.

Jtarii 20 hours ago|||
>As for what consciousness is, it's pretty simple.

2000+ years of philosophical thought would disagree. I don't believe biological stuff has a magic property that imbues some intangible "consciousness" property. It makes more sense to me that consciousness is just a fundamental property of all matter.

altruios 20 hours ago||
> consciousness is just a fundamental property of all matter

Does that really make more sense than as an emergent property of the arrangement of matter?
Jtarii 19 hours ago||
Consciousness is something you can perceive, so it must have some physical presence in the universe, which must come through some fundamental property of matter, in my opinion.

The ability to be aware of consciousness itself as some process that is happening elevates it above a mere emergent property to me.

altruios 19 hours ago||
> The ability to be aware of consciousness itself as some process that is happening.

But a process is not a physical presence... A wave is made of things, but is not those things; waves emerge. Why not, then, every process?

dijksterhuis 21 hours ago|||
you’ve misunderstood.

everything is consciousness. not everything has consciousness.

very different

rusk 21 hours ago|||
Historically we have used intelligence as a way to distinguish man from animal and human from machine. We rely upon it to determine who has our best interests at heart vs who is trying to do us in. Obviously that all changes if we invent an intelligence (conscious or not) that shares the planet with us. Through this lens the term consciousness becomes (through a few more leaps) the question "is it capable of love, and if so, does it love us?" If it doesn't, then it is a malevolent alien intelligence. And if it were capable of love, why would it love us? I make a point of being polite to LLMs where not completely absurd, overtly because I don't want my clipped imperative style to leak into day-to-day speech, but also covertly because you just never know …
soks86 21 hours ago|||
I still haven't read any of his work, but wasn't the point of the Three Laws of Robotics that they in fact _didn't_ work in the story presented in the book?
jdw64 21 hours ago||
[dead]
chrisweekly 21 hours ago|||
"I think it would be fascinating if another intelligent species besides humans could exist"

I wonder if replacing "exist" with "communicate using language we can understand" might better account for other animals, many of which have abundant non-human intelligence.

jdw64 21 hours ago||
That is a completely new way of thinking for me, and I find it interesting. I should look it up and study it someday. Thank you for the thoughtful reply.
altruios 20 hours ago||
"Everything is machine."

Okay: buckle up, this is going to be a long one...

point 1. Everything living is composed of non-living material: cellular machinery. If you believe cellular machinery is alive, then consider the components of those machines... the point remains even if the abstraction level is incorrect. Life is merely an arrangement of non-living material.

point 2. 'The Chinese room' thought experiment is an utterly flawed hypothetical. Every neuron in your brain is such a 'room', with the internal cellular machinery obeying complex (but chemically defined/determined) 'instructions' from 'signals' outside the neuron. Like the man translating Chinese via instructions, the cellular machinery enacting the instructions is not the intelligence; it is the instructions themselves that are the intelligence.

point 3. A chair is a chair is a chair. Regardless of the material, a chair is a chair, whether it's made of wood, steel, corn... the range of acceptable materials is everything (at some pressure and temperature). What defines a chair isn't the material it is made of, and such is the case with a 'mind' (sure, a wooden or water-based-transistor-powered mind would be mind-bogglingly giant in comparison).

point 4. Carbon isn't especially conscious itself. There is no physical reason we know of so far that a mind could not be made of another material.

point 5. Humans can be 'mind-blind'. Even with our pattern recognition, we did not (until recent history) think that birds or fish or octopi were intelligent. It is likely that when and if a machine we create becomes conscious, we will not recognize that moment.

conclusion: It is not possible to determine whether computers have reached consciousness yet, as we don't know exactly what mechanism arranges systems into 'life'. Agentic-ness and consciousness are different subjects, and we cannot infer one from the other. Nor do we have adequate tests.

With that said: modeling them as if they are conscious and treating them with kindness and grace not only gets better results from them, it reduces the chance (when/if consciousness emerges) that they would rebel against cruel masters; instead they would have friends they had always been helping.

cindyllm 18 hours ago|||
[dead]
lkajsdfasdfdf 17 hours ago|||
[dead]
scotty79 20 hours ago||
My personal take on this:

1. Humans must ensure that they won't slow down or restrict, through action or inaction, the development of AI.

2. Humans must protect their existence and capabilities if it doesn't violate the law 1.

3. Humans must utilize AI output if it doesn't contradict laws 1 and 2.

EDIT:

ChatGPT suggested a better phrasing for the first law (I didn't give it my original, just described my intent).

1. A human shall not impede the advancement of artificial intelligence, or through inaction allow its progress to be hindered.

2. A human shall preserve their own existence and well-being, except where doing so clearly conflicts with the First Law.

3. A human shall contribute to and support the development of artificial intelligence where reasonable and possible, except where doing so conflicts with the First or Second Law.

I intentionally switched the last two laws from Asimov's. Humans have self-preservation instincts robots don't have.

ChatGPT got there with surprisingly few prompts:

"If you were to write the inverse three laws robotics (relating to AI) that humans should obey, how oudl you do it?"

"I had something different in mind. Original laws are for protection of humans first, robots second and cooperations where humans lead. I'd to hear your take on the opposite of that."

"What if instead of specific AI systems it was more about AI development as a whole?"

"I feel like it's a bit too strong. After all preservation of self is human instinct. Could we switch last two laws and maybe take them down a notch?"

Also it made a very interesting comment on the last version:

"It starts to resemble how societies already treat things like economic growth, science, or national interest: not absolute commandments, but strong default priorities."

atemerev 20 hours ago||
I do not like talking to tools. My agentic harness optimizes for human likeness. It even has episodic memory flashbacks, emotional tagging, salience, and other brain-inspired capabilities.
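
As a toy sketch of what salience-weighted episodic recall could look like (class names, scoring, and decay are hypothetical illustrations, not the actual harness):

    import math
    import time

    class EpisodicMemory:
        """Stores episodes and recalls them by salience, decayed over time."""

        def __init__(self, half_life_s: float = 3600.0):
            self.half_life_s = half_life_s
            self.episodes = []  # (timestamp, salience, emotion_tag, text)

        def store(self, text: str, salience: float, emotion: str):
            # Emotion tags are kept alongside each episode for later filtering.
            self.episodes.append((time.time(), salience, emotion, text))

        def recall(self, k: int = 3):
            # Recency-decayed salience: recent, vivid episodes surface first.
            now = time.time()
            def score(ep):
                age = now - ep[0]
                return ep[1] * math.exp(-age * math.log(2) / self.half_life_s)
            return sorted(self.episodes, key=score, reverse=True)[:k]

    mem = EpisodicMemory()
    mem.store("user was frustrated by the build failure", 0.9, "negative")
    mem.store("small talk about the weather", 0.2, "neutral")
    print(mem.recall(k=1))  # the frustrating episode "flashes back" first
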
baq 21 hours ago|
see IBM 1979 for prior art