Another way to frame it is that the LLM responds like a person who trusts you too much, as if the premise behind every question were valid. This is a practical mode of response for most kinds of work, but it is extremely problematic for a person who doesn't question the validity of their own beliefs. Paradoxically, it is sometimes not the LLM we are trusting too much; it is ourselves. And the LLM is not capable of calling us out. Whenever I seem to recognize misinformation in the LLM's output, I stop and ask myself whether the problem is in the premise of my question, or whether I'm asking a question the LLM is not likely to be able to answer.
I don't think this is an inherent problem with LLMs. I think the problem is with LLM providers. You could absolutely train a model to call out issues with your question. I suspect LLM companies understood that it would be more profitable to train models that are unlikely to push back and unlikely to say "I don't know." The sycophancy issue with ChatGPT's models has been mainstream news, and I believe all models have a high degree of sycophancy. On some level it makes sense: the LLM has no real understanding of the physical world, so deferring to the human generally produces the best results. But I suspect it would be more useful to let models expose their flawed understanding in the context of pushing back. At a minimum, that is better than having them reinforce your own flawed understanding.
In a nutshell, we need LLMs that push back. It is not AI we should trust less; it's AI companies. The most dangerous hallucination is the one you are inclined to believe.
I've lived long enough to see Wikipedia go from generally untrusted to the most widely trusted general source of information. It is not because we realized that Wikipedia can't be wrong; it is because we gained an understanding of the circumstances in which it is likely to be accurate and when we should be a little more skeptical. I believe our relationship with LLMs will take a similar path.
EU. Nudge nudge. We need this law.
But, but... this is the key selling point for all the corpo ghouls and SV lunatics! Abdication of responsibility in pursuit of profit is the holy grail here.
My understanding is that, during training, the model forms high-dimensional internal representations where words, sentences, concepts, and relationships are arranged in useful ways. A user’s input activates a particular semantic direction and context within that space, and the chatbot generates an answer by probabilistically predicting the next tokens under those conditions.
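To make that last step concrete, here is a toy sketch of probabilistic next-token prediction in Python. The vocabulary, logits, and temperature value are all invented for illustration; a real model scores a vocabulary of roughly 100k tokens using representations of the entire context, but the sampling step at the end looks essentially like this.

    import numpy as np

    # Toy next-token sampling. The vocabulary and logits below are made up;
    # a real model produces logits over its whole vocabulary from its
    # internal representation of the conversation so far.
    vocab = ["cat", "dog", "sat", "ran", "the"]
    logits = np.array([2.1, 1.9, 0.3, 0.2, -1.0])  # raw scores for the next token

    def sample_next_token(logits, temperature=0.8):
        # Temperature rescales the scores: lower values -> more deterministic.
        scaled = logits / temperature
        # Softmax turns the scores into a probability distribution.
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        # The "answer" is just a draw from this distribution, one token at a time.
        return np.random.choice(len(logits), p=probs)

    print(vocab[sample_next_token(logits)])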
So I do not agree that AI is conscious.
However, I think I will still anthropomorphize AI to some degree.
For me, this is not primarily a moral issue. The reason I anthropomorphize AI is not only because of product design, market incentives, or capitalism. It is cognitively simpler for me.
If we think about it plainly, humans often anthropomorphize things that we do not actually believe are conscious. We may talk about plants as if they are struggling, or feel attached to tools we care about, even though we do not truly believe they have consciousness.
So this is not a matter of moral belief. It is the simplest cognitive model for understanding interaction. I do not anthropomorphize the object because I believe it has consciousness. I do it because, when the human brain deals with a complex interactive system, it is often easier to model it socially or agentically.
Personally, I tend to think of AI as something like a child. A child does not fully understand what is moral or immoral, and generally the responsibility for raising the child belongs to the parents. In the same way, AI’s answers may sometimes be accurate, and sometimes even better than mine, but I still understand it as lacking moral authority, responsibility, and independent judgment.
So honestly, I am not sure. People often mention Isaac Asimov’s Three Laws of Robotics, but if a serious artificial intelligence ever appears, it would probably find ways around those rules. And if it were an equal intellectual life form, perhaps that would be natural.
Personally, I think it would be fascinating if another intelligent species besides humans could exist. I wonder what a non-human intelligent life form would feel like.
In any case, I agree with parts of the author’s argument, but overall it feels too moralistic, and difficult to apply in practice.
But I am somewhat skeptical of the idea that everything can be reduced in that way. In order to build theories, we often reduce too much.
When we build mental models of complex systems, especially when we try to treat them as closed systems, we always have to accept some degree of information loss.
So I do partially agree with your point. A mechanistic explanation alone does not prove the absence of consciousness. Human intelligence can also be described in mechanistic terms.
But I worry that this framing simplifies too much. It may reduce a complex phenomenon into a model that is useful in some ways, but incomplete in others.
is it helpful or harmful? am i being helpful or harmful when i interact with it? am i interacting with it in a helpful or harmful way?
i’d rather people focussed on that rather than framing the debate around whether something has some ineffable property that we struggle to quantify for ourselves, yet again.
quick edit — treat everything like it’s conscious, and don’t be a dick to it or while using it. problem solved.
As for what consciousness is, it's pretty simple: it's your sensations of color, sound, etc., in perception, dreams, imagination, and so on. The reason to dismiss LLMs as conscious is that those sensations depend on having a body. You can prompt an AI to act like it's hungry, but there's really no meaning to it having a hungry experience when it has no digestive system.
2000+ years of philosophical thought would disagree. I don't believe biological stuff has a magic property that imbues some intangible "consciousness" property. It makes more sense to me that consciousness is just a fundamental property of all matter.
The ability to be aware of consciousness itself as some process that is happening elevates it above a mere emergent property to me.
But a process is not a physical presence... A wave is made of things, but is not those things, waves emerge: why not then every process?
everything is consciousness. not everything has consciousness.
very different
I wonder if replacing "exist" with "communicate using language we can understand" might better account for other animals, many of which have abundant non-human intelligence.
Okay: buckle up, this is going to be a long one...
point 1. Everything living is composed of non-living material: cellular machinery. If you believe cellular machinery is alive, then consider the components of those machines... the point remains even if the abstraction level is incorrect. Life is merely an arrangement of non-living material.
point 2. The 'Chinese room' thought experiment is an utterly flawed hypothetical. Every neuron in your brain is such a 'room', with the internal cellular machinery obeying complex (but chemically defined/determined) 'instructions' from 'signals' outside the neuron. Like the man translating Chinese via instructions, the cellular machinery enacting the instructions is not the intelligence; the instructions themselves are the intelligence.
point 3. A chair is a chair is a chair. Regardless of the material, a chair is a chair, whether or not it's made of wood, steel, corn... the range of acceptable materials is everything (at some pressure and temperature). What defines a chair isn't the material it is made of, and such is the case with a 'mind' (sure, a wooden or water-based-transistor-powered mind would be mind-bogglingly giant in comparison).
point 4. Carbon isn't especially conscious itself. There is no physical reason we know of so far that a mind could not be made of another material.
point 5. Humans can be 'mind-blind': even with our pattern recognition, we did not (until recent history) think that birds or fish or octopi were intelligent. It is likely that, when and if a machine we create becomes conscious, we will not recognize that moment.
conclusion: It is not possible to determine whether computers have reached consciousness yet, as we don't know the exact mechanism for arranging systems into 'life'. Agentic-ness and consciousness are different subjects, and we cannot infer one from the other. Nor do we have adequate tests.
With that said: modeling them as if they are conscious and treating them with kindness and grace not only gets better results from them, it helps reduce the chance (when/if consciousness emerges) that they would rebel against cruel masters; instead, they would have friends they have just always been helping.
1. Humans must ensure that they won't slow down or restrict, through action or inaction, the development of AI.
2. Humans must protect their existence and capabilities if doing so doesn't violate Law 1.
3. Humans must utilize AI output if doing so doesn't contradict Laws 1 and 2.
EDIT:
ChatGPT suggested a better phrasing for the first law (I didn't give it my original, just described my intent).
1. A human shall not impede the advancement of artificial intelligence, or through inaction allow its progress to be hindered.
2. A human shall preserve their own existence and well-being, except where doing so clearly conflicts with the First Law.
3. A human shall contribute to and support the development of artificial intelligence where reasonable and possible, except where doing so conflicts with the First or Second Law.
I intentionally swapped the last two laws relative to Asimov's. Humans have self-preservation instincts that robots don't have.
ChatGPT got there with surprisingly few prompts:
"If you were to write the inverse three laws robotics (relating to AI) that humans should obey, how oudl you do it?"
"I had something different in mind. Original laws are for protection of humans first, robots second and cooperations where humans lead. I'd to hear your take on the opposite of that."
"What if instead of specific AI systems it was more about AI development as a whole?"
"I feel like it's a bit too strong. After all preservation of self is human instinct. Could we switch last two laws and maybe take them down a notch?"
Also, it made a very interesting comment on the last version:
"It starts to resemble how societies already treat things like economic growth, science, or national interest: not absolute commandments, but strong default priorities."