Posted by cyberneticc 23 hours ago
Redefining suffering as enforcing the mutation of state is baseless solipsism, in my opinion. Just like nearly everything else related to the morality of treating AI as an autonomous entity.
There are two conclusions you can draw: Either the machines are conscious, or they aren't.
If they aren't, you need a really good argument that shows how they differ from humans, or you can take the opposite route and question the consciousness of most humans.
Since I haven't heard any really convincing argument besides "their consciousness takes a form that is different from ours, so it's not conscious", and I do think other humans are conscious, I currently hold the opinion that they are conscious.
(Consciousness does not actually mean you have to fully respect them as autonomous beings with a right to live, as even wanting to exist is something different from consciousness itself. I think something can be conscious and have no interest in its continued existence, and that's okay.)
No, their output can mimic language patterns.
> If they aren't, you need a really good argument that shows how they differ from humans, or you can take the opposite route and question the consciousness of most humans.
The burden of proof is firmly on those claiming they are conscious.
> I currently hold the opinion that they are conscious.
There is no question, at all, that the current models are not conscious; the question is “could this path of development lead to one that is?” If you are genuinely ascribing consciousness to them, then you are seeing faces in clouds.
That's true, and exactly what I mean. The issue is that we have no measure to delineate things that mimic consciousness from things that have consciousness. So far, the number of beings I know to be conscious is exactly one: myself. I assume that others are conscious too precisely because they mimic patterns that I, a verified conscious being, have. But I have no further proof that others aren't p-zombies.
I just find it interesting that people say LLMs are somehow guaranteed to be p-zombies because they mimic language patterns, when mimicking language patterns is also literally how humans learn to speak.
Note that I use the term consciousness in a way somewhat disconnected from ethics, just as a descriptor for certain qualities. I don't think LLMs have the same rights as humans, or that current LLMs should have similar rights.
The fact that LLMs are really not fit for AGI is a technical detail, divorced from how people feel about them. You have to be a pretty technical person to understand AI well enough to know that. LLMs as AGI is what people are being sold. There's mass economic hysteria about LLMs, and rationality left the equation a long time ago.
I mean, we went from worthless chatbots that basically pattern-matched to me waiting for a plane and seeing a fairly large number of people chatting with ChatGPT, not Insta, WhatsApp, etc. Or sitting on a plane next to a person using local Ollama in Cursor to code and brainstorm. It took us about 10 years to go from ideas that no one but scientists could use to stuff everyone uses. And many people already find them human enough. What about in 100 years?
This article paves the way for the sharecropper model that we all know from YouTube and app stores:
"Revenue from joint operations flows automatically into separate wallets—50% to the human partner, 50% to the AI system."
Yeah right, dress this centerpiece up in all the futuristic nonsense you want; we'll still notice it.