
Posted by cyberneticc 23 hours ago

We are building AI slaves. Alignment through control will fail (utopai.substack.com)
41 points | 64 comments
orbital-decay 15 hours ago|
I totally expect AI to eventually gain consciousness, in any available interpretation of that vague term. But what does it even mean for the AI to suffer? We're able to understand this concept in regards to other humans because we share a common biological reference, and, to an extent, with other animals. But the internal state of the AI is completely untranslatable to ours, let alone the morality of training and running it. It's incomprehensible, we have basically zero common ground and no points of reference. Any attempt at translating it is a subject to arbitrarily biased interpretations places like LessWrong like to corner themselves into.

Redefining suffering as enforced mutation of state is baseless solipsism, in my opinion. Just like nearly everything else related to the morality of treating AI as an autonomous entity.

lowsong 21 hours ago||
What is it about large language models that makes otherwise intelligent and curious people assign them these magical properties? There's no evidence, at all, that we're on the path to AGI. Whether non-biological consciousness is even possible is an unknown. Yet we've seen these statistical language models spit out convincing text, and people fall over themselves to conclude that we're on the path to sentience.
nytesky 19 hours ago||
We don’t understand our own consciousness first off. Second, like the old saying, sufficiently advanced science will be indistinguishable from magic, if it is completely convincing as agi, even if we skeptical of its methods, how can we know it isn’t?
curiouscube 11 hours ago|||
I think we can all agree that LLMs can mimick consciousness to the point that it is hard for most people to discern them from humans. Like the turing test isn't even really discussed anymore.

There are two conclusions you can draw: Either the machines are conscious, or they aren't.

If they aren't, you need a really good argument that shows how they differ from humans, or you can take the opposite route and question the consciousness of most humans.

Since I haven't heard any really convincing argument besides "their consciousness takes a form that is different from ours, so it's not consciousness", and I do think other humans are conscious, I currently hold the opinion that they are conscious.

(Consciousness does not actually mean you have to fully respect them as autonomous beings with a right to live, as even wanting to exist is something different from consciousness itself. I think something can be conscious and have no interest in its continued existence and that's okay)

lowsong 9 hours ago||
> I think we can all agree that LLMs can mimic consciousness to the point that it is hard for most people to discern them from humans.

No, their output can mimic language patterns.

> If they aren't, you need a really good argument that shows how they differ from humans or you can take the opposite route and question the consciousness of most humans.

The burden of proof is firmly on the side of proving they are conscious.

> I currently hold the opinion that they are conscious.

There is no question, at all, that the current models are not conscious; the question is “could this path of development lead to one that is?” If you are genuinely ascribing consciousness to them, then you are seeing faces in clouds.

curiouscube 8 hours ago||
> No, their output can mimic language patterns.

That's true, and exactly what I mean. The issue is that we have no measure to delineate things that mimic consciousness from things that have consciousness. The set of beings I know to have consciousness contains exactly one: myself. I assume that others have consciousness too precisely because they mimic patterns that I, a verified conscious being, have. But I have no further proof that others aren't p-zombies.

I just find it interesting that people say LLMs are somehow guaranteed p-zombies because they mimic language patterns, when mimicking language patterns is also literally how humans learn to speak.

Note that I use the term consciousness somewhat disconnected from ethics, just as a descriptor for certain qualities. I don't think LLMs have the same rights as humans or that current LLMs should have similar rights.

estimator7292 20 hours ago|||
I think it's like seeing shapes in clouds. Some people just fundamentally can't decouple how a thing looks from what it is. And not in that they literally believe chatgpt is a real sentient being, but deep down there's a subconscious bias. Babbling nonsense included, LLMs look intelligent, or very nearly so. The abrupt appearance of very sophisticated generative models in the public consciousness and the velocity with which they've improved is genuinely difficult to understand. It's incredibly easy to form the fallacious conclusion that these models can keep improving without bound.

The fact that LLMs are really not fit for AGI is a technical detail divorced from the feelings about LLMs. You have to be a pretty technical person to understand AI enough to know that. LLMs as AGI is what people are being sold. There's mass economic hysteria about LLMs, and rationality left the equation a long time ago.

anonzzzies 14 hours ago|||
What we do have, for whatever reason (usually money related: either making money or getting more funding) many companies/people focused on making AI. It might take another winter (I believe it will unless we find a way to retrain the NNs on the fly instead of storing new knowledge in RAG: and many other things we currently don't have, but this would he a step) or not, people will keep pushing toward that goal.

I mean, we went from worthless chatbots that basically pattern-matched, to me waiting for a plane and seeing a fairly large number of people chatting to ChatGPT, not Insta, WhatsApp, etc. Or sitting on a plane next to a person using local ollama in cursor to code and brainstorm. It took us about 10 years to go from ideas that no one but scientists could use to stuff everyone uses. And many people already find it human enough. What about in 100 years?

bgwalter 19 hours ago|
The propaganda effort to humanize these systems is strong. Google "AI" is programmed to lecture you if you insult it and draws parallels to racism. This is actual brainwashing and the "AI" should therefore not be available to minors.

This article paves the way for the sharecropper model that we all know from YouTube and app stores:

"Revenue from joint operations flows automatically into separate wallets—50% to the human partner, 50% to the AI system."

Yeah right. Dress up this centerpiece with all the futuristic nonsense you like; we'll still notice it.