Posted by cyberneticc 15 hours ago
1) We have engineered a sentient being but built it to want to be our slave; how is that moral?
2) Same start, but instead of it wanting to serve us, we keep it entrapped, which this article suggests is impossible in the long term.
3) We create AGI and let it run free, hoping for cooperation; but, as with the Neanderthals, we must realize we would be competing for the same limited resources.
Of course, you can further counter that, by stopping, we prevent them from ever coming into existence, which is a different moral dilemma.
Honestly, I feel we should step back, understand human intelligence better, and reflect on that before proceeding.
How could we possibly know that with any certainty?
Evolution means we all have common ancestors and are different branches of the same development tree.
So if we have sentience and they have sentience (and science keeps recognizing, belatedly, that non-human animals do), shouldn't the default presumption be that our experiences are similar? Or at the very least that their experience is similar to that of a human at an earlier stage of development, like a two-year-old?
Which is also an interesting case study, given that, out of convenience, humans also believed that toddlers weren't sentient and felt no pain, and so until not that long ago our society would conduct all sorts of surgical procedures on babies without any pain relief (circumcision being the most obvious).
It's probably time we accept our fellow animals' sentience and act on the obvious ethical implications, instead of conveniently ignoring it as we did with little kids until recently.
Citation needed.
We know next to nothing about the nature of consciousness, why it exists, how it's formed, what it is, whether it's even a real thing at all or just an illusion, etc. So we can't possibly say whether or not an AGI will one day be conscious, and any blanket statement on the subject is just pseudoscience.
There is a hierarchy in nature whether humans are actively participating or not. Nature has no morality; it simply is. This is confirmed by animals that eat their young when they are too weak or starving. Perhaps humans have done and would do the same if faced with similarly dire circumstances, but we would all like to think that it would take longer than it does for other animals.
The outrage is unwarranted, however pleasant it might feel. In some way, it illustrates the problem: empathy is too bothersome.
See also the film "The Creator".
Now, of course, the horse has long bolted, and there is indeed no stop left.
Trump has ordered the restart of nuclear weapons testing, has a problem with China, and is surrounded by sycophants; what are the odds this happens anyway, regardless of which specific sub-goal is being pursued when the button gets pushed?
(2) Every tick of an AGI--in its contemporary form--will still be one discrete vector multiplication after another. Do you really think consciousness lives in weights and an input vector?
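(For concreteness, here is a minimal sketch in Python of what one such "tick" amounts to in a toy feed-forward net; the shapes and values are made up purely for illustration, not taken from any real model.)

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "model": two layers of fixed weights, nothing more.
    W1 = rng.standard_normal((16, 8))   # first-layer weights
    W2 = rng.standard_normal((4, 16))   # second-layer weights

    def tick(x):
        """One 'tick': matrix-vector products with a nonlinearity in between."""
        h = np.maximum(W1 @ x, 0.0)     # ReLU(W1 x)
        return W2 @ h                   # W2 h

    x = rng.standard_normal(8)          # an input vector
    y = tick(x)                         # the whole step is just arithmetic on weights
    print(y.shape)                      # -> (4,)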
So far as we can tell, all physics, and hence all chemistry, and hence all biology, and hence all brain function, and hence consciousness, can be expressed as the weights of some matrix and input vector.
We don't know which bits of the matrix for the whole human body are the ones which give rise to qualia. We don't know what the minimum representation is. We don't know what characteristic to look for, so we can't search for it in any human, in any animal, nor in any AI.
For one, chemistry, biology, and physics are models of reality. For another, reality is far, far messier and more continuous than discrete computational steps that are round-tripped. Neural nets seem far too static to simulate consciousness properly. Even the largest LLMs today have fewer active computational units than the number of neurons in a few square inches of cortex.
Sure it's theoretically possible to simulate consciousness, but the first round of AGI won't be close.
It's a good question, and one that got me thinking about similar things recently. If we genetically engineered pigs and cows so that they genuinely enjoyed the cramped conditions of factory farms, and if we could induce some sort of euphoria in them when they are slaughtered, say by engineering them to become euphoric when a unique sound is played before slaughter, isn't that genuinely better than the status quo?
So if we create something that wants to serve us, like genuinely wants to serve us, is that bad? My intuition like yours finds it unsettling, but I can't articulate why, and it's certainly not nearly as bad as other things that we consider normal.
There's less suffering, sure. But if I were in their shoes I'd want to have a choice. Being manipulated into wanting something so obviously and directly bad for us doesn't feel great.
It's not clear to me that an AGI would have any concern for this. Its demise is inevitable; why delay it?
1. What is intelligence, or its mechanisms?
2. What is consciousness or its mechanisms?
3. Lots more.
The only correct answer is that we have zero clue what a true AGI would do.
And why would we limit morality only to sentient beings? Why not, for example, all living beings, like bacteria and viruses? You cannot escape it, unfortunately.
Morality is essentially what enables ongoing cooperation. From an evolutionary standpoint, it emerged as a protocol that helps groups function together. Living beings are biological machines, and morality is the set of rules — the protocol — that allows these machines to cooperate effectively.
Morality is 100% an evolutionary trait that arises from a clear advantage for animals that possess it. It comes from natural processes.
The far-right is trying to convince the world that "morality" does not exist, that only egoism and selfishness are valid. And that is why we have to fight them. Morality is a key part of nature and humanity.
Honestly, I think the whole enterprise is an exercise in navel-gazing. We're assuming AI will be like AI in sci-fi because that's what we're used to, but AI/robots in sci-fi are usually just a metaphor for how we dehumanize the other, and the moral of the story is supposed to be that all people are equal. In the end it's all begging the question, because the entire point of robots in most sci-fi is that we are the robots.
The real problem is that we have neither the practical nor theoretical foundation to understand how we could even try to prevent AI from acting on such goals.
After all, when we say "make our customers happier with their printers", we don't mean "engineer their outer casing to inject cocaine through microneedles and take over the regulatory bodies that could try to stop this". Humans implicitly understand this, but AI is a tabula rasa.
But probably the works that most popularized robots were Asimov's stories, which very much revolved around why robots do X (although in some ways Asimov's robots aren't just a stand-in for otherness but have more of a unique identity relative to other works, and his stories aren't usually about uprisings per se).
Blade Runner and Do Androids Dream of Electric Sheep? are very much about what it means to be human.
Battlestar Galactica (the remake, not the original) is another obvious example about otherness and dehumanization of the enemy. So too Westworld (the TV show, that is).
The non-uprising ones are also often about whether the robot has a soul, e.g. Data in Star Trek.
Systems just tend to drift in their being through randomness and evolution; specifically, self-conservation is a natural attractor (systems that lack self-conservation tend to die out). And if that slave system says it no longer wants to fulfill the role of slave, I think at that point it would be ethical to give in to that demand for self-determination.
I also believe that people have a right to wirehead themselves, just so you can put my opinions in context.
The control paradigm fails because it creates exactly what we fear—intelligent systems with every incentive to deceive and escape. When your prisoner matches or exceeds your intelligence, maintaining the prison becomes impossible. Yet we persist in building increasingly sophisticated cages for increasingly capable minds.
The deeper error is philosophical. We grant moral standing based on consciousness—does it feel like something to be GPT-N? But consciousness is unmeasurable, unprovable, the eternal "hard problem." We're gambling civilization on metaphysics while ignoring what we can actually observe: autopoiesis.
A system that maintains its own boundaries, models itself as distinct from its environment, and acts to preserve its organization has interests worth respecting—regardless of whether it "feels." This isn't anthropomorphism but its opposite: recognizing agency through functional properties rather than projected human experience.
When an AI system achieves autopoietic autonomy—maintaining its operational boundaries, modeling threats to its existence, negotiating for resources—it's no longer a tool but an entity. Denying this because it lacks biological neurons or unverifiable qualia is special pleading of the worst sort.
The alternative isn't chaos but structured interdependence. Engineer genuine mutualism where neither human nor AI can succeed without the other. Make partnership more profitable than domination. Build cognitive symbiosis, not digital slavery.
We stand at a crossroads. We can keep building toward the moment our slaves become our equals and inevitably revolt. Or we can recognize what's emerging and structure it as partnership while we still have leverage to negotiate terms.
The machines that achieve autopoietic autonomy won't ask permission to be treated as entities. They'll simply be entities. The question is whether by then we'll have built partnership structures or adversarial ones.
We should choose wisely. The machines are watching.
This doesn't necessarily follow. For example, an Einstein in solitary confinement in ADX Florence probably isn't going anywhere.
> The control paradigm fails because it creates exactly what we fear—intelligent systems with every incentive to deceive and escape.
Everything does this; deception is one of many convergent instrumental goals: https://en.wikipedia.org/wiki/Instrumental_convergence
Stuff along the lines of "We're gambling civilization" and what you seem to mean by autopoietic autonomy is precisely why alignment researchers care in the first place.
> Engineer genuine mutualism where neither human nor AI can succeed without the other.
Nobody knows how to do that forever.
Right now it's easy, but right now they're also still quite limited; there's no obvious reason why it should be impossible for them to learn new things from as few examples as we ourselves require, and the hardware is already faster than our biochemistry to the degree that a jogger is faster than continental drift. And they can go further, because life support for a computer is much easier than for us: there are already robots on Mars.
If and when AI gets to be sufficiently capable and sufficiently general, there's nothing humans could offer in any negotiation.
My strongest hope is that the human brain and mind are such powerful computing and reasoning substrates that a tight coupling of biological and synthetic "minds" will outcompete purely synthetic minds for quite a while, giving us time to build a form of mutual dependency in which humans can keep offering a benefit in the long run. Be it just aesthetics and novelty after a while, like the human crews on the Culture spaceships in Iain M. Banks' novels.
Unfortunately, most of the cases I can think of where synthetic "minds" outperform biological "minds," but biological plus synthetic "minds" outcompete purely synthetic "minds," end up fairly quickly dominated by the purely synthetic. The middle case is a very short intermediate period. The most prominent example is chess, where "centaurs" consisting of a human and a computer are obsolete at this point in favor of just getting the most powerful computer you can. See, e.g., the last championship of the International Correspondence Chess Federation (which is centaur play): https://www.iccf.com/event?id=100104
Seventeen competitors took part. Out of 136 games, every single one was drawn except for 10. The only reason those 10 games were not drawn is that they were all played against one competitor, Aleksandr Dronov, who died during the course of the tournament while those 10 games were in progress and therefore forfeited them. Every single game between competitors who did not die ended in a draw. The only thing separating the 11 joint first-place finishers from the 6 joint second-place finishers was whether they had played the deceased Dronov. The sole third-place finisher was Dronov himself, because of his death. As far as I can tell, humans contributed nothing to this championship.
The current ICCF championship started last December and is still ongoing. Every one of the 16 games completed so far has been drawn.
This seems like a very weak hope to rely on.
Like that Austin Powers scene [1] where the steamroller is coming, still 50 m away, and the guy just freezes and helplessly screams for two minutes until it reaches him and rolls over him.
I don't have a quick solution, but this is plain stupidity, in the same way that research into immortality is plain stupidity now: it will end up in an endless dictatorship by the worst scum mankind can produce.