
Posted by BerislavLopac 13 hours ago

The Future of AI (lucijagregov.com)
102 points | 82 comments | page 2
reactordev 10 hours ago|
This is how Trump plans to end elections, and why the government is so hell-bent on owning AI: so they can use it as a propaganda tool. People will see it before Nov. We are at a crossroads. On one path, we continue to evolve AI with reckless abandon as we have; on the other, we put constraints and morality in place while others won't. Which do you think? You can NEVER put the genie back in the bottle.

EU has their own groups using it for propaganda too.

mentalgear 11 hours ago||
Agree with many of the points. However, the one at the root of it all seems easily definable, if we only want to define it.

> we can’t agree on a shared ethical framework among ourselves

The Golden Rule: the principle of treating others as you would like to be treated yourself. It is a fundamental ethical guideline found in many religions and philosophies throughout history, so there is already a huge consensus around it across time and cultures.

I've never found anyone successfully argue against it.

PS: the sociopath argument is not valid, since it's just an outlier. Every rule has its exceptions that need to be kept in check. Though sometimes I think the state of the world attests to the fact that the majority of us haven't successfully kept the sociopathic outliers in check.

marginalia_nu 10 hours ago||
The core question of ethics as posed by the ancient Greeks is something like "what is the best way to lead your life".

"... to accomplish what?", is a damn reasonable follow-up, and ends (telos) is something the same Greeks discussed quite extensively.

Modern treatments have tried to skip over this discussion and derive moral arguments without an explicit end. The problem is that they still smuggle varying choices of ultimate ends into these arguments without clearly spelling them out, opting to hand-wave about preferences instead.

As such, this question is often glossed over in modern ethical discussion, and disagreement about moral ends is the crux of what leads to differing conclusions about what is ethical.

Is it to maximize your own happiness, as Aristotle would argue, or the prosperity of the state, or the salvation of the soul, or to maximize honor, or to minimize suffering, or to minimize injustice, or to elevate the soul, or to maximize shareholder value, or to make the world as beautiful as possible, or something else?

If you fundamentally disagree about what our goal should be, you're very unlikely to agree on the means to accomplish the goal.

0x3f 11 hours ago|||
> I never found anyone successfully argue against it.

I think what you mean is you've never found a rule you personally prefer more, based purely on vibes. Which is all moral knowledge can ever be.

It's easy to argue against the golden rule anyway, from many angles, depending on your first principles.

The simplest is: How I would like to be treated is not necessarily how they would like to be treated.

simondotau 10 hours ago|||
The better version of this principle is John Rawls' "Veil of Ignorance".

In this "original position", the "veil of ignorance" prevents everyone from knowing their ethnicity, social status, gender, and (crucially in Rawls's formulation) their own or anyone else's idea of how to lead a good life.

https://en.wikipedia.org/wiki/Original_position

kmijyiyxfbklao 4 hours ago||
This is as useful as doing physics with spherical cows.
greenchair 10 hours ago||||
But it is the same most of the time for most humans. Should I take this close parking spot or let the old lady behind me take it? Consider it in the spirit not the letter of the law.
demorro 10 hours ago||||
Aye. I've sometimes heard treating others as you want to be treated framed as the silver rule, the golden rule then being treating others how they want to be treated.

Both have problems.

gmerc 10 hours ago|||
Most of MAGA is "tread on me, daddy", so I think you've really got a point here.
pixl97 2 hours ago||
There are very large portions of societies that believe class systems are the way things should be, even if they aren't at the top, as long as they have someone below them.
thegrey_one 10 hours ago|||
>The Golden Rule: the principle of treating others as you would like to be treated yourself. It is a fundamental ethical guideline found in many religions and philosophies throughout history so there is already a huge consensus across time and cultures around it.

The rules we go by are based on our strengths and weaknesses. They can at most apply to ourselves, and to other forms of life that share certain things with us: feeling pain, needing to sleep, to eat, to breathe air, needing help. These generate what we feel as "fear", rooted in our biology. You cannot project these kinds of values onto an AI, or an AGI, as it will possess a wildly different set of strengths and weaknesses to us humans.

pixl97 2 hours ago||
You can barely apply these rules to humans, as the first thing we do is dehumanize anything outside some very tiny classification (which depends on the scope of our power: the more powerful we are, the smaller it gets).
jj_the_bunny 10 hours ago|||
The Golden Rule is a good starting point if you have a sense of self along with a sense of what you want or need. AI doesn't yet have these concepts, and even the concept of empathy requires them. We need to figure out how to instill a sense of self and others for AI to be able to have a morality.
hdgvhicv 10 hours ago|||
You’re assuming people have similar desires.

Even in human relations it’s dangerous. I for one don’t want to be treated the same way someone into BDSM wants to be treated. I don’t want to avoid cooking or turning the lights on (or off!) on a Friday night but others are quite happy with that.

If you assign that morality to a species that isn't the same as you, that's a problem. My guinea pig wants nothing more from life than hay, nuggets, some room to run around, and some shelter from scary shapes. If they were in charge of the world, life would be very different.

“Live and let live” might be a similar theme and not as problematic, but then how do you define “living”? You can keep someone alive for decades while torturing them.

How about allowing freedom? Well, that means I'm free to build a nuclear bomb, and to set it off wherever I want. We see today, especially, that that type of freedom isn't really liked.

shinycode 10 hours ago|||
Usually the quote comes in a positive light. We won't make a law around it; it's a principle, so it's meant to be short. Sure, you could argue about anything in any way you want, positive or negative. And if you want to be really precise, you make a law, but then it's so precise it won't cover edge cases. Don't you agree that the baseline for most humans is to be at peace, to find love, patience, joy, kindness, mildness? You can show any of those traits to any stranger and you'll likely have a positive impact, right? That's the context of the Golden Rule quote, I guess.
thegrey_one 10 hours ago|||
That's not the human norm though. I doubt the average human's way of existing amounts to literal torture for more than an obscure number of people. I think you're missing the forest for the trees with that BDSM example. You can always find isolated examples as a counter-argument for basically anything, but in reality they're vanishingly rare.

Due to the complexity of our reality a lot of things fall on a spectrum, but in the aggregate things are pretty clear.

pixl97 2 hours ago||
Nothing is clear with humanity. The very first thing we do as a species is dehumanize anyone we disagree with.
blamestross 10 hours ago|||
Let me offer you a "trade up" on that "Golden Rule".

In order of priority, if possible while maintaining the health and safety of yourself and your loved ones:

- Treat others as THEY wish to be treated

- Treat others as YOU would wish to be treated in their situation

- Treat others with as much kindness and compassion as you can safely afford

When we are safe, we can do BETTER than the Golden Rule. We also have to admit that safety is a requirement that changes expectations.

I have to give credit to Dennis E. Taylor's "Heaven's River" for the root of this idea.

3rodents 10 hours ago||
Sociopaths aren’t the only problem with that philosophy. I agree with it, but it assumes everyone wants good things. Many people want what others perceive to be bad, not because they are sociopaths but because they are different. A clear example of this is healthcare in the U.S.: a large number of people actively vote against their own best interests, and some of the biggest supporters of the U.S. healthcare system are those who suffer under it most. People (including us) are idiots at least some of the time.
kypro 3 hours ago||
This is a great article, one of the few I've ever read that summarises a handful of extremely hard problems in building well-aligned superintelligent systems.

> an AI system cannot be simultaneously safe, trusted, and generally intelligent. You get to pick only two. You can’t have all three.

> Think about what each combination means in practice.

> If you want it to be safe and trusted, it never lies, and you can verify it never lies – it can’t be very capable. You’ve built a reliable idiot.

> If you want it to be capable and safe, it’s powerful and genuinely never lies; you can’t verify that. You just have to hope.

It amazes me this even needs to be said, much less studied. This is one of the main reasons I think continued AI development is almost guaranteed to work out badly. It's basically guaranteed to be unaligned or completely beyond our control and comprehension.

> Betley and colleagues published a paper in Nature in January 2026, showing something nobody expected. They fine-tuned a model on a narrow, specific task – writing insecure code. Nothing violent, nothing deceptive in the training data. Just bad code.

This is my personal number one reason for being an AI doomer. Even if we work out how to reliably and perfectly align models, you still need some way to prevent some random dude thinking it would be a laugh to fine-tune an AI to be maximally evil. Then there's the successor alignment problem: even if you perfectly align all your superintelligent AI models, and you somehow prevent people from altering or fine-tuning them, you still need to work out how to ensure that any successor AIs built with those models are also perfectly aligned.

> The most dangerous AI isn’t one that breaks free from human control. It is the one that works perfectly, but for the wrong master.

Yep. This whole notion that you can align an AI to the values of everyone on the planet is ridiculous. While we might all agree we don't want AIs that kill us as a species, most nations disagree wildly on questions about how society should be organised.

Even on an individual level we disagree about things. For example, I've often argued that an aligned AI would be one which either didn't try to prevent human suicide or didn't care about preserving human life, because an AI which cared about both preventing suicide and preserving human life is at best a benevolent version of the AI "AM" from "I Have No Mouth, and I Must Scream": one that would try to keep us alive for as long as it's capable of (which could be a very long time if it's a superintelligence) and would refuse to allow us to die.

But most people, including OpenAI, disagree with me on this and believe AIs should care about preserving human life and should try to prevent us from killing ourselves. Thankfully the AIs we have today are neither aligned enough nor capable enough to get their wish yet.

> AI is following the same script. Build first, understand later. Ship it, then figure out if it’s safe.

Even if the above wasn't cause enough for concern, our biggest concern should be that no one seems to be concerned.

We're all doomed unfortunately. The world is about to become a very bleak place very quickly.

pixl97 2 hours ago|
Robert Miles' YouTube videos on AI safety cover these issues well, and they predate the LLM days.

Humans are only barely aligned ourselves. The moment any group or nation gets power, it tends to use it in some horrific manner against other humans. What do we think will happen the moment AI gets a leg up on humans?

rajpatelsingh 11 hours ago|
[flagged]
exe34 11 hours ago|
She's probably happier than you though.