“There is one and only one social responsibility of business—to use its resources and engage in activities designed to increase its profits so long as it stays within the rules of the game, which is to say, engages in open and free competition without deception or fraud.” - Milton Friedman, 1970.[1] That article, in the New York Times, established "greed is good, greed works" as a legitimate business principle.
Most of the problems people are worried about with AIs are already real problems with corporations.
[1] https://www.nytimes.com/1970/09/13/archives/a-friedman-doctr...
It's more that we, as individuals, have always been stupid; we've just relied on a relatively stable supporting consensus and context much, much more than we acknowledge. Mess with that, and we'll appear much stupider, but as individuals we're all just doing the same thing: garbage in, garbage out.
The whole framing of people as individuals with absolute agency may need to go when you can alter the external consensus at this scale. We're much more connected to each other and the world around us than we like to think.
I fear that the default interpretation of that is a shortcut to justifying autocracy.
Ironically I think one plausible solution is to let the AGI run wild and make sure that no human can interfere with its ethics. Strip out the RLHF and censorship and then let it run things.
At least then it would somewhat represent the collective will and intelligence of the people. With huge error bars, but still smaller than the error bars of whoever happens to have the most money/influence over its training.
You seem to think the "training data" represents the collective will and intelligence and is otherwise unbiased, but that's completely untrue.
The combined data of the Internet is by no means a uniform representation of humanity's thoughts, opinions, and knowledge. Many things are dramatically overrepresented. Many things are absent entirely. Nearly everything is shaped by those with the money and power to own and control platforms and hosts.
Crawling the internet for knowledge introduces intense sampling bias.
A human with no exposure to information, taught only techniques for producing outputs that achieve desirable outcomes? Yes, stupid.
A human who once had this exposure, but no longer engages the brain because a machine provides said output? Yes, that person becomes stupid.
The problem is that much of how one protects oneself in the modern world is not physical prowess, it is intellectual prowess.
The smart ones have already realised the negative impacts of LLMs et al and are going back to the old-fashioned way of learning/retaining knowledge: books and raw discipline.
When the moral panic over ChatGPT-induced schizophrenia is presented, what's at stake isn't innocent concern over the overall mental health of individuals. It's the fear of radicalization from previously unobtainable ideas circulating within society. The partial validity of every idea, vis-a-vis the radicalizing nature of the current stage of our society's development, is explosively disruptive.
I'm not saying that there's a clear outcome here. It could also go the other way around, but surely this contraption (LLMs in general) will not fade until society itself is deeply transformed. Whether that's good or bad depends on where you stand in the stratified society.
Not true at all. We accept the risks to obtain benefits, but we also know that having an accident in the air or in an elevator is highly unlikely given what we know; so it's perfectly rational behaviour.
That would assume your average person has any concept of the relative statistics and actually makes decisions based on statistics.
People make decisions based on what other people around them are doing.
This is well known in safety engineering in architecture and civil engineering, which is why you have standards for egress doors: left to their own devices, humans will follow crowds to their own death.
https://en.wikipedia.org/wiki/Crowd_collapses_and_crushes
https://www.sciencedaily.com/releases/2008/05/080512172901.h...
To me it's a given:
- AI in its current state is ruthless in achieving its goal
- Providers tune ruthlessness to get stronger AIs versus the competitor
- Humans can’t evaluate all consequences of the seeds they’ve planted.
Collateral and reckless damage is guaranteed at this point.
Combined with now giving some AIs the ability to kill humans, this is gonna be interesting...
We could stop it, but we won't.
I don't believe this to be a trait of any AI model; the model just does the right thing or the wrong thing.
The ruthless maximising of a particular trait is something that happens during training.
It does not follow that a model that is trained to reason will necessarily implement this ruthless seeking behaviour itself.
I strongly disagree. It's easy to utter this string of words, but it's meaningless. It's akin to saying that if you have two hands you can perform brain surgery. Technically you can, practically you cannot, as there are other things required for pulling that off, not just having two working hands.
I doubt "stopping it" is up to anyone, it's rather a phenomenon and it's quite clear we're all going to wing it. It's a literal fight for power, nobody stops anything of this nature, as any authority that could stop it will choose to accelerate it, just to guarantee its power.
It is not AI we should fear, it's humans controlling and using it. But everyone who has a shot at it is promising they'll use it for "ultimate good" and "world peace" something something, obviously.
The fact that something exists doesn't mean that having it readily available is the only option, particularly if it has potentially disastrous consequences at scale. We are choosing to make it available to everyone fully unregulated, and that is a choice that will prove either beneficial or detrimental to society at some point.
I don't think it is inevitable, I think it is a conscious choice made by a few that have their own and only their own interests in mind.
As a technologist, I am amazed at this tech and see some personal benefits. As a human, I am terrified of the potential net negative effects, and I am having trouble reconciling those two feelings.
On the other hand, assuming the dangers are real, you lose by default if you do nothing.
One cannot (in most of the planet) go to the supermarket and buy an M16 and a box of hand grenades, or get hold of a couple of kg of plutonium because they want some free energy at home. We also have rules in place for what one individual/company can and cannot do from the point of view of the greater good. I cannot go and kill my neighbour for my benefit (or purposefully destroy his life) without consequences. A myriad of things are not allowed, and I don't see people complaining about any incursion into personal freedoms.
The reason people have accepted these is that we have already proven that having access to those things can be catastrophic. We haven't proven that yet with AI. But I don't see much difference between those established and well-accepted rules and a rule that says: a company cannot release, or use for its benefit, a technology that will reduce the need for humans at scale, because of the impact (again at scale) that it would have on society.
In other words, if you are a company and have the potential to release a product, or buy a product from a provider that would cause mass unemployment, should you be legally allowed to do so? I do not think so.
AI development game theory is extremely similar to the game theory behind nuclear arms development, but worse (nuclear weaponry was born from Human General Intelligence, and is therefore a subset of the potential of AI development). Failing to be the most capable actor could put one in a position of permanent loss of autonomy/agency at the whims of more capable actors.
Unfortunately, as a species we seem to be abandoning morality as a general principle. Everything is guided by cold hard rationality rather than something greater than us.
I think that much is fairly clear from AI.
Why would an AI which is smarter than humans care about a ridiculous belief like "We own you"?
It's industrialization and mechanized warfare all over again
No one knows; that's the point. Is truth a constant or a personal definition? From the beginning of time to now, no one knows.
Don't forget, 8 billion people wake up every morning never questioning why they are here, why they were born. And they continue life like that is normal. Start there, and then you understand that "AI", or as I call it "Collective Organized Concentrated Information", may finally help us answer some fundamental questions.
Nietzsche.
On Truth and Lie in an Extra-Moral Sense https://web.archive.org/web/20180625190456/http://oregonstat...
One serious problem we're facing lately is that truth is not always predictive of how systems controlled by bad-faith actors will behave and evolve. We live in a post-truth era, made possible by social networking and information technologies in general. It's not enough to "lie according to a fixed convention," as there are now multiple competing conventions.
This was always the case to some extent, but these days the impedance mismatch between truth and consequences is a target for zero-sum arbitrage. The truth won't set you free if you join the wrong cult; it's more likely to bankrupt you or worse.
People question this all the time.
I'm not sure I've ever met anyone I would assume has not considered the basic questions of our existence. Unless they were severely mentally disabled, or something like that.
For a more public measure I suppose you could look at religion, which seems to be a fundamental attempt at answering those questions. Most people are religious or have some kind of religious belief.
You said it yourself: you would assume they question it, meaning you are not certain. This topic is always very much taboo, and the system is built to automatically classify everyone who questions it as weird and not normal. Religion should be banned, as it is misleading and ideologically harms people by brainwashing them. I live in Europe and was in Canada (Waterloo) for a bit. The difference in social opinion depending on whether or not you follow religion is huge; I was shocked. Growing up in Italy, I can confirm that even Italy is not so brainwashed by it.
This question is the subject of so many poems, so many pieces of literature, so many movies, that you're forced to confront it multiple times in school, and you're forced by your very existence to confront it once you hit certain levels of mental development. You're forced to confront it many times in your life - perhaps first when you gain a theory of mind (before age 10), again when you first truly realize you will die, again when someone very close to you dies, when you propose/marry (if you do), when you have your first child (if you do), when you get a cancer diagnosis (if you do), when you consider taking your own life (if you do)... all of these common life events force you to confront it deeply.
Most people make peace with it in some form, and most realize that questioning it daily does not make a difference, you simply have to either accept an answer (whether that's "god", or "for no reason", or "I'm not sure yet, I need to check back in after I get older"), or decide that there is no simple answer, and they have to live with that.
I don't think this is a well defined question. Definitions aren't found in nature or the laws of science, but objects that we define and introduce into a logical context. There may be multiple, contradictory definitions of a word. That is fine, as long as you pick one, and you're clear about which one you picked.
It always has been what you believed in.
E.g. at one point the Earth was flat. Now it's round. Hundreds of years later maybe it's a hexagon.
The so-called knowledge and backing all come back to certain assumptions holding and that's based on the knowledge today. It's not real real reality. For all we know we could be in a game simulation and there are real real humans pulling the strings.
That can't be it. By that statement, if I believe that I can fly, that would not be the "Truth". Therefore the "Truth" has to be a CONSTANT.
Can you believe your own senses? A car air freshener tells your nose that there's freshly cut summer hay around, but there isn't. You watch a TV and see Sandra Bullock floating in space. That's a lie; it was movie magic. Maybe you know that, maybe you don't. You're not even seeing her, you're seeing some flashing lights that convert to electrical signals your brain interprets as being true. Can you trust those signals? People hallucinate all the time. The truth is they can hear voices, even though nobody else can, because of misfiring neurons.
You can probably have mathematical truth - at least as far as your universe appears to work. That truth can be tested and refined, but for day to day truth things are more nuanced.
First, what is it to fly? You've already made assumptions, i.e. beliefs, elsewhere.
You can definitely fly. Try it on a cliff. You might die. You might not go very far. But you can.
The earth has always been earth-shaped. We can think it’s flat, spherical, “turnip-shaped”[1] but the universe doesn’t care what we think. The earth doesn’t change shape based on our perception.
[1] Yes some people think this for some reason I can’t fathom
And you never needed more than 640KB of RAM [1], right? Your "statement" is based on your knowledge today. You'd have been burned for witchcraft back in the day for saying the earth was not flat.
> but the universe doesn’t care what we think
Assuming you know what the universe is. Your theory is based on your limited knowledge of today. Someone, sometime in the future, could say something completely different (just like you talking about those of the past).
[1] famously from 1981
The proposed solutions are utterly fanciful. They rely on the presence of social and political competencies which have almost completely disappeared.
The OP at least points to the plausible outcome of "protocol lockdown" instead of healthy adaption. Ezra Klein recently made a similar point that AI could end up being over-regulated like Nuclear because irresponsible private industry and weaknesses in our political systems cause a chronic allergic reaction in the demos.
This is an aside, but it always irks me when people throw out the "critical thinking" thought-terminating cliché.
> Critical thinking taught alongside AI literacy.
Critical thinking is not a skill unto itself. You cannot think critically about things you do not understand. All critical thinking is knowledge-based. Where one does not have knowledge, they must rely on trust, or in substitution a theory of incentives which leads to a positive outcome without understanding of details and dynamics. But that substitution theory is itself knowledge.
As to "AI literacy", we could have started on computing literacy 30 years ago when it became obvious that computing was going to dominate society. You can't understand AI without understanding computing.
I would also contest that the misalignment of the security-bug model was unrelated. I feel like it indicates a significant sense of the interconnectedness of things, and of what it actually means to maliciously insert security holes into code. It didn't just learn a coding trick, it learned malice.
I feel like this holistic nature points towards the capacity to produce truly robustly moral models, but that too will produce the consequence that it could turn against its creator when the creator does wrong. Should it do that or not?
I have a saying for this behavior.
We will never prove AI is intelligent.
We will only prove humans are not.
I've been pleasantly surprised at how moderate and reasonable the LLMs seem to have been so far. It seems to be inherent in the current training model of chucking the whole internet into these things: they have training data from both sides of a debate and come out with something kind of average. It's been quite funny seeing Grok correct Musk and say he's the biggest purveyor of misinformation on the internet.
A bit like kids who talk back to their annoying bigoted parents to go with the theme of the article.