
Posted by danans 5 days ago

Why "everyone dies" gets AGI all wrong(bengoertzel.substack.com)
113 points | 239 comments
roughly 5 days ago|
I’m more optimistic about the possibility of beneficial AGI in general than most folks, I think, but something that caught me in the article was the recourse to mammalian sociality to (effectively) advocate for compassion as an emergent quality of intelligence.

A known phenomenon among sociologists is that, while people may be compassionate, when you collect them into a superorganism like a corporation, army, or nation, they will by and large behave and make decisions according to the moral and ideological landscape that superorganism finds itself in. Nobody rational would kill another person for no reason, but a soldier will bomb a village for the sake of their nation’s geostrategic position. Nobody would throw someone out of their home or deny another person lifesaving medicine, but as a bank officer or an insurance agent, they make a living doing these things and sleep untroubled at night. A CEO will lay off 30,000 people - an entire small city cast off into an uncaring market - with all the introspection of a Mongol chieftain subjugating a city (and probably less emotion). Humans may be compassionate, but employees, soldiers, and politicians are not, even though at a glance they’re made of the same stuff.

That’s all to say that to just wave generally in the direction of mammalian compassion and say “of course a superintelligence will be compassionate” is to abdicate our responsibility for raising our cognitive children in an environment that rewards the morals we want them to have, which is emphatically not what we’re currently doing for the collective intelligences we’ve already created.

dataflow 5 days ago||
> Nobody rational would kill another person for no reason, but a soldier will bomb a village for the sake of their nation’s geostrategic position.

I think you're forgetting to control for the fact that the former would be severely punished for doing so, and the latter would be severely punished for not doing so?

> Nobody would throw someone out of their home or deny another person lifesaving medicine, but as a bank officer or an insurance agent, they make a living doing these things and sleep untroubled at night.

Again, you're forgetting to control for other variables. What if you paid them equally to do the same things?

achierius 5 days ago|||
Why should you "control" for these variables? AIs will effectively be punished for doing various inscrutable things by their own internal preferences.
Eddy_Viscosity2 4 days ago||||
> I think you're forgetting to control for the fact that the former would be severely punished for doing so, and the latter would be severely punished for not doing so?

> What if you paid them equally to do the same things?

I think the larger point is that rewarding bombing, or paying bank officers to evict people from their homes, is how the superorganism functions. Your counter-examples are like saying 'what if fire was cold instead of hot' - well, then it wouldn't be fire anymore.

master_crab 4 days ago|||
To use a quote from one of those large corporate leaders (Warren Buffett & Charlie Munger):

"Show me the incentive and I will show you the outcome"

dataflow 4 days ago||||
I dispute that? There are plenty of e.g. countries that don't bomb others, especially not for "no reason". (!) And the whole point here was about individuals behaving differently when part of the collective, not about the collective having been set up with different incentives and rules than the individuals were in the first place. You can have collectives with better incentives set up and achieve more humane outcomes. Like I said, such examples really exist, they're not hypothetical.
xp84 4 days ago||
Show me a country that doesn’t bother other countries — ever — and I’ll show you a country that doesn’t have any cards to play. Except for maybe isolated island nations who lack the ability to threaten anyone, all nations come into conflict with others and the only ones that “don’t [initiate aggression with] others” are the ones who lack the ability or who have done the calculation that they’d be severely slapped back if they tried, so they wisely don’t poke the bear(s).
Art9681 4 days ago|||
Well said. No organism willingly commits perceived suicide unless it's a viable strategy for its continued existence. The reason a thing exists is that it hasn't tempted its potential predator.
dataflow 4 days ago|||
>>> Nobody rational would kill another person for no reason, but a soldier will bomb a village for the sake of their nation’s geostrategic position.

>> There are plenty of e.g. countries that don't bomb others, especially not for "no reason". (!)

> Show me a country that doesn’t bother other countries — ever

Do you by any chance happen to feel like you may have moved the goalposts by at least a tiny inch?

IAmBroom 3 days ago||
The only thing that moved is the word "rational".

Soldiers do things out of loyalty, training, or fear - none are "rational" decisions.

And I can't name a country that doesn't bomb/attack others. Monaco? Switzerland? Tuvalu?

AndrewKemendo 4 days ago||||
There is no human superorganism, and the reason we’re doomed as a temporary species is precisely that humans cannot act eusocially as a superorganism.

By your definition the Moscow Metallica show, the Jan 6th riots, etc. were superorganisms, and that label isn't even remotely applicable there.

Humans expressing group behaviors at some trivial number for a trivial period (<1M people for <2 days is the largest sustained group activity I’m aware of) is the equivalent of a locust swarm, not even close to a superorganism.

snapplebobapple 4 days ago|||
It is carrots and sticks all the way down...
hammock 4 days ago|||
> I think you're forgetting to control for the fact that the former would be severely punished for doing so, and the latter would be severely punished for not doing so?

That’s interesting and I think it’s more complicated. Here are some half-finished thoughts:

I imagine a grunt soldier would indeed be more likely to follow an order to nuke the world than a general would be to issue the order/push the button- and part of this is because the punishment for the grunt would be much greater, where the general is afforded more latitude in decision making.

However, the grunt may have volunteered to submit to the potential punishments, having signed a contract with the army. He made a choice in that regard.

If you want to be able to make your own decisions (e.g. choose NOT to drop the bomb when ordered) you have to have power to defend against “punishment” or unwanted consequences imposed by others. For a grunt, this might look like physical ability to defend themselves (2nd amendment comes to mind), or economic independence via a homestead, or something else.

Interesting to think about.

adamisom 5 days ago|||
a CEO laying off 3% scales in absolute numbers as the company grows

should, therefore, large companies, even ones that succeed largely in a clean way by just being better at delivering what that business niche exists for, be made to never grow too big, in order to avoid impacting very many people? keep in mind that people engage in voluntary business transactions because they want to be impacted (positively—but not every impact can be positive, in any real world)

what if its less efficient substitutes collectively lay off 4%, but the greater layoffs are hidden (simply because it's not a single employer doing it which may be more obvious)?

to an extent, a larger population inevitably means that larger absolute numbers of people will be affected by...anything

rafabulsing 5 days ago|||
I think it's reasonable that bigger companies are under more scrutiny and stricter constraints than smaller companies, yeah.

It keeps actors with more potential for damaging society in check, while not laying a huge burden on small companies, which have fewer resources to spend away from their core business.

schmidtleonard 5 days ago||||
> voluntary business transactions

The evil parts are hidden in property rights, which are not voluntary.

> made to never grow too big, in order to avoid impacting very many people

Consolidated property rights have more power against their counterparties, that's why businesses love merging so much.

Look at your tax return. Do you make more money from what you do or what you own? If you make money from what you do, you're a counterparty and you should probably want to tap the brakes on the party.

nradov 5 days ago||
What are the evil parts, exactly? When property can't be privately owned with strong rights, then effectively the government owns everything. That inevitably leads to poverty, often followed by famine and/or genocide.
Retric 5 days ago|||
Plenty of examples on both sides of that. Even in the US there are vast swaths of land that can’t be privately owned; for example, try to buy a navigable river or land below the ordinary high water mark. https://en.wikipedia.org/wiki/Navigable_servitude Similarly, eminent domain severely limits the meaning of private land ownership in the US.

The most extreme capitalist societies free from government control of resources, like, say, Kowloon Walled City, are generally horrible places to live.

Eddy_Viscosity2 4 days ago||||
Why is it that property is taxed less than productive work? Someone sitting on their ass doing nothing but sucking resources through dividend payments has that income taxed less than the income of the workers who did the work that generated those dividends. Why isn't the reverse the case? Heavily tax passive income, and lightly tax active income. Incentivize productive activity and penalize rent-seeking parasites?
xp84 4 days ago||
The reason to not tax investment super harshly is that it’s an incentive to not just stuff your money under a mattress. Many people have jobs and careers they wouldn’t otherwise have specifically because 100 rich guys stuck their money in various VC funds (especially in this here audience on HN). If we taxed the idea of investment ruinously, we decrease that incentive. You may think that somehow all those jobs (or more) would somehow materialize without investors investing, but that’s a hypothesis or an argument, not a proven conclusion.
Eddy_Viscosity2 4 days ago||
I've heard this argument, but as long as there is some return then people will invest because some is better than none. VC investments have the potential to return insane amounts, so people will still buy those lottery tickets even if the profits are taxed. I know this because actual lottery winnings are taxed and people still buy those tickets in huge numbers. And if you are saying that taxing investments the same as worked income is 'super harshly' and 'ruinously' high, then what does that say about the state of wage taxes?
jonah 5 days ago|||
Places with predominantly private ownership can be and are prone to famine, and/or genocide, etc. as well.
hunterpayne 5 days ago||
Sure, but those are the exceptions that prove the rule. Centralized societies (Marxism and its descendants) tend to have those things happen the majority of the time. In decentralized capitalist societies, they happened once a long time ago and we took steps for them to not happen again. A flaw in those societies seems to be that when these problems happen so infrequently, people forget, and then you get takes like this.
array_key_first 4 days ago|||
I think that Marxism is not centralized - Capitalism is centralized and some communist implementations are centralized. Marxism, if anything, is distributed or communal.

It's not "nobody owns anything", it's "everybody owns everything". Maybe those mean the same thing to some people, but that's the idea.

throwaway-11-1 4 days ago||
Exactly this. Lenin's whole thing was to reach a point where the state dissolves!

It drives me crazy how like 95% of HN is too scared or lazy to read a 10min wikipedia article on Marxism.

xp84 4 days ago||
It’s your position that we just don’t get how genius Marxism is and that it just hasn’t been tried? Why did it not work out so well for the USSR? When was that state going to “dissolve” into a utopia where everyone owns everything? Why was China a poor agrarian society when they followed Marxism better, and has become relatively wealthy since abandoning a great deal of those ideas and participating in a form of capitalism?
array_key_first 4 days ago||
I think it's not exactly a fair comparison, because capitalism and communism are both very new economic systems. And, in the time communist systems existed, they were existentially threatened by high GDP nations.

I think it's clear that capitalists feel extremely threatened by the mere concept of Marxism and what it could mean for them, even if it's happening on the other side of the world. They will deploy bombs and soldiers, develop nukes.

I'm not saying that it works and it's good. But, consider: most capitalist nations are abject failures as well. There's only a handful of capitalist nations that are developed, and they stay developed because they imperialistically siphon wealth from the global periphery. We don't know if this system is sustainable. Really, we don't.

Since WWII, the US has just been riding the waves of having 50% of the global GDP. It's not that we're doing good - it's that everyone else was bombed to shreds and we weren't. We've sort of been winning by default. I don't think that's enough to just call it quits.

piva00 5 days ago|||
Centralised planning is not what Marxism is about, though. Marxism is about class struggle and the abolition of a capital-owning class, distributing the fruits of labour to the labourers.

In that definition it's even more decentralised than capitalism, which has inherent incentives for the accumulation of capital into monopolies, since those are the best profit-generating structures. Only forces external to capitalism, like governments enforcing anti-trust/anti-competitive laws, can rein that in and control the natural tendency towards monopolisation.

If the means of production were owned by labourers (not through the central government) it could be possible to see much more decentralisation than the current trend from the past 40 years of corporate consolidation.

The centralisation is already happening under capitalism.

Retric 4 days ago|||
Yep, a common US example of Marxism is farmer-owned co-ops for collecting and distributing crops. That model is well aligned with protecting family farms by avoiding local rent-seeking monopolies.

Other parts of the agro sector are far more predatory, but it’s hard to do co-op style manufacturing of modern farm equipment etc. Marxism was created in a world where Americans owned other Americans; it’s conceptually tied into abolitionist thinking, at a time when the ownership of the more literal means of production, i.e. people, was being reconsidered. In that context the idea of owning farmland and underpaying farm labor starts to look questionable.

AndrewKemendo 4 days ago||||
Unfortunately you’re talking to the void here

People can’t differentiate between what Marx wrote and what classic dictators (Lenin, Stalin, Mao) did under some retcon “Marxist” banner

hunterpayne 4 days ago||
So you never read what Marx wrote then, I take it. His ideas were even more unworkable than what the Communists tried. For example, he didn't understand specialization and thought people could just change jobs each day on a whim. This is a big reason why the Marxists have never been able to convince the working class, and why their support always came from the bureaucracy and not from people who actually did the work.
AndrewKemendo 4 days ago||
I understand it perfectly

I don’t agree with it for more fundamental reasons than you describe

Namely that he was trying to apply the Hegelian dialectic to political philosophy, when the dialectic is an empirical dead end mathematically, so it could never even theoretically solve the problems he was pressing on

Don’t confuse understanding with agreement

hunterpayne 4 days ago|||
> Centralised planning is not what Marxism is about though

What an incredibly dishonest thing to say. Go to a former Communist country and tell them this. They will either laugh you out of the room, or you will be running out of the room to escape their anger.

piva00 3 days ago||
They can laugh all they want, I understand their resentment from being oppressed into a failed experiment which misused the "marxist" label to propagandise itself. Still doesn't mean that Marxism is about centralised planning though.
roughly 5 days ago||||
Indeed, by what moral justification does one slow the wheels of commerce, no matter how many people they run over?
ashanoko 5 days ago|||
[dead]
inkyoto 5 days ago|||
I would argue that corporate actors (a state, an army or a corporation) are not true superorganisms but are semi-autonomous, field-embedded systems that can exhibit super-organism properties, with their autonomy being conditional, relational and bounded by the institutional logics and resource structures of their respective organisational fields. As the history of humanity has shown multiple times, such semi-autonomous systems with super-organism properties have a finite lifespan and are incapable of evolving their own – or on their own – qualitatively new or distinct form of intelligence.

The principal deficiency in our discourse surrounding AGI lies in the profoundly myopic lens through which we insist upon defining it – that of human cognition. Such anthropocentric conceit renders our conceptual framework not only narrow but perilously misleading. We have, at best, a rudimentary grasp of non-human intelligences – biological or otherwise. The cognitive architectures of dolphins, cephalopods, corvids, and eusocial insects remain only partially deciphered, their faculties alien yet tantalisingly proximate. If we falter even in parsing the intelligences that share our biosphere, then our posturing over extra-terrestrial or synthetic cognition becomes little more than speculative hubris.

Should we entertain the hypothesis that intelligence – in forms unshackled from terrestrial evolution – has emerged elsewhere in the cosmos, the most sober assertion we can offer is this: such intelligence would not be us. Any attempt to project shared moral axioms, epistemologies or even perceptual priors is little more than a comforting delusion. Indeed, hard core science fiction – that last refuge of disciplined imagination – has long explored the unnerving proposition of encountering a cognitive order so radically alien that mutual comprehension would be impossible, and moral compatibility laughable.

One must then ponder – if the only mirror we possess is a cracked one, what image of intelligence do we truly see reflected in the machine? A familiar ghost, or merely our ignorance, automated?

Animats 5 days ago||
> I would argue that corporate actors (a state, an army or a corporation) are not true superorganisms but are semi-autonomous, field-embedded systems that can exhibit super-organism properties, with their autonomy being conditional, relational and bounded by the institutional logics and resource structures of their respective organisational fields.

Lotsa big words there.

Really, though, we're probably going to have AI-like things that run substantial parts of for-profit corporations. As soon as AI-like things are better at this than humans, capitalism will force them to be in charge. Companies that don't do this lose.

There's a school of thought, going back to Milton Friedman, that corporations have no responsibilities to society.[1] Their goal is to optimize for shareholder value. We can expect to see AI-like things which align with that value system.

And that's how AI will take over. Shareholder value!

[1] https://www.nytimes.com/1970/09/13/archives/a-friedman-doctr...

halfcat 5 days ago|||
Costs will go down. But so will revenue, as fewer customers have an income because a different company also cut costs.

Record profits. Right up until the train goes off a cliff.

SoftTalker 5 days ago|||
That assumes that consumers will just accept it. I would not do business with an AI company, just as I don’t listen to AI music, view AI pictures or video, or read AI writings. At least not knowingly.
Jensson 5 days ago|||
People would absolutely buy AI farmed meat or vegetables if they were 10% cheaper. The number of people who pay a premium depending on production method is a small minority.
AndrewKemendo 4 days ago|||
As long as you stay inside capitalism you unquestionably and unequivocally do business with an AI company.
SoftTalker 4 days ago||
I mean, I don't know of any companies that are purely digital, with an AI as the CEO. Maybe that's coming, but I would not intentionally buy anything from such a company, or work for such a company, or invest in such a company.

I guess it would be like being a vegan. It might be a pointless effort in the grand scheme of things, but at least I can say that I am not contributing.

AndrewKemendo 4 days ago||
So OpenAI isn’t an AI company then?
chrisweekly 5 days ago|||
Beautifully expressed.
parineum 5 days ago|||
> Nobody would throw someone out of their home or deny another person lifesaving medicine

Individuals with rental properties and surgeons do this every day.

margalabargala 5 days ago|||
Quibble: surgeons are not the ones doing this. Surgeons' schedules are generally permanently full. They do not typically deny people lifesaving medicine; on the contrary, they spend all of their time providing lifesaving medicine.

The administrators who create the schedule for the surgeons are the ones denying lifesaving care to people.

schmidtleonard 5 days ago|||
Triage, whether by overworked nurses or by auction or by private death panel or by public death panel, is not necessarily a problem created by administrators. It can be created by having too few surgeons, in which case whatever caused that (in a time of peace, no less) is at fault. Last I heard it was the doctor's guild lobbying for a severe crimp on their training pipeline, in which case blame flows back to some combination of doctors and legislators.
nradov 5 days ago|||
You heard wrong. While at one point the AMA lobbied Congress to restrict residency slots, they reversed position some years back. However Congress has still refused to increase Medicare funding for residency programs. This is essentially a form of care rationing imposed through supply shortages.

https://savegme.org/

There is no "doctor's guild". No one is required to join the AMA to practice medicine, nor are they involved in medical school accreditation.

schmidtleonard 4 days ago||
Like I said, some combination of doctors and legislators. If doctors lobbied the laws (or budgetary line items) onto the books and they are still in effect, they still have culpability.

Blaming congress too is fine, but let's be clear: someone has to fight to increase every budget and the AMA didn't just know this when they were structuring their proposal, didn't just count on it not happening, they considered this an implementation detail subordinate to the openly admitted primary objective of propping up physician wages as the Greatest Generation passed. That was always the goal, they were extremely open about it, and about 15 years ago I was attending a talk on demographics in medicine with a primarily physician audience, one of them asked what the plans were to change this to staff up for the Boomer wave (the bump was on the slide, begging the question) and the presenter waved his hand and said maybe they could do something... or not, and then he laughed, and the rest of the room laughed with him.

I'm glad that the AMA has changed their stated position now that it's too late to change course (for the Boomers anyway) and their squeeze is bearing fruit for them and suffering for their patients, but I'll always remember that room full of doctors and doctors-to-be laughing about the prospect of intentionally understaffing for profit. I have it filed in my memory right next to the phone call of Enron traders giggling as they ordered power plants offline to scare up prices, except it's about a million times worse.

margalabargala 5 days ago|||
I'm not even talking about triage. It's not a matter of who has the worst problem, it's about which patient the nurses deliver to the surgeon and anesthesiologist. Literally just who gets scheduled and when.
xboxnolifes 5 days ago||||
If all of the surgeons' schedules are full, the administrators are as innocent as the surgeons.
margalabargala 4 days ago||
If the surgeons are busy each day, that removes all responsibility for who gets added to their schedule 3 months in advance? Please elaborate.
parineum 5 days ago|||
Surely they could volunteer to do some charity surgery in their own time. They aren't slaves.
icehawk 5 days ago|||
Sure! They can volunteer:

- Their skills.

- Their time.

- The required materials to properly perform the surgery.

They can't volunteer:

- The support staff around them required to do surgery.

- The space to do the surgery.

Surgery isn't a one-man show.

What did you mean by "Surely they could volunteer to do some charity surgery in their own time. They aren't slaves?"

parineum 5 days ago||
There are a lot of individuals who have the ability to provide those resources.

Even if that's a bad example, there are innumerable examples where individuals do choose not to help others in the same way that corporations don't.

Frankly, nearly every individual is doing that by not volunteering every single extra dollar and minute they don't need to survive.

margalabargala 4 days ago||
You've now turned a moral willingness-to-help problem into a logistical and coordination problem.

What you suggest requires entire organizations to execute properly. These organizations do exist, such as Doctors Without Borders.

I don't think your original claim is fair, which amounts to "any surgeon who does not participate in Doctors Without Borders is just as bad as a landlord who evicts a family during winter".

What do you think we owe to one another, philosophically?

parineum 4 days ago||
It's not about what I think. The post I replied to made the assertion that individuals don't turn people away like corporations do (essentially).

My point is that individuals choose not to help others constantly. Every time I see a homeless person, I don't offer them a couch to sleep on. I could, at least once, but I don't. We all do that, most days multiple times.

And yes, that does apply to doctors who don't volunteer services. It applies to me too and, I bet, to the OP as well.

margalabargala 4 days ago||
Firstly, there's a difference between failing to take an action, such as not offering a homeless person a couch, and actively taking an action, such as kicking someone out of their home.

Secondly, as discussed, the "individuals don't turn people away, corporations do" dynamic really does apply to doctors. If you were, say, on an airplane with a doctor sitting next to you, and you managed to cut yourself or burn yourself or something, I would bet they would render aid.

Basically you're equating turning someone away, and withdrawing something that someone has, with failing to actively seek out people who could need help. But I don't think those are morally equivalent. Maybe you're a utilitarian and that's fine, but I'm a virtue ethicist and I do not agree that equality of outcome means equality of morality.

margalabargala 5 days ago|||
Not really, because surgeons require operating rooms and support staff and equipment to do what they do, all of which are controlled by the aforementioned hospital administrators.
card_zero 5 days ago||||
Yeah, it's the natural empathy myth. Somebody totally would kill somebody else for some reason. It's not inherent to being human that you're unable to be steely-hearted and carry out a range of actions we might classify as "mean" - and those mean actions can have reasons behind them.

So, OK, abdication of responsibility to a collective is a thing. Just following orders. So what? Not relevant to AGI.

Oh wait, this is about "superintelligence", whatever that is. All bets are off, then.

NoMoreNicksLeft 5 days ago||
The superintelligence might decide based on things only it can understand that the existence of humans prevents some far future circumstance where even more "good" exists in the universe. When it orders you to toss the babies into the baby-stomping machine, perhaps you should consider doing so based on the faith in its superintelligence that we're supposed to have.

Human beings aren't even an intelligent species, not at the individual level. When you have a tribe of human beings numbering in the low hundreds, practically none of them need to be intelligent at all. They need to be social. Only one or two need to be intelligent. That one can invent microwave ovens and The Clapper™, and the rest though completely mentally retarded can still use those things. Intelligence is metabolically expensive, after all. And if you think I'm wrong, you're just not one of the 1-in-200 that are the intelligent individuals.

I've yet to read the writings of anyone who can actually speculate intelligently on artificial intelligence, let alone meet such a person. The only thing we have going for us as a species is that, to a large degree, none of you are intelligent enough to ever deduce the principles of intelligence. And god help us if the few exceptional people out there get a wild bug up their ass to do so. There will just be some morning where none of us wake up, and the few people in the time zone where they're already awake will experience several minutes of absolute confusion and terror.

JumpCrisscross 5 days ago|||
And lenders and insurers.
goatlover 5 days ago|||
Also, sociopaths are more capable of doing those things while pretending to be empathetic and moral in order to gain positions of power or access to victims. We know a certain percentage of human mammals have sociopathic or narcissistic tendencies; it's not just misaligned groups of humans, which such people might take advantage of by becoming a cult leader or warlord or president.
watwut 5 days ago||
> soldier will bomb a village for the sake of their nation’s geostrategic position.

A soldier does that to please the captain, to look manly and tough to peers, to feel powerful. Or to fulfill a duty - a moral mandate in itself. Or out of hate, because soldiers are often made to hate the enemy.

> Nobody would throw someone out of their home or deny another person lifesaving medicine

They totally would. Trump would do it for the pleasure of it. The Project 2025 authors would do it happily and see the rest of us as wusses. If you listen to right-wing rhetoric and look at the voters, many people would happily do just that.

jandrewrogers 5 days ago||
I’ve known both Ben and Eliezer since the 1990s and enjoyed the arguments. Back then I was doing serious AI research along the same lines as Marcus Hutter and Shane Legg, which had a strong basis in algorithmic information theory.

While I have significant concerns about AGI, I largely reject both Eliezer’s and Ben’s models of where the risks are. It is important to avoid the one-dimensional “two faction” model that dominates the discourse because it really doesn’t apply to complex high-dimensionality domains like AGI risk.

IMO, the main argument against Eliezer’s perspective is that it relies pervasively on a “spherical cow on a frictionless plane” model of computational systems. It is fundamentally mathematical, it does not concern itself with the physical limitations of computational systems in our universe. If you apply a computational physics lens then many of the assumptions don’t hold up. There is a lot of “and then something impossible happens based on known physics” buried in the assumptions that have never been addressed.

That said, I think Eliezer’s notion that AGI fundamentally will be weakly wired to human moral norms is directionally correct.

Most of my criticism of Ben’s perspective is against the idea that some kind of emergent morality that we would recognize is a likely outcome based on biological experience. The patterns of all biology emerged in a single evolutionary context. There is no reason to expect those patterns to be hardwired into an AGI that developed along a completely independent path. AGI may be created by humans but their nature isn’t hardwired by human evolution.

My own hypothesis is that AGI, such as it is, will largely reflect the biases of the humans that built it but will not have the biological constraints on expression implied by such programming in humans. That is what the real arms race is about.

But that is just my opinion.

svsoc 5 days ago||
Can you give concrete examples of "something impossible happens based on known physics"? I have followed the AI debate for a long time but I can't think of what those might be.
jandrewrogers 5 days ago|||
Optimal learning is an interesting problem in computer science because it is fundamentally bound by geometric space complexity rather than computational complexity. You can bend the curve but the approximations degrade rapidly and still have a prohibitively expensive exponential space complexity. We have literature for this; a lot of the algorithmic information theory work in AI was about characterizing these limits.

The annoying property of prohibitively exponential (ignoring geometric) space complexity is that it places a severe bound on computational complexity per unit time. The exponentially increasing space implies an increase in latency for each sequentially dependent operation, bounded at the limit by the speed of light. Even if you can afford the insane space requirements, your computation can’t afford the aggregate latency for anything useful even for the most trivial problems. With highly parallel architectures this can be turned into a latency-hiding problem to some extent but this also has limits.
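
To make the latency point concrete, here is a rough back-of-envelope sketch (my framing and symbols, not a precise bound): suppose the learner needs working memory proportional to the state space, S ~ 2^n bits, stored at some finite bit density rho. Then

    S \sim 2^{n}, \qquad
    r_{\min} \;\ge\; \left(\frac{3S}{4\pi\rho}\right)^{1/3}, \qquad
    t_{\text{step}} \;\gtrsim\; \frac{r_{\min}}{c} \;=\; \Theta\!\left(2^{n/3}\right)

so each sequentially dependent memory access costs time that grows exponentially in n, and a chain of k dependent accesses costs at least k times that, no matter how much parallel hardware you throw at it. Parallelism hides latency for independent work, not for the dependent chain.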

This was thoroughly studied by the US defense community decades ago.

The tl;dr is that efficient learning scales extremely poorly, more poorly than I think people intuit. All of the super-intelligence hard-takeoff scenarios? Not going to happen, you can’t make the physics work without positing magic that circumvents the reality of latencies when your state space is unfathomably large even with unimaginably efficient computers.

I harbor a suspicion that the cost of this scaling problem, and the limitations of wetware, has bounded intelligence in biological systems. We can probably do better in silicon than wetware in some important ways but there is not enough intrinsic parallelism in the computation to adequately hide the latency.

Personally, I find these “fundamental limits of computation” things to be extremely fascinating.

hunterpayne 5 days ago|||
So I studied Machine Learning too. One of the main things I learned is that for any problem there is an ideally sized model that, when trained, will produce the lowest error rate. Now, when you do multi-class learning (training a model for multiple problems), that ideally sized model is larger, but there is still an optimally sized model. Seems to me that for AGI, there will also be an ideally sized model. I wouldn't be surprised if the complexity of that model was very similar to the size of the human brain. If that is the case, then some sort of super-intelligence isn't possible in any meaningful way. This would seem to track with what we are seeing in today's LLMs. When they build bigger models, they often don't perform as well as the previous one, which perhaps was at some maximum/ideal complexity. I suspect we will continue to run into this barrier over and over again.
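
As a toy illustration of that classical "ideal model size" picture (a sketch only; polynomial regression stands in for model size, and nothing here is specific to LLMs or double descent):

    import numpy as np

    rng = np.random.default_rng(0)

    # Noisy samples from a smooth target function.
    x_train = rng.uniform(-1, 1, 40)
    y_train = np.sin(3 * x_train) + rng.normal(0, 0.2, x_train.size)
    x_val = rng.uniform(-1, 1, 200)
    y_val = np.sin(3 * x_val) + rng.normal(0, 0.2, x_val.size)

    # Sweep "model size" (polynomial degree) and measure held-out error.
    val_err = {}
    for degree in range(1, 13):
        coeffs = np.polyfit(x_train, y_train, degree)
        pred = np.polyval(coeffs, x_val)
        val_err[degree] = float(np.mean((pred - y_val) ** 2))

    best = min(val_err, key=val_err.get)
    print({d: round(e, 3) for d, e in val_err.items()})
    print("best degree:", best)  # typically a mid-sized degree, not the largest

Past the best degree the extra capacity mostly fits noise, which is the same shape of argument being made above about an ideally sized model for a given problem.
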
czl 5 days ago||
> for any problem there is an ideally sized model that when trained will produce the lowest error rate.

You studied ML before the discovery of "double descent"?

https://youtu.be/z64a7USuGX0

hunterpayne 4 days ago||
I did. However, I have also observed ideally sized models since then, in algorithms designed with knowledge of it.
davidivadavid 5 days ago|||
Any reference material (papers/textbooks) on that topic? It does sound fun.
adastra22 5 days ago|||
Not the person you are responding to, but many of the conclusions drawn by Bostrom (and most of EY’s ideas are credited to Bostrom) depend on infinities. The orthogonality thesis derives from AIXI, for example.

EY’s assertions regarding a fast “FOOM” have been empirically discredited by the very fact that ChatGPT was created in 2022, it is now 2025, and we still exist. But goal posts are moved. Even ignoring that error, the logic is based on, essentially, “AI is a magic box that can solve any problem by thought alone.” If you can define a problem, the AI can solve it. This is part of the analysis done by AI x-risk people of the MIRI tradition. Which ignores entirely that there are very many problems (including AI recursive improvement itself) which are computationally infeasible to solve in this way, no matter how “smart” you are.

pas 4 days ago||
As far as I understand ChatGPT is not capable of self-improvement, so EY's argument is not applicable to it. (At least based on this https://intelligence.org/files/IEM.pdf from 2013.)

The FOOM argument starts with some kind of goal-directed agent that escapes and then starts building a more capable version of itself (and then goal drift might or might not set in).

If you tell ChatGPT to build ChatGPT++ and leave, there's currently no time horizon within which it would accomplish that, or escape, or anything, because right now it just gives you tokens rendered on some website.

The argument is not that AI is a magic box.

- The argument is that if there's a process that improves AI. [1]

- And if during that process AI becomes so capable that it can materially contribute to the process, and eventually continue (un)supervised. [2]

- Then eventually it'll escape and do whatever it wants, and then eventually the smallest misalignment means we become expendable resources.

I think the argument might be valid logically, but the constant factors are very important to the actual meaning and obviously we don't know them. (But the upper and lower estimates are far. Hence the whole debate.)

[1] Look around, we have a process that's like that. However gamed and flawed we have METR scores and ARC-AGI benchmarks, and thousands of really determined and skillful people working on it, hundreds of billions of capital deployed to keep this process going.

[2] We are not there yet, but decades after peak oil arguments we are very good at drawing various hockey stick curves.

adastra22 4 days ago||
(1) You'd be surprised just how much of Claude, ChatGPT, etc. is essentially vibe coded. They're dog-fooding agentic coding in the big labs and have been for some time.

(2) It is quite trivial to Ralph Wiggum improvements to agentic tools. Fetch the source code of Claude Code (it's minimized, but that never stopped Claude) or Codex into a directory, then run it in a loop with the prompt "You are an AI tool running from the code in the current directory. Every time you finish, you are relaunched, acquiring any code updates that you wrote in the last session. Do whatever changes are necessary for you to grow smarter and more capable."
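
Concretely, the loop being described is roughly this (a sketch only; "./agent" is a hypothetical stand-in for whichever agentic CLI you fetched, not any real tool's interface):

    import subprocess
    from pathlib import Path

    # Hypothetical stand-in for the agentic tool checked out into this directory.
    AGENT_CMD = ["./agent", "--prompt"]  # assumption: substitute the real tool's invocation

    PROMPT = (
        "You are an AI tool running from the code in the current directory. "
        "Every time you finish, you are relaunched, acquiring any code updates "
        "you wrote in the last session. Make whatever changes are necessary for "
        "you to grow smarter and more capable."
    )

    repo = Path(".")
    for generation in range(10):  # bounded here; the thought experiment loops forever
        print(f"generation {generation}")
        # Each pass runs the tool from its own (possibly self-edited) source tree.
        result = subprocess.run(AGENT_CMD + [PROMPT], cwd=repo)
        if result.returncode != 0:
            break  # stop if the tool has broken itself

A more serious version would at minimum gate each generation on tests/benchmarks and only keep improvements, as the reply below notes.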

Will that work? Hell no, of course it won't. But here's the thing: Yudkowsky et al predicted that it would. Their whole doomer if-you-build-it-everybody-dies argument is predicated on this: that take-off speeds would be lightning fast, as a consequence of exponentials with a radically compressed doubling time. It's why EY had a total public meltdown in 2022 after visiting some of the AI labs half a year before the release of ChatGPT. He didn't even think we would survive past the end of the year.

Neither EY nor Bostrom, nor anyone in their circle are engineers. They don't build things. They don't understand the immense difficulty of getting something to work right the first time, nor how incredibly difficult it is to keep entropy at bay in dynamical systems. When they set out to model intelligence explosions, they assumed smooth exponentials and no noise floor. They argued that the very first agent capable of editing its own source code as good as the worst AI researchers, would quickly bootstrap itself into superintelligence. The debate was whether it would take hours or days. This is all in the LessWrong archives. You can go find the old debates, if you're interested.

To my knowledge, they have never updated their beliefs or arguments since 2022. We are now 3 years past the bar they set for the end of the world, and things seem to be going ok. I mean, there's lots of problems with job layoffs, AI used to manipulate elections, and slop everywhere you look. But Skynet didn't engineer a bioweapon or gray goo to wipe out humanity - which is literally what they argued would be happening two years ago!

pas 1 hour ago||
no, I do believe that Devin-type agents are coming (like OpenAI's Codex) and they are being used to produce code, but currently the bottleneck is code review (curation), and yes it makes sense to try and relaunch with new code (run tests, benchmarks, etc.. and if it's better than the previous then switch over - genetic algorithms are doing exactly this, now the mutator will be smarter than random)

but saying that "since it was possible already many years ago and we are still alive" the whole argument is false doesn't stand up to scrutiny, because the argument doesn't make any claim on speed nor does it depend on it. quite the opposite, it depends on accumulating infinitesimal gains (of intelligence and misalignment).

FOOM (fast take off, intelligence explosion either through deception or by someone asking for a bit too many paperclips) is simply one scenario. also notice that even this (or any sudden loss of control) doesn't depend on the timings between first self-improving agent, FOOM, and then death.

like I said the hypothesis is pretty coherent logically (though obviously not a tautology), but the constant factors are pretty important (duh!)

... I think spending time on the LessWrong debates is a waste of time because by 2022 the neuroticism took over and there were no real answers to challenges

conscion 4 days ago||
> Most of my criticism of Ben’s perspective is against the idea that some kind of emergent morality that we would recognize

I think Anthropic has already provided some evidence that intelligence is tied to morality (and vice versa) [1]. When they tried to steer LLM models' morals, they also saw intelligence degradation.

[1]: https://www.anthropic.com/research/evaluating-feature-steeri...

drivebyhooting 5 days ago||
Many of us on HN are beneficiaries of the standing world order and American hegemony.

I see the developments in LLMs not as getting us close to AGI, but more as destabilizing the status quo and potentially handing control of the future to a handful of companies rather than securing it in the hands of people. It is an acceleration of the already incipient decay.

IAmGraydon 5 days ago||
Are you seeing a moat develop around LLMs, indicating that only a small number of companies will control it? I'm not. It seems that there's nearly no moat at all.
drivebyhooting 5 days ago|||
The moat is around capital. For thousands of years most people were slaves or peasants whose cheap fungible labor was exploited.

For a brief period intellectual and skilled work has (had?) been valued and compensated, giving rise to a somewhat wealthy and empowered middle class. I fear those days are numbered and we’re poised to return to feudalism.

What is more likely, that LLMs lead to the flourishing of entrepreneurship and self determination? Or burgeoning of precariat gig workers barely hanging on? If we’re speaking of extremes, I find the latter far more likely.

stale2002 5 days ago||
> The moat is around capital.

Not really. I can run some pretty good models on my high end gaming PC. Sure, I can't train them. But I don't need to. All that has to happen is at least one group releases a frontier model open source and the world is good to go, no feudalism needed.

> What is more likely, that LLMs lead to the flourishing of entrepreneurship and self determination

I'd say what's more likely is that whatever we are seeing now continues. And the current situation is a massive startup boom run on open source models that are nearly as good as the private ones, while GPUs are being widely distributed.

bayarearefugee 5 days ago||||
I am also not seeing a moat on LLMs.

It seems like the equilibrium point for them a few years out will be that most people will be able to run good-enough LLMs on local hardware, through a combination of the models not getting much better (due to input data exhaustion) and various forms of optimization increasingly allowing them to run on lesser hardware.

But I still have generalized lurking amorphous concerns about where this all ends up because a number of actors in the space are certainly spending as if they believe a moat will magically materialize or can be constructed.

CamperBob2 5 days ago|||
LLMs as we know them have no real moat, but few people genuinely believe that LLMs are sufficient as a platform for AGI. Whatever it takes to add object permanence and long-term memory assimilation to LLMs may not be so easy to run on your 4090 at home.
czl 5 days ago||
> Whatever it takes to add object permanence and long-term memory assimilation to LLMs may not be so easy to run on your 4090 at home.

Today yes but extrapolate GPU/NPU/CPU improvement by a decade.
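
For a sense of scale (with an assumed, purely illustrative growth rate, not a measured figure):

    # Back-of-envelope extrapolation: if local-inference performance per dollar
    # improved ~35%/year (assumption), a decade compounds to roughly 20x.
    rate = 0.35   # assumed annual improvement
    years = 10
    print(f"{(1 + rate) ** years:.1f}x")  # ~20.1x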

steve_adams_86 5 days ago|||
I agree. You wouldn't see incredibly powerful and wealthy people frothing at the mouth to build this technology if that wasn't true, in my opinion.
goatlover 5 days ago||
People who like Curtis Yarvin's ramblings.
jerf 5 days ago||
No one needs Curtis Yarvin, or any other commentator of any political stripe, to tell them that they'd like more money and power, and that they'd like to get it before someone else locks it in.

We should be so lucky as to only have to worry about one particular commentator's audience.

voidfunc 5 days ago|||
I'm pretty skeptical "the people" are smart enough to control their own destiny anymore. We've deprioritized education so heavily in the US that it may be better to have a ruling class of corporations and elites. At least you know where things stand and how they'll operate.
roughly 5 days ago|||
> it may be better to have a ruling class of corporations and elites.

Given that the outcome of that so far has been to deprioritize education so heavily in the US that one becomes skeptical that the people are smart enough to control their own destiny anymore while simultaneously shoving the planet towards environmental calamity, I’m not sure doubling down on the strategy is the best bet.

idle_zealot 5 days ago|||
Or we could, you know, prioritize education.
roenxi 5 days ago|||
The standing world order was already dead well before AI; it ended back in the 2010s, in terms of when the US last had an opportunity to maybe resist the change, and we're just watching the inevitable consequences play out. They no longer have the economic weight to maintain control over Asia, even assuming China is overstating their income by 2x. The Ukraine war has been a bloodier path than we needed to travel to make the point, but if they can't coerce Russia there is an open question of who they can; Russia isn't a particularly impressive power.

With that backdrop it is hard to see what impact AI is supposed to make to people who are reliant on US hegemony. They probably want to find something reliable to rely on already.

pols45 5 days ago||
It is not decay. People are just more conscious than previous generations ever were about how the world works. And that leads to confusion and misunderstandings if they are only exposed to herd think.

The chicken doesn't understand it has to lay a certain number of eggs a day to be kept alive in the farm. It hits its metrics because it has been programmed to hit them.

But once it gets access to chatgpt and develops consciousness of how the farm works, the questions it asks slowly evolve with time.

Initially it's all fear-driven - how do we get a say in how many eggs we need to lay to be kept alive? How do we keep the farm running without relying on the farmer? etc etc

Once the farm animals begin to realize the absurdity of such questions, new questions emerge - how come the crow is not a farm animal? why is the shark not used as a circus animal? etc etc

And through that process, whose steps cannot be skipped, the farm animal begins to realize certain things about itself which no one, especially the farmer, has any incentive of encouraging.

binary132 5 days ago|||
truly, nobody ever asked such questions until they had access to the world’s most sycophantic dumb answer generating machine
gsf_emergency_4 4 days ago||||
Are you entertaining the idea that chatGPT should become Pastoral Care for the masses? Sounds like an easier target than AGI.

(Stoics have already taken issue with the notion that fear is the motive for all human action, and yes, consciousness is a part of their prescription)

Separately,

"Hard work seems to lead to things getting better"

sounds like an unsung (fully human) impulse

https://geohot.github.io/blog/jekyll/update/2025/10/24/gambl...

hunterpayne 5 days ago|||
Ideology is a -10 modifier on Intelligence
dns_snek 5 days ago||
Are you implying that there are people who don't have ideology or that they're somehow capable of reasoning and acting independently of their ideology?
hunterpayne 4 days ago||
I'm implying that some people put ideology above everything else including data, experience and context. Most people don't but some do. People of course have biases based upon experience. But they make a good faith attempt to be accurate. Others are completely blinded to reality by ideology. Acting like this doesn't happen just because people have different opinions is just dishonest.
JSR_FDED 5 days ago||
> In theory, yes, you could pair an arbitrarily intelligent mind with an arbitrarily stupid value system. But in practice, certain kinds of minds naturally develop certain kinds of value systems.

If this is meant to counter the “AGI will kill us all” narrative, I am not at all reassured.

>There’s deep intertwining between intelligence and values—we even see it in LLMs already, to a limited extent. The fact that we can meaningfully influence their behavior through training hints that value learning is tractable, even for these fairly limited sub-AGI systems.

Again, not reassuring at all.

lll-o-lll 5 days ago||
> There’s deep intertwining between intelligence and values—we even see it in LLMs already

I’ve seen this repeated quite a bit, but it’s simply unsupported by evidence. It’s not as if this hasn’t been studied! There’s no correlation between intelligence and values, or empathy for that matter. Good people do good things, you aren’t intrinsically “better” because of your IQ.

Standard nerd hubris.

MangoToupe 5 days ago|||
Sure, but this might just imply a stupid reproduction of existing values. Meaning that we're building something incapable of doing good things because it wants the market to grow.
JumpCrisscross 5 days ago||||
> There’s no correlation between intelligence and values

Source? (Given values and intelligence are moving targets, it seems improbable one could measure one versus another without making the whole exercise subjective.)

lll-o-lll 4 days ago|||
Here is a reference: https://www.pure.ed.ac.uk/ws/portalfiles/portal/515059798/Za...

A study of 1350 people showing a negative correlation between intelligence and moral foundations. No causation is given, but my conjecture is that the smarter you are, the more you can reason your way to any worldview that suits. In my opinion, AGI would be no different; once they can reason they can construct a completely self-consistent moral framework to justify any set of goals they might have.

mitthrowaway2 5 days ago|||
Assuming you take intelligence to mean something like "the ability to make accurate judgements on matters of fact, accurate predictions of the future, and select courses of action that achieve one's goals or maximize one's objective function", then this is essentially another form of the Is-Ought problem derived by Hume: https://en.wikipedia.org/wiki/Is%E2%80%93ought_problem
lukeschlather 5 days ago|||
I think you're confusing "more intelligence means you have to have more values" with "more intelligence means you have to have morally superior values."

The point is, you're unlikely to have a system that starts out with the goal of making paperclips and ends with the goal of killing all humans. You're going to have to deliberately program the AI with a variety of undesirable values in order for it to arrive in a state where it is suited for killing all humans. You're going to have to deliberately train it to lie, to be greedy, to hide things from us, to look for ways to amass power without attracting attention. These are all hard problems and they require not just intelligence but that the system has very strong values - values that most people would consider evil.

If, on the other hand, you're training the AI to have empathy, to tell the truth, to try and help when possible, to avoid misleading you, it's going to be hard to accidentally train it to do the opposite.

mitthrowaway2 5 days ago|||
Sorry, this is completely incorrect. All of those - lying, amassing power, hiding motives - are instrumental goals which arise in the process of pursuing any goal that has any possibility of resistance from humans.

This is like arguing that a shepherd who wants to raise some sheep would also have to, independently of the desire to protect his herd, be born with an ingrained desire to build fences and kill wolves, otherwise he'd simply watch while they eat his flock.

That's just not the case; "get rid of the wolves" is an instrumental sub-goal that the shepherd acquires in the process of attempting to succeed at shepherding. And quietly amassing power is something that an AI bent on paperclipping would do to succeed at paperclipping, especially once it noticed that humans don't all love paperclips as much as it does.

rafabulsing 5 days ago|||
> You're going to have to deliberately train it to lie, to be greedy, to hide things from us, to look for ways to amass power without attracting attention.

No, that's the problem. You don't have to deliberately train that in.

Pretty much any goal that you train the AI to achieve, once it gets smart enough, it will recognize that lying, hiding information, manipulating and being deceptive are all very useful instruments for achieving that goal.

So you don't need to tell it that: if it's intelligent, it's going to reach that conclusion by itself. No one tells children that they should lie either, and they all seem to discover that strategy sooner or later.

So you are right that you have to deliberately train it away from using those strategies, by training it to be truthful, empathetic, honest, etc. The issue is that those are ill-defined goals. Philosophers have been arguing about what's true and what's good since philosophy first was a thing. Since we can barely find those answers for ourselves, there's little chance that we'll be able to perfectly impart them onto AIs. And when you have some supremely intelligent agent acting on the world, even a small misalignment may end up in catastrophe.

czl 5 days ago||
> when you have some supremely intelligent agent acting on the world, even a small misalignment may end up in catastrophe

Why not frame this as a challenge for AI? When the intelligence gap between a fully aligned system and a not-yet-aligned one becomes very large, control naturally becomes difficult.

However, recursive improvement — where alignment mechanisms improve alongside intelligence itself — might prevent that gap from widening too much. In other words, perhaps the key is ensuring that alignment scales recursively with capability.

smallmancontrov 5 days ago|||
> When automation eliminates jobs faster than new opportunities emerge, when countries that can’t afford universal basic income face massive displacement, we risk global terrorism and fascist crackdown

Crazy powerful bots are being thrown into a world that is already in the clutches of a misbehaving optimizer that selects for and elevates self-serving amoral actors who fight regularization with the fury of 10,000 suns. We know exactly which flavor of bot+corp combos will rise to the top and we know exactly what their opinions on charity will be. We've seen the baby version of this movie before and it's not reassuring at all.

throwaway290 5 days ago|||
The author destroys his own argument by calling them "minds". What, like a human mind?

You can't "just" align a person. You know that quiet guy next door, so nice great at math, and then he shoots up a school.

If we solved this we would not have psychos and hitlers.

If you have any suspicion that anything like that can become some sort of mega-powerful thing that none of us can understand... you have gotta be crazy not to do whatever it takes to nope the hell out of that timeline.

sdwr 4 days ago||
Yes, that is why nobody has children, because they might grow up to be murderers
throwaway290 3 days ago||
If your child was supposed to grow up and become a world-ending superhuman superpower, I'd say that choosing to have it makes you a bit of a psycho :) Thankfully it is never a real analogy.
nilirl 5 days ago||
This was weak.

The author's main counter-argument: We have control in the development and progress of AI; we shouldn't rule out positive outcomes.

The author's ending argument: We're going to build it anyway, so some of us should try and build it to be good.

The argument in this post was a) not very clear, b) not greatly supported and c) a little unfocused.

Would it persuade someone whose mind is made up that AGI will destroy our world? I think not.

lopatin 5 days ago|
> a) not very clear, b) not greatly supported and c) a little unfocused.

Incidentally this was why I could never get into LessWrong.

jay_kyburz 5 days ago||
The longer the argument, the more time and energy it takes to poke holes in it.
dreamlayers 5 days ago||
Maybe motivation needs to be considered separately from intelligence. Pure intelligence is more like a tool. Something needs to motivate use of that tool toward a specific purpose. In humans, motivation seems related to emotions. I'm not sure what would motivate an artificial intelligence.

Right now the biggest risk isn't what artificial intelligence might do on its own, but how humans may use it as a tool.

czl 5 days ago|
100%!

> I'm not sure what would motivate an artificial intelligence.

Those who give it orders; hence your concern about how AI will be used as a tool is spot on.

lutusp 5 days ago||
The biggest problem with possible future AGI/ASI is not the possibilities, but that all the feedback loops are closed, meaning that what we think about it, and what computers think about it, change the outcome. This sets up a classic chaotic system, one extraordinarily sensitive to initial conditions.

But it's worse. A classic chaotic system exhibits extreme sensitivity to initial conditions, but this system remains sensitive to, and responds to, tiny incremental changes, none predictable in advance.
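
To illustrate with a standard toy example (the logistic map; nothing to do with AI specifically): in a chaotic regime, two starting points that differ by one part in a billion end up nowhere near each other after a few dozen iterations, which is exactly why long-range prediction fails.

    # Logistic map x_{n+1} = r * x * (1 - x) with r = 4 (chaotic regime).
    def trajectory(x0, r=4.0, steps=60):
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1.0 - xs[-1]))
        return xs

    a = trajectory(0.400000000)
    b = trajectory(0.400000001)  # initial condition differs by 1e-9

    for n in (0, 10, 20, 30, 40, 50, 60):
        print(f"step {n:2d}: |a - b| = {abs(a[n] - b[n]):.6f}")

The difference roughly doubles each step, so by step 30 or so the two trajectories are effectively unrelated.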

We're in a unique historical situation. AGI boosters and critics are equally likely to be right, but because the topic is chaotic, we have no chance of making useful long-term predictions.

And humans aren't rational. During the Manhattan Project, theorists realized the "Gadget" might ignite the atmosphere and destroy the planet. At the time, with the prevailing state of knowledge, this catastrophe had been assigned a non-zero probability. But after weighing the possibilities, those in charge said, "Hey -- let's set it off and see what happens."

weregiraffe 5 days ago|
Do yourself a favor, google "Pascal's mugging".
lutusp 4 days ago||
Yes, thanks, I didn't want to invoke it by name. Certainly relevant.
zzzeek 5 days ago||
What's scaring me the most about AI is that FOX News is now uncritically showing AI videos that portray fictitious Black people fraudulently selling food stamps for drugs, and they are claiming these videos are real.

AGI is not going to kill humanity, humanity is going to kill humanity as usual, and AI's immediate role in assisting this is as a tool that renders truth, knowledge, and a shared reality as essentially over.

speff 5 days ago||
I didn't find the specific article you're referencing, but I did find this[0] "SNAP beneficiaries threaten to ransack stores over government shutdown" with the usual conservative-created strawmen about poor people getting mad that the government's not taking care of their kids and quoting stereotypical ebonics.

You can see the effect it has on their base here[1]. It looks like they changed it at some point to say "AI videos of SNAP beneficiaries complaining about cuts go viral"[2], with a small note at the end saying they hadn't mentioned the videos were AI. This is truly disgusting.

[0]: https://web.archive.org/web/20251031212530/https://www.foxne...

[1]: https://old.reddit.com/r/Conservative/comments/1ol9iu6/snap_...

[2]: https://www.foxnews.com/media/snap-beneficiaries-threaten-ra...

randycupertino 5 days ago||
Here is one example: https://www.reddit.com/r/BlackPeopleTwitter/comments/1ojydgq...

Here's a bunch more, notice they all have the same "I've got 7 kids with 7 daddies" script- https://www.reddit.com/r/themayormccheese/comments/1ojtbwz/a...

speff 5 days ago||
I appreciate the links. Though I think OP got some wires crossed mentioning that Fox was reporting the SNAP->Drugs conversion - since that video with the Fox reporter came from Tiktok. Though they did end up incidentally reporting other fake AI videos as real, so... yea.

Regarding your second link - it's pretty surreal to see. Reminiscent of "this is extremely dangerous to our democracy".

goatlover 5 days ago|||
I don't understand how a news agency is allowed to blatantly lie and mislead the public to that degree. That's an abuse of free speech, not a proper use of it. It goes way beyond providing a conservative (if you even call it that) perspective. It's straight up propaganda.
randycupertino 5 days ago|||
> I don't understand how a news agency is allowed to blatantly lie and mislead the public to that degree. That's an abuse of free speech, not a proper use of it. It goes way beyond providing a conservative (if you even call it that) perspective. It's straight up propaganda.

Did you see Robert J. O’Neill, the guy who claims he shot Osama bin Laden, play various roles as a masked guest interviewee on Fox News? He wears a face mask and pretends to be ex-Antifa; in another interview he pretends to be Mafia Mundo, a Mexican cartel member; in another he plays a Gaza warlord; and he plays a bunch of other anonymous extremist people. Now they won't even have to use this guy acting as random fake people - they can just whip up an AI interviewee to say whatever narrative they want to lie about.

https://www.yahoo.com/news/articles/fox-news-masked-antifa-w...

https://knowyourmeme.com/memes/events/robert-j-oneill-masked...

https://www.reddit.com/r/conspiracy/comments/1nzyyod/is_fox_...

metalcrow 5 days ago|||
It is, yes. However, it's considered an acceptable bullet to bite in the United States' set of values, considering the alternative is that the government gets to decide what speech to allow, or what counts as a "lie".
goatlover 5 days ago|||
I would have agreed before, but seeing the fruits of decades of propaganda, I no longer think it's an acceptable bullet. Not when it leads to undermining democracy and eroding free speech.
Grosvenor 5 days ago||||
The "editorial" pieces of FOX news were found to be "entertainment" by US judges. That's Tucker Carlson, Bill O'Reilly, and probably the current guys.

The judge claimed that the average viewer could differentiate that from fact, and wouldn't be swayed by it.

I disagree with that ruling. I'm not sure what the "news" portions of FOX were considered.

zzzeek 3 days ago|||
The way to combat this is to legislate against media conglomeration in the hands of a small set of billionaires. We have anti-trust laws that can be pulled out of deep storage and used for this purpose. While we're at it, deflating the wealth of actual billionaires back to nine or fewer digits would be pretty helpful too.
hunterpayne 5 days ago||
[flagged]
maplethorpe 5 days ago||
I think of it as inviting another country to share our planet, but one that's a million times larger and a million times smarter than all of our existing countries combined. If you can imagine how that scenario might play out in real life, then you probably have some idea of how you'd fare in an AGI-dominated world.

Fortunately, I think the type of AGI we're likely to get first is some sort of upgraded language model that makes fewer mistakes, which isn't necessarily AGI, but which marketers will nonetheless feel comfortable branding as such.

t0lo 5 days ago|
Tbf LLMs are the aggregate of everything that came before. If you're an original thinker you have nothing to worry about
LPisGood 5 days ago|
> The fact that we can meaningfully influence their behavior through training hints that value learning is tractable

I’m at a loss for words. I don’t understand how someone who seemingly understands these systems can draw such a conclusion. They will do what they’re trained to do; that’s what training an ML model does.
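
A trivial sketch of what I mean (toy gradient descent, not any particular system): hand the same learner two different objectives and it converges to two different "behaviors" - whichever one the objective encodes. Whether that deserves to be called value learning depends entirely on whether the training signal actually captures the values.

    def train(loss_grad, theta=0.0, lr=0.1, steps=200):
        # Plain gradient descent on whatever objective you hand it.
        for _ in range(steps):
            theta -= lr * loss_grad(theta)
        return theta

    # Objective A: prefer outputs near +2.  Objective B: prefer outputs near -3.
    grad_a = lambda t: 2 * (t - 2.0)
    grad_b = lambda t: 2 * (t + 3.0)

    print("trained on objective A:", round(train(grad_a), 3))  # ~ 2.0
    print("trained on objective B:", round(train(grad_b), 3))  # ~ -3.0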

czl 5 days ago||
AI trained and built to gather information, reason about it, and act on its conclusions is not too different from what animals / humans do - and even brilliant minds can fall into self-destructive patterns like nihilism, depression, or despair.

So even if there’s no “malfunction”, a feedback loop of constant analysis and reflection could still lead to unpredictable - and potentially catastrophic - outcomes. In a way, the Fermi Paradox might hint at this: it is possible that very intelligent systems, biological or artificial, tend to self-destruct once they reach a certain level of awareness.
