
Posted by adrianhon 1 day ago

Sam Altman may control our future – can he be trusted? (www.newyorker.com)
1692 points | 701 comments
4ggr0 3 hours ago|
> while Y.C. took a six- or seven-per-cent cut

shamefully have to admit that my monkey-brain smirked because of an accidental 67-meme in a serious article.

cmiles8 6 hours ago||
It seems unlikely that OpenAI can survive long-term with Sam at the helm. The challenge is that folks already realized that once, and yet here we are.
mikkupikku 5 hours ago|
You come at the king, you best not miss. Unfortunately, having survived one coup, he has better odds of surviving the next: now he knows how they go, what to look for, and how he might handle them. I wouldn't bet on him being kicked out, at least not while OpenAI is still on top. Only if OpenAI stumbles and Anthropic or another rival starts to prevail would I bet on Sam getting pushed out.
dmitrygr 16 hours ago||
The number of "Altman doesn’t remember this" or "Altman denies this" moments is hilarious
frm88 3 hours ago||
It really stands out, and judging from the overall excellent quality of this article, it was very intentional. It answers the headline, too.
jcgrillo 15 hours ago||
Life would be so much easier if I was that forgetful
nextlevelwizard 5 hours ago||
"If I don't destroy humanity someone far worse will do it" -Sam Altman
Cthulhu_ 3 hours ago|
https://en.wikipedia.org/wiki/Roko%27s_basilisk
b8 12 hours ago||
Sam failed upwards.
keepamovin 7 hours ago||
YC invests in people, not ideas. They have vetted him. They are always right about people. It's probably nothing.
einrealist 16 hours ago||
I don't trust anyone who claims that LLMs today are superhumanly intelligent. All they do is perform compute-intensive brute-force attacks on the problem/solution space and call it 'reasoning', all while subsidising the real costs to capture the market. So much SciFi BS and extrapolation about a technology that is useful if adopted with care.

This technology needs to become a commodity to destroy this aggregation of power between a few organizations with untrustworthy incentives and leadership.

shruggedatlas 16 hours ago||
Your brain is performing "compute-intensive brute-force attacks on the problem/solution space" as you read this very sentence. You have been training on patterns of English syntax, structure, and semantics since you were a child, and that training is supporting you now with inference (or interpretation). And, for compute efficiency, you probably have evolution to thank.
JohnMakin 15 hours ago|||
People like to say this as if it were an apples-to-apples comparison, but it isn’t remotely how the brain actually works - and even if it were, the brain does it automatically, without direction, and at an infinitesimal fraction of the power required.

And we’re just talking about cognition - it completely ignores the automatic processes such as maintaining and regulating the body and its hormones, coordinating and maintaining muscles, and visual/spatial processing that takes in massive amounts of data at a very fine scale and informs the body what to do with it - I could go on.

One of the more annoying things about this conversation is that you don’t even need to make this argument to make the point you’re trying to make, but people love doing it anyway. It needlessly reduces how amazing the human brain is to a bunch of catchy sci-fi-sounding idioms.

It can be simultaneously true that transformer based language models can be very smart and that the human brain is also very smart. It genuinely confuses me why people need to make it an either/or.

ch_fr 5 hours ago|||
Thank you, this comparison has been a huge annoyance of mine for the past 3 years of... this same debate over and over.

I think it's the hubris that I find most offensive in this argument: a guy knows one complex thing (programming) and suddenly thinks he can make claims about neuroscience.

igggh 12 hours ago|||
Great post
stonyrubbish 15 hours ago||||
Human cognition is nothing like AI "cognition." It really bothers me that people think AI is doing the same thing the human mind does. AI is more like a parrot which is trained to give a correct-looking response to any question. The parrot doesn't think, doesn't know what it's doing, etc.; it just does it because it gets a treat every time a "good" answer is prompted. This is why it can't do things like tell whether the parentheses here are balanced: ((((()))))) (you can test this). It doesn't have any kind of genuine cognition.
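
For what it's worth, the balance check itself is mechanical; here is a throwaway Python sketch (the function name and structure are mine, purely illustrative):

    def check_balance(s):
        # Track nesting depth: dipping below zero or ending nonzero
        # means the parentheses are unbalanced.
        depth = 0
        for ch in s:
            if ch == '(':
                depth += 1
            elif ch == ')':
                depth -= 1
                if depth < 0:
                    return False
        return depth == 0

    print(check_balance("((((())))))"))  # False: one ')' too many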
fgfarben 8 hours ago|||
I love reading posts like this. When you were a child, learning math or grammar, do you not remember bouncing off the walls of incorrect answers, eventually landing on a trajectory down the corridor of the right answer? Or were you always instantly zero-shotting everything?

In my experience, this is exactly how language models solve hard new problems, and largely how I solve them too. Propose a new idea, see if it works, iterate if not, keep going until it works.

Of course you can see how to solve a problem that you've seen before, like a visual puzzle about balanced parentheses. We're hyper specialized to visually identify asymmetries. LMs don't have eyes. Your mockery proves nothing.
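
To make that loop concrete, here is a toy propose-test-iterate sketch in Python; the proposal and test functions are stand-ins I made up, not anything a language model actually runs:

    import random

    def iterate_until_solved(propose, test, max_tries=1000):
        # Keep proposing candidates and testing them until one passes,
        # returning the solution and how many attempts it took.
        for attempt in range(1, max_tries + 1):
            candidate = propose()
            if test(candidate):
                return candidate, attempt
        return None, max_tries

    # Toy usage: blind search for a target number.
    answer, tries = iterate_until_solved(
        lambda: random.randint(0, 100),
        lambda x: x == 42,
    )
    print(answer, tries)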

calf 7 hours ago||
The mistake in these types of arguments is assuming that because natural, classical-artificial, and neural-net-artificial learning methods all employ some kind of counterexample/counterfactual reasoning, they must work the same way - their underlying methods could well be fundamentally different. These arguments remain invalid until computer science advances enough to explain what the differences and similarities actually are.
saxonww 14 hours ago||||
> Human cognition is nothing like AI "cognition."

I've wondered about this. Do we really know enough about what the human brain is doing to make a statement like this? I feel like if we did, we would be able to model it faithfully and OpenAI, etc. would not be doing what they're doing with LLMs.

What if human cognition turns out to be the biological equivalent of a really well-tuned prediction machine, and LLMs are just a more rudimentary and less-efficient version of this?

davebren 13 hours ago||
Yes, we do. Humans share the statistical-association ability that LLMs possess, but we also have conscious meaning and understanding. This is a difference in kind, and it means we can generalize beyond the statistical pattern associations we've extracted from data, so we don't require trillions of examples to develop knowledge.

Theoretically, a human could sit alone in a dark room, knowing nothing of mathematics, and come up with numbers, arithmetic, algebra, etc...

They don't need to read every math textbook, paper, and online discussion in existence.

AstroBen 12 hours ago|||
Our DNA does contain our pre-training, though. It's not true that we're an entirely blank slate.
davebren 12 hours ago||
Pre-training is not a good term if you are trying to compare it to LLM pre-training. Closer would be the model's architecture and learning algorithms, which have been designed through decades of PhD research - and my point on that is that the differences are still much greater than the similarities.
saxonww 11 hours ago|||
The point I'm trying to make is that I don't think we know, so we can't say either way.

In your example, would the human have ever had contact with other humans, or would it be placed in the room as a baby with no further input?

davebren 3 hours ago||
They grew up in a tribe that hasn't discovered numbers yet.
chpatrick 15 hours ago||||
This is such a boring cliche by now. "Thinking" and "knowing what it's doing" are totally vague notions that we barely understand even in the human mind, yet in every comment section about AI, people definitively state that LLMs don't do them, whatever they are.
davebren 13 hours ago|||
This is the epitome of learned helplessness: thinking that you need a neuroscience paper to tell you what thinking and knowledge are, when you experience them directly all the time and can't tell that an LLM doesn't have them. Something is extremely evil about these ideologies that are teaching people that they are NPCs.
stonyrubbish 15 hours ago|||
They aren't so vague that you would argue the parrot is thinking.
sph 8 hours ago||||
> Human cognition is nothing like AI "cognition." It really bothers me that people think AI is doing the same thing the human mind does.

This might sound callous, but I wonder if the people saying this themselves have very limited brains, more akin to stochastic parrots than to the average homo sapiens.

We are all very different, and there are some high-profile people who don't even have an internal monologue or self-introspection abilities (one of the other symptoms is having an egg-shaped head).

staticman2 2 hours ago||
> This might sound callous, but I wonder if the people saying this themselves have very limited brains, more akin to stochastic parrots than to the average homo sapiens.

I have a different theory.

Aside from a few exceptions like Blake Lemoine, few people seem to really act as if they believe A.I. is doing the same thing the human mind is doing.

My theory is that people are role-playing as people who believe human thought is equivalent to A.I., for undisclosed reasons they themselves may or may not understand. They do not actually believe their own arguments.

CamperBob2 10 hours ago|||
> AI is more like a parrot which is trained to give a correct-looking response to any question.

A parrot that writes better code and English prose than I do?

I would like to buy your parrot.

wil421 15 hours ago||||
If you think this way, then why not talk to LLMs exclusively? Don’t let the oxytocin cloud your ability to problem-solve.
slopinthebag 15 hours ago|||
I get you're trying to do the whole "humans and LLMs are the same" bit, but it's just plainly false. Please stop.
stavros 16 hours ago|||
> All they do is perform compute-intensive brute-force attacks on the problem/solution space and call it 'reasoning'

If they discover the cure for cancer, I won't care how they did it. "I don't trust anyone who claims they're superhumanly intelligent" doesn't follow from "all they do is <how they work>".

bjacobel 13 hours ago|||
Has generative AI made material progress on curing cancer? Has it produced any breakthroughs, at all?
igggh 12 hours ago||
In b4

- it’s the worst it’ll ever be
- big leaps happened the past few months bro

Etc.

Personally I think LLMs can be very powerful in a narrow band. But the more substance a thing involves, the more a human needs to be involved.

stonyrubbish 15 hours ago||||
> "I don't trust anyone who claims they're intelligent" doesn't follow from "all they do is <how they work>".

It kind of does if how they work is nothing like genuine intelligence. You can (rightly) think AI is incredible and amazing and going to bring us amazing new medical technologies, without wrongly thinking its super amazing pattern recognition is the same thing as genuine intelligence. It should be worrying if people begin to believe the stochastic parrot is actually wise.

einrealist 13 hours ago|||
I could slow down the compute by a factor of a thousand; it would not change the result, but it would change the economics. We only call it intelligent because we can do the backpropagation and the inference (and training) fast enough, and with enough memory, for it to appear this way.
stavros 14 hours ago|||
If LLMs can come up with superhumanly intelligent solutions, then they're superhumanly intelligent, period. Whether they do this by magic or by stochastic whatever doesn't make any difference at all.
davebren 13 hours ago||
Like... a calculator?
CamperBob2 10 hours ago||
Take a calculator to the International Math Olympiad and let's see how you do.
bigyabai 16 hours ago|||
That's moonshot logic that reinforces the parent's point. You'd absolutely care if the AI's cure to cancer entailed full-body transplants or dismemberment.
JumpCrisscross 15 hours ago|||
> You'd absolutely care if the AI's cure to cancer entailed full-body transplants or dismemberment

That's not a cure. Like yes, I'd care if the AI says it cures cancer while nuking Chicago. But that isn't what OP said.

Noumenon72 15 hours ago|||
"The cure for cancer" as a phrase doesn't include those solutions. If the headline was "Pope discovers the cure for cancer" and those were his solutions you would say "No he didn't." OP was referring to AI discovering the cure for cancer that cancer research is working towards.
crazylogger 14 hours ago|||
If all they do is "just" brute-force problem solving, then they are already bound to take over R&D and other knowledge work and exponentially accelerate progress, i.e. the SciFi "singularity" BS ends up happening all the same. Whether we classify it as true reasoning is just semantics.
semiinfinitely 15 hours ago|||
calculator is superhumanly intelligent
Rover222 15 hours ago||
Yeah and everything is just atoms. If you reduce anything enough it’s not real.
bambax 7 hours ago||
> Altman does not recall the exchange.

Altman SAYS he does not recall the exchange. Not the same thing.

avaer 12 hours ago||
Who would you trust more: Sam Altman, or a council of 1000 representative AI models?
ergocoder 17 hours ago|
I wonder if Sam might abandon ship soon. Other co-founders already have.

The main reason is that he gets all the downsides without the upsides. I know $5B is a lot, but for a $700B company, it isn't. If OpenAI were a regular for-profit, he would be worth >$100B already.

This is probably one of the significant factors why other co-founders left too. It's just a lot of headaches with relatively low reward.

0x3f 17 hours ago||
But nobody is going to just gift him the same valuation on the next company. It's not like his execution is OpenAI's moat right now. So where would he be going that's a better deal for him?
ergocoder 16 hours ago|||
Founding his own company would be one alternative. Full control. No stigma from the non-profit part. He'd probably get the same paper money as he has now at OpenAI.
davebren 12 hours ago|||
What value does he add anyway, as a delusional cult leader whom most people around him characterize as a sociopath? Is it just his ability to lie and create fear-hype?

It's not like he had anything to do with the technical achievements, except convincing the engineers that they were doing something valuable, but the cat is out of the bag on that.

raincole 17 hours ago|||
And OpenAI's influence is hugely exaggerated compared to, say, Google.
ergocoder 16 hours ago||
Yes, and it seems people hate him more than Google co-founders, for example.

All the downsides without much upside...

georgemcbay 16 hours ago||
> Yes, and it seems people hate him more than Google co-founders, for example.

Sergey Brin is trying to change that lately, but Altman still has a sizable head start.

palata 16 hours ago||
IMHO, nobody is remotely worth $1B, period.

The fact that some (usually toxic) individuals get there shows that the system is flawed.

The fact that those individuals feel like they can do anything other than shut up, stay low and silently enjoy the fact that they got waaaay too much money shows that the system is very flawed.

We shouldn't follow billionaires, we should redistribute their money.

simonh 15 hours ago|||
If someone founds a company, grows it and owns $1bn of its stock, they don’t have $1bn in cash to distribute. They have a degree of control over the economic activity of that company. Should that control be taken away from them? Who should it be given to?

I can see an argument when it comes to cashing out, but I’m not clear how that should work without creating really weird incentives. Some sort of special tax?

palata 7 hours ago||
> Some sort of special tax?

Well, yeah. Above some amount, you get taxed at 100%. Then, instead of having billionaires who compete against each other on how rich they are, or on being the first to go contaminate the surface of Mars, or simply on power, maybe we would end up with people trying to compete on something actually constructive :-). Who knows, maybe even philanthropy!

simonh 5 hours ago||
So, who owns and runs the companies? How do new companies get formed?

I'm not against higher taxation of the wealthy. I think inequality is a serious problem. The issue is that the wealth of these people isn't a big pile of cash they are wallowing in; it's ownership of the companies they build and operate. Is that what we want to take away? How, and what would we do with it?

I think it makes more sense to tax it as that power is converted into cash. I'm not clear how a wealth tax should work.

palata 4 hours ago||
> I think it makes more sense to tax it as that power is converted into cash

Yeah, that makes sense to me. And those are all good questions of course :-).

> So, who owns and runs the companies?

I guess ownership stays the same; we just need to prevent companies from growing too big, because the bigger they are, the more powerful their leaders get, for one (aside from all the problems coming from monopolies). But by taxing them, we prevent the people owning those companies from owning 15 yachts and going to space for breakfast :D.

> How do new companies get formed?

I don't know if that's what you mean, but I often hear "if you prevent those visionaries from becoming crazy rich, nobody will build anything, ever". And I disagree. A ton of people like to build stuff knowing they won't get rich. Usually those people have better incentives (it's hard to have a worse incentive than "becoming rich and powerful", right?).

Some people say "we need to pay so much for this CEO, because otherwise he will go somewhere else and we won't have a competent CEO". I think this is completely flawed. You will always find someone competent to be the CEO of a company at a reasonable salary. Maybe that person will not work 23h a day, maybe they won't harass their workers, sure. But will it be worse in the end? The current situation is that such tech companies are "part of the problem, not of the solution" (the problem being, currently, that we are failing to just survive on Earth).

r14c 15 hours ago||||
Big agree. At a certain point, a company is big enough that its impact has to be managed democratically. I don't have an issue with effective leaders; the problem is that we reward a certain kind of success with transferable credits that don't necessarily align with people's actual talents or skills.

I want skilled institutional investors who have a track record of making smart bets. I don't want a random person who happened to get lucky in business dictating investment policy for substantial parts of the economy. I want accountability for abuses and mismanagement.

I know China gets a bad rap, but their birdcage market economy seems a lot more stable and predictable than this wild-west pyramid-scheme stuff we do in the US. Maybe there are advantages for some people in our model, but I really dislike the part where we consistently reward amoral grifters.

palata 7 hours ago||
> Big agree. At a certain point, a company is big enough that its impact has to be managed democratically.

100%. First, a company should not be that big. The whole point of antitrust was to avoid that. The US failed at that, for various reasons, and has now ended up with huge tech monopolies. And it's difficult to go back because they are so big now.

BTW I would recommend Cory Doctorow's book about those tech monopolies: "Enshittification: Why Everything Suddenly Got Worse and What to Do About It". He explains extremely well the antitrust policies and the problems that arise when you let your companies get too big. It's full of actual examples of tech we all know. He even has an audiobook, narrated by himself!

rafterydj 16 hours ago|||
Well, redistributing their money is (in some cases disingenuously) exactly how they are able to pitch investors. "Sure, value my company at $10B and my shares make me $2B, but we're alllllll gonna make money when we hit AGI!!!" That kind of thing.
palata 7 hours ago||
Sure, I understand why the people around them who benefit from it also want to do that.

My point is that it all only benefits a few people. Those people used to call themselves "kings", appointed by god. Now they are tech oligarchs. If the people realised that it was bad to have kings, eventually maybe they will realise that it is bad to have oligarchs?
