
Posted by adrianhon 1 day ago

Sam Altman may control our future – can he be trusted? (www.newyorker.com)
1865 points | 757 comments
383toast 20 hours ago|
if you have to ask if someone can be trusted, they usually can't
hungryhobbit 20 hours ago||
It's the golden rule of news article headlines: if the headline is a question, the answer to it is always a negative.
danparsonson 19 hours ago||
https://en.wikipedia.org/wiki/Betteridge's_law_of_headlines
stinkbeetle 19 hours ago||
And if you think you don't have to ask if someone can be trusted, you're usually wrong.
mvkel 10 hours ago||
> Many technology companies issue vague proclamations about improving the world, then go about maximizing revenue. But the founding premise of OpenAI was that it would have to be different.

Isn't this really what everything is about? A pure research non-profit transitioned to a revenue generating enterprise because it had to, and a lot of people don't like that. Does that make it evil?

It's romantic to think that the magic of science and research can stand on its own, but even Ilya has admitted more recently that SSI needs to ship something consumer facing.

Anthropic, the lab that put all of its social capital in the safetyism basket, is having the exact same realization, with Claude Code being a mess of technically reckless vibe coded slop that nevertheless is the cash cow for the company.

Maybe it's time for everyone to realize that for an innovation this big to bear fruit, it either needs to be state funded or privately funded, the latter requiring revenue and a plausible vision of generating ROI.

basyt 9 hours ago||
he doesn't control his own future... chatgpt implodes in 18 months max, depending on how the Strait of Hormuz play goes...
the_arun 18 hours ago||
The main animated picture reminded me of the evil king Ravan from the Ramayan, with his ten heads. Not sure if it was intentionally done that way.
sumedh 9 hours ago|
This is the problem with propaganda: you have been told that he was evil, as most Indians are led to believe, but for people in Sri Lanka he was a great leader.
trakkstar 12 hours ago||
Girls and boys, this is a prime example of a rhetorical question.
flippyhead 4 hours ago||
No
KellyCriterion 22 hours ago||
Na, it will be Dario instead of Sam, I'd say? :-))
sph 11 hours ago||
Excellent article, truly well-researched. As someone close to a pathological liar [1], I find that the idea of such a person being at the forefront of the creation of an artificial superintelligence confirms all the existential risks of such a piece of technology, and shows how naïve, if not ignorant, the average starry-eyed tech worker and investor is about this whole endeavour. It's easy to believe there is a lot of idealism and a wish for a better world, but the greedy drive for money and power underneath it all is excellently summarized in Greg Brockman's own thoughts: “So what do I really want? [...] Financially what will take me to $1B.”

Literally, the only hope for humanity is that large language models prove to be a dead-end in ASI research.

---

1: “He’s unconstrained by truth,” the board member told us. “He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.” — I guess now I know of two people with these traits.

game_the0ry 23 hours ago|
For those curious about how sama got to where he is and stayed on top for so long, I recommend reading the book The Sociopath Next Door by Martha Stout.

I am fairly confident when I say this -- sama is a sociopath. I don't know how anyone with solid intuition could come to any conclusion other than that the guy is deeply weird and off-putting.

Some concepts from the book:

> Core trait: The defining characteristic is the absence of conscience, meaning they feel no guilt, shame, or remorse.

> Identification: Sociopaths can be charming and appear normal, but they often lie, cheat, and manipulate to get what they want.

> The Rule of Threes: One lie is a mistake, two is a concern, but three lies or broken promises is a pattern of a liar.

> Trust your instincts over a person's social role (e.g., doctor, leader, parent)

Check and check.

OpenAI is too important to trust sama with. He needs to go. In fact, AI should be considered a public good, not a commodity pay-as-you-go intelligence service.

unsupp0rted 22 hours ago||
I suspect there's some other category, which isn't really a sociopath and isn't really a not-sociopath, which we don't have a good definition for.

We only say a lot of CEOs are sociopaths because they're in that third category we haven't named, where they're very good at manipulating people, but also can feel conscience, guilt, remorse, etc, perhaps just muted or easier to justify against.

E.g. if you think you're doing something for the betterment of mankind, it doesn't really matter if you lie to some board members some year during the multi-decade pursuit.

xg15 20 hours ago|||
That's not a third category; that's just a sociopath as seen by themselves.
unsupp0rted 19 hours ago||
I doubt most sociopaths, when they’re honest, would agree they feel much guilt or remorse at all.

Whereas the people in the category I’m describing might feel those things, but prioritize those feelings far below the benefits of achieving what they set out to achieve.

game_the0ry 18 hours ago||
> I doubt most sociopaths, when they’re honest, would agree they feel much guilt or remorse at all.

Yes that is the core trait I highlighted in the 1st bullet.

game_the0ry 18 hours ago|||
> I suspect there's some other category, which isn't really a sociopath and isn't really a not-sociopath, which we don't have a good definition for.

There is -- I call it "corpo sociopath." The corpo sociopath really comes out in the workplace, less so in personal life.

jcgrillo 18 hours ago|||
I was with you right up until the final paragraph, but this made me do a double take:

> OpenAI is too important to trust sama with.

...wat? They made a chat bot. How can that possibly be so existentially important? The concept of "importance" (and its cousin "danger") has no place in the realistic assessment of what OpenAI has accomplished. They haven't built anything dangerous, there is no "AI safety" problem, and nothing they've done so far is truly "important". They have built a chat bot which can do some neat tricks. Remains to be seen whether they'll improve it enough to stay solvent.

The whole "super serious what-ifs" game is just marketing.

davebren 15 hours ago||
Yeah, the whole fearmongering thing is clearly just marketing at this point. Your LLM isn't going to suddenly gain sentience and destroy humanity because it has 10x more parameters or trains on 10x more reddit threads.

I'm not even sure we're any closer to AGI than we were before LLMs. It's getting more funding and research, but none of the research seems very innovative. And now it's probably much more difficult to get funding for anything that's not a transformer model.

arcfour 13 hours ago||
> I'm not even sure we're any closer to AGI than we were before LLMs.

I mean this is very obviously untrue. It'd be like saying we aren't any closer to space flight after watching a demonstration of the Wright Flyer. Before 2022-2023 AI could barely write coherent paragraphs; now it can one-shot an entire letter or program or blog post (even if it's full of LLM tropes).

Just because something is overhyped doesn't mean you have to be dismissive of it.

davebren 7 hours ago|||
Point is that LLMs could be a local minimum we are now economically stuck in until the hype wears off.
jcgrillo 5 hours ago||
Or we could be stuck here for decades pending a breakthrough nobody alive today can even conceive of, or we could be compute limited by a half dozen orders of magnitude. Or it could happen next week. That's the nature of breakthroughs--you just can't have any idea when or how (or if) they'll happen.
jcgrillo 5 hours ago|||
In hindsight there's an obvious evolutionary pathway from the Wright Flyer to Gemini/Apollo/Soyuz... but at the time, in 1903, there absolutely was not, and anyone telling you so would have been a crank of the highest degree. So it may turn out that LLMs have some place on the evolutionary path to AGI, or it could turn out they're a dead end like Cayley's ornithopters. Show me AGI first, then we can discuss whether LLMs had something to do with it.
arcfour 3 hours ago||
In order to get to space, you must first be capable of flight through the atmosphere. That should have been apparent to anyone even then, because the atmosphere is between space and the ground.

Regardless of whether spaceflight is still 1000 or 100 or 50 years away, you are still closer than you were before you demonstrated the ability to fly.

gib444 14 hours ago||
It's fairly obvious that sociopathy is a prerequisite for top CEO jobs. Some just hide it better than others, or have better PR people.