
Posted by adrianhon 1 day ago

Sam Altman may control our future – can he be trusted? (www.newyorker.com)
1794 points | 730 comments
avaer 13 hours ago|
Who would you trust more: Sam Altman, or a council of 1000 representative AI models?
bambax 8 hours ago||
> Altman does not recall the exchange.

Altman SAYS he does not recall the exchange. Not the same thing.

netcan 9 hours ago||
My tendency is to believe that the individuals don't matter as much when it comes to the biggest risks. I'm not sure if this is a bias or a theory... but I lean toward some sort of "medium is the message" determinism.

>"He acknowledged that the alignment problem remained unsolved, but he redefined it—rather than being a deadly threat, it was an inconvenience, like the algorithms that tempt us to waste time scrolling on Instagram."

Before "don't be evil" was a cliche, I think it was a real guiding principle at Google and they built a world class business that way.

Facebook's rival ad platform didn't have search queries to target ads at. Aggressive utilization of user data was the only way they could build an Adwords-scale business. As they pushed this norm, Google followed.

Doomscroll addiction gets a lot of attention because engineers and journalists have children and parents. There are other risks though. Political stability, for example.

By the early 2010s, smartphones were reaching places that previously had almost no modern media, often powered by FB-exclusive data plans. The Arab Spring happened, then ISIS. FB-centric propaganda seemingly played a major role in a major conflict/atrocity in Burma. Coups in Africa were powered by social-media-based propaganda. Worrying political implications in the West. Unhinged-uncle syndrome. Etc. Social media risks/implications were more than just "inconvenience."

At no point did we really see tech companies go into mitigation mode. Even CYA was relatively limited. There was no moment of truth. It was business as usual.

So... I think OpenAI's initial charter was naive. Science fiction almost. It was never going to withstand commercial reality, politics, competition and suchlike. I think these are greater than the individuals involved.

That doesn't mean we should ignore, excuse or otherwise tolerate a lack of integrity. But I don't think it is a way of reducing risk.

Whether the risk is skynet, economic turmoil, politics, psych epidemics or whatever... I don't think the personal integrity of executives is a major factor.

flippyhead 3 hours ago||
No
nerdyadventurer 12 hours ago||
Why would anyone trust him at all? Their tech is used to bomb children; all of these rich folks are immoral and care only about their own selfish gain.
jedberg 11 hours ago|
> their tech is used to bomb children

If you're talking about the school in Iran, that wasn't OpenAI. That was a Palantir system that pre-dates OAI by a few years, and the strike was due to a bad entry in a spreadsheet that showed the building as military housing. Which it was, a few years ago.

180 people lost their lives because of bad data in a spreadsheet, not AI.

steinvakt2 9 hours ago|||
Many years ago. Not "a few years ago." Also, you could make the case that 180 people lost their lives because of an evil war, of which the USA and Israel are the aggressors. And we definitely don't talk enough about that part.
hnbad 8 hours ago||||
180 children lost their lives because of decisions by people in the US military (and ultimately the US government / the POTUS).

Let's not fall into the trap of adopting narratives created to waive accountability. The spreadsheet didn't launch a missile, the spreadsheet didn't authorize the strike and the spreadsheet didn't select the target.

Not to mention that "outdated spreadsheet" is also a hilariously anachronistic excuse for a war crime if you consider what kind of satellite technology the US has publicly acknowledged to have access to, let alone what kind of technology it is likely to have access to.

The difference between intentional premeditated murder and reckless endangerment resulting in a killing is not guilt and innocence but merely the severity and nature of a crime. Both demonstrate a callous disregard for the sanctity of human life, one just specifically seeks to extinguish it, the other merely accepts death and suffering as an acceptable outcome.

inemesitaffia 7 hours ago||
Please talk to your criminal defense lawyer.

This is nonsense.

Hikikomori 10 hours ago|||
Palantir was using Anthropic, and its use is being replaced by OpenAI.
jedberg 8 hours ago||
Yes, but not for the system that decided to bomb a school. That was a Palantir in-house system.
Hikikomori 6 hours ago||
AFAIK the Palantir system utilized AI.
tines 16 hours ago||
Two "insure" typos?
mplanchard 15 hours ago||
The New Yorker prefers insure to ensure. They have a unique house style. I commented on another thread about alternative spellings like vender instead of vendor, too.
pch00 7 hours ago||
> The New Yorker prefers insure to ensure. They have a unique house style.

That's not a stylistic choice, it's just incorrect use of English.

mplanchard 6 hours ago||
Well that’s just, like, your opinion, man. https://www.merriam-webster.com/dictionary/insure
pch00 3 hours ago||
That M-W entry literally says they're different words with different meanings:

> They are in fact different words, but with sufficient overlap in meaning and form as to create uncertainty as to which should be used when.

> We define ensure as “to make sure, certain, or safe” and one sense of insure, “to make certain especially by taking necessary measures and precautions,” is quite similar. But insure has the additional meaning “to provide or obtain insurance on or for,” which is not shared by ensure.

mplanchard 3 hours ago||
Definition 2: "to make certain especially by taking necessary measures and precautions"

From the article:

> He sent the final memos to the other board members as disappearing messages, to insure that no one else would ever see them.

> Others were uncomfortable sharing concerns about Altman because they felt there was not a sufficient effort to insure anonymity.

> [...] to insure that the technology was deployed safely

All of these work just fine with that definition of "insure." Your comment that it's "incorrect use of English" is wrong.

The bit you quoted says there’s substantial overlap between the two. The New Yorker style is to prefer “insure” in cases where either could work.

pch00 3 hours ago||
I'm unconvinced but I'll ensure I do my homework before grammar-policing again :)
mplanchard 1 hour ago||
To be fair, I use “ensure” myself, but it’s just one of several quirky elements of the New Yorker’s style, along with the diaeresis on repeated vowels with different sounds (like in reëmerge or coöperate), several uncommon spellings, and unusual conjoinings like “teen-ager” and “per cent.” It’s part of the charm, I suppose
Wyverald 15 hours ago|||
In American English, "insure" can also mean "to make sure" as in "ensure", in addition to meaning "to take out insurance for".
tines 3 hours ago||
TIL!
o0-0o 16 hours ago||
Likely dictation, and not caught in editing.
pupppet 1 day ago||
Ask Condé Nast if he can be trusted...

https://www.reddit.com/r/AskReddit/s/VWJVBNzc2u

simoncion 19 hours ago|
Uncrappified link: <https://old.reddit.com/r/AskReddit/comments/3cs78i/whats_the...>
b8 14 hours ago||
Sam failed upwards.
mvkel 9 hours ago|
> Many technology companies issue vague proclamations about improving the world, then go about maximizing revenue. But the founding premise of OpenAI was that it would have to be different.

Isn't this really what everything is about? A pure research non-profit transitioned to a revenue generating enterprise because it had to, and a lot of people don't like that. Does that make it evil?

It's romantic to think that the magic of science and research can stand on its own, but even Ilya has admitted more recently that SSI needs to ship something consumer facing.

Anthropic, the lab that put all of its social capital in the safetyism basket, is having the exact same realization, with Claude Code being a mess of technically reckless vibe coded slop that nevertheless is the cash cow for the company.

Maybe it's time for everyone to realize that for an innovation this big to come to fruition, it either needs to be state funded or privately funded, the latter requiring revenue and a plausible vision of generating ROI.
