Posted by ramimac 11 hours ago
I'm not sure I can trust the author's characterization of Roy, though. I got the impression that they don't like any of the people they interviewed (which, you know, fair), but that doesn't get even close to the depths of hatred towards Roy that they sub-textually exude throughout the article.
If their portrayal is even half accurate, though, that's a perfectly reasonable amount of hate.
Now consider Reddit.
On r/hacking people tend to understand the danger of mindlessness and support war against it: https://www.reddit.com/r/hacking/comments/1r55wvg/poison_fou...
In contrast, r/programming is full of, let's call them "bot-heads", who are all-in on mindlessness: https://www.reddit.com/r/programming/comments/1r8oxt9/poison...
A project that you spam in every one of your comments.
Poison Fountain is top of mind currently so it's understandable I talk about it constantly. Even to my wife. Also I think it's highly relevant to the excellent Harper's article we're reading today.
Whether the Redditors "like the project or not" reflects whether or not they think there is a problem with mindlessness.
What they actually say is almost immaterial. Either it's FUD about malware or illegality, or evidence-free claims about how easy the poison is to filter. These fictions are just a manifestation of their opposition to the idea.
You can see that among the bot-heads on r/programming (perhaps forced to embrace mindlessness by career considerations) there's nothing that can be said without attack. A dozen downvotes immediately. They actually logged into Hacker News and posted FUD directly to the HN post I linked to. Spectacular.
The opposite is true on r/hacking. Except for a few in opposition (some of whom did unsuccessfully attempt to DDoS the fountain), most people sympathize and agree. They don't want to be dependent on Sam Altman or Elon Musk for their cognition.
There is a red line and it is AI. People viscerally hate it, and pushing it will just make people question whether they need computers or the Internet at all (hint: they do not).
CEOs feel validated by the mediocre, psychopathic subset of their developers who always push the latest fad to gain an advantage and control better developers. Fads generally last about two years, and this one's time is about up.
It will be very gratifying if AI hubris proves to be Silicon Valley's downfall, needlessly ruining the industry just because the same CEOs who read a couple of science fiction books and caught rocket envy now have AI envy.
For a longer and more biting critique of SF one should read
Private Citizens (2016) by Tony Tulathimutte
"Capturing the anxious, self-aware mood of young college grads in the aughts, Private Citizens embraces the contradictions of our new century: call it a loving satire."
I think the "agency" the article talks about is really just "willingness to take risks". And the reason some people are high outliers on that scale is a combination of:
* Coming from such a level of privilege that they will be completely fine even if they lose over and over again.
* Willingness to push any losses onto other undeserving people without experiencing guilt.
* A psychological compulsion towards impulsive behavior and inability to think about long-term consequences.
In short, rich selfish sociopaths.
Some amount of risk-taking is necessary for innovation. But the level we are seeing today is clearly unsustainable and destructive to the fabric of society. It's the difference between confining a series of little bangs to produce an internal combustion engine versus just throwing hand grenades around the public square. The willingness to take chances needs to be surrounded by a structure that minimizes the blast radius of failure.
To be a little more generous, this third point is actually a classic symptom of ADHD. I've known some (non-CEO) folks like this and the kind of risks they take in their personal lives seemed completely alien to me.