Posted by speckx 11 hours ago
Doesn't mean it's correct, or empirically-based.
We've had literal generations of experience with vaccines, tons of data with formal systems to collect it, and most of the "resistance" traces back to "I dun wanna" and hearsay.
In contrast, LLM prompt injection is an empirically demonstrated issue, along with other problems like spurious correlations (both conventional ones like racial bias and inexplicable ones), self-bias among models, and humans generally deploying them in very irresponsible ways.
I find it kind of sad that people are spending time and energy on this. It seems like something depressed people would do. But free country and all that
I feel like the people who shout "Capitalism sucks, free us from our labor" are the exact same types that hate AI. The very machine that will free you from your labor, when harnessed correctly, is the thing you hate.
The "cyber psychosis" thing is overblown just like the "Tesla ignites its passengers" is. The only reason it gets in the news is because it is trendy to do so. The people getting 'infected' would've infected themselves regardless.
Genuinely I think the hatred is overblown by people who have no clue what the actual truth of AI is, something they seem obsessed with.
The only genuine complaint about AI is the data sourcing, which is a problem being addressed by Cloudflare and other platforms that charge scrapers for the privilege. With that said, those platforms are still selling user data while the users producing the content gain nothing; that part needs to be fixed.
Like, my aunt just lost the job she had for 33 years working at an insurance company. The company claims it is because of AI (whether companies lie about this sometimes is immaterial, it is sometimes true and becoming more true every month). She’s smart, but at age 60 I do think she’ll have a hard time shifting to a totally different knowledge work paradigm to keep up with 20-something AI natives.
What do we tell people in this position? That they should be happy? That UBI is coming? My aunt has bills to pay now, UBI is currently not in the Overton Window of US politics, and is totally off the table for Republicans (who have the white house through at least 2028).
I’m personally very excited about AI, but the lack of seriousness with which I see tech people talk about these issues is frustrating. If we can’t tell people a believable story where they don’t get screwed, they will decide (totally rationally from their perspective) that this needs to stop.
I don't think it's all that complex tbh. The freeing from labor, both in the past and now, has been achieved largely by firing people, abandoning them to starve while power concentrates in the hands of the already-powerful.
This is the exact same thing the Luddites were taking issue with. Because they partly succeeded, we have better labor laws today.
I don't believe that, though. The output will be owned by an elite. The rest of us will be useless and fighting for scraps. No utopia with UBI or similar.
Edit: wow, many made the same comment while I was reading the article. I should remember to refresh before starting to write.
No, AI will only free us from our jobs, while still keeping the need to find money to feed ourselves.
"When harnessed correctly" is exactly what won't happen, and exactly what all the structural and economic forces around AI ensure won't happen.
And increasingly not even for basics like food, with inflation eating away that purchasing power.
But hey, you can buy tech gadgets cheaper than in the 1990s.
It's easier than ever to access quality education, but that doesn't mean people will pursue it of their own accord. The cost of licensure or a diploma has certainly increased. Education for the disabled has improved dramatically.
Historical diseases of affluence now affect the poor more than the rich due to increased availability and affordability, but costly procedures disproportionately favour the wealthy, flipping the mortality picture. Despite that, all-cause mortality from cancer is down and survival rates are better. The disparity is real, but it's not easy to attribute the cause in a neat package.
https://pubmed.ncbi.nlm.nih.gov/28408935/
https://www.sciencedirect.com/science/article/abs/pii/S00472...
People live a reality everyday, "hard to measure" or not, and that's not about the "quality difference of housing and healthcare" increasing dramatically, it's them becoming stratospherically expensive...
Life expectancy, cancer mortality, heart disease mortality, infant mortality, infectious disease, high school and college completion, social safety nets, houses w/ a/c, indoor plumbing, w/d, refrigeration... Life for those in the lowest quintile of income is arguably better today than it has ever been despite raging inequality.
Just because things were historically cheaper as a percentage of income, which isn't clearly true across all categories in that timeline, it doesn't mean quality of life was materially better.
I think this is easily explained: sequencing matters. If I lose my job due to AI and it takes just 1-2 years for AI benefits to arrive at my door, that is plenty of time to be very anxious about my life. If I was guaranteed the AI benefits before I potentially lose my job, very different story.
That seems hard to set up, but alas.
They want to be liberated from bills. If the angle were "AI is going to make your bills go away" everyone would be ecstatic about it. Instead it's "AI is going to make your job go away... so you can't pay your bills".
I think it's laudable (and unprecedented) that AI companies themselves are fairly gloomy about some potential prospects, and give people the opportunity to rally against them. Still needs work towards a solution, though.
What is your source on them being "the exact same types"?
I changed it to "I feel". I have Claude working on a script to validate or disprove my hypothesis.
Thanks for the call-out!
It is a large subsection, but a subsection, that rallies against both capitalism and AI. I haven't found that the '$$$ capitalism is great' crowd hates AI... which I do find ironic: but most things tend to fall into irony on that side of the spectrum, so I don't find it surprising.
We’re automating the interesting work with AI and leaving the drudge work for humans.
I think you have that backwards.
Who said it has to be AI?
"Capitalism sucks" has become a pretty universal slogan, but traditionally, leftists didn't want less labor (that's what the capital owners want); they wanted more control over their labour.
What they're really saying with "Capitalism sucks, free us from our labor" is "free us from wealth inequality." It remains to be seen whether AI can actually help with wealth inequality (I don't think it can, personally), but right now most people associate AI with job loss which is not helpful vis-a-vis inequality at all.
Disclaimer: I'm long-term bearish on the impacts of AI, but I'm also bearish on "Capitalism sucks" and don't make a habit of hanging around groups dedicated to shitting on either topic.
It might be, but I saw it happen to two people in my immediate social circle. And I'm pretty anti-social.
Hating on Waymo is trendy.
Hating on Tesla is the logical result of vehicles with door handles that won't open from the inside when the power is cut.
Hating on Tesla is easy because they are STILL led by a man-child who chose to sieg-heil behind the presidential podium. And he's still in charge of Tesla. At some point it's on Tesla too for continuing to have that person as CEO.
The people who think capitalism sucks are not the ones "harnessing" AI. The capitalists are. There is zero precedent that capital will do anything but exploit and oppress with this fancy new tool they've got (that everyone hates).
No way. The people that run these companies all watched Star Trek and learned the exact wrong lessons from it. If by "free you from your labor" you mean that you will get laid off from your job and have to take up residence under an overpass, I would agree, that is what they want to do.
This is all embedded in their future growth prospects. Nobody is interested in subsidizing AI as a public service forever. They're interested in "AI is going to make this company go 100x".
I agree that this dream of huge returns is luring investors.
I don't think that it will actually work that way. The barriers to making a useful model appear to be modest and keep getting lower. There are a lot of tasks where some AI is useful, but you don't need the very best model if there's a "good enough" solution available at lower prices.
I believe that the irrational exuberance of AI investors is effectively subsidizing technological R&D in this area before AI company valuations drop to realistic levels. Even if OpenAI ends up being analogous to Yahoo! (a currently non-sexy company that was once a darling of investors), their former researchers and engineers can circulate whatever they learned on the job to the organizations that they join later.
I think you fundamentally misunderstand leftists/Marxists here. They don't want to be "freed from labor". They want to own the value they produce instead of bartering their labor. In fact, Marxists tend to view Yang-style UBI as a disaster because their analysis of history is one of class struggle, and removing the masses from the thing that gives them an active role in that struggle (their labor) effectively deproletarianizes them. Can't exactly do a general strike to oppose a business or state's actions when things are already set up to be fine when you're not working. You instead just become a glorified peasant, reliant on the magnanimity of your patron but ultimately powerless to do anything if they make your life worse except hope they don't continue to worsen it.
I'm not arguing the Marxist view of history and class struggle here, just making it clear that outside of some reddit teenagers going through an anarchist phase, actual anti-capitalists don't think work will disappear when their worldview materializes.
The fact that modern leftists are (often) anti-technology is puzzling.
The point is not whether or not we have technology but who controls it.
Marxism fundamentally is: productive forces change the society, meaning the technology that exists at that point in time shapes the way people think.
https://en.wikipedia.org/wiki/Means_of_production#Marxism_an...
Yes, technological improvements are an important factor, but not a purely positive one:
> In Marx's work and subsequent developments in Marxist theory, the process of socioeconomic evolution is based on the premise of technological improvements in the means of production. As the level of technology improves with respect to productive capabilities, existing forms of social relations become superfluous and unnecessary as the advancement of technology integrated within the means of production contradicts the established organization of society and its economy.
In particular:
> According to Marx, escalating tension between the upper and lower class is a major consequence of technology decreasing the value of labor force and the contradictory effect an evolving means of production has on established social and economic systems. Marx believed increasing inequality between the upper and lower classes acts as a major catalyst of class conflicts[...]
> Ownership of the means of production and control over the surplus product generated by their operation is the fundamental factor in delineating different modes of production. [capitalism, communism, etc]
>The fact that modern leftists are (often) anti-technology is puzzling.
Not puzzling at all when the world has experienced earth-shattering advances in technology in the past 30-40 years, and the economic gains it has brought have not been reflected in similar reductions in labor for the workers. Why on earth would AI be any different than the cotton gin or the self checkout?
You can't just will a society to gain consciousness - it has to come from the productive forces. That is materialism.
Correct. So a future where AI does the majority of work means that the proletariat is no longer the historical subject; AI and its ownership class are. In this situation, AI will shape the society, not the workers. Not really a desirable outcome for anyone engaged in mass class politics.
If they could choose complete emancipation from poverty OR completely getting rid of the concept of billionaires, they would choose the second one. Their concern is not a person's absolute status but their status relative to others.
This is a machine that has been trained on vast amounts of stolen data.
This is a machine that is being actively sold by the companies that build it as something that will destroy jobs.
This is a machine that has a lot of cheerleaders who are actively hostile to people who say "I do not like that this plagiarism machine was trained on my work and is being sold as a way to destroy a craft that I have spent my entire life passionately devoted to getting good at".
This is a machine whose cheerleaders are quick to say that UBI is the solution to the massive unemployment that this machine is promising to create, and prone to never replying when asked what they are doing to help make UBI happen.
Sure, you can say that most of the problems people have with AI are problems with capitalism. This isn't wrong. But unless you can show me an example of how these giant plagiarism machines and/or the companies diverting ever-larger amounts of time and money into them are actively working to destroy capitalism and replace it with something much more equitable and kind, then your "this machine will free you from your labor" line is a bunch of total bullshit.
Care to explain why?