Posted by speckx 7 hours ago

AI Resistance: some recent anti-AI stuff that’s worth discussing (stephvee.ca)
316 points | 313 comments
lxgr 5 hours ago|
> This isn’t exactly the modern equivalent of angry textile workers destroying power looms, but (if you’ll forgive the pun) it’s cut from the same cloth.

And how did that work out for the textile workers?

> The difference here (I hope) is that if enough of us pollute public spaces with misinformation intended for bots, it might be enough to compel AI companies to rethink the way they source training data.

This... seems like an absurd asymmetry in effort on the side of the attacker? At least destroying a power loom is much easier than building one.

Filtering out obvious garbage seems like a completely solved problem even with weak, cheap LLMs, and it's orders of magnitude more efficient than humans coming up with artisanal garbage.

zoogeny 6 hours ago||
I often question my own bias on this, because in my interactions with local non-tech people, the adoption of AI has affected pretty much everyone I know, and the reaction is, by my estimation, majority positive. I live in a fairly rural part of the PNW.

So when I read "People hate what AI is doing to our world." it honestly feels like either I am completely deluded or the author is. It feels like a high school bully saying "No one here likes you" to try to gaslight his victim.

I mean, obviously there are many vocal opponents of AI; I see them on social media, including here on HN. And I hear some trepidation in person as well. But almost everyone I know, from tradespeople to teachers, is adopting AI in some capacity and reports positive uses and interactions.

xg15 6 hours ago||
> teachers

Given all the borderline apocalyptic articles about how students are using it to cheat and teachers have no way to stop them, I'd be honestly surprised by that.

zoogeny 6 hours ago|||
All I can offer is my anecdotal experience. One teacher described using it to generate quizzes on material: he gets course material in the form of PDFs, uploads them to the AI, and has it generate questions.

On the flip side, one of my other teacher friends has instituted a no phone policy in his classroom.

dualvariable 5 hours ago||
Yeah, a friend of mine is a teacher and is using it to generate material for the classroom all the time, and it dramatically increases her productivity. The school administrators have also been pretty impressed by it, and told her to keep it up more or less.
aksss 5 hours ago|||
There was an article about a year ago concerning students using AI to complete work, and teachers using AI tools to detect whether AI tools were being used to complete the work, so (even a year ago) you found this absurd scenario where it was just robots checking the work of other robots. Did a quick search for said article but couldn't find it. Anyway, humorous. Coupled with the WaPo article today about people "speed-running" their degrees, it's a wacky, wacky world for "education". https://archive.is/bPi82
BeetleB 5 hours ago|||
The bulk of the anti-AI sentiment I see is from people who spend a huge amount of time online (or on HN). Not regular folks.

Most people don't care if something is written by an AI as long as it is reasonable, and reflects the intent of the human who prompted the AI.

If consuming material online (videos, web sites, online forums) is not something you do a lot of, you're relatively unimpacted by LLMs (well, except the whole jobs situation...).

jolt42 4 hours ago|||
It's easy to chalk it up to "fear of the unknown", when in reality it's both good and bad depending on who's wielding it. It can be used to tear down or build up, to solve problems or create problems, just like every advance before it. So while I'm generally excited about where it can go, I guess I don't mind being reminded there can be downsides.
alfalfasprout 6 hours ago|||
Fascinating, because I've seen the exact opposite across the PNW.
jolt42 4 hours ago|||
Anecdotally, I was just there and ran into a couple of anti-AI people, and it's not like I was bringing it up. All I got was that they were worried about the water and the heat produced. I wonder if someone has done an analysis of old search vs. AI search; I most definitely get the info I want quicker with AI, but whether that makes up for the LLM cost, I have no idea.
zoogeny 6 hours ago|||
That is why I question my own bias. One possible explanation is that I am AI positive. So when people test out "What do you think about AI?" my own responses are generally positive. That probably filters out people who don't want to argue or contradict.

This kind of effect would work both ways. People who are non-confrontational in general will choose to keep quiet if their opinions differ. In this view, both pro-AI and anti-AI sides might find themselves having their bias confirmed due to opposing views self-silencing to avoid conflict.

rmdashrfv 6 hours ago||
I'd say that the molotov cocktails being thrown at the house of an AI company CEO being met with mostly praise and a little bit of apathy is a good hint you might actually be in a bubble
zoogeny 5 hours ago||
I don't agree; in fact, I would say that if I were surrounded by people glorifying violence, that would suggest I was in an extreme minority.

It reminds me of similar late-stage-capitalism-like activity, from the assassination of the insurance company CEO to the fire-bombing of Teslas. It is hard to disentangle hate that is based on economic inequality or power imbalance from hate directed explicitly at AI. That is especially true since one narrative suggests that both types of inequality (economic and power) may be accelerated by an unequal distribution of access to AI.

So we might end up in an argument over whether the hate that drives the violence is towards AI at all, or if that is merely a symptom of existing anti-capitalist sentiment that is on the rise.

KronisLV 6 hours ago||
I bet it's easy to be against AI, instead of against those who use it in inhumane ways (and hold considerably more power). To them, AI is just a tool. If it weren't AI, it would be buildings full of people and automated devices: posting misinformation, outsourcing jobs and pushing for a gig economy instead of respectable employment, running understaffed call centres and bad phone trees or knowledge bases that basically tell you to f off, lobbying against workers' rights, pursuing regulatory capture, and any number of other misaligned motivations.
damnesian 6 hours ago||
Thanks to this lovely site, and my distaste for AI, I've found a whole ecosystem of minimalist blogs and artists' personal sites. It's shifting my habits and foci. I don't do socials anymore except forums like this.

Maybe I have slop to thank for it.

sn0n 4 hours ago||
Let’s just break trust more. Makes sense amirite? LoL, in what reality does this even make sense??? it’s literally just spreading misinformation to people who can’t read between the lines because the tism stick got em before they were born.
alyxya 6 hours ago||
This seems like a wasted effort when AI will primarily learn the majority consensus view and not one-off misinformation. AI tries to learn pattern matching for generalization, so garbage data doesn't make AI learn the wrong patterns, at best just slows down learning the actual patterns. When most compute for training is spent on curated data and RL rather than random web-scraped data, the impact is likely negligible.
Mordisquitos 6 hours ago||
> This seems like a wasted effort when AI will primarily learn the majority consensus view and not one-off misinformation.

We have evidence to the contrary. Two blog articles and two preprints of fake academic articles [0] were able to convince CoPilot, Gemini, ChatGPT and Perplexity AI of the existence of a fake disease, against all majority consensus. And even though the falsity of this information was made public by the author of the experiment and the results of their actions were widely published, it took a while before the models started to get wind of it and stopped treating the fake disease as real. Imagine what you can do if you publish false information and have absolutely no reason to later reveal that you did so in the first place.

[0] https://www.nature.com/articles/d41586-026-01100-y

gwern 6 hours ago|||
> Two blog articles and two preprints of fake academic articles [0] were able to convince CoPilot, Gemini, ChatGPT and Perplexity AI of the existence of a fake disease, against all majority consensus

Wrong. There is no 'majority consensus' against 'bixonimania' because they made it up; that was the point. It's unsurprisingly easy to get LLMs to repeat the only source on a term never before seen. This usually works; made-up neologisms are the fruit fly of data poisoning because they are so easy to do and so unambiguous about where the information came from. (And retrieval-based poisoning is the very easiest, laziest, and most meaningless kind of poisoning, tantamount to just copying the poison into the prompt and asking a question about it.) But the problem with them is that, also by definition, it is hard for them to matter; why would anyone be searching or asking about a made-up neologism? And if it gets any criticism, the LLMs will pick that up, as your link discusses. (In contrast, the more sources are affected, the harder it is to assign blame; some paper mills picked up 'bixonimania'? Well, they might've gotten it from the poisoned LLMs... or they might've gotten it from the same place the LLMs did which poisoned their retrievals, Medium et al.)

Mordisquitos 5 hours ago||
The LLMs didn't only talk about the disease when prompted by the neologism. They also brought it up when asked about the symptoms. From the article:

> OpenAI’s ChatGPT was telling users whether their symptoms amounted to bixonimania. Some of those responses were prompted by asking about bixonimania, and others were in response to questions about hyperpigmentation on the eyelids from blue-light exposure.

And yes, sure, in this example the scientific peer-review process may have eventually criticised and countered 'bixonimania' as a hoax had the researcher never revealed its falsity—emphasis on 'may'; few researchers have the time and energy to trawl through crap paper-mill articles and publish criticisms. Either way, that is a feature of the scientific process and is not a given for any online information.

What happens when false information is divulged by other means that do not attempt to self-regulate? And how do we distinguish one-off falsities from the myriad of obscure true things that the public is expecting LLMs to 'know' even when there is comparatively little published information about them and therefore no consensus per se?

gwern 47 minutes ago||
"Hyperpigmentation on the eyelids from blue-light exposure" is a super specific query that is almost definitionally 'bixonimania', and it probably brought up the 'bixonimania' poison at the time (the search hits for that query right now in Google are weak and poorly relevant, so it would not be hard to outrank them, or at least get into the top 50 or so where a retrieval LLM would see them and follow up), so it is still an instance of what I mean.

> Either way, that is a feature of the scientific process and is not a given to any online information.

Which does not distinguish it in any way from human errors like a crank or activist etc.

And I don't know, how did we handle false information before on niche topics no one cared about and which were unimportant? It's just noise. The worldwide corpus has always been full of extremely incorrect, mislabeled, corrupted, distorted, information on niche topics of no importance. But it's generally not important.

alyxya 5 hours ago|||
All the examples you gave are chatbots with web search integrated. Are you sure those chatbots didn't just reference false information they found in web searches? That's fundamentally different from poisoning the training of AI models.

> The problem was that the experiment worked too well. Within weeks of her uploading information about the condition, attributed to a fictional author, major artificial-intelligence systems began repeating the invented condition as if it were real.

This seems to imply the poisoning affected the web search results, not the actual model itself, because it takes months for data to make it into a trained base model.

alfiedotwtf 6 hours ago|||
In the pre-AI-collapse era, we called this PageRank ;)
righthand 6 hours ago||
What is the pattern for truth if I flood your data with lies?
Jtarii 6 hours ago||
The same way humans deal with it: check it against multiple reputable sources.
chongli 6 hours ago|||
We already learned how to defeat this from SEO spammers and citation farmers: by building networks that cross reference and corroborate one another’s fake stories.

We’re already at a point where much of the academic research you find in online databases can’t be trusted without vetting through real world trustworthy institutions and experts in relevant fields. How is an LLM supposed to do this kind of vetting without the help of human curators?

If all the LLM training teams have to stop indiscriminate crawling and fall back to human curation and data labeling then the poisoners will have won.

righthand 6 hours ago|||
Some of the reputable sources are taking the flood of lies as possible truth. Now what?
pj_mukh 6 hours ago||
Fortunately, the slop you visibly see online is just the tip of the iceberg. I would guess 80% of AI's real usage hides beneath the surface in back-office documentation consumption, software development, process optimization and automation, investments in new endeavors companies would've never thought possible/financially feasible etc. All of that usage is hidden from this resistance, and possible now with current models (so all this new poisoning is irrelevant). The valuations could go away tomorrow, and it would've still fundamentally changed the nature of the economy.

It doesn't matter that you don't like the slop in the LinkedIn post; ban it. I think the visible slop on our various feeds that is driving people mad is a rounding error for the AI companies. Moreover, it's more a function of the attention economy than the AI economy, and it should've been regulated to all holy hell back in 2015 when the enshittification began.

Now is as good a time as any.

overgard 5 hours ago||
Honestly, it's no wonder there's a lot of pushback. We have these irresponsible CEOs talking nonstop about taking people's jobs at a time when people are struggling to make ends meet, all while taking in insane cash infusions. Why wouldn't people loathe AI at this point, when the marketing is "we're going to fuck you over and there's nothing you can do about it"?
pesus 4 hours ago|
Billboard company puts up billboards saying "don't hire humans".

HN comments: "I just don't understand why people hate AI".

miltonlost 6 hours ago||
Anyone conflating “kicking over AI delivery bots” and “throwing a Molotov cocktail at Altman’s house” as equally condemnable hasn’t actually been forced off the sidewalk by one of these delivery bots. They are dangerous, anti-human ADA nightmares. They shouldn’t be allowed on sidewalks, emphasis on walk.
Aboutplants 6 hours ago|
Maybe when the entire marketing of AI is fear-mongering and doom (all your jobs are going away!), the end result is something you should have expected from the very beginning.