Posted by speckx 5 hours ago

AI Resistance: some recent anti-AI stuff that’s worth discussing(stephvee.ca)
290 points | 283 comments
jumploops 3 hours ago|
I've noticed this trend most heavily on Reddit.

Some communities are very pro-AI, adding AI summary comments to each thread, encouraging AI-written posts, etc.[0]

Many subreddits are AI cautious[1][2], and a subset of those are fully anti-AI[3].

Apart from these "AI-focused" communities, it seems each "traditional" subreddit sits somewhere on the spectrum (photographers dealing with AI skepticism of their work[4], programmers mostly like it but still skeptical[5]).

[0]https://www.reddit.com/r/vibecoding/

[1]https://www.reddit.com/r/isthisAI/

[2]https://www.reddit.com/r/aiwars/

[3]https://www.reddit.com/r/antiai/

[4]https://www.reddit.com/r/photography/comments/1q4iv0k/what_d...

[5]https://www.reddit.com/r/webdev/comments/1s6mtt7/ai_has_suck...

lxgr 3 hours ago|
Reddit (and more generally, human) groupthink in a nutshell. "Quick, clearly position yourself on this one-dimensional line (or maybe even better, sort yourself into one of these two sets) so we don't have to engage in that pesky nuance thing!"
jumploops 2 hours ago||
Yes, groupthink certainly seems to be pushing each community into the false dichotomy of AI good/bad, even if it's still early days.

Another example from `r/bayarea` where the author is OK with AI but the top comments are increasingly wary of its potential for harm[0]

[0]https://www.reddit.com/r/bayarea/comments/1sp8wvz/is_it_just...

jmmcd 4 hours ago||
> Since these companies can’t improve their AI models without fresh data created by human beings

Totally wrong. Self-play dates back to Arthur Samuel in the 1950s and RL with verifiable rewards is a key part of training the most advanced models today.
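
For the unfamiliar, "verifiable rewards" means the reward signal comes from a programmatic check rather than fresh human labels. A toy sketch of the idea (the model.generate/model.update calls are hypothetical stand-ins, not any real API):

    # Toy sketch of RL with verifiable rewards (RLVR): the reward comes
    # from running generated code against tests, not from human labels.
    import os
    import subprocess
    import tempfile

    def verifiable_reward(candidate_code: str, test_code: str) -> float:
        """1.0 if the candidate passes the tests, else 0.0."""
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(candidate_code + "\n" + test_code)
            path = f.name
        try:
            result = subprocess.run(
                ["python", path], capture_output=True, timeout=10
            )
            return 1.0 if result.returncode == 0 else 0.0
        except subprocess.TimeoutExpired:
            return 0.0  # infinite loops score zero
        finally:
            os.unlink(path)

    # Training outline (hypothetical model API): sample k completions,
    # score each with the verifier, reinforce the high-reward ones.
    # for prompt, tests in task_pool:
    #     samples = [model.generate(prompt) for _ in range(k)]
    #     rewards = [verifiable_reward(s, tests) for s in samples]
    #     model.update(prompt, samples, rewards)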

rdedev 4 hours ago||
Not totally wrong. Self-play works well if your problem can be easily simulated in an RL environment where the model can cheaply explore different states. RLHF and similar techniques aren't that, since we don't exactly have a simulation environment for language modelling.

Right now there are companies that hire software devs or data scientists just to solve a bunch of random problems so that they can generate training data for an LLM. Why would they be in business if self-play worked out so well?
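
For contrast, this is the kind of environment self-play needs: a toy Gym-style world where states and rewards are free to simulate (nothing here is a real library API). There's no equivalent step() that scores arbitrary natural-language output:

    # Toy Gym-style environment: transitions and rewards are fully
    # simulable, so an agent can explore states for free. Language
    # modelling has no such simulator to score arbitrary text.
    import random

    class LineWorld:
        """Agent walks a 1-D line; reward only at the rightmost cell."""

        def __init__(self, size: int = 5):
            self.size = size
            self.pos = 0

        def reset(self) -> int:
            self.pos = 0
            return self.pos

        def step(self, action: int):
            """action is -1 (left) or +1 (right)."""
            self.pos = max(0, min(self.size - 1, self.pos + action))
            done = self.pos == self.size - 1
            return self.pos, 1.0 if done else 0.0, done

    env = LineWorld()
    state, done = env.reset(), False
    while not done:  # even a random policy explores the whole state space
        state, reward, done = env.step(random.choice([-1, 1]))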

notpachet 4 hours ago|||
> Right now there are companies which hire software devs or data scientists to just solve a bunch of random problems so that they can generate training data for an LLM model.

Sounds like Macrodata Refinement.

vidarh 4 hours ago|||
> Why would they be in business if self play can work out so well?

Because it is still cheaper.

cubefox 4 hours ago||
Current models don't yet use RLVR with self-play though, at least as far as we know. They use RLVR with large numbers of manually created RL environments.

But they will probably use self-play soon. See https://www.amplifypartners.com/blog-posts/self-play-and-aut...
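
In the LLM setting, "self-play" would look roughly like a proposer model inventing tasks with mechanically checkable answers and a solver model attempting them, with only verified pairs feeding the next round. A toy sketch, with arithmetic standing in for both models (all names hypothetical):

    # Toy self-play loop: proposer invents checkable tasks, solver
    # attempts them, and only verified pairs become training data.
    import random

    def propose() -> tuple[str, int]:
        """Proposer role: a task with a known, checkable answer."""
        a, b = random.randint(1, 99), random.randint(1, 99)
        return f"What is {a} + {b}?", a + b

    def solve(task: str) -> int:
        """Solver role: stand-in for a sampled model completion."""
        body = task.removeprefix("What is ").removesuffix("?")
        a, b = (int(x) for x in body.split(" + "))
        return a + b if random.random() > 0.2 else a - b  # sometimes wrong

    verified = []
    for _ in range(1000):
        task, answer = propose()
        attempt = solve(task)
        if attempt == answer:  # verifiable reward: exact-match check
            verified.append((task, attempt))
    # train(model, verified)  # hypothetical fine-tuning step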

cesarvarela 4 hours ago||
I wonder if this will have the opposite effect and produce something similar to antibiotic resistance, making AIs better at handling "poison."
jrflo 5 hours ago||
If you're concerned about the environmental impact of AI, it seems a bit counterproductive to trick hyperscalers into burning more compute.
dgan 4 hours ago||
I don't have an opinion on the efficacy of such poisoning, but your comment is about as useful as "when being violently attacked, do not resist, as you only make yourself suffer for longer".
sov 4 hours ago||
Maybe, but they're burning the compute regardless. It seems plausible that reducing the ROI on compute burnt will cause less compute to be burnt long term.
fuddle 5 hours ago||
> The poison fountain itself is hosted on rnsaffn.com

Would the scrapers not just add these sites to a do-not-crawl list?
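
The naive version of that list is trivial to implement (rnsaffn.com is from the article; everything else here is illustrative):

    # Minimal do-not-crawl filter: skip known poison hosts before
    # fetching. rnsaffn.com is from the article; the rest is made up.
    from urllib.parse import urlparse

    DO_NOT_CRAWL = {"rnsaffn.com"}

    def should_fetch(url: str) -> bool:
        host = urlparse(url).hostname or ""
        # block the domain itself and any subdomain of it
        return not any(
            host == d or host.endswith("." + d) for d in DO_NOT_CRAWL
        )

    assert should_fetch("https://example.com/page")
    assert not should_fetch("https://rnsaffn.com/fountain")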

chongli 4 hours ago||
I assume the poisoner community is mirroring and likely remixing the content from there. The whole effort isn’t going to work with a single point of failure like that.
ErroneousBosh 4 hours ago|||
Cool, so if you do that they just won't scrape your site?
cute_boi 5 hours ago|||
And someone will come up with a service, anti-anti-ai.dev, which will charge labs money to filter out these sites.
Jtarii 5 hours ago||
Also, aren't models like Mythos capable of checking for poisoned data on their own at this point?
graphememes 3 hours ago||
They do realize they filter this stuff out, right? You're just making someone else's job more lucrative.
IAmGraydon 2 hours ago|
No, because if properly implemented, it's extremely difficult or impossible to discern as poison.
jadar 3 hours ago||
Hasn’t griefing and trolling been a thing on the internet for a while? What makes this unique just because it’s AI instead of whatever else?
lxgr 3 hours ago|
> What makes this unique just because it’s AI instead of whatever else?

I'd say the notion that expensive acts of sabotage (that can be cheaply neutralized) are a worthwhile pastime and anything other than virtue signaling is somewhat perplexing. (Not in a good way.)

SwellJoe 3 hours ago||
Any human-scale "attack", e.g. the made-up Everybody Loves Raymond episode, isn't doing anything to hurt LLM training data. It might even help them detect exaggeration, satire, etc. when read in context and with other knowledge from other sources (like scraping IMDB or whatever, and already knowing the cast and plot summary of every episode of Everybody Loves Raymond).

If there is an effective way to poison them, it'll be automated. And, it'll probably rely on an LLM to produce the poison, since it has to look legit enough to pass the quality filtering and classification stage of the data ingestion process, which is also probably driven by an LLM.

One reason small models are getting better is because the training data being used is not just getting bigger, it's getting cleaner and classified more correctly/precisely. "Model collapse" hasn't happened yet, even though something like half the web is AI slop, because as the models get smarter for human use in a variety of contexts, they also get smarter for use in preparing data for training the next model. There may very well still be a risk of a mad-cow-disease-like problem for LLMs, but I doubt a Markov chain website is going to contribute. The models still can't always tell fact from fiction, but they're not being hoodwinked by a nonsense generator.
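
A hedged sketch of what that ingestion-time gate might look like: cheap heuristics first, then a model-based score. The quality_score function is a dummy stand-in (a real pipeline would call an actual classifier), and all thresholds are invented for illustration:

    # Sketch of an ingestion-time quality gate: cheap heuristics first,
    # then a learned score. quality_score is a dummy stand-in for a
    # real classifier call; all thresholds are illustrative.

    def heuristic_ok(doc: str) -> bool:
        words = doc.split()
        if len(words) < 50:
            return False
        # Markov-chain output tends to loop; flag low lexical diversity.
        return len(set(words)) / len(words) > 0.3

    def quality_score(doc: str) -> float:
        """Dummy stand-in for a learned quality/slop classifier (0-1)."""
        sentences = [s for s in doc.split(".") if s.strip()]
        avg = sum(len(s.split()) for s in sentences) / max(len(sentences), 1)
        return min(avg / 20.0, 1.0)

    def keep(doc: str, threshold: float = 0.5) -> bool:
        return heuristic_ok(doc) and quality_score(doc) >= threshold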

sn0n 2 hours ago||
Let's just break trust more. Makes sense, amirite? Lol, in what reality does this even make sense? It's literally just spreading misinformation to people who can't read between the lines because the tism stick got 'em before they were born.
zoogeny 4 hours ago|
I often question my own bias on this, because in my interactions with local non-tech people, AI adoption has touched pretty much everyone I know, and by my estimation the reaction is majority positive. I live in a fairly rural part of the PNW.

So when I read "People hate what AI is doing to our world." it honestly feels like either I am completely deluded or the author is. It feels like a high school bully saying "No one here likes you" to try to gaslight his victim.

I mean, obviously there are many vocal opponents of AI; I see them on social media, including here on HN. And I hear some trepidation in person as well. But almost everyone I know, from tradespeople to teachers, is adopting AI in some capacity and reports positive uses and interactions.

xg15 4 hours ago||
> teachers

Given all the borderline-apocalyptic articles about how students are using it to cheat and teachers have no way to stop them, I'd honestly be surprised by that.

zoogeny 4 hours ago|||
All I can offer is my anecdotal experience. One teacher described using it to generate quizzes on course material: he gets the material as PDFs, uploads them to the AI, and has it generate questions.

On the flip side, one of my other teacher friends has instituted a no phone policy in his classroom.

dualvariable 4 hours ago||
Yeah, a friend of mine is a teacher and uses it to generate classroom material all the time, and it dramatically increases her productivity. The school administrators have been pretty impressed by it too, and more or less told her to keep it up.
aksss 3 hours ago|||
There was an article about a year ago about students using AI to complete work, and teachers using AI tools to detect whether AI had been used to complete the work, so (even a year ago) you had this absurd scenario of robots checking the work of other robots. I did a quick search for said article but couldn't find it. Anyway, humorous. Coupled with the WaPo article today about people "speed-running" their degrees, it's a wacky, wacky world for "education". https://archive.is/bPi82
BeetleB 3 hours ago|||
The bulk of the anti-AI sentiment I see is from people who spend a huge amount of time online (or on HN). Not regular folks.

Most people don't care if something is written by an AI as long as it is reasonable, and reflects the intent of the human who prompted the AI.

If consuming material online (videos, web sites, online forums) is not something you do a lot of, you're relatively unimpacted by LLMs (well, except the whole jobs situation...).

jolt42 3 hours ago|||
It's easy to chalk it up to "fear of the unknown", when in reality it's both good and bad depending on who's wielding it. It can be used to tear down or build up, to solve problems or create them, just like every advance before it. So while I'm generally excited about where it can go, I guess I don't mind being reminded there can be downsides.
alfalfasprout 4 hours ago|||
Fascinating, because I've seen the exact opposite across the PNW.
jolt42 3 hours ago|||
Anecdotally, I was just there and ran into a couple of anti-AI people, and it's not like I was bringing it up. All I got was that they were worried about the water and the heat produced. I wonder if someone has done an analysis of old search vs. AI search; I most definitely get the info I want quicker with AI, but whether that makes up for the LLM cost, I have no idea.
zoogeny 4 hours ago|||
That is why I question my own bias. One possible explanation is that I am AI-positive, so when people float "What do you think about AI?", my own responses are generally positive. That probably filters out people who don't want to argue or contradict me.

This kind of effect would work both ways. People who are non-confrontational in general will choose to keep quiet if their opinions differ. In this view, both pro-AI and anti-AI sides might find themselves having their bias confirmed due to opposing views self-silencing to avoid conflict.

rmdashrfv 4 hours ago||
I'd say that molotov cocktails being thrown at the house of an AI company CEO, and that being met with mostly praise and a little bit of apathy, is a good hint you might actually be in a bubble.
zoogeny 4 hours ago||
I don't agree. In fact, I would say that if I were surrounded by people glorifying violence, that would suggest I was in an extreme minority.

It reminds me of similar late-stage-capitalism activity, from the assassination of the insurance company CEO to the firebombing of Teslas. It is hard to disentangle hate that is based on economic inequality or power imbalance from hate directed explicitly at AI. That is especially true since one narrative suggests that both types of inequality (economic and power) may be accelerated by an unequal distribution of access to AI.

So we might end up in an argument over whether the hate that drives the violence is towards AI at all, or if that is merely a symptom of existing anti-capitalist sentiment that is on the rise.
