Posted by speckx 5 hours ago
Some communities are very pro-AI, adding AI summary comments to each thread, encouraging AI-written posts, etc.[0]
Many subreddits are AI-cautious[1][2], and a subset of those are fully anti-AI[3].
Apart from these "AI-focused" communities, each "traditional" subreddit seems to sit somewhere on the spectrum (photographers dealing with AI skepticism of their work[4], programmers who mostly like it but remain skeptical[5]).
[0]https://www.reddit.com/r/vibecoding/
[1]https://www.reddit.com/r/isthisAI/
[2]https://www.reddit.com/r/aiwars/
[3]https://www.reddit.com/r/antiai/
[4]https://www.reddit.com/r/photography/comments/1q4iv0k/what_d...
[5]https://www.reddit.com/r/webdev/comments/1s6mtt7/ai_has_suck...
Another example from `r/bayarea`, where the author is OK with AI but the top comments are increasingly wary of its potential for harm[0].
[0]https://www.reddit.com/r/bayarea/comments/1sp8wvz/is_it_just...
Totally wrong. Self-play dates back to Arthur Samuel in the 1950s and RL with verifiable rewards is a key part of training the most advanced models today.
Right now there are companies that hire software devs or data scientists just to solve a bunch of random problems so they can generate training data for an LLM. Why would they be in business if self-play worked so well?
Sounds like Macrodata Refinement.
Because it is still cheaper.
But they will probably use self-play soon. See https://www.amplifypartners.com/blog-posts/self-play-and-aut...
Wouldn't the scrapers just add these sites to a do-not-crawl list?
I'd say the notion that expensive acts of sabotage (that can be cheaply neutralized) are a worthwhile pastime and anything other than virtue signaling is somewhat perplexing. (Not in a good way.)
If there is an effective way to poison them, it'll be automated. And, it'll probably rely on an LLM to produce the poison, since it has to look legit enough to pass the quality filtering and classification stage of the data ingestion process, which is also probably driven by an LLM.
One reason small models are getting better is because the training data being used is not just getting bigger, it's getting cleaner and classified more correctly/precisely. "Model collapse" hasn't happened yet, even though something like half the web is AI slop, because as the models get smarter for human use in a variety of contexts, they also get smarter for use in preparing data for training the next model. There may very well still be risks of a mad-cow-disease-like problem for LLMs, but I doubt a Markov chain website is going to contribute. The models still can't always tell fact from fiction, but they're not being hoodwinked by a nonsense generator.
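For concreteness, here is a minimal sketch of the kind of generator a "Markov chain website" typically runs (the function names are my own, for illustration): a bigram table mapping each word to its observed successors, random-walked to emit text. Each adjacent word pair is plausible, but there is no coherence beyond that, which is exactly what an LLM-driven quality filter catches.

```python
import random
from collections import defaultdict

def build_bigram_model(text):
    """Map each word to the list of words that followed it in the corpus."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=20, seed=None):
    """Random-walk the bigram table to emit locally plausible nonsense."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: the last word never appeared mid-corpus
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the log"
model = build_bigram_model(corpus)
print(generate(model, "the", length=8, seed=0))
```

Every emitted word pair occurs somewhere in the source text, so the output passes naive n-gram checks while carrying no paragraph-level meaning — the gap a classifier trained on real prose exploits.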
So when I read "People hate what AI is doing to our world," it honestly feels like either I am completely deluded or the author is. It feels like a high school bully saying "No one here likes you" to try to gaslight his victim.
I mean, obviously there are many vocal opponents of AI; I see them on social media, including here on HN. And I hear some trepidation in person as well. But almost everyone I know, from tradespeople to teachers, is adopting AI in some capacity and reports positive uses and interactions.
Given all the borderline-apocalyptic articles about how students are using it to cheat and teachers have no way to stop them, I'd honestly be surprised by that.
On the flip side, one of my other teacher friends has instituted a no phone policy in his classroom.
Most people don't care if something is written by an AI as long as it is reasonable and reflects the intent of the human who prompted the AI.
If consuming material online (videos, web sites, online forums) is not something you do a lot of, you're relatively unimpacted by LLMs (well, except the whole jobs situation...).
This kind of effect would work both ways. People who are non-confrontational in general will choose to keep quiet if their opinions differ. In this view, both pro-AI and anti-AI sides might find themselves having their bias confirmed due to opposing views self-silencing to avoid conflict.
It reminds me of similar late-stage-capitalism-style activity, from the assassination of the insurance company CEO to the fire-bombing of Teslas. It is hard to disentangle hate that is based on economic inequality or power imbalance from hate directed explicitly at AI. That is especially true since one narrative suggests that both types of inequality (economic and power) may be accelerated by an unequal distribution of access to AI.
So we might end up in an argument over whether the hate that drives the violence is towards AI at all, or if that is merely a symptom of existing anti-capitalist sentiment that is on the rise.