
Posted by skwee357 1/18/2026

Dead Internet Theory (kudmitry.com)
697 points | 697 comments
enos_feedler 1/19/2026|
The darkest hour is just before the dawn
andyish 1/19/2026||
I'm so torn on verification on social media. But I'm surprised companies whose main source of revenue is ads and original content aren't putting something akin to 'verified human' tags on users for all to see. Not just to show authenticity, but also to be able to say to ad buyers: your ads have been seen by x real users.

I mean sure, the next step will probably be "your ads have been seen by x real users and here are their names, emails, and mobile numbers" :(

As well as verification, there must be teams at Reddit/LinkedIn/wherever working on ways to identify AI content so it can be de-ranked.

Lammy 1/19/2026||
> which on most keyboard require a special key-combination that most people don’t know

As a prolific en- and em-dash user, I am sick of the em-dash slander :(

Sure, most of the general population probably doesn't know, but this article is specifically about Hacker News, and I would trust most of you to be able to remember one of:

- Compose, hyphen, hyphen, hyphen

- Option + Shift + hyphen

(Windows Alt code not mentioned because WinCompose <https://github.com/ell1010/wincompose>)
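If you ever need to check which dash you actually typed, the three characters differ only in codepoint. A quick standard-library Python check (illustrative only, not from the thread):

```python
import unicodedata

# The three dash characters in question: ASCII hyphen-minus, en dash, em dash.
for ch in "-\u2013\u2014":
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
```

Running it prints `U+002D HYPHEN-MINUS`, `U+2013 EN DASH`, and `U+2014 EM DASH`.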

georgeecollins 1/19/2026||
I don't think only AI says "yes, you are absolutely right". Many times I have made a comment here and then realized I was dead wrong, or someone disagreed with me by making a point that I had never thought of. I think this is because I am old and have realized I was never as smart as I thought I was, even when I was a bit smarter a long time ago. It's easy to figure out I am a real person and not AI; I even say things that people downvote prodigiously. I also say "you are right".
shayanbahal 1/19/2026||
Basically: Rage Bait is winning :/

> The Oxford Word of the Year 2025 is rage bait

> Rage bait is defined as “online content deliberately designed to elicit anger or outrage by being frustrating, provocative, or offensive, typically posted in order to increase traffic to or engagement with a particular web page or social media content”.

https://corp.oup.com/news/the-oxford-word-of-the-year-2025-i...

dalemhurley 1/19/2026||
You are absolutely right...:P

I don't mind people using AI to create open source projects; I use it extensively, but have a rule that I am responsible and accountable for the code.

Social media has become a hellscape of AI slop, with "influencers" trying to make quick money by overhyping slop to sell courses.

Maybe where you are from the em dash is not used, but in Queen's English speaking countries the em dash is quite common to represent a break of thought from the main idea of a sentence.

mrweasel 1/19/2026|
While I am fairly skeptical of LLMs and the current version of AI that is being peddled by companies like OpenAI, the main problem is, once again, social media.

That someone is vibe-coding their application, running the symptoms of their illness through ChatGPT, or using it to write their high school essay isn't really a problem. It's also not a massive issue that you can generate random videos. The problem is that social media not only allows propaganda to spread at the speed of light without any filter or verification, but actually encourages it in the name of profit.

Remove the engagement-retaining algorithms from social media, and stop measuring success in terms of engagement. Do that and AI will become much less of a problem. LLMs are a tool for generation; social media is the transport mechanism that makes the generated content actually dangerous.

Salgat 1/19/2026||
I'm not saying the entire internet needs to be this way, but I would love to see the expansion of non-anonymous/verified accounts used on web platforms. Take ycombinator for example; some of the best comments come from users with known identities and reputations tied to their accounts, rather than anonymous folks who can spew whatever nonsense without repercussion (and in some cases aren't even real people).
skwee357 1/19/2026|
On the other hand, take LinkedIn for example, and you get the bottom of corporate AI-slop.

I agree that anonymity makes people more hostile to others, but I doubt de-anonymization is the solution. Old school forums and IRC channels were, _mostly_, safe because they were (a) small, (b) local, and (c) usually had offline meetups.

Salgat 1/20/2026||
On the plus side, LinkedIn isn't full of hate and division like the mess that is Facebook. Everyone still acts accountable for their words.
granitepail 1/19/2026||
To me, it’s very obvious that the problem is social media. To social media, AI slop is peak efficiency: the affordances and incentives of the network encourage its creation. I don’t care for the media slop, but television, for example, has more or less been producing crap like this for a while.

I don’t think LLMs and video/image models are a negative at all. And it’s shocking to me that more people don’t share this viewpoint.

ulf-77723 1/19/2026||
Trying to figure out whether someone is human or bot seems to be a fun game for a lot of people. I was accused on Reddit of being a bot (of course I'm not), and English is not my mother tongue; yet people see what they want to see.

Maybe the future will be dystopian and talking to a bot to achieve a given task will be a skill? When we reach the point that people actually hate bots, maybe that will be a turning point?

narag 1/19/2026|
A couple of different uses of AI, recently spotted on YouTube:

1. There are channels specialized in topics like police bodycam and dashcam videos, or courtroom videos. AI there is used to generate the voice (and sometimes a very obviously fake talking head) and maybe the script itself. It seems to be a way to automate tasks.

2. Some channels are generating many infuriating videos about fake motorbike releases.
