
Posted by skwee357 1 day ago

Dead Internet Theory(kudmitry.com)
619 points | 649 comments
seiferteric 17 hours ago|
My parents were tricked the other day by a fake YouTube video of a "racist cop" doing something bad, and they got outraged by it. I watched part of the video, and even though it felt off, I couldn't immediately tell for sure whether it was fake. Nevertheless, I googled the names and details and found nothing but repostings of the video. Then I looked at the YouTube channel info, and there it said it uses AI for "some" of the videos to recreate "real" events. I really doubt that... it all looks fake. I'm just worried about how much divisiveness this kind of stuff will create, all so someone can profit off of YouTube ads. It's sad.
spicyusername 6 hours ago||

> divisiveness this kind of stuff will create
I'm pretty sure we're already decades in to the world of "has created".

Everyone I know has strong opinions on every little thing, based exclusively on their emotional reactions and feed consumption. Basically no one has the requisite expertise commensurate with their conviction, but being informed is not required to be opinionated or exasperated.

And who can blame them (us). It is almost impossible to escape the constant barrage of takes and news headlines these days without being a total luddite. And each little snippet worms its way into your brain (and well being) one way or the other.

It's just been too much for too long and you can tell.

_heimdall 5 hours ago|||
> It is almost impossible to escape the constant barrage of takes and news headlines these days without being a total luddite

It's odd to me to still use "luddite" disparagingly while implying that avoiding certain tech would actually have high-impact benefits. At that point I can't help but think the only real issue with being a luddite is not following the crowd and fitting in.

spicyusername 3 hours ago|||
I didn't use it disparagingly.

In fact, it's easier than ever to see the intended benefit of such a lifestyle.

dugidugout 5 hours ago||||
Which also has a term with stigma: hipster
pousada 4 hours ago||
Hipster used to mean that, but the meaning changed to someone who "doesn't fit in" only for performative reasons: not really "for real," just to project an image of how cool they are
dugidugout 3 hours ago||
Alas, something something capitalism.
KPGv2 3 hours ago|||
> Its odd to me to still use "luddite" disparagingly while implying that avoiding certain tech would actually have some high impact benefits

They didn't say to avoid certain tech. They said to avoid takes and news headlines.

Your conflation of those two is like someone saying "injecting bleach into your skin is bad" and you responding with "oh, so you oppose cleaning bathrooms [with bleach]?"

gooob 6 hours ago||||
it's malware in the mind. it was happening before deep fakes were possible. news outlets and journalists have always had an incentive to present extreme takes to get people angry, because that sells. now we have tools that pretty much just accelerate and automate that process. it's interesting. it would be helpful to figure out how to prevent people (especially upcoming generations) from getting swept away by all this.
butlike 4 hours ago||
I think fatigue will set in and the next generation will 'tock' back from this 'tick.' Getting outraged by things is already feeling antiquated to me, and I'm in my 30's.
Ntrails 5 hours ago||||
> It is almost impossible to escape the constant barrage of takes and news headlines these days without being a total luddite

It really isn't that hard, if I'm looking at my experience. Maybe a little stuff on here counts. I get my news from the FT, it's relatively benign by all accounts. I'm not sure that opting out of classical social media is particularly luddite-y, I suspect it's closer to becoming vogue than not?

Being led around by the nose is a choice still, for now at least.

abustamam 3 hours ago||
I think the comment you're replying to isn't necessarily a question of opting out of such news, it's the fact that it's so hard to escape it. I swipe on my home screen and there I am, in my Google news feed with the constant barrage of nonsense.

I mostly get gaming and entertainment news for shows I watch, but even between those I get CNN and Fox News, both which I view as "opinion masquerading as news" outlets.

My mom shares so many articles from her FB feed that are both mainstream (CNN, etc) nonsense and "influencer" nonsense.

Ntrails 2 hours ago||
Right, and my point is how easy opting out actually is.

I have no news feed on my phone, and I doubt it's any harder to avoid on Android. Social media itself is gone. The closest I get to click-bait is when my mother spouts something gleaned from the Daily Mail. That vector is harder to shift, I concede!

abustamam 2 hours ago|||
Fair points on both fronts! Though I think you may be conflating simple with easy. Removing social media from one's life is certainly simple (just uninstall the app!), but it's not that easy for some people because it's their only method of communication with some folks. I mostly don't use SM but I log onto Instagram because some of my friends only chat there, same with Facebook.
b00ty4breakfast 1 hour ago||||
>I'm pretty sure we're already decades in to the world of "has created".

Simulacra and Simulation came out in '81, for an example of how long this has been a recognized phenomenon

sdoering 1 hour ago||||
> It is almost impossible to escape the constant barrage of takes and news headlines these days without being a total luddite.

Then I am very proudly one. I don't do TikTok, FB, IG, LinkedIn or any of this crap. I do a bit of HN here and there. I follow a curated list of RSS feeds. And twice a day I look at a curated/grouped list of headlines from around the world, built from a multitude of sources.

Whenever I see a yellow press headline from the German bullshit print medium "BILD" when paying for gas or out shopping, I can't help but smile. That people pay money for that shit is - nowadays - beyond me.

To be fair, this was a long process, and I still regress sometimes. I started my working life on an editorial team for an email portal. Our job was to generate headlines that would stop people who logged in to read their mail and get them to read our crap instead, because ads embedded within content paid much better than ads around email.

So I actually learned the trade. And learned that outrage (or sex) sells. This was some 18 or so years ago - the world changed since then. It became even more flammable. And more people seem to be playing with their matches. I changed - and changed jobs and industries a few times.

So over time I reduced my news intake. And during the pandemic I learned to definitely reduce my social media usage; it is just not healthy for my state of mind, because I am way too easily dopamine-addicted and trigger-able. I am a classic xkcd.com/386 case.

bombcar 4 hours ago||||
I honestly think it might be downstream of individualized mass-market democracy; each person is tasked with fully understanding the world as it is so they can make the correct decisions at all levels of voting, but ain't nobody got time for that.

So we emotionally convince ourselves that we have solved the problem so we can act appropriately and continue doing things that are important to us.

The founders recognized this problem and attempted to set up a Republic as an answer to it, so that each voter didn't have to ask "do I know everything about everything so I can select the best person" and instead was asked "of this finite, smaller group, who do I think is best to represent me at the next level?" We've basically bypassed that; every voter knows who ran for President last election, but hardly anyone can identify their party's local representative within the party itself (which is where candidates are selected, after all).

scelerat 3 hours ago|||
Completely agree, but at the same time I can't bring myself to believe that reinforcing systems like the electoral college or reinstating a state-legislature-chosen Senate would yield better outcomes.

Most people I know who have strong political opinions (as well as those who don't) can't name their own city council members or state assemblyman, and that's a real problem for functioning representative democracy. Not only for their direct influence on local policy, but also because these levels of government also serve as the farm team or proving grounds for higher levels of office.

By the time candidates are running with the money and media of a national campaign, in some sense it's too late to evaluate them on matters of their specific policies and temperaments, and you kind of just have to assume they're going to follow the general contours of their party. By and large, it seems the entrenched political parties (and, perhaps, parties in general) are impediments to good governance.

bombcar 2 hours ago||
I think it's an inherent problem with democracy in itself, and something that will have to be worked out at some time, somewhere.

The accidents that let it occur may no longer be present - there are arguments that "democracy" as we understand it was impossible before rapid communication, and perhaps it won't survive the modern world.

We're living in a world where a swing voter in Ohio may have more effect/impact on Iran than a person living there - or even more effect on Europe than a citizen of Germany.

jvanderbot 1 hour ago||||
I disagree.

Voting on principles is fine and good.

The issue is the disconnect between professed principles and action. And the fact that nowadays there are not many ways to pick and choose principles except two big preset options.

vacuity 3 hours ago|||
It's easier to focus on fewer representatives, and because the federal government has so much power (and then state governments), life-changing policies mainly come top-down. Power should instead flow bottom-up, with the top being the linchpin, but alas.
vitaflo 4 hours ago||||
Nothing wrong with being a Luddite these days. It’s the only way to not have your mind assaulted.
KPGv2 3 hours ago||
I feel like you people are intentionally misconstruing what "Luddite" means. It doesn't mean "avoids specific new tech." It means "avoiding ALL new tech because new things are bad."

A luddite would refuse the covid vaccine. They'd refuse improved trains. They'd refuse EVs. etc. This is because ludditism is the blanket opposition to technological improvements.

Forgeties79 1 hour ago||
> I feel like you people are intentionally misconstruing what "Luddite" means.

That’s a very unfair accusation to throw at someone off the cuff. Anyway, what you wrote is not what a Luddite is at all, especially not the anti-vaccine accusation. I don’t think you’re being deliberately deceptive here, I think you just don’t know what a Luddite is (was).

For starters: They were not anti-science/medicine/all technology. They did not have “blanket opposition to all technological improvement.” You’re expressing a common and simplistic misunderstanding of the movement and likely conflating it with (an also flawed understanding of) the Amish.

They were, at their core, a response to industrialization that didn't account for the human cost. This was at the start of the 19th century. They wanted better working conditions and more thoughtful consideration for how industrialization took place. They were not anti-technology and certainly not anti-vaccine.

The technology they were protesting was mostly related to automation in factories, which, coupled with anti-collective-bargaining initiatives, led to further dehumanization of the workforce as well as all sorts of novel and horrific workplace accidents for adults and children alike. Their calls for "common sense laws" and "guardrails" are echoed today in how many of us talk about AI/LLMs.

Great comic on this: https://thenib.com/im-a-luddite/

bigmeme 4 hours ago|||
> Everyone I know has strong opinions on every little thing, based exclusively their emotional reactions and feed consumption. Basically no one has the requisite expertise commensurate with their conviction, but being informed is not required to be opinionated or exasperated.

Case in point: if you ask for expertise verification on HN you get downvoted. People would rather argue their point, regardless of validity. This site’s culture is part of the problem and it predates AI.

vitaflo 4 hours ago||
This has been going on since Usenet. Nothing new.
b3lvedere 8 hours ago|||
Just twenty minutes ago I got a panic call that someone was getting dozens of messages that their virus scanner is not working and they have hundreds of viruses. By blocking Google Chrome from sending messages to the Windows notification bar, everything went back to normal on the computer.

Customer asked if reporting these kinds of illegal ads would be the best course. Nope, not by a long shot. As long as Google gets its money, they will not care. Ads have become a cancer of the internet.

Maybe I should set up a Pi-Hole business...

bryanrasmussen 7 hours ago||
If there were a GDPR-type law penalizing any company above a certain size (so as to only catch the big ad networks) that allowed the propagation of "false" ads claiming security issues, monetary benefits, or governmental services, it could stop transmission of most of the really problematic ads. Any company the size of Google is also in the risk-minimization business; they would set up a workflow to filter out "illegal" ads to at least a defensible level, so they don't get fines that cost more than the ads pay.

Also can you set Windows not to allow Ads notifications through to the notification bar? If not that should also be a point of the law.

Now I bet somebody is going to come along and scold me for trying to solve social problems by suggesting laws be made.

shadowgovt 5 hours ago||
Not scold (that is how we shape social behavior), but only note that Safe Harbor essentially grants the opposite of this (away from the potential default of "By transiting malware you are complicit and liable in the effect of the malware") so it'd have to be a finely-crafted law to have the desired effect without shutting down the ability to do both online advertising and online forums at all.

... which doesn't sound impossible. It's also entirely possible that the value of Section 230 has run its course and it should be sharply curtailed (its intent was to make online forums and user-generated-content networks, of which ad networks are a kind, possible; but one could make the case that operators of online forums have been shown to hold immense responsibility, and need to be held accountable for more of the harms done via the online spaces they set up).

novok 1 hour ago|||
Eventually it will make everyone say that videos are fake because nobody trusts videos anymore. We will ironically be back to something like the 40s where security cameras didn't exist and photography was rare and relatively expensive. A strange kind of privacy.
lrvick 8 hours ago|||
If there are ad incentives, assume all content is fake by default.

On the actual open decentralized internet, which still exists, mastodon, IRC, matrix... bots are rare.

tomaskafka 8 hours ago|||
That’s not because it’s decentralized or open, it’s because it doesn’t matter. If it was larger or more important, it would get run over by bots in weeks.

Any platform that wants to resist bots needs to:

- tie personas to real or expensive identities
- force people to add an AI flag to AI content
- let readers filter content not marked as AI
- be absolutely ruthless in permabanning anyone who posts AI content unmarked; one strike and you are dead forever

The issue then becomes that marking someone as “posts unmarked AI content” becomes a weapon. No idea about how to handle it.
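For what it's worth, the bookkeeping side of such a one-strike policy is trivial to sketch. Everything here (Platform, Post, User, the detected_ai flag) is hypothetical and invented for illustration; the hard part, as the comment notes, is whoever or whatever sets detected_ai, since a false positive becomes a weapon:

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    banned: bool = False

@dataclass
class Post:
    author: User
    text: str
    marked_ai: bool     # author's self-declared AI flag
    detected_ai: bool   # assumed supplied by some external detector

class Platform:
    def __init__(self):
        self.feed: list[Post] = []

    def submit(self, post: Post) -> bool:
        # Permabanned users cannot post at all.
        if post.author.banned:
            return False
        # One strike: unmarked AI content bans the author forever.
        if post.detected_ai and not post.marked_ai:
            post.author.banned = True
            return False
        self.feed.append(post)
        return True

    def reader_feed(self, hide_ai: bool) -> list[Post]:
        # Readers may filter out content marked as AI.
        return [p for p in self.feed if not (hide_ai and p.marked_ai)]
```

Note that properly marked AI content is accepted and merely filterable; only the combination of detection plus a missing flag triggers the permanent ban.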

Okawari 6 hours ago|||
It's never going to happen, but I felt we solved all of this with forums and IRC back in the day. I wish we gravitated towards that kind of internet again.

Group sizes were smaller and as such easier to moderate. There could be plenty of similar-interest forums, which meant that even if you pissed off some mods, there were always other forums. Invite-only groups that recruited from larger forums (or even trusted-members-only sections of the same forum) were good at filtering out low-value posters.

There were bots, but they were not as big of a problem. The message amplification was smaller, and it was probably harder to ban evade.

thesuitonym 5 hours ago|||
> I wish we gravitated towards that kind of internet again.

So do it. Forums haven't gone away, you just stopped going to them. Search for your special interest followed by "Powered by phpbb" (or Invision Community, or your preferred software) and you'll find plenty of surprisingly active communities out there.

adzm 6 hours ago||||
You're describing Discord today
Q6T46nT668w6i3m 5 hours ago||
Bots are a major problem on Discord. I frequently receive messages to buy MMO gold.
jayd16 3 hours ago||
That's a flaw in the GP's plan, not a flaw in the observation that Discord is a good example of what they're asking for.
KPGv2 3 hours ago|||
> It's never going to happen, but I felt we solved all of this with forums and IRC back in the day. I wish we gravitated towards that kind of internet again.

IME young people use Discord, and those servers often require permission to even join. Nearly all my fandom communications happen on a few Discord servers, most of which you cannot join without an invitation, and if you're kicked (bad actors will be kicked), you cannot re-join (without permission).

NoMoreNicksLeft 4 hours ago|||
>and be absolutely ruthless in permabanning anyone who posts AI content unmarked,

It would certainly be fun to trick people I dislike into posting AI content unknowingly. Maybe it has to be so low-key that they aren't even banned on the first try, but that just seems ripe for abuse.

I want a solution to this problem too, but I don't think this is reasonable or practical. I do wonder what it would mean if, philosophically, there were a way to differentiate between "free speech" and commercial speech such that one could be respected and the other regulated. But if there is such a distinction I've never been able to figure it out well enough to make the argument.

direwolf20 8 hours ago||||
Of course - because everyone is banned upon first suspicion.
iso1631 5 hours ago|||
Usenet died partly due to the ads, and the inability for adblocking software at the time to keep up.

People left and never came back.

But those bots were certainly around in the 90s

sigio 5 hours ago||
Worst is... the bots, spam and ads are still there, even if there is no one to read them. Usenet might still be alive (for piracy/binaries at least), and maybe a handful of text groups, but in the text groups I used to read, it's been nothing but a constant flow of spam for 15+ years.
ryanjshaw 13 hours ago|||
I’m spending way too much time on the RealOrAI subreddits these days. I think it scares me because I get so many wrong, so I keep watching more, hoping to improve my detection skills. I may have to accept that this is just the new reality - never quite knowing the truth.
raincole 12 hours ago|||
Those subreddits label content wrong all the time. Some of the top commenters are trolling (I've seen one cooking video where the most upvoted comment is "AI, the sauce stops when it hits the plate"... as thick sauce should do).

You're training yourself with a very unreliable source of truth.

input_sh 9 hours ago|||
> Those subreddits label content wrong all the time.

Intentionally if I might add. Reddit users aren't particularly interested in providing feedback that will inevitably be used to make AI tools more convincing in the future, nobody's really moderating those subs, and that makes them the perfect target for poisoning via shitposting in the comments.

immibis 4 hours ago||
Also, most Reddit users are AI.
ryanjshaw 9 hours ago|||
> You're training yourself with a very unreliable source of truth.

I don’t just look at the bot decision or accept every consensus blindly. I read the arguments.

If I watch a video and think it’s real and the comments point to the source, which has a description saying they use AI, how is that unreliable?

Alternatively, I watch a video and think it’s AI but a commenter points to a source like YT where the video was posted 5 years ago, or multiple similar videos/news articles about the weird subject of the video, how is that unreliable?

butlike 4 hours ago|||
I don't understand. In the grandparent comment you say you have a problem spending too much time on those subreddits and watching too many of those videos, but then you push back here.

Personally, I don't think that behavior is very healthy, and the other parent comment suggested an easy "get out of jail free" way of not thinking about it anymore while also limiting anxiety: they're unreliable subreddits. I'd say take that advice and move on.

iwontberude 8 hours ago|||
Which themselves are arguments from bots.
bentcorner 4 hours ago||
This itself could be a bot argument casting doubt on reddit. It's an endless cycle.

If bots reference real sources it's still a valid argument.

lukan 12 hours ago||||
"I may have to accept that this is just the new reality - never quite knowing the truth."

Some people, quite some time ago, also came to that conclusion. (And they did not even have AI to blame.)

https://en.wikipedia.org/wiki/I_know_that_I_know_nothing

padjo 11 hours ago||
I’m really hoping that we’re about to see an explosion in critical thinking and skepticism as a response to generative AI.

Any day now… right?

ryanjshaw 9 hours ago|||
I show my young daughter this stuff and try to role model healthy skepticism. Critical thinking YT like Corridor Crew’s paranormal UFO/bigfoot/ghosts/etc series is great too. Peer pressure might be the deciding factor in what she ultimately chooses to believe, though.
notarobot123 11 hours ago||||
I think the broader response and re-evaluation is going to take a lot longer. Children of today are growing up in an obviously hostile information environment whereas older folk are trying to re-calibrate in an environment that's changing faster than they are.

If the next generation can weather the slop storm, they may have a chance to re-establish new forms of authentic communication, though probably on a completely different scale and in different forms to the Web and current social media platforms.

butlike 4 hours ago||
Haha, AGI happens because some future generation just 'forgets' that it was all bullshit slop before
efnx 11 hours ago|||
One can hope!
lukan 10 hours ago||
Yeah, one can. But then I see people just accepting the weak google search AI summary as plain facts and my hope fades away.
lesam 9 hours ago||||
Before photography, we knew something was truthful because someone trustworthy vouched for it.

Now that photos and videos can be faked, we'll have to go back to the older system.

ekianjo 8 hours ago|||
It was always easy to fake photos too. Just organize the scene, or selectively frame what you want. There is no such thing as any piece of media you can trust.
bandrami 8 hours ago||
The construction workers having lunch on the girder in that famous photo were in fact about four feet above a safety platform; it's a masterpiece of framing and cropping. (Ironically the photographer was standing on a girder out over a hundred stories of nothing).
wiz21c 7 hours ago||
Although I do use the dead internet, I didn't see much of "fake news" like that...
expedition32 8 hours ago|||
Ah yes the good old days of witch trials and pogroms.

I am no big fan of AI but misinformation is a tale as old as time.

djeastm 7 hours ago||||
My favorite theory about those subreddits is that it's the AI companies getting free labeling from (supposed) authentic humans so they can figure out how to best tweak their models to fool more and more people.
bradgessler 12 hours ago|||
What if AI is running RealOrAI to trick us into never quite knowing the truth?
sheept 16 hours ago|||
a reliable giveaway for AI generated videos is just a quick glance at the account's post history—the videos will look frequent, repetitive, and lack a consistent subject/background—and that's not something that'll go away when AI videos get better
vitaflo 4 hours ago|||
Sort by oldest. If the videos go back more than 3 years, watch an old one. So many times the person narrating the old vids sounds nothing like the new vids, and the new narration is a dead ringer for AI. If the account is less than a year old, it's 100% AI.
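As a toy illustration, the red flags in this thread (account age, upload frequency, narrator consistency) could be combined into a score; the thresholds and field names below are invented for the sketch, not taken from any real platform API:

```python
from datetime import date

def suspicion_score(channel: dict, today: date) -> int:
    """Count the red flags described above (all thresholds arbitrary)."""
    score = 0
    age_days = (today - channel["created"]).days
    if age_days < 365:                      # account less than a year old
        score += 2
    if channel["videos_per_week"] > 7:      # implausibly frequent uploads
        score += 1
    if not channel["consistent_narrator"]:  # narrator changed between old and new videos
        score += 1
    return score
```

A young, prolific channel whose narrator changed trips every flag; a long-running, slow, consistent one scores zero. The real signal, of course, is a human actually watching an old video, which no score captures.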
grugagag 3 hours ago||
New AI narration is a dead giveaway to us but many people can’t tell they’re not listening to a human. It is very concerning.
eru 15 hours ago||||
> [...] and lack a consistent subject/background—and that's not something that'll go away when AI videos get better

Why not? Surely you can ask your friendly neighbourhood AI to run a consistent channel for you?

sheept 12 hours ago||
AI is capable of consistent characters now, yes, but the platforms themselves provide little incentive to do so. TikTok/Instagram Reels are designed to serve recommendations, not a user-curated feed of people you follow, so consistency is not needed for virality
cortesoft 15 hours ago||||
Or they are reposting other people's content
zahlman 9 hours ago||||
How can they look repetitive while being inconsistent? Do you mean in terms of presentation / "editing" style?
fallinditch 15 hours ago||||
A giveaway for detecting AI-generated text is the use of em-dashes, as noted in op - you are caught bang to rights!
nicbou 12 hours ago|||
Some keyboards and operating systems — iOS is one of them — convert two dashes into an emdash.
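The substitution itself is a simple text transform. A minimal sketch of the idea (real smart-punctuation features use more context-sensitive rules than this, e.g. looking at surrounding spaces and digits):

```python
def smart_dashes(text: str) -> str:
    # Replace a double hyphen with an em dash (U+2014), the way many
    # smart-punctuation features do when you type "--".
    return text.replace("--", "\u2014")
```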
Sharlin 6 hours ago|||
And macOS, at least my keyboard layout, has both en and em dashes easily typeable with Alt+- and Alt+Shift+- respectively.
JoeJonathan 3 hours ago||
I use emdashes and endashes all the time—but maybe that's because I'm an academic?
Sharlin 3 hours ago||
I use them – well, mostly en dashes because that's the custom where I'm from – because I'm a bit of a typography nerd and have grown to dislike the barrenness of ASCII.
Nevermark 10 hours ago|||
I can’t wait for my keyboard to start auto-completing “Your” with “are absolutely right!”
nicbou 10 hours ago||
In this case Apple has cared about typography since its very beginning. Steve Jobs obsessed over it. The OS also replaces simple quotes with fancier ones.

I do the same on my websites. It's embedded into my static site generator.

Very related: https://practicaltypography.com/

Nevermark 10 hours ago||
Agreed that is useful, despite the unintended consequences of dash meddling.
lucumo 13 hours ago||||
Not long ago, a statistical study found that AI almost always has an 'e' in its output. It is a firm indicator of AI slop. If you catch a post with an 'e', pay it no mind: it's probably AI.

Uh-oh. Caught you. Bang to rights! That post is firmly AI. Bad. Nobody should mind your robot posts.

eks391 13 hours ago|||
I'm incredibly impressed that you managed to make that whole message without a single usage of the most frequently used letter, except in your quotations.
zahlman 9 hours ago|||
Such omission is a hobby of many WWW folk. I can, in fact, think back to finding a community on R*ddit known as "AVoid5", which had this trial as its main point.

Down with that foul fifth glyph! Down, I say!

hardlianotion 4 hours ago||
second vowel actually.
agoodusername63 12 hours ago|||
Bet they asked an AI to make the bit work /s
lucumo 10 hours ago||
:-D

I did ask G'mini for synonyms. And to do a cursory count of e's in my post. Just as a 2nd opinion. It found only glyphs with quotation marks around it. It graciously put forward a proxy for that: "the fifth letter".

It's not oft that you run into such alluring confirmation of your point.

throwaway290 9 hours ago||
I'm having my thumbs-up back >:(

My first post took around 6 min & a dictionary. This post took 3. It's a quick skill.

No LLMs. Ctrl+f shows you all your 'e's without switching away from this tab. (And why count it? How many is not important, you can simply look if any occur and that's it)

PurelyApplied 13 hours ago||||
I apprEciatE your dEdication to ExclusivEly using 'e' in quotEd rEfErEncE, but not in thE rEst of your clEarly human-authorEd tExt.

I rEgrEt that I havE not donE thE samE, but plEase accEpt bad formatting as a countErpoint.

mmarq 11 hours ago||||
https://www.gutenberg.org/ebooks/47342

https://fr.wikipedia.org/wiki/La_Disparition_(roman)

throwaway290 13 hours ago||||
Finally a human in this forum. Many moons did I long for this contact.

(Assuming you did actually hand craft that I thumbs-up both your humor and industry good sir)

Terr_ 13 hours ago|||
nice try but u used caps and punctuation lol bot /s
sqquima 4 hours ago|||
My AI generator loves to write ergo, concordantly, and vis-a-vis.
phire 9 hours ago|||
I actually avoid most YouTube channels that upload too frequently. Especially with consistent schedules.

Even if I'm 100% certain it's not AI slop, it's still a very strong indicator that the videos are some kind of slop.

grugagag 3 hours ago||
Content farms, whether AI-generated or not, have an incentive to pump out high volume at low quality. Most of their content, even when it involves a human narrator, is heavily packed with AI-generated media.
quantummagic 15 hours ago|||
As they say, the demand for racism far outstrips the supply. It's hard to spend all day outraged if you rely on reality to supply enough fodder.
InsideOutSanta 10 hours ago|||
This is not the right thing to take away from this. This isn't about one group of people wanting to be angry. It's about creating engagement (for corporations) and creating division in general (for entities intent on harming liberal societies).

In fact, your comment is part of the problem. You are one of the people who want to be outraged. In your case, outraged at people who think racism is a problem. So you attack one group of people, not realizing that you are making the issue worse by further escalating and blaming actual people, rather than realizing that the problem is systemic.

We have social networks like Facebook that require people to be angry, because anger generates engagement, and engagement generates views, and views generate ad impressions. We have outside actors who benefit from division, so they also fuel that fire by creating bot accounts that post inciting content. This has nothing to do with racism or people on one side. One second, these outside actors post a fake incident of a racist cop to fire up one side, and the next, they post a fake incident about schools with litter boxes for kids who identify as pets to fire up the other side.

Until you realize that this is the root of the problem, that the whole system is built to make people angry at each other, you are only contributing to the anger and division.

DrScientist 8 hours ago|||
> Until you realize that this is the root of the problem, that the whole system is built to make people angry at each other, you are only contributing to the anger and division.

It's not built to make people angry per se - it's built to optimise for revenue generation - which so happens to be content that makes people angry.

People have discovered that creating and posting such content makes them money, and the revenue is split between themselves and the platforms.

In my view if the platforms can't tackle this problem then the platforms should be shutdown - promoting this sort of material should be illegal, and it's not an excuse to say our business model won't work if we are made responsible for the things we do.

ie while it turns out you can easily scale one side of publishing (putting stuff out there and getting paid by ads), you can't so easily scale the other side, which is being responsible for your actions. If you haven't solved both sides, you don't have a viable business model, in my view.

groundzeros2015 5 hours ago|||
> it's built to optimise for revenue generation

I think blaming it all on money ignores that this also serve political goals.

Groups spend money to manipulate public opinion. It’s a goal in and of itself that has value rather than a money making scheme.

DrScientist 4 hours ago||
Sure - it's a mix - but to be honest I think the political side is over-emphasized, in that in the US most of that kind of money driving politics operates in plain sight.

For example, the 'Russian interference' in the 2016 US election, was I suspect mostly people trying to make money, and more importantly, was completely dwarfed by US direct political spending.

There is also a potentially equally, if not larger problem, in the politicisation of the 'anti-disinformation' campaigns.

To be honest I'm not sure if there is much of a difference between a grifter being directly paid to promote a certain point of view, and somebody being paid indirectly ( by ads ).

In both cases, neither really believes in the political point they are making; they are just following the money.

These platforms are enabling both.

shadowgovt 5 hours ago|||
In social networks, revenue is enhanced by stickiness.

Anger increases stickiness. Once one discovers there are other people on the site, and they are guilty of being wrong on the internet, one is incentivized to correct them. It feels useful because it feels like you're generating content that will help other people.

I suspect the failure of the system that nobody necessarily predicted is that people seem to not only tolerate, but actually like being a little angry online all the time.

zahlman 9 hours ago||||
> In fact, your comment is part of the problem. You are one of the people who want to be outraged. In your case, outraged at people who think racism is a problem. So you attack one group of people, not realizing that you are making the issue worse by further escalating and blaming actual people, rather than realizing that the problem is systemic.

I don't see anything like outrage in GP, just a vaguely implied sense of superiority (political, not racial!).

groundzeros2015 5 hours ago||||
> outraged at people who think racism is a problem.

This is one level of abstraction more than I deal with on a normal day.

The fake video which plays into people’s indignation for racism, is actually about baiting people who are critical about being baited by racism?

InsideOutSanta 4 hours ago||
That's not what I said.
GorbachevyChase 1 hour ago||||
Peace should not be a goal. Bolsheviks cannot be reasoned or negotiated with.
blfr 10 hours ago|||
I agree with grandparent and think you have cause and effect backwards: people really do want to be outraged so Facebook and the like provide rage bait. Sometimes through algos tuning themselves to that need, sometimes deliberately.

But Facebook cannot "require" people to be angry. Facebook can barely even "require" people to log in - only those locked into the Messenger ecosystem.

I don't use Facebook but I do use TikTok, and Twitter, and YouTube. It's very easy to filter rage bait out of your timeline. I get very little of it, mark it "uninterested"/mute/"don't recommend channel" and the timeline dutifully obeys. My timelines are full of popsci, golden retrievers, sketches, recordings of local trams (nevermind), and when AI makes an appearance it's the narrative kind[1] which I admit I like or old jokes recycled with AI.

The root of the problem is in us. Not on Facebook. Even if it exploits it. Surfers don't cause waves.

[1] https://www.tiktok.com/@gossip.goblin

InsideOutSanta 10 hours ago|||
> people really do want to be outraged

No, they do not. Nobody[1] wants to be angry. Nobody wakes up in the morning and thinks to themselves, "today is going to be a good day because I'm going to be angry."

But given the correct input, everyone feels that they must be angry, that it is morally required to be angry. And this anger then requires them to seek out further information about the thing that made them angry. Not because they desire to be angry, but because they feel that there is something happening in the world that is wrong and that they must fight.

[1]: for approximate values of "nobody"

MontyCarloHall 7 hours ago|||
>Nobody wants to be angry.

I disagree. Why are some of the most popular subreddits things like r/AmITheAsshole, r/JustNoMIL, r/RaisedByNarcissists, r/EntitledPeople, etc.: forums full of (likely fake) stories of people behaving egregiously, with thousands of outraged comments throwing fuel on a burning pile of outrage: "wow, your boyfriend/girlfriend/husband/wife/father/mother/FIL/MIL/neighbor/boss/etc. is such an asshole!" Why are advice/gossip columns that provide outlets for similar stories so popular? Why is reality TV full of the same concocted situations so popular? Why is people's first reaction to outrageous news stories to bring out the torches and pitchforks, rather than trying to first validate the story? Why can an outrageous lie travel halfway around the world while the truth is still getting its boots on?

InsideOutSanta 4 hours ago||
As someone who used to read some of these subreddits before they became swamped in AI slop, I did not go there to be angry but to be amused and/or find like-minded people.
lazide 9 hours ago|||
If you think for a bit on what you just wrote, I’m pretty sure you’re agreeing with what they wrote.

You’re literally saying why people want to be angry.

quietbritishjim 9 hours ago|||
I suppose the subtlety is that people want to be angry if (and only if) reality demands it.

My uneducated feeling is that, in a small society, like a pre-civilisation tribal one where maybe human emotions evolved, this is useful because it helps enact change when and where it's needed.

But that doesn't mean that people want to be angry in general, in the sense that if there's nothing in reality to be angry about then that's even better. But if someone is presented with something to be angry about, then that ship has sailed so the typical reaction is to feel the need to engage.

InsideOutSanta 8 hours ago|||
>in a small society, like a pre-civilisation tribal one where maybe human emotions evolved, this is useful because it helps enact change when and where it's needed

Yes, I think this is exactly it. A reaction that may be reasonable in a personal, real-world context can become extremely problematic in a highly connected context.

It's both that, as an individual, you can be inundated with things you feel a moral obligation to react to. On the other side of the equation, if you say something stupid online, you can suddenly have thousands of people attacking you for it.

Every single action seems reasonable, or even necessary, to each individual person, but because everything is scaled up by all the connections, things immediately escalate.

lazide 6 hours ago||
The issue right now is that the only things you can do to protect yourself from certain kinds of predators is literally what will get you blown up on social media when taken out of context.
lazide 8 hours ago|||
If people are bored, they’ll definitely seek out things that make them less bored. It’s hard to be less bored than when you’re angry.
InsideOutSanta 8 hours ago|||
There's a difference between wanting to be angry and feeling that anger is the correct response to an outside stimulus.

I don't wake up thinking "today I want to be angry", but if I go outside and see somebody kicking a cat, I feel that anger is the correct response.

The problem is that social media is a cat-kicking machine that drags people into a vicious circle of anger-inducing stimuli. If people think that every day people are kicking cats on the Internet, they feel that they need to do something to stop the cat-kicking; given their agency, that "something" is usually angry responses and attacks, which feeds the machine.

Again, they do not do that because they want to be angry; most people would rather be happy than angry. They do it because they feel that cats are being kicked, and anger is the required moral response.

lazide 8 hours ago||
And if you seek out (and push ‘give me more’ buttons on) cat kicking videos?

At some point, I think it’s important to recognize the difference between revealed preferences and stated preferences. Social media seems adept at exposing revealed preferences.

If people seek out the thing that makes them angry, how can we not say that they want to be angry? Regardless of what words they use.

And for example, I never heard anyone who was a big Fox News, Rush Limbaugh, or Alex Jones fan say they wanted to be angry or paranoid (to be fair, this was pre-Trump and a while ago), yet every single one of them I saw got angry and paranoid after watching, if you paid any attention at all.

InsideOutSanta 7 hours ago||
>If people seek out the thing that makes them angry, how can we not say that they want to be angry?

Because their purpose in seeking it out is not to get angry, it's to stop something from happening that they perceive as harmful.

I doubt most people watch Alex Jones because they love being angry. They watch him because they believe a global cabal of evildoers is attacking them. Anger is the logical consequence, not the desired outcome. The desired outcome is that the perceived problem is solved, i.e. that people stop kicking cats.

lazide 6 hours ago||
The reason they feel that way (more) is because of those videos. Just like most people who watch Alex Jones probably didn’t start by believing all the crazy things.

We can chicken/egg about it all day, but at some point if people didn’t want it - they wouldn’t be doing it.

Depending on the definition of ‘want’ of course. But what else can we use?

I don’t think anyone would disagree that smokers want cigarettes, eh? Or gamblers want to gamble?

InsideOutSanta 4 hours ago||
I think most people have experienced relatives of theirs falling down these rabbit holes. They didn't seek out a reason to be angry; they watched one or two episodes of these shows because they were on Fox, or because a friend sent it, or because they saw it recommended on Facebook. Then they became angry, which made them go back because now it became a moral imperative to learn more about how the government is making frogs gay.

None of these people said to themselves, "I want to be angry today, and I heard that Alex Jones makes people angry, therefore I will watch Alex Jones."

mikkupikku 3 hours ago||
> "They didn't seek out a reason to be angry"

A lot of people really do, and it predates any sort of media too. When they don't have outrage media they form gossip networks so they can tell each other embellished stories about mundane matters to be outraged and scandalized about.

csnover 1 hour ago||
> When they don't have outrage media they form gossip networks so they can tell each other embellished stories about mundane matters to be outraged and scandalized about.

But again in this situation the goal is not to be angry.

This sort of behaviour emerges as a consequence of unhealthy group dynamics (and to a lesser extent, plain boredom). By gossiping, a person expresses understanding of, and reinforces, their in-group’s values. This maintains their position in the in-group. By embellishing, the person attempts to actually increase their status within the group by being the holder of some “secret truth” which they feel makes them important, and therefore more essential, and therefore more secure in their position. The goal is not anger. The goal is security.

The emotion of anger is a high-intensity fear. So what you are perceiving as “seeking out a reason to be angry” is more a hypervigilant scanning for threats. Those threats may be to the dominance of the person’s in-group among wider society (Prohibition is a well-studied historical example), or the threats may be to the individual’s standing within the in-group.

In the latter case, the threat is frequently some forbidden internal desire, and so the would-be transgressor externalises that desire onto some out-group and then attacks them as a proxy for their own self-denial. But most often it is simply the threat of being wrong, and the subsequent perceived loss of safety, that leads people to feel angry, and then to double down. And in the world we live in today, that doubling down is more often than not rewarded with upvotes and algorithmic amplification.

mikkupikku 20 minutes ago||
I disagree. In these gossip circles they brush off anything that doesn't make them upset, eager to get to the outrageous stuff. They really do seek to be upset. It's a pattern of behavior which old people in particular commonly fall into, even in the absence of commercialized media dynamics.
RGamma 10 hours ago|||
You may be vastly overestimating average media competence. This is one of those things where I'm glad my relatives are so timid about the digital world.
neilv 14 hours ago||||
I hadn't heard that saying.

Many people seek being outraged. Many people seek to have awareness of truth. Many people seek getting help for problems. These are not mutually exclusive.

Just because someone fakes an incident of racism doesn't mean racism isn't still commonplace.

In various forms, with various levels of harm, and with various levels of evidence available.

(Example of low evidence: a paper trail isn't left when a black person doesn't get a job for "culture fit" gut feel reasons.)

Also, faked evidence can be done for a variety of reasons, including by someone who intends for the faking to be discovered, with the goal of discrediting the position that the fake initially seemed to support.

(Famous alleged example, in second paragraph: https://en.wikipedia.org/wiki/Killian_documents_controversy#... )

self_awareness 12 hours ago||
Did you just justify generating racist videos as a good thing?
nkmnz 10 hours ago|||
Is a video documenting racist behavior a racist or an anti-racist video? Is a faked video documenting racist behavior (that never happened) a racist or an anti-racist video? Is the act of faking a video documenting racist behavior (that never happened) racist or anti-racist behavior?
garretraziel 10 hours ago|||
It doesn’t have to be either for it to be morally bad.
self_awareness 8 hours ago|||
Video showing racist behavior is racist and anti-racist at the same time. A racist will be happy watching it, and anti-racist will forward it to forward their anti-racist message.

Faking a racist video that never happened is, first of all, faking. Second, it's the same: racist and anti-racist at the same time. Third, it's falsifying the prevalence of occurrence.

If you add a disclaimer to the video: "this video has been AI-generated, but it shows events that happen all across the US daily", then there's no problem. Nobody is being lied to about anything. The video carries the message; it's not faking anything. But when you pass off a fake video as a real occurrence, you're lying, and it's as simple as that.

Can a lie be told in good faith? I'm afraid that not even philosophy can answer that question. But it's really telling that leftists are sure about the answer!

amanaplanacanal 4 hours ago||
That's not necessarily just a leftist thing. Plenty of politicians are perfectly fine with saying things they know are lies for what they believe are good reasons. We see it daily with the current US administration.
mikkupikku 2 hours ago||
It's a general ideologue thing.
neilv 8 hours ago||||
I don't think so. I was trying to respond to a comment in a way that was diplomatic and constructive. I can see that came out unclear.
mxkopy 12 hours ago||||
Think they did the exact opposite

> Also, faked evidence can be done for a variety of reasons, including by someone who intends for the faking to be discovered

self_awareness 11 hours ago||
Well yes, that's what he wrote, but that's like saying: stealing can be done for a variety of reasons, including by someone who intends the theft to be discovered? Killing can be done for a variety of reasons, including by someone who intends the killing to be discovered?

I read it as "producing racist videos can sometimes be used in good faith"?

pfg_ 9 hours ago|||
They're saying one example of a reason someone could fake a video is so it would get found out and discredit the position it showed. I read it as them saying that producing the fake video of a cop being racist could have been done to discredit the idea of cops being racist.
Nevermark 11 hours ago||||
There are significant differences between how the information world and the physical world operate.

Creating all kinds of meta-levels of falsity is a real thing, with multiple lines of objective (if nefarious) motivation, in the information arena.

But even physical crimes can have meta information purposes. Putin for instance is fond of instigating crimes in a way that his fingerprints will inevitably be found, because that is an effective form of intimidation and power projection.

mxkopy 11 hours ago|||
I think they’re just saying we should interpret this video in a way that’s consistent with known historical facts. On one hand, it’s not depicting events that are strictly untrue, so we shouldn’t discredit it. On the other hand, since the video itself is literally fake, when we discredit it we shouldn’t accidentally also discredit the events it’s depicting.
self_awareness 10 hours ago||
Are you saying that if there is 1 instance of a true event, then fake videos done in a similar way as this true event is rational and needed?
mxkopy 10 hours ago|||
The insinuation that racism in the US is not systemic reeks of ignorance

Edit: please, prove your illiteracy and lack of critical thinking skills in the comments below

lazide 9 hours ago|||
So make fake videos of events that never actually happened, because real events surely did that weren’t recorded? Or weren’t viral enough? Or something?

Do you realize how crazy this sounds?

self_awareness 9 hours ago|||
How do I know that most of racist indicents weren't simulated by you guys? Since you clearly say that it's OK to generate lies about it?

Edit: I literally demonstrate my ability to think critically.

thinkingemote 11 hours ago||||
How about this question: Can generating an anti-racist video be justified as a good thing?

I think many here would say "yes!" to this question, so can saying "no" be justified by an anti-racist?

Generally I prefer questions that do not lead to thoughts being terminated. Seek to keep a discussion not stop it.

On the subject of this thread, these questions are quite old and are related to propaganda: is it okay to use propaganda if we are the Good Guys, even if, by doing so, we leave our own people more susceptible to propaganda from the Bad Guys? Every single one of our nations and governments thinks yes, it's good to use propaganda.

Because that's explicitly what happened during the rise of Nazi Germany; the USA had an official national programme of propaganda awareness and manipulation resistance which had to be shut down because the country needed to use propaganda on their own citizens and the enemy during WW2.

So back to the first question, its not the content (whether it's racist or not) it's the effect: would producing fake content reach a desired policy goal?

Philosophically it's truth vs lie, can we lie to do good? Theologically in the majority of religions, this has been answered: lying can never do good.

self_awareness 8 hours ago||
Game theory tells us that we should lie if someone else is lying, for some time. Then we should try trusting again. But we should generally tell the truth at the beginning; we sometimes lose to those who lie all the time, but we can gain more than the eternal liar if we encounter someone who behaves just like us. Assuming our strategy is in the majority, this works.

But this is game theory, a dead and amoral mechanism that is mostly used by the animal kingdom. I'm sure humanity is better than that?

Propaganda is war, and each time we use war measures, we're getting closer to it.
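
The strategy described above matches the classic tit-for-tat rule from the iterated prisoner's dilemma (cooperate first, then mirror the opponent's previous move). A minimal sketch - my illustration, not part of the original comment:

```python
def tit_for_tat(opponent_history):
    """Cooperate on the first round, then copy the opponent's last move."""
    return "cooperate" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """The 'eternal liar' strategy: defect no matter what."""
    return "defect"

def play(strategy_a, strategy_b, rounds=5):
    """Play two strategies against each other; each sees the other's history."""
    hist_a, hist_b, moves = [], [], []
    for _ in range(rounds):
        a = strategy_a(hist_b)  # A reacts to B's past moves
        b = strategy_b(hist_a)  # B reacts to A's past moves
        hist_a.append(a)
        hist_b.append(b)
        moves.append((a, b))
    return moves

# Tit-for-tat starts out truthful, then mirrors the liar for the rest.
moves = play(tit_for_tat, always_defect)
```

Against a cooperator, the same rule cooperates forever - which is the "we gain more than the eternal liar when we meet someone like us" part of the argument.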

QuadmasterXLII 8 hours ago|||
The reading comprehension on this website is piss poor.
self_awareness 8 hours ago||
The quality of comments is also not that great.
hn_throwaway_99 13 hours ago||||
I like that saying. You can see it all the time on Reddit where, not even counting AI generated content, you see rage bait that is (re)posted literally years after the fact. It's like "yeah, OK this guy sucks, but why are you reposting this 5 years after it went viral?"
silisili 14 hours ago||||
Rage sells. Not long after the EBT changes, there was a rash of videos of people playing the caricature that people against welfare imagine in their heads: women, usually black, speaking improperly about how the taxpayers need to take care of their kids.

Not sure how I feel about that, to be honest. On one hand I admire the hustle for clicks. On the other, too many people fell for it and probably never knew it was a grift, making all recipients look bad. I only happened upon them while researching a bit, after my own mom called me raging about it and sent me the link.

watwut 11 hours ago||||
Wut? If you listen to what real people say, racism is quite common and has all the power right now.
Refreeze5224 14 hours ago||||
[flagged]
theteapot 14 hours ago||
I'm noticing more of these race baiting comments on YC too lately. AI?
ycombinator_acc 12 hours ago||
No, that’s a common cope.

Not AI. Not bots. Not Indians or Pakistanis. Not Kremlin or Hasbara agents. All the above might comprise a small percentage of it, but the vast majority of the rage bait and rage bait support we’ve seen over the past year+ on the Internet (including here) is just westerners being (allowed and encouraged by each other to be) racist toward non-whites in various ways.

theteapot 10 hours ago||
How's your day been?
actionfromafar 12 hours ago||||
That's why this administration is working hard to fill the demand.
verisimi 11 hours ago||
Political!
blks 11 hours ago||||
You sure about that? I think actions of the US administration together with ICE and police work provide quite enough
pjc50 9 hours ago|||
Wrong takeaway. There are plenty of real incidents. The reason for posting fake incidents is to discredit the real ones.
charles_f 5 hours ago|||
Conversely, my parents call "AI" anything they don't like or don't want to believe.

We truly live in wonderful times!

SilverSlash 16 hours ago|||
I really wish Google would flag videos with any AI content that they detect.
zdc1 15 hours ago|||
It's a band-aid solution, given that eventually AI content will be indistinguishable from real-world content. Maybe we'll even see a net of fake videos citing fake news articles, etc.

Of course there are still "trusted" mainstream sources, except they can inadvertently (or for other reasons) misstate facts as well. I believe it will get harder and harder to reason about what's real.

hattmall 15 hours ago|||
It's not really any different from stopping the sale of counterfeit goods on a platform. Which is a challenge, but hardly insurmountable, and the payoff from AI videos won't be nearly as good. You can make a few thousand a day selling knock-offs to a small number of people and get reliably paid within 72 hours. To make the same off of "content" you would have to get millions of views, and the payout timeframe is weeks if not months. YouTube doesn't pay you out unless you are verified, so ban people who post AI without disclosing it and the well will run dry quickly.
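
A back-of-envelope check of the "millions of views" claim, using an assumed (hypothetical) ad RPM of about $3 per thousand monetized views:

```python
def views_needed(target_usd, rpm_usd=3.0):
    """Views required to earn target_usd at a given RPM ($ per 1,000 views).

    The $3 default RPM is an assumption for illustration; real RPMs vary
    widely by niche and region.
    """
    return int(target_usd / rpm_usd * 1000)

# Matching "a few thousand a day" ($3,000) from ads at a $3 RPM
# takes a million views every single day.
daily_views = views_needed(3000)
```

So the comparison holds under these assumed numbers: the ad route needs roughly a million daily views to match the counterfeit-goods figure quoted above.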
Nevermark 10 hours ago|||
Well then email spam will never have an incentive. That is a relief! I was going to predict that someday people would start sending millions of misleading emails or texts!
esseph 15 hours ago|||
The payoff from AI videos could get someone into the White House.
nottorp 11 hours ago||||
> eventually AI content will be indistinguishable from real-world content

You get it wrong. Real-world content will become indistinguishable from "AI" content because that's what people will consider normal.

esseph 15 hours ago||||
I said something to a friend about this years ago with AI... We're going to stretch the legal and political system to the point of breaking.
cubefox 9 hours ago|||
It's not a band-aid at all. In fact, recognition is nearly always algorithmically easier than creation. Which would mean fake-AI detectors could have an inherent advantage over fake-AI creators.
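
The recognition-vs-creation asymmetry has a familiar computing analogue - checking a candidate answer is often far cheaper than producing one. A loose illustration (my example, not the commenter's) using hash preimages:

```python
import hashlib

def verify(candidate, target_hex):
    """Cheap: hash once and compare against the target digest."""
    return hashlib.sha256(candidate.encode()).hexdigest() == target_hex

def search(target_hex, max_tries=100_000):
    """Expensive: brute-force candidates until one verifies (or give up)."""
    for i in range(max_tries):
        if verify(str(i), target_hex):
            return str(i)
    return None

# Verifying "42" against its digest is one hash; finding it required
# scanning thousands of candidates.
target = hashlib.sha256(b"42").hexdigest()
```

This is only an analogy for the general principle; whether fake-AI detectors actually keep this advantage over generators is an open empirical question.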
munificent 16 hours ago|||
Would be nice, but unlikely given that they are going in the opposite direction and having YouTube silently add AI to videos without the author even requesting it: https://www.bbc.com/future/article/20250822-youtube-is-using...
ruperthair 11 hours ago||
Wow! I hadn't seen this, thanks. Do you think they are doing it with relatively innocent motives?
munificent 3 hours ago||
I have no insight, but I assume they are doing it because they can use AI to make a few variations of a video, automatically A/B test them to see which ones get more engagement, and then use that to make videos that are more engaging than what the author actually uploaded.

This is "innocent" if you accept that the author's goal is simply to maximize engagement and YouTube is helping them do that. It's not if you assume the author wants users to see exactly what they authored.

josfredo 11 hours ago|||
I fail to understand your worry. This will change nothing regarding some people’s tendency to foster and exploit negative emotions for traction and money. “AI makes it easier” - but was it ever hard to stumble across out-of-context clips and photoshops that worked well enough to create divisiveness? You worry about what could happen, but everything already has happened.
acatton 10 hours ago|||
> “AI makes it easier”, was it hard to stumble across out-of-context clips and photoshops that worked well enough to create divisiveness?

Yes. And I think this is what most tech-literate people fail to understand. The issue is scale.

It takes a lot of effort to find the right clip, cut it to remove its context, and even more effort to doctor a clip. Yes, you're still facing Brandolini's law[1], you can see that with the amount of effort Captain Disillusion[2] put in his videos to debunk crap.

But AI makes it 100× worse. First, generating an entirely convincing video only takes a little bit of prompting and waiting; no skill is required. Second, you can do that on a massive scale. You can easily make 2 AI videos a day. If you want to doctor videos "the old way", you'll need a team of VFX artists to do it at this scale.

I genuinely think that tech-literate folks, like myself and other hackernews posters, don't understand that significantly lowering the barrier to entry to X doesn't make X equivalent to what it was before. Scale changes everything.

[1] https://en.wikipedia.org/wiki/Brandolini%27s_law

[2] https://www.youtube.com/CaptainDisillusion

haxiomic 11 hours ago||||
The current situation is not as bad as it can get; this is accelerant on the fire and it can get a lot worse
troupo 10 hours ago||
I've been using "It will get worse before it gets worse" more and more lately
Nevermark 11 hours ago||||
It really isn’t that slop didn’t exist before.

It is that it is increasingly becoming indistinguishable from not-slop.

There is a different bar of believability for each of us. None of us are always right when we make a judgement. But the cues to making good calls without digging are drying up.

And it won’t be long before every fake event has fake support for diggers to find. That will increase the time investment for anyone trying to figure things out.

It isn’t the same staying the same. Nothing has ever stayed the same. “Staying the same” isn’t a thing in nature and hasn’t been the trend in human history.

vladms 11 hours ago||
True for videos, but not true for any type of "text claim", which was already plentiful 10 years ago and already hard to fight (think: misquoting people, strangely citing science articles, dubiously interpreting facts, etc.).

But I would claim that "trusting blindly" was much more common hundreds of years ago than it is now, so we might make some progress in fact.

If people learn to be more skeptical (because at some point they might get that things can be fake) it might even be a gain. The transition period can be dangerous though, as always.

Nevermark 10 hours ago||
You are right that text had this problem.

But today’s text manufacturing isn’t our grand..., well, yesterday’s text manufacturing.

And pretty soon it will be very persuasive models with lots of patience and manufactured personalized credibility and attachment “helping” people figure out reality.

The big problem isn’t the tech getting smarter though.

It’s the legal and social tolerance for conflict of interests at scale. Like unwanted (or dark pattern permissioned) surveillance which is all but unavoidable, being used to manipulate feeds controlled by third parties (between us and any organic intentioned contacts), toward influencing us in any way anyone will pay for. AI is just walking through a door that has been left wide open despite a couple decades of hard lessons.

Incentives, as they say, matter.

Misinformation would exist regardless, but we didn’t need it to become a cornerstone business model, with trillions of dollars of market cap unifying its globally coordinated, efficient and effective, near-unavoidable, continual insertion into our and our neighbors’ lives - with shareholders relentlessly demanding double-digit growth.

Doesn’t take any special game theory or economic theory to see the problematic loop there. Or to predict it will continue to get worse, and will be amplified by every AI advance, as long as it isn’t addressed.

BrtByte 8 hours ago|||
It's sad, yeah. And exhausting. The fact that you felt something was off and took the time to verify already puts you ahead of the curve, but it's depressing that this level of vigilance is becoming the baseline just to consume media safely
hshdhdhj4444 16 hours ago|||
The problem’s gonna be when Google as well is plastered with fake news articles about the same thing. There’s very little to no way you will know whether something is real or not.
ekianjo 8 hours ago||
That was already the case for anything printed or written. You have no way of telling if this is true or not.
Fr0styMatt88 16 hours ago|||
I find the sound is a dead giveaway for most AI videos — the voices all sound like a low bitrate MP3.

Which will eventually get worked around and can easily be masked by just having a backing track.

fsckboy 16 hours ago||
That sounds like one of the worst heuristics I've ever heard, worse than "em-dash = AI". (Em-dash equals AI to the illiterate class, who don't know what they are talking about on any subject and who also don't use em-dashes; literate people do use em-dashes and also know what they are talking about. This is called the Dunning-Em-Dash Effect, where "Dunning" refers to the payback of intellectual deficit, whereas the illiterate think it's a name.)
Duanemclemore 15 hours ago|||
The em-dash=LLM thing is so crazy. For many years Microsoft Word has AUTOCORRECTED the typing of a single hyphen to the proper syntax for the context -- whether a hyphen, en-dash, or em-dash.

I would wager good money that the proliferation of em-dashes we see in LLM-generated text is due to the fact that there are so many correctly used em-dashes in publicly-available text, as auto-corrected by Word...

XorNot 14 hours ago||
Which would matter, but the entry box in no major browser does this.

The HN text area does not insert em-dashes for you and never has. On my phone keyboard it's a very deliberate action to add one (symbol mode, long-press hyphen, slide my finger over to em-dash).

The entire point is that it's contextual: em-dashes appearing where no accommodations make them likely.

bee_rider 13 hours ago|||
Is this—not an em-dash? On iOS I generated it by double tapping dash. I think there are more iOS users than AIs, although I could be wrong about that…
Duanemclemore 13 hours ago|||
Yeah, I get that. And I'm not saying the author is wrong, just commenting on that one often-commented-upon phenomenon. If text is being input to the field by copy-paste (from another browser tab) anyway, who's to say it's not (hypothetically) being copied and pasted from the word processor in which it's being written?
root_axis 16 hours ago||||
The audio artifacts of an AI generated video are a far more reliable heuristic than the presence of a single character in a body of text.
dorfsmay 14 hours ago|||
For now. A year ago there weren't even gen-AI videos. Give it a few months...
dragonwriter 10 hours ago|||
Well, it's probably lower false positive than en-dash but higher false negative, especially since AI-generated video, even when it has audio, may not have AI-generated audio. (Generation conditioned on a text prompt, a starting image, and an audio track is among the common modes for AI video generation.)
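To illustrate the "low bitrate" tell mentioned above: low-bitrate lossy codecs (and some TTS pipelines) discard high-frequency content, which leaves a measurable hole in the spectrum. This is only a toy sketch of that idea using numpy and synthetic audio; the function name, cutoff, and demo signal are all my own choices, and real detection is far messier:

```python
import numpy as np

def high_band_energy_ratio(samples, sr, cutoff_hz=16_000):
    """Fraction of spectral energy at or above cutoff_hz.

    A near-zero ratio is weak evidence of heavy lossy compression --
    a toy stand-in for the "low bitrate MP3" tell, not a real detector.
    """
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sr)
    total = np.sum(spectrum ** 2)
    if total == 0:
        return 0.0
    return float(np.sum(spectrum[freqs >= cutoff_hz] ** 2) / total)

# Synthetic demo: one second of full-band white noise vs. the same
# noise crudely low-passed at 8 kHz (mimicking a starved codec).
sr = 44_100
rng = np.random.default_rng(0)
noise = rng.standard_normal(sr)
spec = np.fft.rfft(noise)
spec[np.fft.rfftfreq(len(noise), d=1.0 / sr) > 8_000] = 0
lowpassed = np.fft.irfft(spec, n=len(noise))

print(high_band_energy_ratio(noise, sr))      # well above zero
print(high_band_energy_ratio(lowpassed, sr))  # essentially zero
```

Of course, as the parent notes, nothing stops a generator from masking this with a backing track, so it is a heuristic at best.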
D-Machine 14 hours ago||||
Thank you for saving me the time writing this. Nothing screams midwit like "Em-dash = AI". If AI detection was this easy, we wouldn't have the issues we have today.
kelvie 14 hours ago||||
Of note is the other terrible heuristic I've seen thrown around, "emojis = AI", and now the "not X, but Y = AI" one.
bhaak 13 hours ago|||
With the right context both are pretty good actually.

I think the emoji one is most pronounced in bullet point lists. AI loves to add an emoji to bullet points. I guess they got it from lists in hip GitHub projects.

The other one is not as strong but if the "not X but Y" is somewhat nonsensical or unnecessary this is very strong indicator it's AI.

zahlman 9 hours ago||
>I guess they got it from lists in hip GitHub projects.

I see this way more often on GitHub now than I did before, though.

wjholden 13 hours ago||||
Similarly: "The indication for machine-generated text isn't symbolic. It's structural." I always liked this writing device, but I've seen people label it artificial.
bee_rider 13 hours ago||||
Em-dashes are completely innocent. “Not X but Y” is some lame rhetorical device, I’m glad it is catching strays.
icedchai 3 hours ago|||
When I see emojis in code, especially log statements, it is 100% giveaway AI was involved. Worse, it is an indicator the developer was lazy and didn't even try to clean up the most basic slop.
fuzzer371 16 hours ago|||
No one uses em dashes
dragonwriter 15 hours ago|||
If nobody used em-dashes, they wouldn't have featured heavily in the training set for LLMs. The em-dash is used somewhat rarely in informal digital prose (some people use it a lot, others not at all), but that's not the same as being entirely unused generally.
crimony 15 hours ago||||
Microsoft Word automatically converts dashes to em dashes as soon as you hit space at the end of the next word after the dash.
BLKNSLVR 15 hours ago||
That's the only way I know how to get an em dash. That's how I create them. I sometimes have to re-write something to force the "dash space <word> space" sequence in order for Word to create it, and then I copy and paste the em dash into the thing I'm working on.
robin_reala 14 hours ago|||
Option shift - in macOS (option - gives you an en dash).
Terr_ 11 hours ago||||
Alt-0151 on the numpad in Windows.

Long-press on the hyphen on most Android keyboards.

Or open whenever "Character Map" application that usually comes with any desktop OS, and copy it from there.

leoc 14 hours ago||||
Windows 10/11’s clipboard stack lets you pin selections into the clipboard, so — and a variety of other characters live in mine. And on iOS you just hold down -, of course.
dboreham 14 hours ago||||
You can Google search "em-dash" then copy/paste from the resulting page.
cwnyth 14 hours ago|||
Ctrl+Shift+U, then 2014 (em dash) or 2013 (en dash) in Linux. Former academic here, and I use the things all the time. You can find them all over my pre-LLM publications.
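All the input methods in this subthread land on one of a handful of Unicode code points (the `2013`/`2014` in the Linux sequence above are exactly these, in hex). A quick way to check which character you actually typed:

```python
# The easily confused dash characters, by Unicode code point.
dashes = {
    "hyphen-minus": "-",  # U+002D, the plain keyboard key
    "en dash": "–",       # U+2013, used for ranges like 1–9
    "em dash": "—",       # U+2014, used for parenthetical breaks
}
for name, ch in dashes.items():
    print(f"{name}: {ch} (U+{ord(ch):04X})")
```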
schrodinger 15 hours ago||||
I do—all the time. Why not?

I also use en dashes when referring to number ranges, e.g., 1–9

dboreham 14 hours ago||
I didn't know these fancy dashes existed until I read Knuth's first book on typesetting. So probably 1984. Since then I've used them whenever appropriate.
rmunn 15 hours ago||||
Except for Emily Dickinson, who is an outlier and should not be counted.

Seriously, she used dashes all the time. Here is a direct copy and paste of the first two stanzas of her poem "Because I could not stop for Death" from the first source I found, https://www.poetryfoundation.org/poems/47652/because-i-could...

  Because I could not stop for Death –
  He kindly stopped for me –
  The Carriage held but just Ourselves –
  And Immortality.

  We slowly drove – He knew no haste
  And I had put away
  My labor and my leisure too,
  For His Civility –
Her dashes have been rendered as en dashes in this particular case rather than em dashes, but unless you're a typography enthusiast you might not notice the difference (I certainly didn't and thought they were em dashes at first). I would bet if I hunted I would find some places where her poems have been transcribed with em dashes. (It's what I would have typed if I were transcribing them).
Finnucane 4 hours ago||
Here is an image of the original manuscript page:

https://www.edickinson.org/editions/1/image_sets/12174893

Dickinson's dashes tended to vary over time, and were not typeset during her lifetime (mostly). Also, mid-19th century usage was different—the em-dash was a relatively new thing.

awakeasleep 15 hours ago||||
Except for highly literate people, and people who care about typography.

Think about it— the robots didn’t invent the em-dash. They’re copying it from somewhere.

amrocha 14 hours ago||
My impression of people that say they’re em dash users is that they’re laundering their dunning kruger through AI.
DocTomoe 14 hours ago|||
Tell me you never worked with LaTeX and an university style guide without telling me you never worked with LaTeX and an university style guide.
account42 11 hours ago||
Approximately no one writes internet comments or even articles in LaTeX.
theptip 2 hours ago|||
> Will create

As others have noted, it’s a long-term trend - agree that as you note it’ll get worse. The Russian psy-ops campaigns from the Internet Research Agency during Trump #1 campaign being a notable entry, where for example they set up both fake far-left and far-right protest events on FB and used these as engagement bait on the right/left. (I’m sure the US is doing the same/worse to their adversaries too.)

Whatever fraction bots play overall, it has to be way higher for political content given the power dynamics.

mlrtime 8 hours ago|||
There are top posts daily of either 100% false images or very doctored images to portray a narrative (usually political or social) on reddit.

Then the comments are all usually not critical of the image but to portray the people supporting the [fake] image as being in a cult. It's wild!

alex1138 16 hours ago|||
Next step: find out whether Youtube will remove it if you point it out

Answer? Probably "of course not"

They're too busy demonetizing videos, aggressively copyright striking things, or promoting Shorts, presumably

eudamoniac 5 hours ago|||
I think people will eventually learn to not trust any videos or stories they see online. I think the much bigger issue will be what happens when the LLM providers encode "alignment" into the models to insist on certain worldviews, opinions, or even falsehoods. Trust in LLMs and usage of them is increasing.

"Great question! No, we have always been at war with Eurasia. Can I help with anything else?"

NoMoreNicksLeft 4 hours ago||
"Eventually" does a lot of heavy lifting in your prediction. This is like saying that if you feed poison to panda bears, they will eventually become immune to poison. On what timescale, though? 8 million years from now, if the species survives, and if I've been feeding that poison to each and every generation... sure.

If I just feed it to 10 pandas, today, they're all dead.

And I suspect that humanity's position in this analogy is far closer to the latter than the former.

eudamoniac 4 hours ago||
People stopped falling for photoshopped pictures and staged Chinese reels pretty quickly. I think people will pretty quickly decide anything outrageous is probably AI. And by people I mean the right half of the bell curve, which is all you can hope for. The left half will have problems in the world as they always have.
TiredOfLife 13 hours ago|||
You don't need AI for that.

https://youtu.be/xiYZ__Ww02c

phatfish 9 hours ago||
Google is complicit in this sort of content by hosting it no questions asked. They will happily see society tear itself apart as long as they are getting some ad revenue from it. Same as the other social media companies.

And yes I know the argument about Youtube being a platform it can be used for good and bad. But Google control and create the algorithm and what is pushed to people. Make it a dumb video hosting site like it used to be and I'll buy the "bad and good" angle.

armcat 5 hours ago||
What's changed significantly is the "intent" behind the content. The intent today is mostly content generation for its own sake, i.e. "I write a blog post or a public GitHub repo for the sake of having a digital footprint". This is vastly different from when I used the internet in the mid/late 90s. The intent back then was more like an online journal, or something you hoped someone else might find useful one day. Back then we had GeoCities, Lycos, and AltaVista, and we browsed with Netscape and IE. For example, I began my programming foray with Delphi/Pascal, and I remember browsing these "crappy" [1] websites that were mostly just text, which explained how to convert C programs to Delphi (I was working on some plugins). The content was "genuine": someone laboriously documenting what worked for them, knowing very well that no one might ever actually read it. This is the issue today: it's all just for show.

[1] Those "crappy websites" with a maze of iframes are actually considered surprisingly refreshing today.

viccis 18 hours ago||
>which is not a social network, but I’m tired of arguing with people online about it

I know this was a throwaway parenthetical, but I agree 100%. I don't know when the meaning of "social media" went from "internet based medium for socializing with people you know IRL" to a catchall for any online forum like reddit, but one result of this semantic shift is that it takes attention away from the fact that the former type is all but obliterated now.

LexiMax 17 hours ago||
> the former type is all but obliterated now.

Discord is the 9,000lb gorilla of this form of social media, and it's actually quietly one of the largest social platforms on the internet. There's clearly a desire for these kinds of spaces, and Discord seems to be filling it.

While it stinks that it is controlled by one big company, it's quite nice that its communities are invite-only by default and largely moderated by actual flesh-and-blood users. There's no single public shared social space, which means there's no one shared social feed to get hooked on.

Pretty much all of my former IRC/Forum buddies have migrated to Discord, and when the site goes south (not if, it's going to go public eventually, we all know how this story plays out), we expect that we'll be using an alternative that is shaped very much like it, such as Matrix.

PaulDavisThe1st 16 hours ago|||
> Discord is the 9,000lb gorilla of this form of social media, and it's actually quietly one of the largest social platforms on the internet. There's clearly a desire for these kinds of spaces, and Discord seems to be filling it.

The "former type" had to do with online socializing with people you know IRL.

I have never seen anything on Discord that matches this description.

LexiMax 16 hours ago|||
I'm in multiple Discord servers with people I know IRL.

In fact, I'd say it's probably the easiest way to bootstrap a community around a friend-group.

gwd 9 hours ago||
Is this a generational thing? All my groups of this type are on WhatsApp (unfortunately).
fullstop 6 hours ago|||
Yes. Whatsapp requires a phone number and Discord does not. The tweens who do not have a phone yet can join Discord with their siblings / friends.

The other part of this is that Discord has official University hubs, so the college kids are all in there. You need an email address from that University to join: https://support.discord.com/hc/en-us/articles/4406046651927-...

patja 5 hours ago|||
Are you using the word tweens in some sense other than its usual definition of pre-teen? My understanding is that discord, like most online services, requires registered users to be 13 years old.
fullstop 4 hours ago||
Nope, that's exactly what I meant. That requirement just means that they have to check a box which says that they're 13 or older. Surely no child would ever break the rules, right?
vlachen 5 hours ago|||
Having been out of university since before Discord was much of a thing, that's news to me. It also is eerily reminiscent of Facebook's beginning sign up requirements.
fullstop 4 hours ago||
I guess that depends on the University and whether or not you get to keep your email address after you graduate. From what I understand from my college-aged kids, most people get kicked out of the hub after they graduate.

It's similar in Apple's strategy of trying to get Macintosh into the classrooms (in the 80s/90s), and student discounts on Adobe products.

I am not a huge fan of Discord, although I do use it. It's very good at what it does, and the communities it houses are well moderated, at least the ones that I have joined. I dislike that they've taken over communities and walled them off from the "searchable" internet.

StrauXX 4 hours ago||
That is actually something I quite like about Discord. Whatever I write and post, while not "private", is not indexed or searchable by anyone other than those people who have been vetted (invited) by the respective community. Note that I'm mostly on small friend-group Discords with 10-100 members.
fullstop 3 hours ago||
Right, and they're blending two things together -- group chats and public forums. I'm sad about losing the public forums.
wongarsu 7 hours ago||||
Discord's initial core demographic was online gaming. From there it has radiated outwards due to being the best group messaging (and voice chat) solution out there. The more overlap your friend group has with gaming and adjacent groups the more likely they are to use Discord
moregrist 4 hours ago||
When Bloomberg’s podcasts have a Discord channel (eg: Odd Lots), you know it has broken free of its gaming origins.
SirHumphrey 8 hours ago||||
Maybe, but at least in my circles it's a structure thing: until the group can sanely be organised in a single chat, something else will be used; but as soon as multiple chats are required, the thing moves to Discord.
midius 8 hours ago|||
might be a regional thing instead, i don't know many americans with whatsapp -- all of my friends are on discord.
nitwit005 14 hours ago||||
You're essentially saying you haven't seen anyone's private chats.

I'm in a friend Discord server. It's naturally invisible unless someone sends you an invite.

bee_rider 4 hours ago||||
The split where social networking is mostly for people you “know” and social media is… some other thing, mostly for scrolling videos, definitely is significant.

But, the “know IRL” split is a bit artificial I think. For example my discord is full of people I knew in college: I knew them IRL for four years, and then we all moved around and now we’ve known each other online for decades. Or childhood friends. By now, my childhood friend and college friend circles are partially merged on discord, and they absolutely know each other (unfortunately there’s no way for you to evaluate this but I know them all quite well and it would be absurd to me, to consider them anything other than friends).

The internet is part of the real world now. People socialize on it. I can definitely see a distinction between actually knowing somebody, and just being in a discord channel with them. But it is a fuzzy social thing I think, hard to nail down exactly where the transition is (also worth noting that we have acquaintances that we don’t really “know” offline, the cashier at our favorite shops for example).

thot_experiment 14 hours ago||||
Yeah, same as sibling comments, I'm in multiple Discord servers for IRL friend groups. I personally run one with ~50 people that sees hundreds of messages a day. By far my most used form of social media. Also, as OP said, I'll be migrating to Matrix (probably) when they IPO; we've already started an archival project just in case.
cosignal 42 minutes ago||||
I'm sorry but what?! 'Socializing with people you know IRL' is almost exclusively what I've seen Discord used for, and almost solely what I personally use it for. There are vastly more Discord servers set up among IRL friend groups (or among classmates, as another popular use case) than there are Discord servers for fandoms of people who have never met IRL.
jjice 5 hours ago||||
While it's also used to socialize with people you don't know IRL, most of my experience with Discord (mostly in uni) was to aggregate people IRL together. We had discords for clubs, classes, groups of friends, etc. The only reason I use discord now is for the same reason. Private space for a group of people to interact asynchronously in a way that's more structured than a text group chat.
andyouwont 10 hours ago||||
And you won't. I will NOT invite anyone from "social media" to any of the 3 very private, yet outrageously active, servers, and that's why they have less than 40 users collectively. They're basically for playing games and re-streaming movies among people on a first-name basis or close to it. And I know those 40 people have servers of their own, and I know I'll never ever have access to them either, because I don't know the other people in them.

And I know servers like these are in the top tier of engagement for Discord on the whole, because they keep being picked for A/B testing new features. Like, we had Activities some half a year early. We actually had the voice modifiers on two of them, and most people don't even know that was a thing.

esseph 15 hours ago|||
Idk most of the people I "met" on the internet happened originally on IRC. I didn't know them till a decade or more later.
dartharva 17 hours ago||||
I'd say WhatsApp is a better example
Ekaros 11 hours ago||
WhatsApp really feels to me more like group chat. Not really breaking barrier of social media. But then again I am not in any mass chats.

Discord is many things. Private chat groups, medium communities and then larger communities with tens of thousands of users.

nottorp 10 hours ago||
> WhatsApp really feels to me more like group chat.

So what's wrong with that?

alex1138 5 hours ago|||
900 lb. Gorillas don't weigh 9000 lbs
munificent 16 hours ago|||
> "internet based medium for socializing with people you know IRL"

"Social media" never meant that. We've forgotten already, but the original term was "social network" and the way sites worked back then is that everyone was contributing more or less original content. It would then be shared automatically to your network of friends. It was like texting but automatically broadcast to your contact list.

Then Facebook and others pivoted towards "resharing" content and it became less "what are my friends doing" and more "I want to watch random media" and your friends sharing it just became an input into the popularity algorithm. At that point, it became "social media".

HN is neither since there's no way to friend people or broadcast comments. It's just a forum where most threads are links, like Reddit.

hnbad 7 hours ago||
I think most people only recall becoming aware of Facebook when it was already so widespread that people talked about it as "the site you go to to find out what extended family members and people you haven't spoken to in years are up to".

Let's remember that the original idea was to connect with people in your college/university. I faintly recall this time period because I tried to sign up for it only to find out that while there had been an announcement that it was opened up internationally, it still only let you sign up with a dot EDU email address, which none of the universities in my country had.

In the early years "social media" was a lot more about having a place to express yourself or share your ideas and opinions so other people you know could check up on them. Many remember the GIF anarchy and crimes against HTML of Geocities but that aesthetic also carried over to MySpace while sites like Live Journal or Tumblr more heavily emphasized prose. This was all also in the context of a more open "blogosphere" where (mostly) tech nerds would run their own blogs and connect intentionally much like "webrings" did in the earlier days for private homepages and such before search engine indexing mostly obliterated their main use.

Facebook pretty much created modern "social media" by creating the global "timeline", forcing users to compete with each other (and corporate brands) for each other's attention while also focusing the experience more on consumption and "reaction" than creation and self-expression. This in turn resulted in more "engagement" which eventually led to algorithmic timelines trying to optimize for engagement and ad placement / "suggested content".

HN actually follows the "link aggregator" or "news aggregator" lineage of sites like Reddit, Digg, Fark, etc. (there were also "bookmark aggregators" like StumbleUpon, but most of those died out too). In terms of social interactions it's more like e.g. the Slashdot comment section, even though the "feed" is somewhat "engagement driven" like on social media sites. But as you said, it lacks all the features that would normally be expected, like the ability to "curate" your timeline (or in fact, having a personalized view of the timeline at all) or being able to "follow" specific people. You can't even block people.

roywiggins 16 hours ago|||
It's even worse than that, TikTok & Instagram are labeled "social media" despite, I'd wager, most users never actually posting anything anymore. Nobody really socializes on short form video platforms any more than they do YouTube. It's just media. At least forums are social, sort of.
bandrami 13 hours ago|||
I'll come clean and say I've still never tried Discord and I feel like I must not be understanding the concept. It really looks like it's IRC but hosted by some commercial company and requiring their client to use and with extremely tenuous privacy guarantees. I figure I must be missing something because I can't understand why that's so popular when IRC is still there.
lmm 12 hours ago|||
IRC has many, many usability problems which I'm sure you're about to give a "quite trivial curlftpfs" explanation for why they're unimportant: missing messages if you're offline, inconsistent standards for user accounts/authentication, no consensus on how even basic rich text should work, much less sending images, inconsistent standards for voice calls that tend to break in the presence of NAT, same thing for file transfers...
Ekaros 11 hours ago||||
It is IRC, but with modern features and no channel splits. It also adds voice chat and video sharing. The trade-off is privacy and a commercial platform; on the other hand, it is much simpler to use. IRC is a mess of usability, really. Discord has a much better user experience for new users.
doublerabbit 7 hours ago||
> Discord has much better user experience for new users.

Until you join a server that gives you a whole essay of what you can and cannot do with extra verification. This then requiring you to post in some random channel waiting for the moderator to see your message.

You're then forced to assign roles to yourself to please a bot that will continue to spam you with notifications announcing to the community you've leveled up for every second sentence. Finally, everyone glaring at you in channel or leaving you on read because you're a newbie with a leaf above your username. Each to their own, I guess.

/server irc.someserver.net

/join #hello

/me says Hello

I think I'll stick with that.

At least Discord and IRC are interchangeable in the sake of idling.

iceflinger 7 hours ago||||
I was a heavy IRC user in 2015, before Discord, and even though I personally prefer using IRC, it was obvious it would take over the communities I was in for a few reasons:

1. People don't understand or want to set up a client that isn't just loading some page in their browser.

2. People want to post images and see the images they posted without clicking through a link; in some communities images might be shared more than text.

3. People want a persistent chat history they can easily access from multiple devices, notifications, etc.

4. Voice chat; many IRC communities would run a tandem Mumble server too.

All of these are solvable for a tech-savvy enough IRC user, but Discord gets you all of this out of the box with barely more than an email account.

There are probably more, but these are the biggest reasons why it felt like within a year I was idling in channels by myself. You might not want discord but the friction vs irc was so low that the network effect pretty much killed most of IRC.

qludes 10 hours ago||||
Because it's the equivalent to running a private irc server plus logging with forum features, voice comms, image hosting, authentication and bouncers for all your users. With a working client on multiple platforms (unlike IRC and jabber that never really took off on mobile).
krawcu 12 hours ago|||
it's very easy to make a friend server that has all you basically need: sending messages, images/files and being able to talk with voice channels.

you can also invite a music bot or host your own that will join the voice channel with a song that you requested

bandrami 12 hours ago||
Right.... how is that different from IRC other than being controlled by a big company with no exit ability and (again) extremely tenuous privacy promises?
petu 10 hours ago|||
IRC doesn't offer voice/video, which is unimaginable for a Discord alternative.

When we get to alternative proposals with functioning calls, I'd say having them as voice channels that just exist 24/7 is a big thing too. It's a tiny thing from a technical perspective, but it makes something like Teams an unsuitable alternative to Discord.

In Teams you start a call and everyone's phone rings; you distract everyone from whatever they were doing -- you'd better have a good reason for doing so.

In Discord you just join an empty voice channel (on your private server with friends) without any particular reason and go on with your day. Maybe someone sees that you're there and joins, maybe not. No need to think of anyone's schedule, and you don't annoy people who don't have time right now.

trinix912 11 hours ago|||
For the text chat, it's different in the way that it lets one make their own 'servers' without having to run the actual hardware server 24/7, free of charge, no need to battle with NATs and weird nonstandard ways of sending images, etc.

The big thing is the voice/videoconferencing channels which are actually optimized insanely well, Discord calls work fine even on crappy connections that Teams and Zoom struggle with.

Simply put it's Skype x MSN Messenger with a global user directory, but with gamers in mind.

PurpleRamen 9 hours ago|||
> I don't know when the meaning of "social media" went from "internet based medium for socializing with people you know IRL" to a catchall

5 minutes after the first social network became famous. It never really has been just about knowing people IRL, that was only in the beginning, until people started connecting with everyone and their mother.

Now it's about people and them connecting and socializing. If there are persons, then it's social. HN has profiles where you can "follow" people, thus, it's social on a minimal level. Though, we could dispute whether it's just media or a mature network. Because there obviously are notable differences in terms of social-related features between HN or Facebook.

flomo 15 hours ago|||
You know Meta, the "social media company" came out and said their users spend less than 10% of the time interacting with people they actually know?

"Social Media" had become a euphemism for 'scrolling entertainment, ragebait and cats' and has nothing to do 'being social'. There is NO difference between modern reddit and facebook in that sense. (Less than 5% of users are on old.reddit, the majority is subject to the algorithm.)

alex1138 5 hours ago||
It would be nice if the social media company actually showed their posts in the feed
dpkirchner 2 hours ago||
I'm hopeful that one day engineers at Meta will crack the chronological sort code. It's a tough algorithm but I bet Llama can help 'em out.

Better back button handling and fixing the location bugs in event creation may well be entirely beyond Llama, sadly.

ianburrell 18 hours ago||
The social networks have all added public media and algorithms. I read an explanation that because friends don't produce enough content to keep people engaged, they added public feeds. I'm disappointed that there isn't a private Bluesky/Mastodon. I also want an algorithm that shows the best of what the people I follow posted since I last checked, so I can keep up.
makingstuffs 17 hours ago||
I think the notion that 'no one' uses em dashes is a bit misguided. I've personally used them in text for as long as I can remember.

Also, on the phrase "you're absolutely right": it's definitely a phrase my friends and I use a lot, albeit in a sort of sarcastic manner when one of us says something obvious, but nonetheless we use it. We also tend to use "Well, you're not wrong", again in a sarcastic manner, for something which is obvious.

And, no, we’re not from non English speaking countries (some of our parents are), we all grew up in the UK.

Just thought I’d add that in there as it’s a bit extreme to see an em dash instantly jump to “must be written by AI”

oxguy3 15 hours ago||
It is so irritating that people now think you've used an LLM just because you use nice typography. I've been using en dashes a ton (and em dashes sporadically) since long before ChatGPT came around. My writing style belonged to me first—why should I have to change?

If you have the Compose key [1] enabled on your computer, the keyboard sequence is pretty easy: `Compose - - -` (and for en dash, it's `Compose - - .`). Those two are probably my most-used Compose combos.

[1]: https://en.wikipedia.org/wiki/Compose_key

zahlman 9 hours ago|||
I configured my system to treat caps lock as compose, and also set up a bunch of custom compose sequences that better suit how I think about the fancy characters I most often want to type. My em-dash is `Compose m d`.
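On X11 systems, custom mappings like that typically live in `~/.XCompose`. A minimal sketch; the `m d` binding mirrors the commenter's personal choice and is not a default:

```
# Keep the system's default compose table, then add custom sequences.
include "%L"

# Compose m d -> em dash (a personal binding, not a default)
<Multi_key> <m> <d> : "—" U2014

# The stock table already provides, among others:
#   Compose - - -  -> em dash
#   Compose - - .  -> en dash
```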
Ericson2314 14 hours ago||||
Also on phones it is really easy to use em dashes. It's quite out in the open whether I posted from desktop or phone because the use of "---" vs "—" is the dead give-away.
thfuran 9 hours ago||||
How do you find yourself using en dashes more than em dashes?
wongarsu 6 hours ago|||
Not OP, but I find the space-en-space convention easier to read than the nospace-em-nospace convention. American style guides prefer the latter – in my eyes they are wrong about that
rkomorn 6 hours ago||
They're wrong about preferring the style you find easier to read?

Did you mean American style guides prefer the latter?

wongarsu 6 hours ago||
brain fart, fixed
acidburnNSA 9 hours ago||||
For me I use en dashes a lot for ranges like 1–N
yoz-y 9 hours ago|||
Maybe they write out a lot of ranges?
HaZeust 13 hours ago|||
Hot take, but a character that demands zero space between the end of one word and the beginning of the next, when it ISN'T a hyphenated compound, is NOT nice typography. I don't care how prevalent it is, or once was.
reddalo 11 hours ago|||
I don't know if my language grammar rules (Italian) are different than English, but I've always seen spaces before and after em-dashes. I don't like the em-dash being stuck to two unrelated words.
xorcist 9 hours ago|||
That's because in Italian, like in many other European languages, you use en-dashes to separate parenthetical clauses. The en-dash is used with space, the em-dash (mostly) without space and that's why it's longer. On old typewriters they were frequently written as "--" and "---" respectively. So yes, it's mostly an English thing. Stick to your trattinos, they're nice!
mr_mitm 10 hours ago|||
It's a US thing
vurudlxtyt 12 hours ago||||
That sounds like a strongly held opinion rather than a fact.

I like em-dashes and will continue to use them.

HaZeust 1 hour ago|||
That'll show 'em.
zahlman 9 hours ago|||
>That sounds like a strongly held opinion rather than a fact.

Yes, that is more or less what "hot take" means.

imafish 12 hours ago|||
agree. it implies a strong relationship between the two words it is inserted between - not the sentences.
kimixa 15 hours ago|||
As a brit I'd say we tend to use "en-dashes", slightly shorter versions - so more similar to a hyphen and so often typed like that - with spaces either side.

I never saw em-dashes—the longer version with no space—outside of published books and now AI.

dang 15 hours ago|||
The en-dash is also highly worthy!

Just to say, though, we em-dashers do have pre-GPT receipts:

https://news.ycombinator.com/item?id=46673869

dragonwriter 9 hours ago||||
There are British style manuals (e.g., the Guardian’s) that prefer em-dashes for roughly the same set of uses they tend to be preferred for in US style guides, but British practice is mixed between em-dashes and en-dashes (both usually set open), while all the influential American style guides prefer em-dashes (but are split, for digressive/parenthetical use, between setting them closed [e.g., Chicago Manual] and open [e.g., AP Style]).
rmunn 14 hours ago||||
Besides the LaTeX use, on Linux if you have gone into your keyboard options and configured a rarely-used key to be your Compose key (I like to use the "menu" key for this purpose, or right Alt if on a keyboard with no "menu" key), you can type Compose sequences as follows (note how they closely resemble the LaTeX -- or --- sequences):

Compose, hyphen, hyphen, period: produces – (en dash)

Compose, hyphen, hyphen, hyphen: produces — (em dash)

And many other useful sequences too, like Compose, lowercase o, lowercase o to produce the ° (degree) symbol. If you're running Linux, look into your keyboard settings and dig into the advanced settings until you find the Compose key, it's super handy.

P.S. If I was running Windows I would probably never type em dashes. But since the key combination to type them on Linux is so easy to remember, I use em dashes, degree symbols, and other things all the time.
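For reference, the same sequences written out in a custom `~/.XCompose` file look roughly like this (a sketch; the standard locale Compose file already defines these, so a custom file is only needed for overrides or additions):

```
include "%L"   # keep the system's default sequences

<Multi_key> <minus> <minus> <period> : "–"  endash   # en dash
<Multi_key> <minus> <minus> <minus>  : "—"  emdash   # em dash
<Multi_key> <o> <o>                  : "°"  degree   # degree sign
```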

dragonwriter 9 hours ago||
> If I was running Windows I would probably never type em dashes. But since the key combination to type them on Linux is so easy to remember, I use em dashes, degree symbols, and other things all the time.

There are compose key implementations for Windows, too.

Ericson2314 14 hours ago||||
I think that's just incorrect. There are varying conventions for spaces vs no spaces around em dashes, but all English style manuals confine en dashes to things like "0–10" and "Louisville–Calgary" — at least to my knowledge.
kimixa 14 hours ago||
The Oxford style guide page 18 https://www.ox.ac.uk/public-affairs/style-guide

> m-dash (—)

> Do not use; use an n-dash instead.

> n-dash (–)

> Use in a pair in place of round brackets or commas, surrounded by spaces.

Remember I'm specifically speaking about british english.

azangru 9 hours ago||
HMRC style guide: "Avoid the shorter en dash as they are treated differently by different screen readers" [0].

But I see what you mean. There used to be a distinction between a shorter dash that is used for numerical ranges, or for things named after multiple people, and a longer dash used to connect independent clauses in a sentence [1]. I am shocked to hear that this distinction is being eroded.

[0] https://design.tax.service.gov.uk/hmrc-content-style-guide/

[1] https://www.cl.cam.ac.uk/~tmj32/styleguide/

kimixa 10 minutes ago||
That guy's style guide seems to conflict with the Cambridge editorial services guidelines - though that is for books rather than papers:

> Spaced en rules (or ‘en dashes’) must be used for parenthetical dashes. Hyphens or em rules (‘em dashes’) will not be accepted for either UK or US style books. En rules (–) are longer than hyphens (-) but shorter than em rules (—).

Section 2.1, "Editorial services style guide for academic books" https://www.cambridge.org/authorhub/resources/publishing-gui...

eru 15 hours ago||||
It's also easy to get them in LaTeX: just type --- and they will appear as an em-dash in your output.
susam 15 hours ago|||
Came here to confirm this. I grew up learning BrE and indeed in BrE, we were taught to use en-dash. I don't think we were ever taught em-dash at all. My first encounter with em-dash was with LaTeX's '---' as an adult.
Anthony-G 4 hours ago|||
On my side of the Atlantic, using en-dashes with spaces on either side of the dash is acceptable writing style, so that’s what I use (instead of em-dashes). However, many people can’t tell the difference between the two, so some might confuse my writing with that of an LLM. But I’m not going to let that dictate my writing style.

For the past 15 years, I’ve used the Unicycle Vim plugin¹ which makes it very easy to add proper typographic quotes and dashes in Insert mode. As something of a typography nerd, I’ve extended it to include other Unicode characters, e.g., prime and double-prime characters to represent minutes and seconds.

At the same time, I’ve always used a Firefox extension that launches GVim when editing a text box; currently, I’m using Tridactyl for this purpose.

¹ https://www.vim.org/scripts/script.php?script_id=1384

karim79 17 hours ago|||
I would add that a lot of us who were born or grew up in the UK are quite comfortable saying stuff like "you're right, but...", or even "I agree with you, but...". The British politeness thing, presumably.
PaulDavisThe1st 16 hours ago||
0-24 in the UK, 24-62 in the USA, am now comfortable saying "I could be wrong, but I doubt it" quite a lot of the time :)
topaz0 5 hours ago|||
I'm in your camp, that both of these are appropriate for some situations. In particular I like starting with a variation on "you're absolutely right" when it appears my interlocutor has identified the wrong disagreement, before realigning the conversation to a more useful direction (though of course there are many phrases that accomplish that).

It's still frequently identifiable in (current-generation) LLM text by the glossy superficiality that comes along with these usages. For example, in "It's not just X, it's Y", when a human does this it will be because Y materially adds something that's not captured by X, but in LLM output X and Y tend to be very close in meaning, maybe different in intensity, such that saying them both really adds nothing. Or when I use "You're absolutely right" I'll clarify what they are right about, whereas for the LLM it's just an empty affirmation.

bee_rider 4 hours ago||
You’re absolutely right. It’s not just a writing style, it’s how the ideas are expressed.
babymetal 16 hours ago|||
Just my two cents: We use em-dashes in our bookstore newsletter. It's more visually appealing than semicolons and more versatile, as it can be used to block off both ends of a clause. I even use en-dashes between numbers in a range though, so I may be an outlier.
embedding-shape 5 hours ago|||
Yeah, I mean, ultimately, aren't the LLMs trained to look like human language? So whatever particular "quirk" you have as a writer, there is probably an LLM emulating it, either wholesale or like 50% of the time.

LLMs use em-dashes because people (in their training data) used em-dashes. They say "You're absolutely right" because that's a common human phrase. It's not "You write like an LLM", it's "The LLMs write kind of like you", and for good reason: that's exactly what people have been training them to do.

And yes, "pun" intended for extra effect, that also comes from humans doing it.

forgotpwd16 1 hour ago||
The LLM output isn't an unfiltered result of an unbiased model. Rather, some texts may be classified as high-quality (where the em-dash, curly quotes, and a more sophisticated, less-everyday vocabulary are more expected to appear), some as low-quality, and some choices are driven by human feedback (aka fine-tuning), either to improve quality (OpenAI employs Kenyans, and Kenyan/Nigerian English is considered more colonial) or engagement through affirmative/reinforcing responses ("You're absolutely right. The universe is indeed a donut. Want me to write down an abstract? Want me to write down the equations?"). Some nice relevant articles are [1], [2].

[1]: https://marcusolang.substack.com/p/im-kenyan-i-dont-write-li... [2]: https://www.nytimes.com/2025/12/03/magazine/chatbot-writing-...

mc3301 17 hours ago|||
Also, I've seen people edit, one-by-one, each m-dash. And then they copy-paste the entire LLM output, thinking it looks less AI-like or something.
zahlman 9 hours ago||
Oof. I don't know what's worse there: that they don't know a conventional way to find-and-replace, or that they didn't try asking the LLM not to use them. (Or to fix it afterwards.)
skwee357 11 hours ago|||
The thing with em-dashes is not the em-dash itself. I use em-dashes, because when I started to blog, I was curious about improving my English writing skills (English is not my native language, and although I have learned English in school, most of my English is coming from playing RPGs and watching movies in English).

According to what I know, the correct way to use an em-dash is to not surround it with spaces, so words look connected like--this. And indeed, when I started to use em-dashes in my blog(s), that's how I did it. But I found it rather ugly, so I started to put spaces around it. And there were periods where I stopped using em-dashes altogether.

I guess what I'm trying to say is that unless you write professionally, most people are inconsistent. Sometimes I use em-dashes; sometimes I don't. In some cases I capitalize my words where needed, and sometimes not, depending on how much of a hurry I'm in, or whether I'm typing from a phone (which does a lot of the heavy lifting for me).

If you see someone who consistently uses the "proper" grammar in every single post on the internet, it might be a sign that they use AI.

patrickmay 4 hours ago||
I am a native English speaker and I agree with you completely that em-dashes look better when surrounded by spaces rather than connected directly to the words.
jasonhansel 17 hours ago|||
Em-dashes may be hard to type on a laptop, but they're extremely easy to type on iOS—you just hold down the "-" key, as with many other special characters—so I use them fairly frequently when typing on that platform.
carbocation 16 hours ago|||
Em-dashes are easy to type on a macos laptop for what it's worth: option-shift-minus.
sltkr 14 hours ago|||
Also on Linux when you enable the compose key: alt-dash-dash-dash (--- → —) and for the en-dash: alt-dash-dash-dot (--. → –)
bigstrat2003 14 hours ago|||
That's not as easy as just hitting the hyphen key, nor are most people going to be aware that even exists. I think it's fair to say that the hyphen is far easier to use than an em dash.
wk_end 16 hours ago||||
But why when the “-“ works just as well and doesn’t require holding the key down?

You’re not the first person I’ve seen say that FWIW, but I just don’t recall seeing the full proper em-dash in informal contexts before ChatGPT (not that I was paying attention). I can’t help but wonder if ChatGPT has caused some people - not necessarily you! - to gaslight themselves into believing that they used the em-dash themselves, in the before time.

MarkusQ 16 hours ago||
No. En-dash doesn't work "just as well" as an em-dash, any more than a comma works as an apostrophe. They are different punctuation marks.

Also, I was a curmudgeon with strong opinions about punctuation before ChatGPT—heck, even before the internet. And I can produce witnesses.

kimixa 15 hours ago|||
In British English you'd be wrong for using an em-dash in those places, with most grammar recommendations being for an en-dash, often with spaces.

It'd be just as wrong as using an apostrophe instead of a comma.

Grammar is often woolly in a widely used language with no single centralised authority. Many of the "hard rules" some people think are fundamental truths are really more local style guides, and often a lot more recent than some people seem to believe.

optimalquiet 14 hours ago||
Interesting, I’m an American English speaker but that’s how it feels natural to me to use dashes. Em-dashes with no spaces feels wrong for reasons I can’t articulate. This first usage—in this meandering sentence—feels bossy, like I can’t have a moment to read each word individually. But this second one — which feels more natural — lets the words and the punctuation breathe. I don’t actually know where I picked up this habit. Probably from the web.
evanelias 13 hours ago||
It can also depend on the medium. Typically, newspapers (e.g. the AP style guide) use spaces around em-dashes, but books / Chicago style guide does not.
fuzzer371 15 hours ago|||
They mean the same thing to 99.999% of the population.
0xmattf 5 hours ago|||
> I’ve personally used them in text for as long as I can remember.

Likewise. I used to copy/paste them when I couldn't figure out how to actually type them, lol. Or use the HTML char code `&mdash;` It sucks that good grammar now makes people assume you used AI.

skipants 13 hours ago|||
I'm pretty sure the OP is talking about this thread. I have it top of mind because I participated and was extremely frustrated, not just by the AI slop, but by how much the author claimed not to use AI when they obviously had.

You can read it yourself if you'd like: https://news.ycombinator.com/item?id=46589386

It was not just the em dashes and the "absolutely right!" It was everything together, including the robotic clarifying question at the end of their comments.

fresh_broccoli 7 hours ago||
Did you paste the wrong link? While the OP of that thread was accused of using LLMs, the thread doesn't really match what the article describes.

I think this one is a much closer fit: https://news.ycombinator.com/item?id=46661308

amrocha 14 hours ago|||
You’re absolutely right—lots of very smart people use em dashes. Thank you for correcting me on that!
forgotpwd16 10 hours ago|||
If you want next, I can:

- Tell you what makes em dashes appealing.

- Help you use em dashes more.

- Give you other grammatical quirks smart people have.

Just tell me.

(If bots RP as humans, it’s only natural we start RP as bots. And yes, I did use a curly quote there.)

zahlman 8 hours ago|||
No problem! But it's also important to consider your image online. Here are some reasons not to use em-dashes in Internet forum posts:

* **Veneer of authenticity**: because of the difficulty of typing em-dashes in typical form-submission environments, many human posters tend to forgo them.

* **Social pressure**: even if you take strides to make em-dashes easier to type, including them can have negative repercussions. A large fraction of human audiences have internalized a heuristic that "em-dash == LLM" (which could perhaps be dubbed the "LLM-dash hypothesis"). Using em-dashes may risk false accusations, degradation of community trust, and long-winded meta discussion.

* **Unicode support**: some older forums may struggle with encoding for characters beyond the standard US-ASCII range, leading to [mojibake](https://en.wikipedia.org/wiki/Mojibake).

anon_anon12 13 hours ago|||
Well, the dialogue there involves two or more people; when commenting, why would you use that? Even if you have collaborators, you very likely wouldn't be discussing stuff through code comments.
tonymet 2 hours ago|||
Most of my Wikipedia edits are converting tacky hyphens to em-dashes. I also autocomplete into em-dashes.

Hyphens are so hideous that I can't stand them.

BrtByte 8 hours ago|||
When that baseline erodes, even normal human quirks start looking suspicious
postexitus 10 hours ago||
found the LLM bot guys!
GMoromisato 16 hours ago||
Most of this is caused by incentives:

YouTube and others pay for clicks/views, so obviously you can maximize this by producing lots of mediocre content.

LinkedIn is a place to sell, either a service/product to companies or yourself to a future employer. Again, the incentive is to produce more content for less effort.

Even HN has the incentive of promoting people's startups.

Is it possible to create a social network (or "discussion community", if you prefer) that doesn't have any incentive except human-to-human interaction? I don't mean a place where AI is banned, I mean a place where AI is useless, so people don't bother.

The closest thing would probably be private friend groups, but that's probably already well-served by text messaging and in-person gatherings. Are there any other possibilities?

skwee357 11 hours ago||
I remember participating on *free* phpBB forums, or IRC channels. I was amazed that I could chat with people smarter than me, on a wide range of topics, all for the cost of having an internet subscription.

It's only recently, when I was considering reviving the old-school forum interaction, that I realized that while I got the platforms for free, there were people behind them who paid for the hosting and the storage, and who were responsible for moderating the content so that every discussion didn't derail into a low-level accusation and name-calling contest.

I can't imagine the amount of time, and tools, it takes to keep discussion forums free of trolls, more so nowadays, with LLMs.

rhines 4 hours ago|||
Something that's been on my mind for a while now is shared moderation - instead of having a few moderators who deal with everything, distribute the moderation load across all users. Every user might only have to review a couple of posts a day, so it should be a negligible burden, and each post that requires moderation goes to multiple users so that if there's disagreement it can be pushed to more senior/trusted users.

This is specifically in the context of a niche hobby website where the rules are simple and identifying rule-breaking content is easy. I'm not sure it would work on something with universal scope like Reddit or Facebook, but I'd rather we see more focused communities anyway.
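A minimal sketch of that assignment-and-escalation idea (the function names, reviewer count, and escalation rule are all made up for illustration):

```python
import random

def assign_reviewers(users: list[str], k: int = 3) -> list[str]:
    """Send one flagged post to k distinct, randomly chosen reviewers."""
    return random.sample(users, k)

def resolve(votes: list[bool]) -> str:
    """Unanimous votes decide the post's fate; any disagreement escalates
    to a more senior/trusted reviewer."""
    if not votes:
        return "escalate"
    if all(votes):
        return "remove"
    if not any(votes):
        return "keep"
    return "escalate"
```

Random assignment spreads the load thinly across the whole userbase, while routing disagreements upward concentrates experienced judgment only where it's actually needed.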

namrog84 3 hours ago||
I don't know if it's true or not, but I remember reading about a person who did community reports for cheating in a game like CS or something. They had numerous bot accounts and spent an hour a day on it, set up so that when they reviewed a video, the bots would do the same.

But all the while they were doing legitimate reporting, when they came across their own real cheating account they'd report "not cheating". And supposedly this person got away with it for years because they had a good, reputable community-reporting record with high alignment scores.

I know one exception doesn't mean it's not worth it, but we must acknowledge the potential for abuse. I'd still rather have one occasionally ambitious abuser than countless low-effort ones.

rhines 3 hours ago||
Yeah I can definitely see that being a threat model. In the gaming case I think it's harder because it's more of a general reputation system and it's based on how people feel while playing with you, whereas for a website every post can be reviewed by multiple parties and the evidence is right there. But certainly I would still expect some people to try to maximize their reputation and use that to push through content that should be more heavily moderated, and in the degenerate case the bad actors comprise so much of the userbase that they peer review their own content.
nake89 7 hours ago|||
I really miss the phpBB forum days. Early 2000s. It's not just nostalgia. It truly was a better experience.
GorbachevyChase 1 hour ago|||
I’m actually happy with the way things are going. Content mills, fake testimonials, and clandestine marketing aren’t anything new at all. But now it’s so painfully obvious that the only reasonable response is complete distrust of all these media. We might actually start taking what our neighbors say more seriously than fake internet people.
ricardo81 15 hours ago|||
>incentives

spot on. The number of times I've come across a poorly made video where half the comments are calling out its inaccuracies. In the end, YouTube (or any other platform) and the creator get paid. Any kind of negative interaction with the video either counts as engagement or just means moving on to the next whack-a-mole variant.

None of these big-tech platforms that involve UGC were ever meant to scale. They are beyond accountability.

alex1138 5 hours ago||
They removed the dislike count, by the way

Makes it harder to see if you're watching an ad

Completely coincidental

AutumnsGarden 1 hour ago|||
I do use AI internally for content moderation but I’m building a platform like this at https://grove.place
cjs_ac 11 hours ago|||
> Is it possible to create a social network (or "discussion community", if you prefer) that doesn't have any incentive except human-to-human interaction? I don't mean a place where AI is banned, I mean a place where AI is useless, so people don't bother.

Yes, but its size must be limited by Dunbar's number[0]. This is the maximum size of a group of people where everyone can know everyone else on a personal basis. Beyond this, it becomes impossible to organically enforce social norms, and so abstractions like moderators and administrators and codes of conduct become necessary, and still fail to keep everyone on the same page.

[0] https://en.wikipedia.org/wiki/Dunbar%27s_number

psychoslave 10 hours ago||
I don’t think this is a hard limit. It’s also a matter of interest and opportunity: meeting people, consolidating relationships through common endeavor. It is greatly influenced by the social super-structure and how it pushes individuals to interact with each other.

To take a different cognitive domain, think about color. Wiktionary gives around 300 color names for English[1]. I doubt many English speakers would be able to use all of them with relevant accuracy. And obviously RGB encoding allows one to express far more nuances. And most people can fathom far more nuances than they could verbalize.

[1] https://en.wiktionary.org/wiki/Appendix:Colors

BrtByte 8 hours ago|||
Private groups work because reputation is local and memory is long. You can't farm engagement from people you'll talk to again next week. That might be the key.
trinix912 11 hours ago|||
I don't think it's doable with the current model of social media but:

1. prohibit all sorts of advertising, explicit and implicit, and actually ban users for it. The reason most people try to get big on SM is so they can land sponsorships outside of the app. But we'd still have the problem of telling whether something is sponsored or not.

2. no global feed, show users what their friends/followers are doing only. You can still have discovery through groups, directories, etc. But it would definitely be worse UX than what we currently have.

DudeOpotomus 5 hours ago|||
There is no definitive reason for creators to be paid. Zero. These platforms can and should stop paying people for their content. Without the platforms, the creators are dead. Make them pay for access to the audience, and this whole problem disappears while the platforms make far more profit.

Kill the influencer, kill the creator. It's all bullshit.

FeteCommuniste 2 hours ago||
I miss the days when most people uploading things were doing it just for "love of the game" or to find likeminded enthusiasts. Not because it was their "hustle" or something to put on a resume. Those times are sadly long gone.
cal_dent 14 hours ago|||
Exactly. People spend too little time thinking about the underlying structure at play here. Scratch at the surface enough and the problem is always the ad model of the internet. Until that is broken or becomes economically pointless, the existing problem will persist.

Elon Musk cops a lot of blame for the degradation of Twitter from people who care about that sort of thing, and he definitely plays a part, but it's the monetisation aspect that really tilted the signal-to-noise ratio toward all noise.

We've taken a version of the problem from the physical world into the digital world. It runs along the same lines as how high rents (commercial or residential) limit the diversity of people or commercial offerings in a place, simply because only a certain kind of thing can work or be economically viable. People always want different mixes of things and offerings, but if the structure (in this case rent) only permits one type of thing, then that's all you're going to get.

wewxjfq 12 hours ago||
Scratch further, and beneath the ad business you'll find more incentives to allow fake engagement. Man is a simple animal and likes to see numbers go up. Internet folklore says the Reddit founders used multiple accounts to get their platform going at the start; if they did, they didn't do it with ad fraud in mind. The incentives are plenty, and from the people running the platform to the users to the investors - everyone likes to be fooled. Take the money out and you still have reasons to turn a blind eye to it.

The biggest problem I see is that the Internet has become a brainwashing machine, and even if you have someone running the platform with the integrity of a saint, if the platform can influence public opinion, it's probably impossible to tell how many real users there actually are.

abluecloud 5 hours ago|||
i had the idea of starting a forum based social network that used domain validation (e.g. work domain) as part of your registration and then displayed that as part of your profile.

the idea being that you'd somewhat ensure the person is a human that _may well_ know what they're talking about e.g. `abluecloud from @meta.com`.

8organicbits 15 hours ago|||
I think incentives is the right way to think about it. Authentic interactions are not monetized. So where are people writing online without expecting payment?

Blogs can have ads, but blogs with RSS feeds are a safer bet as it's hard to monetize an RSS feed. Blogs are a great place to find people who are writing just because they want to write. As I see more AI slop on social media, I spend more time in my feed reader.

sznio 12 hours ago|||
I've been thinking recently about a search engine that filters away any sites that contain advertising. Just that would filter away most of the crap.

Kagi's small web lens seems to have a similar goal but doesn't really get there. It still includes results that have advertising, and omits stuff that isn't small but is ad free, like Wikipedia or HN.

account42 11 hours ago|||
Monetization isn't the only possible incentive for non-genuine content, though. CV-stuffing is another that is likely to affect blogs - and there have been plenty of obviously AI-generated/"enhanced" blogs posted here.
AdrianB1 9 hours ago|||
>> Is it possible to create a social network (or "discussion community", if you prefer) that doesn't have any incentive except human-to-human interaction?

Yes, it is possible. Like anything worthwhile, it is not easy. I have been a member of a small forum of around 20-25 active users for 20 years. We talk about all kinds of stuff; it was initially just IT-related, but we also touch on motorcycles (at least 5 of us ride or did ride; I used to go riding with a couple of them in the past) and some social topics, and we tend to avoid politics (too divisive) and religion (I think none of us is religious enough to debate). We were initially in the same country and some were meeting IRL from time to time, but now we are spread across Europe (one in the US), so the forum is what keeps us in contact. Even the ones in the same country, probably a minority these days, are spread too thin, but the forum is there.

If human interaction requires IRL, then I have met fewer than 10 forum members, and just 3 frequently (2 on motorcycle trips; one worked for a few years in the same place as I did), but that is not a metric that means much. The issue is the false sense of being close over the internet while being geographically far, which works in a way, but not really. For example, my best friends all emigrated; most were childhood friends. Communicating with them on the phone or the internet means I never feel lonely, but seeing them only every few years grows the distance between us. That impacts human-to-human interaction; there is no way around it.

intended 11 hours ago||
Filtering out bots is prohibitive, as bots are currently so close to human text that the false positive rate will curtail human participation.

Any community that ends up creating utility to its users, will attract automation, as someone tries to extract, or even destroy that utility.

A potential option could be figuring out community rules that ensure all content, including bot-generated content, provides utility to users. Something like the rules on Change My View, or r/AITA. There are also tests being run to see if LLMs can identify, or provide bridges across, flamewars.

meander_water 13 hours ago||
Not foolproof, but a couple of easy ways to verify if images were AI generated:

- OpenAI uses the C2PA standard [0] to add provenance metadata to images, which you can check [1]

- Gemini uses SynthId [2] and adds a watermark to the image. The watermark can be removed, but SynthId cannot as it is part of the image. SynthId is used to watermark text as well, and code is open-source [3]

[0] https://help.openai.com/en/articles/8912793-c2pa-in-chatgpt-...

[1] https://verify.contentauthenticity.org/

[2] https://deepmind.google/models/synthid/

[3] https://github.com/google-deepmind/synthid-text
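As a very rough first pass (a heuristic sketch, not real verification: C2PA manifests are cryptographically signed JUMBF boxes, and proper checking needs a tool like the verify service in [1]), you can at least scan a file's raw bytes for the `c2pa` JUMBF label:

```python
# Heuristic only: the ASCII label "c2pa" appearing in the byte stream
# suggests an embedded C2PA JUMBF box. Absence proves nothing (most
# uploads strip metadata), and presence alone doesn't validate the
# manifest's signature.
def has_c2pa_marker(data: bytes) -> bool:
    return b"c2pa" in data

# Hypothetical usage:
# with open("photo.jpg", "rb") as f:
#     print(has_c2pa_marker(f.read()))
```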

adrian17 12 hours ago||
I just went to a random OpenAI blog post ("The new ChatGPT Images is here"), right-click saved one of the images (the one from "Text rendering" section), and pasted it to your [1] link - no metadata.

I know the metadata is probably easy to strip, maybe even accidentally, but their own promotional content not having it doesn't inspire confidence.

meander_water 12 hours ago||
Yeah that's been my experience as well. I think most uploads strip the metadata unfortunately
danielbln 12 hours ago|||
SynthID can be removed: run the image through an image-to-image model with a reasonably high denoising value, or add artificial noise and use another model to denoise, and voila. It's effort that most probably aren't expending, but it's certainly possible.
cubefox 8 hours ago|||
> The watermark can be removed, but SynthId cannot as it is part of the image.

That's not quite right. SynthID is a digital watermark, so it's hard to remove, while metadata can be easily removed.

hayinneedles 10 hours ago||
Reminder that provenance exists to prove something is REAL, not to prove something is fake.

AI content outnumbers real content. We are not going to decide if every single thing is real or not. C2PA is about labeling the gold in a way the dirt can't fake. A photo with it can be considered real and used in an encyclopedia or submitted to a court without people doubting it.

maieuticagent 8 hours ago||
Ground News's approach is worth modeling because it treats authenticity not as a binary detection problem but as a transparency and comparative analysis problem. It assumes bad actors exist and makes them visible rather than trying to achieve perfect filtering. The shift from "trust this AI detection algorithm" to "here are multiple independent signals for you to evaluate" is philosophically aligned with how we should handle the Dead Internet problem. It's less about building perfect walls and more about giving people the tools to navigate an already-compromised space intelligently.
peteforde 13 hours ago||
I enjoyed this post, but I do find myself disagreeing that someone sharing their source code is somehow morally or ethically obligated to post some kind of AI-involvement statement on their work.

Not only is it impossible to adjudicate or police, I feel like this will absolutely have a chilling effect on people wanting to share their projects. After all, who wants to deal with an internet mob demanding that you disprove a negative? That's not what anyone who works hard on a project imagines when they select Public on GitHub.

People are no more required to disclose their use of LLMs than they are to release their code... and if you like living in a world where people share their code, you should probably stop demanding that they submit to your arbitrary purity tests.

sublimefire 9 hours ago||
IMO the idea that OSS authors owe more usually stems from third parties who use the code in production but never really contribute back. The only sensible thing a person publishing code online needs to do is protect their copyright and add a license. This weird idea that you somehow become responsible for the code, to the point of patching every vulnerability and bug, and now disclosing the use of AI, is wrong on so many levels. For the record, I've been publishing OSS for years.
culebron21 5 hours ago|||
The author of that Show HN post wrote that the project grew to "production ready". Reading her source code, I see a bunch of wrappers that, upon any error, will panic and crash the whole process.

This is like those projects where high schoolers took Ubuntu, changed the logo in a couple of places, and then claimed they had made a new OS.

Anybody minimally competent can see the childish exaggeration in both cases.

The most logical request is to grow up, be transparent about what you did, and stop lying.

skwee357 11 hours ago|||
Fine, I accept your point. You don't have an obligation to disclose the tools you've used. But what struck me in that particular thread is that the author kept claiming they did not use AI at all, while there were giveaway signs that the code was, _at least partly_, AI generated.

It honestly felt like being gaslit. You see one thing, but they keep insisting you are wrong.

peteforde 10 hours ago||
I admit that I got the gist of the concern and didn't actually look at the original thread.

I'd feel the same way you did, for sure.

You are absolutely right! ;)

BrtByte 8 hours ago||
Maybe the healthier framing is cultural rather than ethical.
cannonpalms 7 hours ago||
I believe this was the original thread.

Show HN: Minikv – Distributed key-value and object store in Rust (Raft, S3 API) | https://news.ycombinator.com/item?id=46661308

stephenlf 7 hours ago||
Reading through the thread right now.

Commenter:

> What % of the code is written by you and what % is written by ai

OP:

> Good question!
>
> All the code, architecture, logic, and design in minikv were written by me, 100% by hand. I did use AI tools only for a small part of the documentation—specifically the README, LEARNING.md, and RAM_COMMUNITY.md files—to help structure the content and improve clarity.
>
> But for all the source code (Rust), tests, and implementation, I wrote everything myself, reviewing and designing every part.
>
> Let me know if you want details or want to look at a specific part of the code!

Oof. That is pretty damning.

———

It’s unfortunate that em-dashes have become a shibboleth for AI-generated text. I love em-dashes, and iPhones automatically turn a double dash ( -- ) into an em dash.

culebron21 5 hours ago||
I took a look at the source code (well, what if it really is a working thing?) and, of course, two files in, there are calls that panic (i.e. crash the host process) all over the code, just everywhere, in every API method. That's nowhere near the "production ready" the author's LLM claims.

Seventeen years ago I saw a schoolboy make "his own OS", which was simply Ubuntu with replaced logos. He got on TV with it; IIRC he promoted it on the internet (forums, back then) and kept insisting it was his own work. He was bullied in response and within a few weeks disappeared from the net.

What does this have to do with me personally, if I'm neither the author nor a bully? Today I learned that I can't trust new libraries posted in official repos: they can be just wrapper-code slop. In his 2012 talk "Stop Writing Classes", Jack Diederich said he read the source of every library he used to see if there was anything stinky. I used to think that was a luxury of his time and qualifications. Now it's a necessity, at least for new projects.

nikeee 16 hours ago|
I hope that when all online content is entirely AI generated, humanity will put its phones aside and rediscover reality, because we'll realize that social networks have become entirely worthless.
mr_00ff00 15 hours ago||
To some degree, something like this is already happening. The old saying "pics or it didn't happen" used to mean young people had to take their phones out for everything.

Now any photo can be faked, so the only photos to take are ones that you want yourself for memories.

account42 11 hours ago||
That's not what that saying means/meant.
OvbiousError 11 hours ago|||
What's more likely is that a significant number of people will start having most/all of their meaningful interactions with AI instead of with other people.
schrodinger 15 hours ago|||
What a nice thought :)
Davidzheng 14 hours ago||
lol, if they don't put the phone down now, how would AI-generated content specifically optimized to keep people hooked make that any more likely?