> divisiveness this kind of stuff will create
I'm pretty sure we're already decades into the world of "has created". Everyone I know has strong opinions on every little thing, based exclusively on their emotional reactions and feed consumption. Basically no one has the requisite expertise commensurate with their conviction, but being informed is not required to be opinionated or exasperated.
And who can blame them (us)? It is almost impossible to escape the constant barrage of takes and news headlines these days without being a total luddite. And each little snippet worms its way into your brain (and well-being) one way or the other.
It's just been too much for too long and you can tell.
It's odd to me to still use "luddite" disparagingly while implying that avoiding certain tech would actually have some high-impact benefits. At that point I can't help but think the only real issue with being a luddite is not following the crowd and fitting in.
In fact, it's easier than ever to see the intended benefit of such a lifestyle.
They didn't say to avoid certain tech. They said to avoid takes and news headlines.
Your conflation of those two is like someone saying "injecting bleach into your skin is bad" and you responding with "oh, so you oppose cleaning bathrooms [with bleach]?"
It really isn't that hard, if I'm looking at my experience. Maybe a little stuff on here counts. I get my news from the FT; it's relatively benign by all accounts. I'm not sure that opting out of classical social media is particularly luddite-y, I suspect it's closer to coming into vogue than not?
Being led around by the nose is a choice still, for now at least.
I mostly get gaming and entertainment news for shows I watch, but even between those I get CNN and Fox News, both of which I view as "opinion masquerading as news" outlets.
My mom shares so many articles from her FB feed that are both mainstream (CNN, etc) nonsense and "influencer" nonsense.
I have no news feed on my phone. I doubt it is any harder to evade on Android. Social media itself is gone. The closest I get to click-bait is when my mother spouts something gleaned from the Daily Mail. That vector is harder to shift, I concede!
Simulacra and Simulation came out in '81, for an example of how long this has been a recognized phenomenon
Then I am very proudly one. I don't do TikTok, FB, IG, LinkedIn or any of this crap. I do a bit of HN here and there. I follow a curated list of RSS feeds. And twice a day I look at a curated/grouped list of headlines from around the world, built from a multitude of sources.
Whenever I see a yellow press headline from the German bullshit print medium "BILD" when paying for gas or out shopping, I can't help but smile. That people pay money for that shit is - nowadays - beyond me.
To be fair, this was a long process. And I still regress sometimes. I started my working life at an editorial team for an email portal. Our job was to generate headlines that would stop people who were logging in to read their mail, and get them to read our crap instead - because ads embedded within content were way better paid than ads around emails.
So I actually learned the trade. And learned that outrage (or sex) sells. This was some 18 or so years ago - the world changed since then. It became even more flammable. And more people seem to be playing with their matches. I changed - and changed jobs and industries a few times.
So over time I reduced my news intake. And during the pandemic I learned to definitely reduce my social media usage - it is just not healthy for my state of mind. Because I am way too easily dopamine-addicted and trigger-able. I am a classic xkcd.com/386 case.
So we emotionally convince ourselves that we have solved the problem so we can act appropriately and continue doing things that are important to us.
The founders recognized this problem and attempted to set up a Republic as an answer to it, so that each voter didn't have to ask "do I know everything about everything so I can select the best person" and instead was asked "of this finite, smaller group, who do I think is best to represent me at the next level"? We've basically bypassed that; every voter knows who ran for President last election, but hardly anyone can identify their party's local representative in the party itself (which is where candidates are selected, after all).
Most people I know who have strong political opinions (as well as those who don't) can't name their own city council members or state assemblyman, and that's a real problem for functioning representative democracy. Not only for their direct influence on local policy, but also because these levels of government also serve as the farm team or proving grounds for higher levels of office.
By the time candidates are running with the money and media of a national campaign, in some sense it's too late to evaluate them on matters of their specific policies and temperaments, and you kind of just have to assume they're going to follow the general contours of their party. By and large, it seems the entrenched political parties (and, perhaps, parties in general) are impediments to good governance.
The accidents that let it occur may no longer be present - there are arguments that "democracy" as we understand it was impossible before rapid communication, and perhaps it won't survive the modern world.
We're living in a world where a swing voter in Ohio may have more effect/impact on Iran than a person living there - or even more effect on Europe than a citizen of Germany.
Voting on principles is fine and good.
The issue is the disconnect between professed principles and action. And the fact that nowadays there are not many ways to pick and choose principles except two big preset options.
A luddite would refuse the covid vaccine. They'd refuse improved trains. They'd refuse EVs, etc. This is because Luddism is the blanket opposition to technological improvements.
That’s a very unfair accusation to throw at someone off the cuff. Anyway, what you wrote is not what a Luddite is at all, especially not the anti-vaccine accusation. I don’t think you’re being deliberately deceptive here, I think you just don’t know what a Luddite is (was).
For starters: They were not anti-science/medicine/all technology. They did not have “blanket opposition to all technological improvement.” You’re expressing a common and simplistic misunderstanding of the movement and likely conflating it with (an also flawed understanding of) the Amish.
They were, at their core, a response against industrialization that didn't account for the human cost. This was at the start of the 19th century. They wanted better working conditions and more thoughtful consideration for how industrialization took place. They were not anti-technology and certainly not anti-vaccine.
The technology they were talking about was mostly related to automation in factories, which, coupled with anti-collective-bargaining initiatives, led to further dehumanization of the workforce as well as all sorts of novel and horrific workplace accidents for adults and children alike. Their calls for "common sense laws" and "guardrails" are echoed today in how many of us talk about AI/LLMs.
Great comic on this: https://thenib.com/im-a-luddite/
Case in point: if you ask for expertise verification on HN you get downvoted. People would rather argue their point, regardless of validity. This site’s culture is part of the problem and it predates AI.
Customer asked if reporting these kinds of illegal ads would be the best course. Nope, not by a long shot. As long as Google gets its money, they will not care. Ads have become a cancer of the internet.
Maybe I should set up a Pi-Hole business...
Also, can you set Windows not to allow ad notifications through to the notification bar? If not, that should also be a point of the law.
Now I bet somebody is going to come along and scold me for trying to solve social problems by suggesting laws be made.
... which doesn't sound impossible. It's also entirely possible that the value of Section 230 has run its course and it should be significantly curtailed (its intent was to make online forums and user-generated-content networks, of which ad networks are a kind, possible; but one could make the case that operators of online forums have been shown to carry immense responsibility, and need to be held accountable for more of the harms done via the online spaces they set up).
On the actual open decentralized internet, which still exists (Mastodon, IRC, Matrix...), bots are rare.
Any platform that wants to resist bots needs to:
- tie personas to real or expensive identities
- force people to add an AI flag to AI content
- let readers filter content not marked as AI
- and be absolutely ruthless in permabanning anyone who posts AI content unmarked; one strike and you are dead forever
The issue then becomes that marking someone as “posts unmarked AI content” becomes a weapon. No idea about how to handle it.
Group sizes were smaller and as such easier to moderate. There could be plenty of similar-interest forums, which meant even if you pissed off some mods, there were always other forums. Invite-only groups that recruited from larger forums (or even trusted-members-only sections on the same forum) were good at filtering out low-value posters.
There were bots, but they were not as big of a problem. The message amplification was smaller, and it was probably harder to ban evade.
So do it. Forums haven't gone away, you just stopped going to them. Search for your special interest followed by "Powered by phpbb" (or Invision Community, or your preferred software) and you'll find plenty of surprisingly active communities out there.
IME young people use Discord, and those servers often require permission to even join. Nearly all my fandom communications happen on a few Discord servers, most of which you cannot join without an invitation, and if you're kicked (bad actors will be kicked), you cannot re-join (without permission).
It would certainly be fun to trick people I dislike into posting AI content unknowingly. Maybe it has to be so low-key that they aren't even banned on the first try, but that just seems ripe for abuse.
I want a solution to this problem too, but I don't think this is reasonable or practical. I do wonder what it would mean if, philosophically, there were a way to differentiate between "free speech" and commercial speech such that one could be respected and the other regulated. But if there is such a distinction I've never been able to figure it out well enough to make the argument.
People left and never came back.
But those bots were certainly around in the 90s
You're training yourself with a very unreliable source of truth.
Intentionally if I might add. Reddit users aren't particularly interested in providing feedback that will inevitably be used to make AI tools more convincing in the future, nobody's really moderating those subs, and that makes them the perfect target for poisoning via shitposting in the comments.
I don’t just look at the bot decision or accept every consensus blindly. I read the arguments.
If I watch a video and think it’s real and the comments point to the source, which has a description saying they use AI, how is that unreliable?
Alternatively, I watch a video and think it’s AI but a commenter points to a source like YT where the video was posted 5 years ago, or multiple similar videos/news articles about the weird subject of the video, how is that unreliable?
Personally, I don't think that behavior is very healthy, and the other parent comment suggested an easy "get out of jail free" way of not thinking about it anymore while also limiting anxiety: they're unreliable subreddits. I'd say take that advice and move on.
If bots reference real sources it's still a valid argument.
Some people, quite some time ago, also came to that conclusion. (And they did not even have AI to blame.)
Any day now… right?
If the next generation can weather the slop storm, they may have a chance to re-establish new forms of authentic communication, though probably on a completely different scale and in different forms to the Web and current social media platforms.
Now that photos and videos can be faked, we'll have to go back to the older system.
I am no big fan of AI but misinformation is a tale as old as time.
Why not? Surely you can ask your friendly neighbourhood AI to run a consistent channel for you?
I do the same on my websites. It's embedded into my static site generator.
Very related: https://practicaltypography.com/
Uh-oh. Caught you. Bang to rights! That post is firmly AI. Bad. Nobody should mind your robot posts.
Down with that foul fifth glyph! Down, I say!
I did ask G'mini for synonyms. And to do a cursory count of e's in my post. Just as a 2nd opinion. It found only glyphs with quotation marks around it. It graciously put forward a proxy for that: "the fifth letter".
It's not oft that you run into such alluring confirmation of your point.
My first post took around 6 min & a dictionary. This post took 3. It's a quick skill.
No LLMs. Ctrl+f shows you all your 'e's without switching away from this tab. (And why count it? How many is not important, you can simply look if any occur and that's it)
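(For anything longer than a comment, the same check is a throwaway one-liner; a minimal Python sketch, with the file name purely illustrative:

    # Flag every occurrence of the fifth glyph in a draft before posting it.
    draft = open("post.txt").read()
    hits = [i for i, ch in enumerate(draft) if ch in "eE"]
    print("clean" if not hits else f"found {len(hits)}, first at positions {hits[:10]}")

Presence is all that matters, as noted above, so an empty list means you're done.)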
I rEgrEt that I havE not donE thE samE, but plEase accEpt bad formatting as a countErpoint.
(Assuming you did actually hand craft that I thumbs-up both your humor and industry good sir)
Even if I'm 100% certain it's not AI slop, it's still a very strong indicator that the videos are some kind of slop.
In fact, your comment is part of the problem. You are one of the people who want to be outraged. In your case, outraged at people who think racism is a problem. So you attack one group of people, not realizing that you are making the issue worse by further escalating and blaming actual people, rather than realizing that the problem is systemic.
We have social networks like Facebook that require people to be angry, because anger generates engagement, and engagement generates views, and views generate ad impressions. We have outside actors who benefit from division, so they also fuel that fire by creating bot accounts that post inciting content. This has nothing to do with racism or people on one side. One second, these outside actors post a fake incident of a racist cop to fire up one side, and the next, they post a fake incident about schools with litter boxes for kids who identify as pets to fire up the other side.
Until you realize that this is the root of the problem, that the whole system is built to make people angry at each other, you are only contributing to the anger and division.
It's not built to make people angry per se - it's built to optimise for revenue generation - which just so happens to mean content that makes people angry.
People have discovered that creating and posting such content makes them money, and the revenue is split between themselves and the platforms.
In my view, if the platforms can't tackle this problem then the platforms should be shut down - promoting this sort of material should be illegal, and it's not an excuse to say our business model won't work if we are made responsible for the things we do.
i.e. while it turns out you can easily scale one side of publishing (putting stuff out there and getting paid by ads), you can't so easily scale the other side of publishing - which is being responsible for your actions. If you haven't solved both sides you don't have a viable business model, in my view.
I think blaming it all on money ignores that this also serves political goals.
Groups spend money to manipulate public opinion. It’s a goal in and of itself that has value rather than a money making scheme.
For example, the 'Russian interference' in the 2016 US election, was I suspect mostly people trying to make money, and more importantly, was completely dwarfed by US direct political spending.
There is also a potentially equal, if not larger, problem in the politicisation of the 'anti-disinformation' campaigns.
To be honest I'm not sure if there is much of a difference between a grifter being directly paid to promote a certain point of view, and somebody being paid indirectly ( by ads ).
In both cases neither really believes in the political point they are making; they are just following the money.
These platforms are enabling both.
Anger increases stickiness. Once one discovers there are other people on the site, and they are guilty of being wrong on the internet, one is incentivized to correct them. It feels useful because it feels like you're generating content that will help other people.
I suspect the failure of the system that nobody necessarily predicted is that people seem to not only tolerate, but actually like being a little angry online all the time.
I don't see anything like outrage in GP, just a vaguely implied sense of superiority (political, not racial!).
This is one level of abstraction more than I deal with on a normal day.
The fake video which plays into people’s indignation for racism, is actually about baiting people who are critical about being baited by racism?
But Facebook cannot "require" people to be angry. Facebook can barely even "require" people to log in - only those locked into the Messenger ecosystem.
I don't use Facebook but I do use TikTok, and Twitter, and YouTube. It's very easy to filter rage bait out of your timeline. I get very little of it, mark it "uninterested"/mute/"don't recommend channel" and the timeline dutifully obeys. My timelines are full of popsci, golden retrievers, sketches, recordings of local trams (nevermind), and when AI makes an appearance it's the narrative kind[1] which I admit I like or old jokes recycled with AI.
The root of the problem is in us. Not on Facebook. Even if it exploits it. Surfers don't cause waves.
No, they do not. Nobody[1] wants to be angry. Nobody wakes up in the morning and thinks to themselves, "today is going to be a good day because I'm going to be angry."
But given the correct input, everyone feels that they must be angry, that it is morally required to be angry. And this anger then requires them to seek out further information about the thing that made them angry. Not because they desire to be angry, but because they feel that there is something happening in the world that is wrong and that they must fight.
[1]: for approximate values of "nobody"
I disagree. Why are some of the most popular subreddits things like r/AmITheAsshole, r/JustNoMIL, r/RaisedByNarcissists, r/EntitledPeople, etc.: forums full of (likely fake) stories of people behaving egregiously, with thousands of outraged comments throwing fuel on a burning pile of outrage: "wow, your boyfriend/girlfriend/husband/wife/father/mother/FIL/MIL/neighbor/boss/etc. is such an asshole!" Why are advice/gossip columns that provide outlets for similar stories so popular? Why is reality TV full of the same concocted situations so popular? Why is people's first reaction to outrageous news stories to bring out the torches and pitchforks, rather than trying to first validate the story? Why can an outrageous lie travel halfway around the world while the truth is still getting its boots on?
You’re literally saying why people want to be angry.
My uneducated feeling is that, in a small society, like a pre-civilisation tribal one where maybe human emotions evolved, this is useful because it helps enact change when and where it's needed.
But that doesn't mean that people want to be angry in general, in the sense that if there's nothing in reality to be angry about then that's even better. But if someone is presented with something to be angry about, then that ship has sailed so the typical reaction is to feel the need to engage.
Yes, I think this is exactly it. A reaction that may be reasonable in a personal, real-world context can become extremely problematic in a highly connected context.
It's both that, as an individual, you can be inundated with things that feel like you have a moral obligation to react. On the other side of the equation, if you say something stupid online, you can suddenly have thousands of people attacking you for it.
Every single action seems reasonable, or even necessary, to each individual person, but because everything is scaled up by all the connections, things immediately escalate.
I don't wake up thinking "today I want to be angry", but if I go outside and see somebody kicking a cat, I feel that anger is the correct response.
The problem is that social media is a cat-kicking machine that drags people into a vicious circle of anger-inducing stimuli. If people think that every day people are kicking cats on the Internet, they feel that they need to do something to stop the cat-kicking; given their agency, that "something" is usually angry responses and attacks, which feeds the machine.
Again, they do not do that because they want to be angry; most people would rather be happy than angry. They do it because they feel that cats are being kicked, and anger is the required moral response.
At some point, I think it’s important to recognize the difference between revealed preferences and stated preferences. Social media seems adept at exposing revealed preferences.
If people seek out the thing that makes them angry, how can we not say that they want to be angry? Regardless of what words they use.
And for example, I never heard anyone who was a big Fox News, Rush Limbaugh, or Alex Jones fan say they wanted to be angry or paranoid (to be fair, this was pre-Trump and a while ago), yet every single one of them I saw got angry and paranoid after watching them, if you paid any attention at all.
Because their purpose in seeking it out is not to get angry, it's to stop something from happening that they perceive as harmful.
I doubt most people watch Alex Jones because they love being angry. They watch him because they believe a global cabal of evildoers is attacking them. Anger is the logical consequence, not the desired outcome. The desired outcome is that the perceived problem is solved, i.e. that people stop kicking cats.
We can chicken/egg about it all day, but at some point if people didn’t want it - they wouldn’t be doing it.
Depending on the definition of ‘want’ of course. But what else can we use?
I don’t think anyone would disagree that smokers want cigarettes, eh? Or gamblers want to gamble?
None of these people said to themselves, "I want to be angry today, and I heard that Alex Jones makes people angry, therefore I will watch Alex Jones."
A lot of people really do, and it predates any sort of media too. When they don't have outrage media they form gossip networks so they can tell each other embellished stories about mundane matters to be outraged and scandalized about.
But again in this situation the goal is not to be angry.
This sort of behaviour emerges as a consequence of unhealthy group dynamics (and to a lesser extent, plain boredom). By gossiping, a person expresses understanding of, and reinforces, their in-group’s values. This maintains their position in the in-group. By embellishing, the person attempts to actually increase their status within the group by being the holder of some “secret truth” which they feel makes them important, and therefore more essential, and therefore more secure in their position. The goal is not anger. The goal is security.
The emotion of anger is a high-intensity fear. So what you are perceiving as “seeking out a reason to be angry” is more a hypervigilant scanning for threats. Those threats may be to the dominance of the person’s in-group among wider society (Prohibition is a well-studied historical example), or the threats may be to the individual’s standing within the in-group.
In the latter case, the threat is frequently some forbidden internal desire, and so the would-be transgressor externalises that desire onto some out-group and then attacks them as a proxy for their own self-denial. But most often it is simply the threat of being wrong, and the subsequent perceived loss of safety, that leads people to feel angry, and then to double down. And in the world we live in today, that doubling down is more often than not rewarded with upvotes and algorithmic amplification.
Many people seek being outraged. Many people seek to have awareness of truth. Many people seek getting help for problems. These are not mutually exclusive.
Just because someone fakes an incident of racism doesn't mean racism isn't still commonplace.
In various forms, with various levels of harm, and with various levels of evidence available.
(Example of low evidence: a paper trail isn't left when a black person doesn't get a job for "culture fit" gut feel reasons.)
Also, faked evidence can be done for a variety of reasons, including by someone who intends for the faking to be discovered, with the goal of discrediting the position that the fake initially seemed to support.
(Famous alleged example, in second paragraph: https://en.wikipedia.org/wiki/Killian_documents_controversy#... )
Faking a racist video that never happened is, first of all, faking. Second, it's the same: racist and anti-racist at the same time. Third, it's falsifying the prevalence of occurrence.
If you add to the video a disclaimer: "this video has been AI-generated, but it shows events that happen all across the US daily", then there's no problem. Nobody is being lied to about anything. The video shows the message; it's not faking anything. But when you impersonate a real occurrence with a fake video, you're lying, and it's as simple as that.
Can a lie be told in good faith? I'm afraid that not even philosophy can answer that question. But it's really telling that leftists are sure about the answer!
> Also, faked evidence can be done for a variety of reasons, including by someone who intends for the faking to be discovered
I read it as "producing racist videos can sometimes be used in good faith"?
Creating all kinds of meta-levels of falsity is a real thing, with multiple lines of objective (if nefarious) motivation, in the information arena.
But even physical crimes can have meta information purposes. Putin for instance is fond of instigating crimes in a way that his fingerprints will inevitably be found, because that is an effective form of intimidation and power projection.
Edit: please, prove your illiteracy and lack of critical thinking skills in the comments below
Do you realize how crazy this sounds?
Edit: I literally demonstrate my ability to think critically.
I think many here would say "yes!" to this question, so can saying "no" be justified by an anti-racist?
Generally I prefer questions that do not lead to thoughts being terminated. Seek to keep a discussion not stop it.
On the subject of this thread, these questions are quite old and are related to propaganda: is it okay to use propaganda if we are the Good Guys, if, by doing so, we leave our own people more susceptible to propaganda from the Bad Guys? Every single one of our nations and governments thinks yes, it's good to use propaganda.
Because that's explicitly what happened during the rise of Nazi Germany; the USA had an official national programme of propaganda awareness and manipulation resistance which had to be shut down because the country needed to use propaganda on their own citizens and the enemy during WW2.
So back to the first question, its not the content (whether it's racist or not) it's the effect: would producing fake content reach a desired policy goal?
Philosophically it's truth vs lie, can we lie to do good? Theologically in the majority of religions, this has been answered: lying can never do good.
But this is game theory, a dead and amoral mechanism that is mostly used by the animal kingdom. I'm sure humanity is better than that?
Propaganda is war, and each time we use war measures, we're getting closer to it.
Not sure how I feel about that, to be honest. On one hand I admire the hustle for clicks. On the other, too many people fell for it and probably never knew it was a grift, making all recipients look bad. I only happened upon them researching a bit after my own mom called me raging about it and sent me the link.
Not AI. Not bots. Not Indians or Pakistanis. Not Kremlin or Hasbara agents. All the above might comprise a small percentage of it, but the vast majority of the rage bait and rage bait support we’ve seen over the past year+ on the Internet (including here) is just westerners being (allowed and encouraged by each other to be) racist toward non-whites in various ways.
We truly live in wonderful times!
Of course there are still "trusted" mainstream sources, except they can inadvertently (or for other reasons) misstate facts as well. I believe it will get harder and harder to reason about what's real.
You get it wrong. Real-world content will become indistinguishable from "AI" content because that's what people will consider normal.
This is "innocent" if you accept that the author's goal is simplify to maximize engagement and YouTube is helping them do that. It's not if you assume the author wants users to see exactly what they authored.
Yes. And I think this is what most tech-literate people fail to understand. The issue is scale.
It takes a lot of effort to find the right clip, cut it to remove its context, and even more effort to doctor a clip. Yes, you're still facing Brandolini's law[1], you can see that with the amount of effort Captain Disillusion[2] put in his videos to debunk crap.
But AI makes it 100× worse. First, generating an entirely convincing video only takes a little prompting and waiting; no skill is required. Second, you can do that at massive scale. You can easily make 2 AI videos a day; to doctor videos "the old way" at that rate, you'd need a team of VFX artists.
I genuinely think that tech-literate folks, like myself and other hackernews posters, don't understand that significantly lowering the barrier to entry to X doesn't make X equivalent to what it was before. Scale changes everything.
It is that it is increasingly becoming indistinguishable from not-slop.
There is a different bar of believability for each of us. None of us are always right when we make a judgement. But the cues to making good calls without digging are drying up.
And it won’t be long before every fake event has fake support for diggers to find. That will increase the time investment for anyone trying to figure things out.
It isn’t the same staying the same. Nothing has ever stayed the same. “Staying the same” isn’t a thing in nature and hasn’t been the trend in human history.
But I would claim that "trusting blindly" was much more common hundreds of years ago than it is now, so we might make some progress in fact.
If people learn to be more skeptical (because at some point they might get that things can be fake) it might even be a gain. The transition period can be dangerous though, as always.
But today’s text manufacturing isn’t our grand…, well, yesterday’s text manufacturing.
And pretty soon it will be very persuasive models with lots of patience and manufactured personalized credibility and attachment “helping” people figure out reality.
The big problem isn’t the tech getting smarter though.
It’s the legal and social tolerance for conflicts of interest at scale. Like unwanted (or dark-pattern-permissioned) surveillance, which is all but unavoidable, being used to manipulate feeds controlled by third parties (intermediaries between us and the contacts we actually intend to reach), toward influencing us in any way anyone will pay for. AI is just walking through a door that has been left wide open despite a couple decades of hard lessons.
Incentives, as they say, matter.
Misinformation would exist regardless, but we didn’t need it to be a cornerstone business model with trillions of dollars of market cap unifying its globally coordinated efficient and effective, near unavoidable, continual insertion into our and our neighbors lives. With shareholders relentlessly demanding double digit growth.
Doesn’t take any special game theory or economic theory to see the problematic loop there. Or to predict it will continue to get worse, and will be amplified by every AI advance, as long as it isn’t addressed.
Which will eventually get worked around and can easily be masked by just having a backing track.
I would wager good money that the proliferation of em-dashes we see in LLM-generated text is due to the fact that there are so many correctly used em-dashes in publicly-available text, as auto-corrected by Word...
The HN text area does not insert em-dashes for you and never has. On my phone keyboard it's a very deliberate action to add one (symbol mode, long-press hyphen, slide my finger over to em-dash).
The entire point is that it's contextual - em-dashes where no accommodations make them likely.
I think the emoji one is most pronounced in bullet point lists. AI loves to add an emoji to bullet points. I guess they got it from lists in hip GitHub projects.
The other one is not as strong, but if the "not X but Y" is somewhat nonsensical or unnecessary, that is a very strong indicator it's AI.
I see this way more often on GitHub now than I did before, though.
Long-press on the hyphen on most Android keyboards.
Or open whatever "Character Map" application usually comes with your desktop OS, and copy it from there.
I also use en dashes when referring to number ranges, e.g., 1–9
Seriously, she used dashes all the time. Here is a direct copy and paste of the first two stanzas of her poem "Because I could not stop for Death" from the first source I found, https://www.poetryfoundation.org/poems/47652/because-i-could...
Because I could not stop for Death –
He kindly stopped for me –
The Carriage held but just Ourselves –
And Immortality.
We slowly drove – He knew no haste
And I had put away
My labor and my leisure too,
For His Civility –
Her dashes have been rendered as en dashes in this particular case rather than em dashes, but unless you're a typography enthusiast you might not notice the difference (I certainly didn't and thought they were em dashes at first). I would bet if I hunted I would find some places where her poems have been transcribed with em dashes. (It's what I would have typed if I were transcribing them.) https://www.edickinson.org/editions/1/image_sets/12174893
Dickinson's dashes tended to vary over time, and were not typeset during her lifetime (mostly). Also, mid-19th century usage was different—the em-dash was a relatively new thing.
Think about it— the robots didn’t invent the em-dash. They’re copying it from somewhere.
As others have noted, it’s a long-term trend - agree that as you note it’ll get worse. The Russian psy-ops campaigns from the Internet Research Agency during Trump #1 campaign being a notable entry, where for example they set up both fake far-left and far-right protest events on FB and used these as engagement bait on the right/left. (I’m sure the US is doing the same/worse to their adversaries too.)
Whatever fraction bots play overall, it has to be way higher for political content given the power dynamics.
Then the comments are all usually not critical of the image but to portray the people supporting the [fake] image as being in a cult. It's wild!
Answer? Probably "of course not"
They're too busy demonetizing videos, aggressively copyright striking things, or promoting Shorts, presumably
"Great question! No, we have always been at war with Eurasia. Can I help with anything else?"
If I just feed it to 10 pandas, today, they're all dead.
And I suspect that humanity's position in this analogy is far closer to the latter than the former.
And yes I know the argument about Youtube being a platform it can be used for good and bad. But Google control and create the algorithm and what is pushed to people. Make it a dumb video hosting site like it used to be and I'll buy the "bad and good" angle.
[1] Those "crappy websites" with a maze of iframes are actually considered surprisingly refreshing today.
I know this was a throwaway parenthetical, but I agree 100%. I don't know when the meaning of "social media" went from "internet based medium for socializing with people you know IRL" to a catchall for any online forum like reddit, but one result of this semantic shift is that it takes attention away from the fact that the former type is all but obliterated now.
Discord is the 9,000lb gorilla of this form of social media, and it's actually quietly one of the largest social platforms on the internet. There's clearly a desire for these kinds of spaces, and Discord seems to be filling it.
While it stinks that it is controlled by one big company, it's quite nice that its communities are invite-only by default and largely moderated by actual flesh-and-blood users. There's no single public shared social space, which means there's no one shared social feed to get hooked on.
Pretty much all of my former IRC/Forum buddies have migrated to Discord, and when the site goes south (not if, it's going to go public eventually, we all know how this story plays out), we expect that we'll be using an alternative that is shaped very much like it, such as Matrix.
The "former type" had to do with online socializing with people you know IRL.
I have never seen anything on Discord that matches this description.
In fact, I'd say it's probably the easiest way to bootstrap a community around a friend-group.
The other part of this is that Discord has official University hubs, so the college kids are all in there. You need an email address from that University to join: https://support.discord.com/hc/en-us/articles/4406046651927-...
It's similar to Apple's strategy of trying to get Macintosh into classrooms (in the 80s/90s), and to student discounts on Adobe products.
I am not a huge fan of Discord, although I do use it. It's very good at what it does, and the communities it houses are well moderated, at least the ones that I have joined. I dislike that they've taken over communities and walled them off from the "searchable" internet.
I'm in a friend Discord server. It's naturally invisible unless someone sends you an invite.
But, the “know IRL” split is a bit artificial I think. For example my discord is full of people I knew in college: I knew them IRL for four years, and then we all moved around and now we’ve known each other online for decades. Or childhood friends. By now, my childhood friend and college friend circles are partially merged on discord, and they absolutely know each other (unfortunately there’s no way for you to evaluate this but I know them all quite well and it would be absurd to me, to consider them anything other than friends).
The internet is part of the real world now. People socialize on it. I can definitely see a distinction between actually knowing somebody, and just being in a discord channel with them. But it is a fuzzy social thing I think, hard to nail down exactly where the transition is (also worth noting that we have acquaintances that we don’t really “know” offline, the cashier at our favorite shops for example).
And I know servers like these are in the top tier of engagement for Discord on the whole, because they keep being picked for A/B testing new features. Like, we had activities some half a year early. We actually had the voice modifiers on two of them, and most people don't even know that was a thing.
Discord is many things. Private chat groups, medium communities and then larger communities with tens of thousands of users.
So what's wrong with that?
"Social media" never meant that. We've forgotten already, but the original term was "social network" and the way sites worked back then is that everyone was contributing more or less original content. It would then be shared automatically to your network of friends. It was like texting but automatically broadcast to your contact list.
Then Facebook and others pivoted towards "resharing" content and it became less "what are my friends doing" and more "I want to watch random media" and your friends sharing it just became an input into the popularity algorithm. At that point, it became "social media".
HN is neither since there's no way to friend people or broadcast comments. It's just a forum where most threads are links, like Reddit.
Let's remember that the original idea was to connect with people in your college/university. I faintly recall this time period because I tried to sign up for it only to find out that while there had been an announcement that it was opened up internationally, it still only let you sign up with a dot EDU email address, which none of the universities in my country had.
In the early years "social media" was a lot more about having a place to express yourself or share your ideas and opinions so other people you know could check up on them. Many remember the GIF anarchy and crimes against HTML of Geocities but that aesthetic also carried over to MySpace while sites like Live Journal or Tumblr more heavily emphasized prose. This was all also in the context of a more open "blogosphere" where (mostly) tech nerds would run their own blogs and connect intentionally much like "webrings" did in the earlier days for private homepages and such before search engine indexing mostly obliterated their main use.
Facebook pretty much created modern "social media" by creating the global "timeline", forcing users to compete with each other (and corporate brands) for each other's attention while also focusing the experience more on consumption and "reaction" than creation and self-expression. This in turn resulted in more "engagement" which eventually led to algorithmic timelines trying to optimize for engagement and ad placement / "suggested content".
HN actually follows the "link aggregator" or "news aggregator" lineage of sites like Reddit, Digg, Fark, etc (there were also "bookmark aggregators" like StumbleUpon, but most of those died out too). In terms of social interactions it's more like e.g. the Slashdot comment section, even though the "feed" is somewhat "engagement driven" like on social media sites. But as you said, it lacks all the features that would normally be expected, like the ability to "curate" your timeline (or in fact, having a personalized view of the timeline at all) or being able to "follow" specific people. You can't even block people.
Until you join a server that gives you a whole essay of what you can and cannot do, with extra verification. This then requires you to post in some random channel, waiting for the moderator to see your message.
You're then forced to assign roles to yourself to please a bot that will continue to spam you with notifications, announcing to the community that you've leveled up for every second sentence. Finally, everyone glares at you in the channel or leaves you on read because you're a newbie with a leaf above your username. Each to their own, I guess.
/server irc.someserver.net
/join #hello
/me says Hello
I think I'll stick with that.
At least Discord and IRC are interchangeable for the sake of idling.
1. People don't understand or want to set up a client that isn't just loading some page in their browser.
2. People want to post images and see the images they posted without clicking through a link; in some communities images might be shared more than text.
3. People want a persistent chat history they can easily access from multiple devices, notifications, etc.
4. Voice chat; many IRC communities would run a tandem Mumble server too.
All of these are solvable for a tech-savvy enough IRC user, but Discord gets you all of this out of the box with barely more than an email account.
There are probably more, but these are the biggest reasons why it felt like within a year I was idling in channels by myself. You might not want discord but the friction vs irc was so low that the network effect pretty much killed most of IRC.
you can also invite a music bot or host your own that will join the voice channel with a song that you requested
When we get to alternative proposals with functioning calls, I'd say having them as voice channels that just exist 24/7 is a big thing too. It's a tiny thing from a technical perspective, but it makes something like Teams an unsuitable alternative to Discord.
In Teams you start a call and everyone's phone rings; you distract everyone from whatever they were doing -- you better have a good reason for doing so.
In Discord you just join empty voice channel (on your private server with friends) w/o any particular reason and go on with your day. Maybe someone sees that you're there and joins, maybe not. No need to think of anyone's schedule, you don't annoy people that don't have time right now.
The big thing is the voice/videoconferencing channels which are actually optimized insanely well, Discord calls work fine even on crappy connections that Teams and Zoom struggle with.
Simply put it's Skype x MSN Messenger with a global user directory, but with gamers in mind.
5 minutes after the first social network became famous. It never really has been just about knowing people IRL, that was only in the beginning, until people started connecting with everyone and their mother.
Now it's about people and them connecting and socializing. If there are persons, then it's social. HN has profiles where you can "follow" people, thus, it's social on a minimal level. Though, we could dispute whether it's just media or a mature network. Because there obviously are notable differences in terms of social-related features between HN or Facebook.
"Social Media" had become a euphemism for 'scrolling entertainment, ragebait and cats' and has nothing to do 'being social'. There is NO difference between modern reddit and facebook in that sense. (Less than 5% of users are on old.reddit, the majority is subject to the algorithm.)
Better back button handling and fixing the location bugs in event creation may well be entirely beyond Llama, sadly.
Also, on the phrase “you’re absolutely right”: it’s definitely a phrase my friends and I use a lot, albeit in a sort of sarcastic manner when one of us says something obvious, but, nonetheless, we use it. We also tend to use “Well, you’re not wrong”, again in a sarcastic manner, for something which is obvious.
And, no, we’re not from non English speaking countries (some of our parents are), we all grew up in the UK.
Just thought I’d add that in there, as it’s a bit extreme to see an em dash and instantly jump to “must be written by AI”.
If you have the Compose key [1] enabled on your computer, the keyboard sequence is pretty easy: `Compose - - -` (and for en dash, it's `Compose - - .`). Those two are probably my most-used Compose combos.
Did you mean American style guides prefer the latter?
I like em-dashes and will continue to use them.
Yes, that is more or less what "hot take" means.
I never saw em-dashes—the longer version with no space—outside of published books and now AI.
Just to say, though, we em-dashers do have pre-GPT receipts:
Compose, hyphen, hyphen, period: produces – (en dash)
Compose, hyphen, hyphen, hyphen: produces — (em dash)
And many other useful sequences too, like Compose, lowercase o, lowercase o to produce the ° (degree) symbol. If you're running Linux, look into your keyboard settings and dig into the advanced settings until you find the Compose key, it's super handy.
P.S. If I was running Windows I would probably never type em dashes. But since the key combination to type them on Linux is so easy to remember, I use em dashes, degree symbols, and other things all the time.
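For reference, those sequences live in a plain-text config file. A minimal ~/.XCompose sketch; these lines mirror the sequences described above and the standard system Compose table, so treat the exact file as illustrative:

    include "%L"   # keep the system defaults

    <Multi_key> <minus> <minus> <minus>  : "—"  U2014   # em dash
    <Multi_key> <minus> <minus> <period> : "–"  U2013   # en dash
    <Multi_key> <o> <o>                  : "°"  degree  # degree sign

Custom entries in this file also let you add your own sequences on top of the defaults.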
There are compose key implementations for Windows, too.
> m-dash (—)
> Do not use; use an n-dash instead.
> n-dash (–)
> Use in a pair in place of round brackets or commas, surrounded by spaces.
Remember, I'm specifically speaking about British English.
But I see what you mean. There used to be a distinction between a shorter dash that is used for numerical ranges, or for things named after multiple people, and a longer dash used to connect independent clauses in a sentence [1]. I am shocked to hear that this distinction is being eroded.
[0] https://design.tax.service.gov.uk/hmrc-content-style-guide/
> Spaced en rules (or ‘en dashes’) must be used for parenthetical dashes. Hyphens or em rules (‘em dashes’) will not be accepted for either UK or US style books. En rules (–) are longer than hyphens (-) but shorter than em rules (—).
Section 2.1, "Editorial services style guide for academic books" https://www.cambridge.org/authorhub/resources/publishing-gui...
For the past 15 years, I’ve used the Unicycle Vim plugin¹ which makes it very easy to add proper typographic quotes and dashes in Insert mode. As something of a typography nerd, I’ve extended it to include other Unicode characters, e.g., prime and double-prime characters to represent minutes and seconds.
At the same time, I’ve always used a Firefox extension that launches GVim when editing a text box; currently, I’m using Tridactyl for this purpose.
It's still frequently identifiable in (current-generation) LLM text by the glossy superficiality that comes along with these usages. For example, in "It's not just X, it's Y", when a human does this it will be because Y materially adds something that's not captured by X, but in LLM output X and Y tend to be very close in meaning, maybe different in intensity, such that saying them both really adds nothing. Or when I use "You're absolutely right" I'll clarify what they are right about, whereas for the LLM it's just an empty affirmation.
LLMs use em-dashes because people (in their training data) used em-dashes. They use "You're absolutely right" because that's a common human phrase. It's not "You write like an LLM", it's "The LLMs write kind of like you", and for good reason: that's exactly what people have been training them to do.
And yes, "pun" intended for extra effect, that also comes from humans doing it.
[1]: https://marcusolang.substack.com/p/im-kenyan-i-dont-write-li... [2]: https://www.nytimes.com/2025/12/03/magazine/chatbot-writing-...
According to what I know, the correct way to use an em-dash is to not surround it with spaces, so words look connected like--this. And indeed, when I started to use em-dashes in my blog(s), that's how I did it. But I found it rather ugly, so I started to put spaces around it. And there were periods where I stopped using em-dashes altogether.
I guess what I'm trying to say is that unless you write as a profession, most people are inconsistent. Sometimes I use em-dashes, sometimes I don't. In some cases I capitalize my words where needed, and sometimes not, depending on how much of a hurry I'm in, or whether I type from a phone (which does a lot of heavy lifting for me).
If you see someone who consistently uses the "proper" grammar in every single post on the internet, it might be a sign that they use AI.
You’re not the first person I’ve seen say that FWIW, but I just don’t recall seeing the full proper em-dash in informal contexts before ChatGPT (not that I was paying attention). I can’t help but wonder if ChatGPT has caused some people - not necessarily you! - to gaslight themselves into believing that they used the em-dash themselves, in the before time.
Also, I was a curmudgeon with strong opinions about punctuation before ChatGPT—heck, even before the internet. And I can produce witnesses.
It'd be just as wrong as using an apostrophe instead of a comma.
Grammar is often woolly in a widely used language with no single centralised authority. Many of the "Hard Rules" some people think are fundamental truths are often more local style guides, and often a lot more recent than some people seem to believe.
Likewise. I used to copy/paste them when I couldn't figure out how to actually type them, lol. Or use the HTML char code `&mdash;`. It sucks that good grammar now makes people assume you used AI.
You can read it yourself if you'd like: https://news.ycombinator.com/item?id=46589386
It was not just the em dashes and the "absolutely right!" It was everything together, including the robotic clarifying question at the end of their comments.
I think this one is a much closer fit: https://news.ycombinator.com/item?id=46661308
- Tell you what makes em dashes appealing.
- Help you use em dashes more.
- Give you other grammatical quirks smart people have.
Just tell me.
(If bots RP as humans, it’s only natural we start RP as bots. And yes, I did use a curly quote there.)
* **Veneer of authenticity**: because of the difficulty of typing em-dashes in typical form-submission environments, many human posters tend to forgo them.
* **Social pressure**: even if you take strides to make em-dashes easier to type, including them can have negative repercussions. A large fraction of human audiences have internalized a heuristic that "em-dash == LLM" (which could perhaps be dubbed the "LLM-dash hypothesis"). Using em-dashes may risk false accusations, degradation of community trust, and long-winded meta discussion.
* **Unicode support**: some older forums may struggle with encoding for characters beyond the standard US-ASCII range, leading to [mojibake](https://en.wikipedia.org/wiki/Mojibake).
hyphens are so hideous that I can't stand them.
YouTube and others pay for clicks/views, so obviously you can maximize this by producing lots of mediocre content.
LinkedIn is a place to sell, either a service/product to companies or yourself to a future employer. Again, the incentive is to produce more content for less effort.
Even HN has the incentive of promoting people's startups.
Is it possible to create a social network (or "discussion community", if you prefer) that doesn't have any incentive except human-to-human interaction? I don't mean a place where AI is banned, I mean a place where AI is useless, so people don't bother.
The closest thing would probably be private friend groups, but that's probably already well-served by text messaging and in-person gatherings. Are there any other possibilities?
It's only recently, when I was considering reviving old-school forum interaction, that I realized that while I got the platforms for free, there were people behind them who paid for the hosting and the storage, and who were responsible for moderating the content so that every discussion didn't derail into a low-level accusation and name-calling contest.
I can't imagine the amount of time, and tools, it takes to keep discussion forums free of trolls, more so nowadays, with LLMs.
This is specifically in the context of a niche hobby website where the rules are simple and identifying rule-breaking content is easy. I'm not sure it would work on something with universal scope like Reddit or Facebook, but I'd rather we see more focused communities anyway.
But all the while they were doing legitimate reporting, when they came across their real cheating account they'd report "not cheating". And supposedly this person got away with it for years by having a good reputation for community reporting, with high alignment scores.
I know 1 exception doesn't mean it's not worth it. But we must acknowledge the potential abuse. I'd still rather have 1 occasionally ambitious abuser over countless low-effort ones.
Spot on. The number of times I've come across a poorly made video where half the comments are calling out its inaccuracies. In the end YouTube (or any other platform) and the creator get paid. Any kind of negative interaction with the video either counts as engagement or just means moving on to the next whack-a-mole variant.
None of these big tech platforms that involve UGC were ever meant to scale. They are beyond accountability.
Makes it harder to see if you're watching an ad
Completely coincidental
Yes, but its size must be limited by Dunbar's number[0], roughly 150. This is the maximum size of a group of people where everyone can know everyone else on a personal basis. Beyond this, it becomes impossible to organically enforce social norms, and so abstractions like moderators and administrators and codes of conduct become necessary, and still fail to keep everyone on the same page.
To take a different cognitive domain, think about color. Wiktionary gives around 300 color terms for English[1]. I doubt many English speakers would be able to use all of them with relevant accuracy. And obviously even RGB encoding can express far more nuances (24-bit RGB alone encodes 256³, about 16.7 million values). And obviously most people can fathom far more nuances than they could verbalize.
1. prohibit all sorts of advertising, explicit and implicit, and actually ban users for it. The reason most people try to get big on SM is so they can land sponsorships outside of the app. But we'd still have the problem of telling whether something is sponsored or not.
2. no global feed, show users what their friends/followers are doing only. You can still have discovery through groups, directories, etc. But it would definitely be worse UX than what we currently have.
Kill the influencer, kill the creator. It's all bullshit.
Elon Musk cops a lot of the blame for the degradation of Twitter from people who care about that sort of thing, and he definitely plays a part there, but it's the monetisation aspect that was the real tilt toward all noise, in signal-to-noise terms.
We've taken a version of a problem from the physical world into the digital world. It runs along the same lines as how high rents (commercial or residential) limit the diversity of people or commercial offerings in a place, simply because only certain things can work or be economically viable. People always want different mixes of things and offerings, but if the structure (in this case rent) only permits one type of thing, then that's all you're going to get.
The biggest problem I see is that the Internet has become a brainwashing machine, and even if you have someone running the platform with the integrity of a saint, if the platform can influence public opinion, it's probably impossible to tell how many real users there actually are.
The idea being that you'd somewhat ensure the person is a human that _may well_ know what they're talking about, e.g. `abluecloud from @meta.com`.
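A hypothetical sketch of how such a domain badge could be verified; every name and function here is invented for illustration. The core idea is emailing a one-time code to an address at the claimed domain and only showing the badge once the user echoes it back:

```python
# Hypothetical domain-badge verification flow (all names invented).
import secrets

pending = {}  # (username, domain) -> one-time code

def start_verification(username: str, mailbox: str, domain: str) -> str:
    code = secrets.token_urlsafe(8)
    pending[(username, domain)] = code
    # A real system would email the code to f"{mailbox}@{domain}" here.
    return code

def confirm(username: str, domain: str, submitted: str) -> bool:
    # Constant-time comparison to avoid leaking the code via timing.
    return secrets.compare_digest(pending.get((username, domain), ""), submitted)

code = start_verification("abluecloud", "someone", "meta.com")
assert confirm("abluecloud", "meta.com", code)  # badge: "abluecloud from @meta.com"
```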
Blogs can have ads, but blogs with RSS feeds are a safer bet as it's hard to monetize an RSS feed. Blogs are a great place to find people who are writing just because they want to write. As I see more AI slop on social media, I spend more time in my feed reader.
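As a concrete example of how low-friction this is, here is a minimal reader using the third-party feedparser library (the feed URL is a placeholder):

```python
# Minimal feed reader (pip install feedparser); URL is a placeholder.
import feedparser

feed = feedparser.parse("https://example.com/blog/rss.xml")
for entry in feed.entries[:5]:
    print(entry.title, "->", entry.link)
```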
Kagi's small web lens seems to have a similar goal but doesn't really get there. It still includes results that have advertising, and omits stuff that isn't small but is ad free, like Wikipedia or HN.
Yes, it is possible. Like anything worthwhile, it is not easy. I have been a member of a small forum of around 20-25 active users for 20 years. We talk about all kinds of stuff; it was initially just IT-related, but we also touch on motorcycles (at least 5 of us ride or used to, and I went riding with a couple of them in the past) and some social topics, and we tend to avoid politics (too divisive) and religion (I think none of us is religious enough to debate it). We were initially all in the same country and some met IRL from time to time, but now we are spread around Europe (one in the US), so the forum is what keeps us in contact. Even the ones still in the same country, probably a minority these days, are spread too thin, but the forum is there.
As for interaction IRL, I have met fewer than 10 forum members, and only 3 of them frequently (2 on motorcycle trips, one worked for a few years in the same place as I did), but that is not a metric that means much. It is the false sense of being close over the internet while being geographically far apart, which works in a way but not really. For example, my best friends all emigrated, most were childhood friends; talking with them on the phone or over the internet means I never feel lonely, but only seeing them every few years grows the distance between us. That impacts human-to-human interaction, and there is no way around it.
Any community that ends up creating utility for its users will attract automation, as someone tries to extract, or even destroy, that utility.
A potential option could be figuring out community rules that ensure all content, including bot-generated content, provides utility to users. Something like the rules on r/ChangeMyView or r/AITA. There are also tests being run to see whether LLMs can identify flamewars or provide bridges across them.
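One hedged sketch of what LLM-assisted triage might look like in practice, assuming the hosted OpenAI moderation endpoint (the model name is current as of writing and may change): incoming comments get screened, and flagged ones are queued for a human moderator rather than auto-removed:

```python
# LLM-assisted moderation triage sketch (pip install openai).
# Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

def needs_human_review(comment: str) -> bool:
    resp = client.moderations.create(
        model="omni-moderation-latest",  # assumption: current hosted model name
        input=comment,
    )
    return resp.results[0].flagged

if needs_human_review("you are all idiots and should be banned"):
    print("queued for a human moderator")
```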
- OpenAI uses the C2PA standard [0] to add provenance metadata to images, which you can check [1]
- Gemini uses SynthID [2] and also adds a visible watermark to the image. The visible watermark can be removed, but SynthID cannot, as it is part of the image itself. SynthID is used to watermark text as well, and the code is open-source [3]
[0] https://help.openai.com/en/articles/8912793-c2pa-in-chatgpt-...
[1] https://verify.contentauthenticity.org/
I know the metadata is probably easy to strip, maybe even accidentally, but their own promotional content not having it doesn't inspire confidence.
That's not quite right. SynthID is a digital watermark, so it's hard to remove, while metadata can be easily removed.
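To illustrate how fragile metadata-based provenance is, here is a minimal Pillow sketch (filenames are placeholders): simply re-encoding an image drops embedded metadata such as EXIF/XMP blocks, which is typically where C2PA-style manifests live, while a pixel-level watermark is unaffected by this particular operation (though it has its own attack surface):

```python
# Re-encoding strips embedded metadata (pip install Pillow).
from PIL import Image

img = Image.open("labelled.jpg")  # placeholder: image with provenance metadata
img.save("stripped.jpg")          # re-encode; metadata is not carried over
```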
AI content outnumbers real content. We are not going to decide, for every single thing, whether it is real or not. C2PA is about labeling the gold in a way the dirt can't fake. A photo with it can be considered real and used in an encyclopedia or submitted to court without people doubting it.
Not only is it impossible to adjudicate or police, I feel like this will absolutely have a chilling effect on people wanting to share their projects. After all, who wants to deal with an internet mob demanding that you disprove a negative? That's not what anyone who works hard on a project imagines when they select Public on GitHub.
People are no more required to disclose their use of LLMs than they are to release their code... and if you like living in a world where people share their code, you should probably stop demanding that they submit to your arbitrary purity tests.
This is just like those projects where guys from high school took Ubuntu, changed the logo in a couple of places, and then claimed they'd made a new OS.
Anybody minimally competent can see the childish exaggeration in both cases.
The most logical request is to grow up, be transparent about what you did, and stop lying.
It honestly felt like being gaslighted. You see one thing, but they keep claiming you are wrong.
I'd feel the same way you did, for sure.
You are absolutely right! ;)
Show HN: Minikv – Distributed key-value and object store in Rust (Raft, S3 API) | https://news.ycombinator.com/item?id=46661308
Commenter: > What % of the code is written by you and what % is written by ai
OP: > Good question!
>
> All the code, architecture, logic, and design in minikv were written by me, 100% by hand. I did use AI tools only for a small part of the documentation—specifically the README, LEARNING.md, and RAM_COMMUNITY.md files—to help structure the content and improve clarity.
>
> But for all the source code (Rust), tests, and implementation, I wrote everything myself, reviewing and designing every part.
>
> Let me know if you want details or want to look at a specific part of the code!
Oof. That is pretty damning.
———
It’s unfortunate that em-dashes have become a shibboleth for AI-generated text. I love em-dashes, and iPhones automatically turn a double dash ( -- ) into an em dash.
Seventeen years ago, I saw a schoolboy make "his own OS", which was simply Ubuntu with the logos replaced. He got on TV with it, and IIRC he promoted it on the internet (on forums, back then), insisting that it was his own work. He was bullied in response and within a few weeks disappeared from the net.
What does it have to do with me personally, if I'm neither the author nor a bully? Today I learned that I can't trust new libraries posted in official repos. They can be just wrapper-code slop. In his 2012 talk "Stop Writing Classes", Jack Diederich said that he'd read the source of every library to find out if there was anything stinky. I used to think that reading into everything you use was a luxury of his time and qualifications. Now it has become a necessity, at least for new projects.
Now any photo can be faked, so the only photos worth taking are the ones you want for your own memories.