1) to satisfy investors, companies require continual growth in engagement and users
2) the population isn't rocketing upwards on a year-over-year basis
3) the % of the population that is online has saturated
4) there are only so many hours in the day
Inevitably, in order to maintain growth in engagement (comments, posts, likes, etc.), it will have to become automated. Are we there already? Maybe. Regardless, any system that requires continual growth has to automate; investor expectations for the internet economy demand that growth, and therefore it has automated or soon will.
Not saying it's not bad, just that it's not surprising.
On the other hand, the fact that we can't tell says less about how good AIs are than about how bad most of our (at least online) interaction is. How much System 2 (in the Thinking, Fast and Slow sense) am I putting into these words? How much is just repeating and combining patterns in a given direction, pretty much like an LLM does? In the end, that is what most internet interactions are made of, whether done directly by humans, by algorithms, or by other means.
There are bits and pieces of exceptions to that rule, and maybe closer to the beginning, before widespread use, the percentage was higher, but today, in the aggregate, the usage is not so different from what LLMs do.
To pass the Turing test the AI would have to be indistinguishable from a human to the person interrogating it in a back and forth conversation. Simply being fooled by some generated content does not count (if it did, this was passed decades ago).
No LLM/AI system today can pass the Turing test.
Most of them come across to me like they would think ELIZA passes it, if they weren't told up front that they were testing ELIZA.
1. Text is a very compressed / low information method of communication.
2. Text inherently has some “authority” and “validity”, because:
3. We’ve grown up to internalize that text is written by a human. Someone spent the effort to think and write down their thoughts, and probably put some effort into making sure what they said was not obviously incorrect.
This ties intimately into why LLMs working in text have an easier time tricking us into thinking they are intelligent than an AI system in a physical robot that needs to speak and move. We give text the benefit of the doubt.
I’ve already had some odd phone calls recently where I have a really hard time distinguishing if I’m talking to a robot or a human…
One consequence, IMHO, is that we won't value long papers anymore. Instead, we will want very dense, high-bandwidth writing whose validity the author stakes consequences (monetary, reputational, etc.) on.
on the other hand
I think old school meetups, user groups, etc, will come back again, and then, more private communication channels between these groups (due to geographic distance).
About 10 years ago we had a scenario where bots probably were only 2-5% of the conversation and they absolutely dominated all discussion. Having a tiny coordinated minority in a vast sea of uncoordinated people is 100x more manipulative than having a dead internet. If you ever pointed out that we were being botted, everyone would ignore you or pretend you were crazy. It didn’t even matter that the Head of the FBI came out and said we were being manipulated by bots. Everyone laughed at him the same way.
This was definitively not the case on HackerNews.
Most people probably don't know, but I think on HN at least half of the users know how to do it.
It sucks to do this on Windows, but at least on Mac it's super easy and the shortcut makes perfect sense.
I will still sometimes use a pair of them for an abrupt appositive that stands out more than commas, as this seems to trigger people's AI radar less?
Might as well be yourself.
It lets users type all sorts of ‡s, (*´ڡ`●)s, 2026/01/19s, by name, on Windows, Mac, or Linux, through pc101, standard Dvorak, or your custom QMK config, anywhere, without much prior knowledge. All it takes is a little proto-AI, anywhere from floppy-disk size to at most a few hundred MB, rewriting your input somewhere between the physical keyboard and the text input API.
If I wanted em-dashes, I could do just that instantly. I'm on Windows and I don't know what the key combinations are. Doesn't matter. I say "emdash" and here be an em-dash. There should be an equivalent of this for everybody.
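A toy version of that rewriter layer can be sketched in Python (the abbreviations and expansions here are made up for illustration; real tools like espanso or an IME hook the OS input layer rather than post-processing strings):

```python
# Toy input rewriter: expand typed abbreviations into the characters
# they name, word by word. A real tool would sit between the keyboard
# and the text-input API; this only shows the substitution step.
EXPANSIONS = {
    "emdash": "\u2014",   # —
    "endash": "\u2013",   # –
    "degree": "\u00b0",   # °
}

def expand(text: str) -> str:
    """Replace each whole word that names a character with the character."""
    return " ".join(EXPANSIONS.get(word, word) for word in text.split())

print(expand("I say emdash and here be an emdash"))
```

The word-by-word lookup keeps ordinary text untouched; only exact abbreviation matches get rewritten.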
That said, I always use -- myself. I don't think about pressing some keyboard combo to emphasise a point.
Hyphen (-) — the one on your keyboard. For compound words like “well-known.”
En dash (–) — medium length, for ranges like 2020–2024. Mac: Option + hyphen. Windows: Alt + 0150.
Em dash (—) — the long one, for breaks in thought. Mac: Option + Shift + hyphen. Windows: Alt + 0151.
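For reference, the three characters above are distinct Unicode codepoints, which you can confirm from Python's standard library:

```python
import unicodedata

# Print each dash character with its codepoint and official Unicode name.
for ch in ("-", "\u2013", "\u2014"):
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
# U+002D  HYPHEN-MINUS
# U+2013  EN DASH
# U+2014  EM DASH
```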
And now I also understand why having plenty of actual em-dashes (not double hyphens) is an “AI tell”.
En dash is Compose - - . (yes, the period is part of the sequence).
You can type other fun things like section symbol (compose So) and fractions like ⅐ with compose 17, degree symbol (compose oo) etc.
https://itsfoss.com/compose-key-gnome-linux/
On phones you merely long press hyphen to get the longer dash options.
Show HN: Hacker News em dash user leaderboard pre-ChatGPT - https://news.ycombinator.com/item?id=45071722 - Aug 2025 (266 comments)
... which I'm proud to say originated here: https://news.ycombinator.com/item?id=45046883.
I'm safe. It must be one of you that are the LLM!
(Hey, I'm #21 on the leaderboard!).
There's a new one, "wired": "I have wired this into X" or "this wires into Y". Cortex does this, and I have noticed it more and more recently.
It super sticks out, because who the hell ever says that part X of the program wires into part Y?
It may grate, but to me it grates less than "correct", which is a major sign of arrogant "I decide what is right or wrong". When I hear it outside of a context where somebody is the arbiter or teacher, I switch off.
But you're absolutely wrong about "you're absolutely right".
It's a bit hokey, but it's not a machine made signifier.