On the other hand, the fact that we can't tell doesn't speak so well of AIs as it speaks badly of most of our (at least online) interaction. How much of the (Thinking, Fast and Slow) System 2 am I putting into these words? How much is just repeating and combining patterns in a given direction, pretty much like an LLM does? In the end, that is what most internet interaction consists of, whether done directly by humans, by algorithms, or by other means.
There are bits and pieces of exceptions to that rule, and maybe closer to the beginning, before widespread use, the percentage was higher, but today, in the big numbers, the usage is not so different from what LLMs do.
1. Text is a very compressed / low information method of communication.
2. Text inherently has some “authority” and “validity”, because:
3. We've grown up internalizing that text is written by a human. Someone spent the effort to think and write down their thoughts, and probably put some effort into making sure what they said is not obviously incorrect.
This ties intimately into why LLMs working in text have an easier time tricking us into thinking they are intelligent than an AI system in a physical robot that needs to speak and articulate physically would. We give the text the benefit of the doubt.
I've already had some odd phone calls recently where I had a really hard time telling whether I was talking to a robot or a human…
One consequence, IMHO, is that we won't value long papers anymore. Instead, we will want very dense, high-bandwidth writing whose validity the author stakes consequences (monetary, reputational, etc.) on.
To pass the Turing test, the AI would have to be indistinguishable from a human to the person interrogating it in a back-and-forth conversation. Simply being fooled by some generated content does not count (if it did, the test would have been passed decades ago).
No LLM/AI system today can pass the Turing test.
Most of them come across to me like they would think ELIZA passes it, if they weren't told up front that they were testing ELIZA.
That is, the main thing that makes it possible to tell LLM bots apart from humans is that lots of us have, over the past 3 years, become highly attuned to specific foibles and text patterns which signal LLM-generated text - much like how I can tell my close friends' writing apart by their use of vocabulary, punctuation, typical conversation topics, and evidence (or lack thereof) of knowledge in certain domains.
on the other hand
I can't do Reddit anymore, it does my head in. Lemmy has been far more pleasant, as there is still good posting etiquette.
For licensed professions, there are registries where you can look people up and confirm their status. A bot would need to carry out a somewhat involved fraud if anyone is actually checking.
Also, on subreddits functioning as support groups for certain diseases, you'll see posts that just don't quite add up, at least if you know something about the disease (because you or a loved one have it). Maybe they're "zebras" with a highly atypical presentation (e.g., very early age of onset), or maybe they're "Munchies." Or maybe LLMs are posting spurious accounts of their cancer or neurodegenerative disease diagnosis, to which well-meaning humans actually afflicted with the condition respond (probably alongside bots) with their sympathy and suggestions.
Social media has become the internet, and/or vice versa.
Also, I think you're objectively wrong in this statement:
"the actual function of this website is, which is to promote the views of a small in crowd"
Which I don't think was the actual function of (original) social media either.
Most people probably don't know, but I think on HN at least half of the users know how to do it.
It sucks to do this on Windows, but at least on Mac it's super easy and the shortcut makes perfect sense.
I will still sometimes use a pair of them for an abrupt appositive that stands out more than commas, as this seems to trigger people's AI radar less?
Might as well be yourself.
It lets users type all sorts of ‡s, (*´ڡ`●)s, and 2026/01/19s by name, on Windows, Mac, or Linux, through pc101, standard Dvorak, or your custom QMK config, anywhere, without much prior knowledge. All it takes is a little proto-AI, anywhere from floppy size to at most a few hundred MB, rewriting your input somewhere between the physical keyboard and the text input API.
If I want em dashes, I can get one instantly – I'm on Windows and I don't know what the key combinations are. Doesn't matter. I say "emdash" and here be an em dash. There should be an equivalent of this for everybody.
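If you wanted to roll the typed-text version of that yourself, here's a minimal sketch of the idea: watch the key stream and rewrite a trigger word into the character it names. It assumes the third-party pynput library; the trigger word and the backspace-and-retype trick are illustrative, not how the commenter's voice tool actually works.

    # Rewrites input between the physical keyboard and the text input API:
    # when the last typed characters spell the trigger word, erase it and
    # emit the character it names.
    from pynput import keyboard

    TRIGGER = "emdash"
    REPLACEMENT = "\u2014"   # em dash
    buffer = ""
    kb = keyboard.Controller()

    def on_press(key):
        global buffer
        ch = getattr(key, "char", None)   # special keys have no .char
        if ch is None:
            buffer = ""                   # space, enter, arrows reset the buffer
            return
        buffer = (buffer + ch)[-len(TRIGGER):]
        if buffer == TRIGGER:
            for _ in range(len(TRIGGER)): # erase the trigger text just typed
                kb.tap(keyboard.Key.backspace)
            kb.type(REPLACEMENT)          # emit the real character
            buffer = ""

    with keyboard.Listener(on_press=on_press) as listener:
        listener.join()

In practice a ready-made expander (espanso, AutoHotkey, or the OS text-replacement settings) does the same job without a global key hook.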
Show HN: Hacker News em dash user leaderboard pre-ChatGPT - https://news.ycombinator.com/item?id=45071722 - Aug 2025 (266 comments)
... which I'm proud to say originated here: https://news.ycombinator.com/item?id=45046883.
I'm safe. It must be one of you who's the LLM!
(Hey, I'm #21 on the leaderboard!).
Hyphen (-) — the one on your keyboard. For compound words like “well-known.”
En dash (–) — medium length, for ranges like 2020–2024. Mac: Option + hyphen. Windows: Alt + 0150.
Em dash (—) — the long one, for breaks in thought. Mac: Option + Shift + hyphen. Windows: Alt + 0151.
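A trivial aside that isn't in the list above: those Windows Alt codes aren't arbitrary. On a Western-locale Windows install, Alt+0nnn inserts byte nnn from the legacy CP-1252 code page, and 150/151 are the en and em dash there. Easy to check in Python:

    # The dashes by Unicode escape, and the same characters decoded from
    # their CP-1252 bytes 150 and 151 (hence Alt+0150 / Alt+0151).
    print("\u2013", bytes([150]).decode("cp1252"))  # en dash, twice
    print("\u2014", bytes([151]).decode("cp1252"))  # em dash, twice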
And now I also understand why having plenty of actual em-dashes (not double hyphens) is an “AI tell”.
En dash is compose --.
You can type other fun things like the section symbol (compose s o), fractions like ⅐ (compose 1 7), the degree symbol (compose o o), etc.
https://itsfoss.com/compose-key-gnome-linux/
On phones you merely long press hyphen to get the longer dash options.
That said I always use -- myself. I don't think about pressing some keyboard combo to emphasise a point.
In X amount of time, a significant majority of road traffic will be bots in the driver's seat (figuratively), and a majority of said traffic won't even have a human on board. It will be deliveries of goods and food.
I look forward to the various security mechanisms required of this new paradigm (in the way that someone looks forward to the tightening spiral into dystopia).
Nah. That assumes that most cars today, with literal, not figurative, humans in them, are delivering goods and food. But they're not: most cars during traffic hours, by very, very, very far, are just delivering grocery-less people from point A to point B. In the morning: delivering a human (usually driven by said human) to work. Delivering a human to school. Delivering a human back home. Delivering a human back from school.
Drivers are literally the biggest cause of death among young people. We should start applying the same safety standards we apply to every other part of life.
Accidents Georg, who lives in a windowless car and hits someone over 10,000 times each day, is an outlier and should not have been counted
By the way, I don't bike, but lately I walk just about everywhere. So, to hyperbolize as is the custom on the internets, I live in constant fear not of cars but of super holier-than-thou eco cyclists running me over. (Yeah, I'm not in NL.)
Anyway, a fix that should work fine for both of you is to take a lane from cars and devote it to cyclists. Nobody actually wants to bike where people walk; some places just have bad infrastructure.
They think they're martyrs or something. What am I, then, if I take a backpack and do my shopping on foot? I'm even more eco because I didn't spend manufacturing resources on a bike, and even more of a martyr because walking is slow.
> to take a lane from cars and devote it to cyclists. Nobody actually wants to bike where people walk
Yep, see my NL reference :)
I actually prefer to work in the office; it's easier for me to have separate physical spaces to represent the separate roles in my life and thus to conduct those roles. It's extra effort for me to apply role X where I would normally be applying role Y.
Having said that, some of the most productive developers I work with I barely see in the office. It works for them not to have to go through that whole ... ceremoniality ... required of coming into the office. They would quit on the spot if they were forced to come back into the office even just twice a week, and the company would be so much worse off without them. Because they aren't forced to come in, they do so of their own volition, and therefore do not resent it, and therefore do not (or are slower to) resent their employer.
What should we conclude from those two extraneous dashes....
Nice article, though. Thanks.
They were salespeople, and part of the pitch was getting the buyer to come to a particular idea "all on their own", then making them feel good about how smart they were.
The other funny thing about em dashes is that there are a number of HN'ers who use them, and I've seen them called bots. But when you dig deep into their posts, they've been using em dashes going back 10 years... Unless they are way ahead of the game in LLMs, it's a safe bet they are human.
These phrases came from somewhere, and when you look at large enough populations you're going to find people that just naturally align with how LLMs also talk.
This said, when the number of people who talk like that becomes too high, the statistical likelihood that they are all human drops considerably.
I can usually tell when someone is leading me like this, and I resent them for trying to manipulate me. I start giving the opposite of the answer they're looking for, out of spite.
I've also had AI do this to me. At the end of it all, I asked why it didn't just give me the answer up front. It was a bit of a conspiracy theory, and it said I'd believe it more if I was led to think I got there on my own, with a bunch of context, rather than being told something fairly outlandish from the start. The fact that AI does this to better reinforce belief in conspiracy theories is not good.
Here's my list of current Claude (I assume) tics:
There's a new one: "wired", as in "I have wired this into X" or "this wires into Y". Cortex does this, and I have noticed it more and more recently.
It super sticks out, because who the hell ever says that X part of the program wires into Y?
It may grate, but to me it grates less than "correct", which is a major sign of arrogant "I decide what is right or wrong"; when I hear it outside of a context where somebody is the arbiter or teacher, I switch off.
But you're absolutely wrong about "you're absolutely right".
It's a bit hokey, but it's not a machine-made signifier.