Posted by usefulposter 2 days ago
Though I note it didn't say "read comments by other humans", only "read comments by humans", so confirmed AI.
I think the guidelines here work quite well, and expect a good-faith interpretation, which they mostly receive.
I think you're asking for some sort of empirical verification of "this is / is not LLM text" (which seems impossible), but there's no real reason to expect the existence of LLMs to change that this website is, generally, interacted with in a good-faith way. People are really good at calling others out on here -- I doubt that will change.
Coding is writing though.
Somehow, HN can say that "code is written once and read many times", and insist that code isn't writing at the same time.
All programming languages were created with the express purpose of letting humans express their ideas in a way other humans can understand, while remaining precise enough to be converted into machine code.
Code has style, code has readability, and when it comes to algorithms, code is often the best way to communicate them (I haven't seen a CS book without at least some pseudocode in it).
Code is supposed to tell what a program does, and what it's for, to a human who wants to understand or change that behavior.
A human who doesn't have this need has no need for the code.
Programming languages make coding less tedious and more efficient (compared to writing assembly) as a side effect.
The primary purpose is facilitating communication about what the machine should do from humans and to humans.
Sure, the scope of ideas programming languages are tailored to express is not universally broad. But that doesn't mean we're not writing when we write code. Lawyers writing a legal argument are still writing, even when they do so in very specific, formal language. Mathematicians are still writing papers.
It takes extreme mental gymnastics to consider coding (which is universally an act of producing text) to not be a form of writing.
To that end, having a negative view towards LLM writing while cheering on LLM coding seems (to me) to be borderline schizophrenic.
The people that advocate AI coding for throwaway projects, or using LLMs as a tool to get more insight into codebases make points that I can understand.
But a day or two ago I responded to a person who argued that Open Source is no longer necessary because you can just vibe-code anything. Many others religiously advocate using agentic coding in production.
Apparently, this is not incompatible with rejecting AI writing at the same time.
I'd be very curious to hear about how people are overcoming this sort of cognitive dissonance.
It’s not difficult:
AI-assisted programming of computers is fine.
AI-assisted drafting of communications to other humans is not.
If your program is written for the express purpose of communicating a specific written message, then the message itself must not be AI-assisted, but (here, anyway) it's fine if the executable code is. If your personal views conflate those two points, you'll have difficulty with the distinction here, and may end up leaving HN if you can't coexist with the cognitive dissonance that separation creates.
> It takes extreme mental gymnastics to consider coding […] to not be a form of writing
It does not: coding is generally a form of writing whose primary audience is non-humans. That other humans may read your code and appreciate it is not related to its primary purpose: to direct the operation of a technological device in a programmatic way. Separately, the primary purpose of human-to-human communication is to convey something from your mind to another's; the mechanism by which that occurs is secondary, and has largely been shown to be swappable across all substrates that can support communication.
So, then: if your marriage proposal to an imagined lover were in the form of code as poetry, it would be offensive to post that here if you wrote the poem with AI — and since the primary purpose of such a program is human-to-human taking precedence over human-to-machine, that’s an obvious case where AI assistance is unwelcome.
Yes, one can adopt a definition of ‘language’ that incorporates both English and Perl into one bucket; but the poem point still applies. Regardless of what dialect your writing is in, if the foremost audience of the written words is humans, then AI-assisted writing isn’t welcome here.
If you’re unable to judge whether code is foremost intended for a computer or for a human, then that’s an area where you’ll need to invest much more consideration if you wish to adhere to the guidelines.
> which is universally an act of producing text
Brainfuck is not in any way classifiable as ‘text’, nor is Renesas SH-2A assembly code. It may be possible to represent them in an ASCII file, but they are not interpretable through human linguistic processes. TIS-100 programs are representable as ASCII text, but without their shape and structure in a 4x3 visual grid they lose all cohesion and functionality. People who program music synthesizers using knobs and wires aren’t writing text, but they are creating communications for a human audience, which is why the outcome (AI-assisted music) is disgusting while the process (AI-assisted synthesizer implementation) would not be. And so on, et cetera.
I can understand why you think this is true, but it is false.
In a real discussion, the messiness is an important signal. The mistakes you made and _didn't_ catch, the clunky word choices, and so on show what you are thinking and how clearly you are thinking about it. If you have edited something for clarity, that too is an important signal. LLM editing destroys that signal.
And it gets worse, because LLMs destroy that signal in one direction: towards homogeneity. They create the illusion of "what you were actually thinking, but better than you could express it", when what they actually deliver is "generic, professional-sounding ideas phrased in a way that convinces you they are your own".
The messiness may show glimpses of the process, but, in isolation, will likely distort and corrupt the desired message via partial framing.
Oh, right, yes, if you're not careful they can definitely do that.
But look at what julius_eth_dev is actually saying they're doing:
> "rubber-ducking architecture decisions, pressure-testing arguments before I post them."
That's more like using the LLM as a sparring partner; they're not having the LLM write their comments for them.
I thought you were going to go somewhere really interesting actually, like maybe 'the LLM convinces you that their arguments are better than yours, and now you're acting like a meat puppet.' Or something equally slightly alarming and cool like that! ;-)
> "Error: Reached max turns (1)"
Or. You know... Not at all. I mean, their argument happened to be good. But I have doubts they're telling the truth here.
(flagging the comment makes it dead, but that also hides the substantive discussion that came after, I'm genuinely not sure what the best move is here)
By the looks of it, I don't even think I'm replying to a human.
They didn't even bother to remove any of the signals. Perhaps this post is actually a honeypot for these bots.

Claude's output is _totally different_ from pasting a quote from Wikipedia.
The latter has the potential to be edited and reviewed by global subject experts.
Claude's output depends entirely on what priors you gave it, and while you may have high confidence in it because you know the context, no third party should.
If you feel like it, sure, chat with Claude to build your insight. Then write what you think _yourself_.
If you want to introduce references, use URLs to non-AI-generated content.
I mean as an HN protocol.
HN is supposed to be interesting.
LLM output specifically is not interesting because everyone else can generate roughly the same output.
"Don't post comments that are not human originated at this time. We want to see your human opinion shine through."
This gives people some amount of leeway and allows just the right amount of exceptions that prove the rule.
(That said, to be frank, some of the newer better behaved models are sometimes more polite and better HN denizens than the actual humans. This is something you're going to have to take into account! :-P )
Like, I'm sure that AIs technically can write non-crap HN comments, but they rarely do. Even if it was less rare, the community that resulted from fostering AI-generated content would be unappealing to a lot of people, myself included. The fact that information here is the result of real people with real human opinions conversing is at least as important to me as the content being posted.
It'd be silly if the rule gets interpreted such that people aren't allowed to do research with modern tools, and only gut takes are permitted.
I'm sure that's not the intent!
I think the important part is to have the human voice come through, rather than -say- force humans to run their text through an ai-detector first. (Itself an ai editing tool!)
See also : https://news.ycombinator.com/item?id=47290457 "Training students to prove they're not robots is pushing them to use more AI"
The real point isn't stopping bad grammar; it's preserving the vibe. HN feels different because it's messy humans arguing, not optimized algorithms trying to be helpful.
Once we allow "good enough" AI content, the community stops feeling like a town square and starts feeling like a customer service chatbot. We need real people with actual stakes in their opinions, not just perfect outputs. Let's keep it human or leave it.
This comment may or may not have been generated with an LLM, but I won't tell and you can't prove it either way.