Posted by usefulposter 2 days ago
It's just a tool ffs! There are many issues with LLM abuse, but this kind of over-compensation is exactly what makes it hard to get abuse under control.
You're still talking with a human! There is no actual "AI" here; you're not talking to an autonomous artificial intelligence. This is like saying "don't message me unless you've written it with ink, on papyrus." There is a world of difference between Grammarly and an autonomous agent creating comments on its own. Specifics, context, and nuance matter.
https://reddit.com/r/tea/comments/1rqwy31/i_am_a_former_guid...
https://arxiv.org/html/1706.03762v7 (Attention is all you need) "Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train."
Ok, looking that up, that was quite literally one of the main design goals.
And they're really quite good at translating between the languages I use. They're the best tool for the job.
I think Google initially came up with the transformer architecture precisely to use it for translation, so...
## Opposing the Ban on AI-Generated/Edited Comments on HN
*The value of a comment should be judged by its content, not its origin.*
Here are key arguments against this policy:
- *Ideas matter more than authorship.* If a comment is insightful, well-reasoned, and contributes meaningfully to a discussion, dismissing it solely because AI assisted in its creation is a genetic fallacy — judging an argument by its source rather than its merit.
- *We already accept tool-assisted thinking.* People routinely use calculators, search engines, spell-checkers, and reference materials before posting. AI assistance exists on a spectrum with these tools. Drawing a bright line specifically at "AI-edited" is arbitrary when someone could use a thesaurus, Grammarly, or have a friend proofread their comment without objection.
- *It disadvantages non-native speakers.* Many HN users are brilliant engineers and thinkers who don't write fluently in English. AI editing can level the playing field, allowing their ideas to be judged on substance rather than prose quality. This policy inadvertently privileges native English speakers.
- *It's effectively unenforceable.* There is no reliable way to distinguish a lightly AI-polished comment from a naturally well-written one. Unenforceable rules erode respect for the rules that are enforceable and important.
- *The real problem is low-effort content, not the tool used.* What HN actually wants to prevent is shallow, generic, or spammy comments. A policy targeting quality directly (which HN already has) addresses the actual concern better than a blanket tool prohibition.
- *Human intent still drives the conversation.* A person who uses AI to articulate their own idea more clearly is still participating in a human conversation — they're just communicating more effectively. The thought, the intent to engage, and the underlying perspective remain human.
*In short:* This rule conflates the medium with the message and risks excluding valuable contributions in pursuit of an authenticity standard that is both philosophically fuzzy and practically unenforceable.
What I could just do is obfuscate it a little, so you can't tell whether it is AI-generated or not. And if I read that AI-generated snippet and then wrote a "human" version of it, would that still count as "AI-generated"?
The idea behind that rule is that we don't want HN to become Moltbook, not that it actually aims to ban all AI-touched comments.
I strongly doubt it. My AIs can generate infinite HN comments for me. I don't do that because it isn't interesting. But if the day comes when it is, I want that personalized content, not something someone else copy-pasted.
(I say this as someone who finds Moltbook fascinating and who pushes himself to use AI more at work and in day-to-day life. The fact that it's borderline trivial to figure out which HN comments are AI-generated speaks to the motivation behind this guideline.)
And despite what people say, the way you write is very much judged as an indication of your education and intelligence.
People who don't like the use of AI to help you write really don't want those signals to go away.
They want to be able to continue to judge others based on their English grammar instead of on the content of their writing.
Good argument, but I think an 80/20 split applies here: it's likely that 80% of the time the tool is used to farm upvotes and add noise.
> And despite what people say, the way you write is very much judged as an indication of your education and intelligence.
I have come across plenty of content and online interactions in English where English was the author's second or even third language, and I find that a small disclaimer about this fact is more than enough to bypass such judgement.
Edit for amichail, since I'm rate-limited at the moment: I don't want flawless English writing. I want real ideas from real people. If I wanted flawless English writing, I'd be reading The New Yorker, not HN.
Pretty soon we're gonna see arguments that it's discriminatory.
Humans write a bit messier — commas, short sentences, abrupt turns.
Forum mechanics have always shaped discourse more than policies. Voting changed everything. The response to LLMs should be mechanical, not moral — soft, invisible weighting against signals correlated with generated text. Imperfect, but worth the tradeoff, just like voting.
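The "soft weighting" idea above could be sketched roughly as follows. Everything here is a toy assumption — the marker list, the signal heuristic, and the penalty factor are illustrative inventions, not anything a real forum actually uses:

```python
# Sketch of "soft, invisible weighting": scale a comment's ranking score
# down smoothly as a generated-text signal rises, instead of banning.
# The markers and the 0.5 penalty are hypothetical placeholders.

def generated_text_signal(comment: str) -> float:
    """Toy heuristic: fraction of 'LLM-flavored' phrases present, in [0, 1]."""
    markers = ["delve", "in conclusion", "it's worth noting", "furthermore"]
    text = comment.lower()
    hits = sum(marker in text for marker in markers)
    return hits / len(markers)

def weighted_score(upvotes: int, comment: str, penalty: float = 0.5) -> float:
    """Soft weighting: full score at signal 0, reduced (never zeroed) at signal 1."""
    signal = generated_text_signal(comment)
    return upvotes * (1.0 - penalty * signal)
```

The point of the continuous weight, as opposed to a hard filter, is that false positives cost a comment some visibility rather than its existence — the same graceful-degradation property that makes voting tolerable despite its noise.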
https://claude.ai/share/9fcdcba8-726b-4190-b728-bb4246ff82cf