
Posted by usefulposter 1 day ago

Don't post generated/AI-edited comments. HN is for conversation between humans (news.ycombinator.com)
4094 points | 1573 comments | page 7
grappler 19 hours ago|
Since we now face a threat of large-scale de-anonymization, a reasonable countermeasure might be using AI to make one's writing style less personally identifying, in order to try and retain some pseudonymity.

    https://simonlermen.substack.com/p/large-scale-online-deanonymization
    https://news.ycombinator.com/item?id=47139716
sschueller 18 hours ago||
I have the feeling my grammatical errors from being ESL are "tolerated" a lot more than a few years ago. By that I mean they don't get called out as much as they used to be.
hollowturtle 11 hours ago||
> Please don't post insinuations about astroturfing, shilling

Reading the site over the past 2 years has left me with the feeling that HN has been infiltrated by subtle, hard-to-catch AI marketing campaigns. It's exhausting, and calling out astroturfers imo is not that bad

nkzd 1 day ago||
What if English is my second language? Undoubtedly being well spoken is associated with higher class. Your arguments will come off as stronger to the reader.
jamesmiller5 1 day ago||
What you really have to ask is will this community be less inclusive because English isn't your first language, I'd say "no" and I hope most would agree.

> Your arguments will come off as stronger to the reader.

That is persuasian, not authenticity, to the OP's point.

Typed without a spellchecker :).

jacquesm 1 day ago|||
That's fine. Your arguments will not come off stronger to the reader; they are strong or they are not, and we're all clever enough to read through the occasional grammar error.

And that's where I think the guidelines could be expanded a bit more to restore the balance. Something along the lines of: 'HN is visited by people from all over the world and from many different cultural and linguistic backgrounds. Please respect that and realize that a native English and Western background should not be automatically assumed. It is the message that counts, not the form in which it was presented.'

altairprime 1 day ago|||
Do the best that you can unassisted. There is a chasm of difference between someone coming into English from another language, and someone using Google Translate to submit a post originating in another language. French aphorisms are a stellar example of this: I’d rather read “A bird in the bush may not fly into oven” and have to parse out the meaning, than have some AI translate it as “Don’t count your chickens before they hatch”; sure, there’s an iffy [the] grammatical moment at ‘fly into oven’, but it’s such a distinct phrase and carries a lot more room for contextual nuance than having an AI substitute in an American aphorism via machine translation allows for.

(For example: If I’m trying to express a point about how we shouldn’t assume that dinner isn’t “her duty” but is instead “our duty”, a French-like aphorism expressed in English literally as “the chicken won’t fly into the oven unprompted” could plausibly be AI-translated instead as “don’t count your chickens before they hatch”, doing catastrophic damage to the point. To a machine translator those two aphorisms are not distinctive; but they are, even if it’s a weird expression in common U.S. English.)

darkwater 1 day ago|||
You make errors and weird constructions like all of us non-natives do, and maybe eventually you learn a bit more English in the process. Or not. English's dominance as the world's... lingua franca (ahem) means it deserves to be bastardized ;)
ludicrousdispla 15 hours ago|||
Most native English speakers consider 'speaking plainly' to be a better indicator of knowledge and expertise than the alternative.

I can understand the sentiment though, as I am learning a second language, and in many of our writing assignments we are expected to use (from my perspective) overly formal and complex grammatical structures when writing simple letters. I have come to accept, or at least hope, that this is simply an exercise to ensure that students have fluency with the grammar.

d4mi3n 1 day ago|||
Humans have a tendency to ascribe intelligence to how well spoken a person or thing is—hence all the personification of LLMs.
egeozcan 1 day ago|||
> Humans have a tendency to ascribe intelligence to how well spoken a person or thing is

That’s true. I’m fluent in German, but there’s still a difference between me and a native speaker. I’ve often seen my ideas dismissed, only for the exact same point to be praised later when a native speaker expresses it more clearly.

polotics 1 day ago|||
I don't think that what you're experiencing is grammar related, I'd bet xenophobia.
jacquesm 21 hours ago||
Or just management...
rrr_oh_man 1 day ago|||
Logos, Pathos, Ethos
polotics 1 day ago|||
I am sorry but this very broad statement is dated, pre 2023 I think.

I now expect malapropism, hacker curtness, and implicits: TAIDR is the new TLDR.

officeplant 1 day ago|||
Honestly I saw a similar answer on a post talking about AI Translation in github comments.

Post the translation as best you can manage, and below it put the same comment in your original language. If someone has qualms with your comment having broken English or mistranslations, they are welcome to run bits of the original language through a translator themselves.

We're all here to talk about tech, and we aren't all perfect little English robots.

JumpCrisscross 1 day ago|||
> What if English is my second language?

Write it broken.

Broken and true is more authentic than polished and approximately so. When I see an AI-generated comment or email, I catch myself implicitly assuming it is—best case—bullshit. That isn’t the case if the grammar is off. (If anything, it can be charming.)

vharuck 1 day ago|||
Personally, I enjoy reading through comments that are obviously from non-native English writers. They often include idioms or sentence constructions from their native language, which is fun to see.

Besides, this isn't an English poetry forum. Language here is like gift wrapping for an idea: pleasant if pretty, but not the most important thing.

yellowapple 21 hours ago||||
> Broken and true is more authentic than polished and approximately so.

From the perspective of someone reading the comment, I'll take “inauthentic” but actually comprehensible over “authentic” but incomprehensible any day.

Also, using bad grammar as a heuristic for humanity will just end with LLMs being prompted to deliberately mess up their grammar, and now we're back to square one, with the state of the written word even worse off than it was before.

AnimalMuppet 1 day ago|||
Well... for myself personally, that works, but only up to a certain level of broken. Past that I quit reading.

That may be a defect in me. Maybe I should make a stronger effort on such comments. But I suspect I'm not the only one who does that, and at that point it becomes an issue that affects the community as a whole.

JumpCrisscross 1 day ago||
> for myself personally, that works, but only up to a certain level of broken. Past that I quit reading

At which point you’d be fully justified in using an AI to decode their text. I still think that’s a better world than pre-filtering.

Willish42 1 day ago|||
This is an angle for people who default to AI-edited written speech that I've tried to be more empathetic to. I think it depends on your audience, but in professional writing that isn't published publicly (i.e. communication with your colleagues, design docs, etc.), or even the "rough draft" form of something that will be published, I think starting with your own words comes across as way more authentic.

I've seen enough GPT-generated slop that I find its style of writing very off-putting, and find it hurts the perceived competence or effort of the author when applied in the wrong context. I'm not sure if direct translation tools serve a better purpose here, but along with the other commenters, I personally find imperfect speech that was actually written "by hand" by the author easier and more straightforward to communicate with despite the imperfections. Also, non-ESL speakers make plenty of mistakes with grammar, spelling, etc. that humans are used to associating with "style" as authentic speech.

It can also become a crutch for language learners of any age, regardless of their primary language, one that inhibits learning or finding one's own "style" of speech.

cityofdelusion 1 day ago|||
This effect is very rapidly vanishing. Well-written English is starting to be seen as snobbish, or as AI slop, especially by younger generations growing up with AI.

The human touch of someone’s real voice, rather than a false veneer, will carry more weight very soon.

eszed 1 day ago|||
I think you're right, and I don't know what to think about it. I enjoy writing, aim to write clearly - a skill or discipline that took a lot of time to learn, and ongoing effort to maintain.

I've never sent or posted anything AI-written, beyond a pro-forma job description - because I don't know the domain-specific conventions, and HR returned my draft to me with the instruction to use ChatGPT, which I think amusing, but whatever: the output satisfied them, and I was able to get on with my day.

I occasionally experiment with putting something I've written through an LLM, and it's inevitably a blandifying of my original, which doesn't really say what I intended. But maybe that's good? My wife thinks I'm sometimes too blunt, and colleagues don't always appreciate being told technical details.

I also appreciate individuated writing, including the posts by people on this board who are not native speakers. Grammatical mistakes seldom inhibit understanding when the writing has been done with care.

I'm rambling at this point, but it's because I'm truly uncertain how these cultural changes will turn out, and (an old man's complaint, since time immemorial!) pretty sure I'll end up one of the last of the dinosaurs, clinging to my manually written "voice" long after everyone else in the world has come to see my preferences as quaint.

ThrowawayR2 1 day ago||||
The "L" in LLM stands for "language". If they are unable to express themselves in English (or whatever their native language is) fluently, they won't be able to prompt LLMs fluently and will be, in the debased patois of modern youth, "cooked". It's a self-correcting problem.
phs318u 1 day ago||||
> written English is starting to be seen as snobbish and AI-slop especially with younger generations growing up with AI

This is tragic. I write English well and will employ grammar and word choice effectively to make an argument or get a point across. English was my best subject at school 45 years ago despite a career in tech. In fact, I’d suggest that my career as an architect and the need to convey concepts and argue trade-offs with stakeholders of varying backgrounds has honed that skill. Should I now dumb down my language or deliberately introduce errors in order to satisfy the barely literate or avoid being “detected” as an AI? (as if the latter were possible. It’s an arms race).

JumpCrisscross 1 day ago||
> Should I now dumb down my language or deliberately introduce errors

Language is a tool. If it wins the argument, yes. I’ve absolutely gone back through drafts to tighten up language and reduce word complexity. And if I’m typing with someone who frequently typos, I’ll sometimes reverse the autocorrect. Mostly as a joke to myself. But I imagine it helps me come across as less stuck up. (Truth: I’m a bit stuck up about language :P.)

phs318u 1 day ago||
> Language is a tool

While this is true, it is not just a tool. Or, I should say it’s a tool with far greater utility than just winning an argument or making a localised point. Language is how we think, and the ability to reason well is absolutely dependent on our skill with language.

Language is the mark of humanity in the sense that how else can I convey to you a fragment of my inner state? My emotions, my feelings, my desires. The language of poetry and literature. That which sparks an emotional response in another.

Dumbing down language is dumbing down period.

JumpCrisscross 1 day ago||
> Dumbing down language is dumbing down period

I agree. But I don’t always see it as dumbing down. James Joyce’s Portrait starts out with a lot of nonsense, that doesn’t mean it’s dumb or dumbed down. It’s just communicating something that is best described that way. Even to an erudite audience.

I have expertise in some topics. I don’t think of communicating that in lay terms to be dumbing down. The opposite, almost: finding good analogies and expressing them clearly is a lot of fun, even if what comes out the other end isn’t particularly sophisticated.

phs318u 1 day ago||
Totally agree. But I’m seeing (or am more sensitive to) increasing cohorts that can’t string two words together to express a single thought coherently. There’s a difference between adapting language and using linguistic tools (such as metaphors) versus semi-coherent blathering.

EDIT: spread > express. Which may be a segue to a point about using corrective tools as a form of preemptive editing?

antonvs 1 day ago||||
If knowing how to speak and write my native language well makes me a “snob”, so be it. But I don’t think I’m the problem in that case.
shadowgovt 1 day ago|||
Trust me, it won't last because I've seen the cycle a couple of times. People pay lip-service to being accepting of variant grammar, but then the downvotes show up.
skywhopper 1 day ago|||
Then it’s even more likely the LLM will change your words to something you don’t intend. And you will never get better at writing English if you turn it over to an LLM.
wasmitnetzen 1 day ago|||
Luckily, something about the English language means that native speakers in particular quite often have atrocious grammar: they're/their/there mistakes, who/whom, the list goes on.

Funnily enough, I've noticed myself getting worse with they're/their the more I use English (which is my third language).

tylerritchie 1 day ago||
That'd be a "style-over-substance" fallacious argument. Or one could be hoping for a halo-effect to cloud the reader's opinion of their comment because some piece of software made it read like Enron-marketing-hogwash-speak.
dbacar 1 day ago||
Sometimes the style is the substance. There is a reason people study rhetoric.
tadfisher 1 day ago|||
And that should be anathema to discussions rooted in reason.
AnimalMuppet 1 day ago|||
That's not substance. That's style being all there is, trying desperately to cover up the lack of substance. Rhetoric works best when it gives wings to strong ideas, not when it tries to fly by itself.
a1371 19 hours ago||
My question is, and this is genuinely a question: Do you think YC-backed companies would have respected this guideline if it was posted on some other website they wanted to operate in?
teruakohatu 18 hours ago|
> Do you think YC-backed companies would have respected this guideline if it was posted on some other website they wanted to operate in?

That is a false equivalence. What a YC-backed company does is not relevant to how a YC-owned web forum operates.

ludicrousdispla 15 hours ago||
They're asking a question, not making an equivalence. And I'll add that YC founders/companies do have some specific advantages on this forum, so it's worth knowing if they are held to any standard.
pkaodev 1 day ago||
I've got some reflecting to do, because the first thing I did after reading the headline, before even clicking through to the actual post, was look for AI comments.

I miss pre 2010 internet. As soon as the advice animal memes started appearing on Facebook it was a quick decline.

RealityVoid 1 day ago||
I think using AI for somewhat more potent spellchecking or style hints is... fine, honestly. I don't usually do it, as you can tell from all the silly spelling mistakes I make. But a bit more polishing of your posts is a good thing, not a bad one, as long as it doesn't hide your voice.
aethrum 1 day ago||
The problem is it always hides your voice. Always
peacebeard 1 day ago|||
There is a big difference between "asking an editor for suggestions" and "vibe posting".

You don't lose your voice if you ask for advice and manually incorporate the suggestions you agree with.

You might lose your voice if you say "Improve my comment to make it better" and copy-paste the result without another thought.

Peritract 14 hours ago||
There is theoretically a big difference, but in practice, I think that people using AI to 'get suggestions' tend to dramatically underestimate its impact on their writing.

It might feel like just a couple of tweaks, but they add up fast.

peacebeard 7 hours ago|||
Your “in practice” is doing too much heavy lifting here. This comes across as more of a prejudice against people than a fair assessment of the tools and techniques.
hendersonreed 1 day ago||||
It hides your voice, and shortcuts your thinking process, because your editing is when you actually evaluate what you think!

When using LLMs to write, the temptation to avoid actually thinking about what you're communicating is too much for most people.

fc417fc802 1 day ago||
I'm increasingly convinced that most people spend most of their lives actively trying to find ways to avoid actually thinking about things. When I look at it that way I figure that either we achieve benevolent AGI in the near to medium term or society collapses due to whatever the asymptotic form of today's LLMs is.
Griffinsauce 1 day ago||||
In the words of the comment: the rough edges are what make you.. you!

Keep polishing and everything eventually turns into a smooth shiny ball. We need texture, roughness, edges.

BeetleB 1 day ago||||
An LLM telling me I mispeled a word isn't changing my voice. Especially when I know the proper spelling and simply have a typo.

An LLM telling me I omitted a qualifier and that my statement isn't saying what I meant it to say isn't changing my voice - it's ensuring what you see is my voice.

recursive 1 day ago||
There's a simple solution to the spelling part. Use a spell checker. They seem to work pretty well.
causal 1 day ago||||
Yep. I actually prefer seeing imperfect writing, there is signal there that AI would erase.
aperrien 1 day ago||||
Maybe. But it can also help people find their voice. And I'd rather have comments from someone knowledgeable but unrefined with some good guidance than their silence on that same topic.
sdenton4 1 day ago||||
AI doesn't just hide your voice -- it improves it!
adampunk 1 day ago|||
I had a README with a curse word in it, and the agent would repeatedly try to remove it in drive-by edits bundled in with some other change.
goostavos 1 day ago|||
You do all of that when leaving a comment on HN? Why...?

I'm confused by this need(?) desire(?) to polish things that are irrelevant.

RealityVoid 21 hours ago||
No, I do not; I mentioned as much in my post. But I do not hold it against those who do. I think if you want to get a point across, doing so in the most effective way without detracting from the point is a good thing.

Relevance is in the eye of the beholder.

dgacmu 1 day ago|||
Would anyone notice if you spell-checked or got narrow feedback about grammar? No. I'm not dang, but perhaps a very reasonable interpretation of the rules is: If the AI is generating the words, don't. If it tells you something about your words and you choose to revise them without just copying words the AI output, it's still your words.

(As an experiment, I took that paragraph and threw it into gemini to ask for spell and grammar checking. It yelled at me completely incorrectly about saying "I'm not dang". Of its 4 suggestions, only 1 was correct, and the other 3 would have either broken what I was trying to say or reduced the presence of my usual HN comment voice. So while I said the above, perhaps I'm wrong and even listening to the damn box about grammar is a bad idea.)

That said, I often post from my phone and have somewhat frequent little glitches either from voice recognition or large clumsy thumbs, and nobody has ever seemed to care except me when I notice them a few minutes after the edit button goes away.

altairprime 1 day ago|||
Polish hides your voice. If your composition skills are lacking and you feel that hinders your self-expression, set aside some time to improve them: write a short (15 minutes) blog post about some HN topic to yourself in a word-doc editor of some sort (Word, Gdocs, LibreOffice, etc.); then enable Review Changes and annotate your post for 10 minutes; then review and accept your changes individually and re-read what you’ve written.

AI is being used as a substitute for skills development when it costs nothing but time to get better. If you’ve reached a plateau with the above method, go find an article or book or interview about editing, pay attention to it and take notes, rinse/repeat.

Spellcheckers will catch grossly obvious errors, but not phonetic typos. AI grammar tools will defang, weaken, soften, neutralize your tone towards the aggregate boring-meh that they incorporated at training time.

Each person will have to decide whether they want individuality or AI-assisted writing for themselves. Sure, some will get away with it undetected, but that’s a universal statement about all human criteria of any kind, and in no way detracts from the necessity of drawing a line in the sand and saying “no” to AI writing here.

Consider the Borg. Everyone’s distinctiveness has been added to the Collective. The end result is mediocre (they sure do die a lot), inhuman (literally), and uniform (all variation is gone). It’s your right if you desire to join the Collective and be a uniform lego brick like the others, but then your no-longer-fully-human posts are no longer welcome at HN.

ordu 1 day ago||
> a word doc editor of some sort (Word, Gdocs, LibreOffice, etc); then enable Review Changes and annotate your post for 10 minutes; then, review and accept your changes individually and re-read what you’ve written.

Pffff... I'm not going to install LibreOffice for that, or to figure out how to make Gdocs work with uBlock.

There is a much easier way. Open LLM chat, type there "Proofread please for grammar, keep the wording and the tone as it is, if it doesn't mess with grammar. Explain yourself." and then paste your text. I don't really know what the tools you mentioned do, but any "free" LLM on the Internet will point to things like missing articles, or messed up tenses in complex sentences.

You recommend choosing self-improvement, but I just don't believe I can figure out how to use articles. With tenses I think I can learn how to do it, but I'm not going to. I remember there is some obscure rule how to choose the right tenses, but I was never able to remember the rule itself. I'm bad with rules, it is the reason I chose math as my major. There are almost no rules in math, you are making your own rules. The grammars of languages are not like that, they have rules which can't be easily inferred, you need to remember them. Grammars have exceptions to rules, and exceptions to exceptions, and in any case they are not the rules, but more like guidelines, because people normally don't think about rules when they are talking or writing.

No way I'm starting to learn rules now, I'd better continue to rely on my skills. But LLMs can help me see when my skills fail me.

> It’s your right if you desire to join the Collective and be a uniform lego brick like the others, but then your no-longer-fully-human posts are no longer welcome at HN.

I believe you (as most of the fervent supporters of the rule here) have gone too far into philosophy with this, too far from reality and practice. You can't detect AI in my messages, because they are mine. Even when I ask an LLM to find words for me, it is me who picks one of the proposed alternatives, but mostly I manage without wording changes. I transfer the LLM's edits by hand, editing the source message, so nothing can slip unnoticed into the final result. If I took the effort to ask an LLM to proofread, it means I care about the result more than usual, so I'm investing more effort into it, not less.

RealityVoid 21 hours ago|||
> I'm bad with rules, it is the reason I chose math as my major. There are almost no rules in math, you are making your own rules.

There's what now? I do think math is flexible but it feels like there are plenty of rules, depending on the context.

altairprime 1 day ago|||
An AI may be able to teach you basic grammar but it’s not going to teach you to develop your voice. By design and content training set, an AI today can only pressure you towards the mean of whatever criteria you specify, not away from it. Developing your voice by doing your own proofreading pressures you away from the mean, by helping you double down on what you value most and by choosing which grammatical rules to disregard and when disregarding them is more in-tone for yourself than adherence. I can’t stop you and I won’t remember your handle after an hour has passed (being nameblind is interesting online), so you’ll probably go unnoticed by me, sure. But I still won’t equate regressing to the AI mean with personal growth away from the average masses.
ordu 1 day ago||
> An AI may be able to teach you basic grammar but it’s not going to teach you to develop your voice.

Well, no one can help you develop your voice. If it is your voice, then it has to be your own creation. I think we are in agreement here.

> Developing your voice by doing your own proofreading pressures you away from the mean, by helping you double down on what you value most and by choosing which grammatical rules to disregard and when disregarding them is more in-tone for yourself than adherence.

Oh... If I wanted to become a professional writer, then I'd agree with you. Maybe...

You see, I don't use LLM to fix my writing in Russian, because with Russian I'm totally in control of my grammar, I know when I deviate from it and if I do, I do it consciously. But with English I don't know. Sometimes I can see that I don't know how to follow English grammar in some particular case, and sometimes I don't even notice that I don't know.

So, returning to your argument, if I wanted to become a famous English writer, I think I'd choose to write a lot and discuss my writing with LLM, and I'd do it for hundreds of hours. LLM are unbelievably useful for digging into language nuances. Before LLMs I had urbandictionary, but it could help with specific phrases, not with choosing between "I took the effort to ask an LLM" and "I took the effort of asking an LLM". I wouldn't have a clue that there is any semantic difference. But LLM can point to it, and it can explain the difference, and give me more examples of it. Or it can point that "you recommend to choose" is not good, because of "something-something" I don't remember what, but it boils down to "you just have to remember, that the right way to use the verb 'recommend' is 'recommend choosing'". I don't see the difference, I can't choose to disregard it, because I have no opinion on if it is good or bad.

If I wanted to become an English writer, I'd spend hundreds of hours with LLM, just to get an ability to see as many differences as it is possible, to get an idea of what I value most, and which grammatical rules I like to disregard. But even after that, I think I'd continue to use LLM. It can provide unexpected takes on what you feed into it. ... Hmm... I should try it with Russian. In Russian I can pick a style for my writing and to follow it (in English I can't control the style consciously), I can (and do sometimes) turn grammar inside-out, make it alien, readable for a native speaker, but in weird ways readable (a bit like letters written by Terry Pratchett heroes like Granny Weatherwax or Carrot)... I wonder, if I can employ LLM to make it even more weird.

> I still won’t equate regressing to the AI mean with personal growth away from the average masses.

I obviously can't judge in which direction LLMs are changing my English, so I can't even give you anecdotal counter-evidence to your statements about regression to the AI mean, but I'm still sure that I'm not regressing to the mean. You see, I pick when to follow LLM advice and when not to. I'm choosing what to change. The regression to the mean you are talking about goes on in a high-dimensional space; you can regress on some dimensions and continue to deviate from the mean on others as much as you like. I don't like to deviate on grammar dimensions (at least without knowing about my deviations). I was born into a family of a teacher and an engineer, who were all for being educated, and familiarity with grammar was an important part of that; and I was born in the USSR, where proper grammar was enforced in all media to an extent that made me laugh and rebel against grammar (after all the decades passed, lol). But I can't allow myself to just ignore grammar; I was taught to use it properly. So I decided to use an LLM. I'm too lazy to do it each time, or even every second time, but still I use it and learn from it.

The prospect of regressing to the mean by using an LLM seems very unlikely to me. I don't regress with all the propaganda around me, when regressing is really the safest thing to do, so a mere LLM stands no chance of achieving it.

the_af 1 day ago||
When do you need to spellcheck or polish an HN comment?

I've never, ever, ever ever ever, seen anybody complain about spelling mistakes in a comment here. As long as you can understand the comment, people respond to it.

Kim_Bruning 1 day ago|||
Extend spellcheck to asking questions like "does it meet HN rules?" or "how can I improve my writing?" etc. Though these are the kinds of questions that do at the very least still meet the spirit of the rule, I suppose.
the_af 1 day ago||
Do you really need an automated tool to tell you whether you're breaking common sense guidelines?

And why would you want to "improve your writing" for an HN comment? I think people here value raw authenticity more than polished writing.

BeetleB 1 day ago|||
> Do you really need an automated tool to tell you whether you're breaking common sense guidelines?

Lots of people break HN guidelines. I see it virtually every day.

> And why would you want to "improve your writing" for an HN comment?

Some people like to write well regardless of the medium. Why is that a problem for you?

> I think people here value raw authenticity more than polished writing.

Classic false dichotomy. Asking an LLM for feedback is not making your comment less authentic. As I pointed out elsewhere, it can make your comment more authentic by ensuring that what you had in your head and what you wrote match.

Go and study writing and psychology. For anything of value, it's rare that your first attempt reflects what you meant to say. It's also rare that the first attempt, even if it reflects what you meant, will be absorbed by the recipient as you intended. Saying what you mean, and having it understood as you meant it, is a difficult skill.

the_af 1 day ago||
> Lots of people break HN guidelines. I see it virtually every day.

Yes, and AI won't help here. People will use AI to better break the guidelines.

> Go and study writing and psychology

Is this a case where you should have read the guidelines? Maybe an LLM could have helped you here? Please don't tell me to go study anything; you know what they say of ASSuming.

> Some people like to write well regardless of the medium. Why is that a problem for you?

HN is more like talking than writing. And LLMs don't help you write well, they help you sound like a clone, which is unwanted.

> For anything of value, it's rare that your first attempt reflects what you meant to say.

You can always edit your comment. And in any case, HN is like a live conversation. Imagine if your friend AI-edited their speech in real-time as they talked to you.

Kim_Bruning 1 day ago|||
Depends on how you use the AI. If you use it a bit like you'd ask a human to proof-read your work, AI can actually be quite helpful.

The other important thing you can do is have an AI check your claims before you post. Even with Google and PubMed, a quick check against sources by hand can take 30 minutes or longer, while with AI tooling it takes 5. Guess which one is more likely to actually lead to people checking their facts before they post (even if imperfectly!).

I'm not talking about people who lazily ask the AI to write their post for them, or those who don't actually get the AI to find primary sources. Those people are not being as helpful. Though consider educating them on more responsible tool use as well?

the_af 22 hours ago||
To clarify my thoughts on this, I'm not against using AI to research/hone your arguments. It's no different to using Wikipedia or googling.

I don't think that's what this new HN guideline is against either.

What I object to is the AI writing your comments for you. I want to engage with other human beings, not the bot-mediated version of them.

BeetleB 20 hours ago|||
> To clarify my thoughts on this, I'm not against using AI to research/hone your arguments. It's no different to using Wikipedia or googling.

> I don't think that's what this new HN guideline is against either.

This is actually how many commenters here are interpreting it, though - and that's what I'm pushing back against. They are actively advocating against using LLMs this way.

I don't have the LLM write the comment for me. I (sometimes) give it my draft, along with all the parents to the root, and get feedback. I look for specific things (Am I being too argumentative? Am I invoking a logical fallacy? Is it obvious I misinterpreted a comment that I'm replying to? Is my comment confusing? etc). Adding things like "Am I violating an HN guideline?" is fair game.

Earlier today I wrote a lot of comments without using the LLM's feedback. In one particular thread I repeatedly misunderstood the original context of the discussion and wasted people's time. I reposted my draft to the LLM and it alerted me to my problematic comment. Had I used it originally, I would have saved a lot of people time.

Incidentally, since I started doing this (a few months ago), I've only edited my comment once or twice based on its feedback. Most of the time it just tells me my comment looks good.

yellowapple 21 hours ago|||
The problem is that there's a vast range of values between “using AI to research/hone your arguments” v. “AI writing your comments for you”, and between the rule itself and dang's various remarks on it, where exactly the rule draws the line is about as clear as mud.
BeetleB 1 day ago|||
> Yes, and AI won't help here. People will use AI to better break the guidelines.

AI is a general purpose tool. People will use AI for multiple reasons, including yours. I'll wager, though, that your use case is much more challenging to do than mine, and that my use case will dominate in number.

> HN is more like talking than writing.

Says you. Many disagree.

> And LLMs don't help you write well, they help you sound like a clone, which is unwanted.

Patently false on both counts. Sorry, you're cherry picking and not addressing the part of my comment that discusses this.

> Imagine if your friend AI-edited their speech in real-time as they talked to you.

When a conversation is heated (as it occasionally is on HN), I actually would rather he AI-edit in real time - provided that the output reflects what he intended.

the_af 22 hours ago||
> I'll wager, though, that your use case is much more challenging to do than mine, and that my use case will dominate in number.

I don't know how comparatively challenging, I only know your use case is now (fortunately!) against HN rules.

> Patently false on both counts. Sorry, you're cherry picking and not addressing the part of my comment that discusses this.

It's not false. It's one of the major reasons people have come to dislike AI written comments and articles. It all ends up sounding the same.

> When a conversation is heated (as it occasionally is on HN), I actually would rather he AI-edit in real time - provided that the output reflects what he intended.

In real life? Sounds like a fucking dystopia. But everyone is free to choose the hell they want to live in.

tonyarkles 1 day ago|||
> Do you really need an automated tool to tell you whether you're breaking common sense guidelines?

I say this on behalf of all of my neurospicy friends… sometimes, yes. Especially having taken a look at the whole list of guidelines, I definitely am friends with people who could struggle to determine whether a given comment fits or not.

BeetleB 1 day ago||||
People who are particular about spelling do not want to write misspelled words! It's not about whether you/others will tolerate it. I have my standards, and I hold to them.

I personally don't use an LLM to spellcheck (browser spellcheck works fine), but I see no problem with someone using an LLM to point out spelling errors.

And while I don't complain about others' spelling errors, I sure do notice them. And if someone writes a long wall of text as one giant paragraph that has lots of spelling/grammatical issues, chances are very high I won't read it.

Some people write very poorly by almost any standard. If an LLM helps the person write better, I'm all for it. There's a world of difference between copy/pasting from the LLM and asking it for feedback.

the_af 1 day ago||
> I have my standards, and I hold to them.

Spellcheckers exist, you don't need an AI to change your voice.

Also, if you have standards, you can always train yourself to spell better!

BeetleB 1 day ago||
> Spellcheckers exist, you don't need an AI to change your voice.

How is using an AI to spell check changing my voice?

Yes, thank you - I know spellcheckers exist, as my comment clearly states. The amusing thing is that an LLM that had access to the thread would have alerted you to a basic error you're making.

> Also, if you have standards, you can always train yourself to spell better!

"You can always ..." is not an argument against alternatives.

the_af 22 hours ago||
Calm down. You're getting defensive, but it's not warranted. I'm not attacking you.

> The amusing thing is that an LLM who had access to the thread would have alerted you to a basic error you're making.

I didn't make the "basic error" of assuming you didn't know spellcheckers existed. I was stressing that since spellcheckers already exist, you don't need an AI assisting your comment-writing. More basic, non-style-altering alternatives exist and are better.

> "You can always ..." is not an argument against alternatives.

The argument I'm making is that if you care so much about standards you can always hone them yourself instead of taking the lazy way out of having an AI write for you.

Alternatively, if you're lazy then your standards aren't too high.

And yes, this is an argument against the alternative you're suggesting.

yellowapple 21 hours ago||
> The argument I'm making is that if you care so much about standards you can always hone them yourself instead of taking the lazy way out of having an AI write for you.

It's pretty clear that in this case the use of AI is not a matter of laziness, but rather quality/consistency assurance. I use code formatters not because I'm too lazy to indent code myself, but because it helps guarantee that it's formatted consistently. I use a stud finder when mounting things to walls not because I'm too lazy to do the “knock on the wall” trick, but because the stud finder is more precise and reliable at it.

I don't use AI to edit my comments, but if I did, it would be not because I'm too lazy to check for all the things I want to avoid putting in my comments, but as an extra layer of assurance on top of what I've already trained myself to do.

the_af 9 hours ago||
> It's pretty clear that in this case the use of AI is not a matter of laziness, but rather quality/consistency assurance

But that's not something anybody wants of you in an informal context such as this (HN). It will flatten your voice and make you sound like a drone. We value a human voice.

Code is different. Outside of hobbies, code is not a form of self-expression. There's a reason why following your company's coding styles & practices is valued in software engineering. Companies value coders being interchangeable with each other; they do not want a "unique voice". I think it's completely unrelated to what we're discussing here.

> I don't use AI to edit my comments

What are we even debating, then?

vova_hn2 1 day ago||||
I think that people subconsciously perceive grammatically correct and stylistically appropriate writing as more authoritative, and the author as a smarter and/or better-educated person.

At least that was the case before LLMs became a thing, now I'm not sure anymore.

bryanlarsen 1 day ago||||
Obvious spelling mistakes are usually ignored, but there are certain types of writing mistakes that really trigger the type of people that frequent HN.

For example, use "literally" for exaggeration rather than in the original meaning of the word and you'll likely trigger somebody.

the_af 1 day ago||
I've never seen this, unless "literally" really clashed with the intent of the comment (as in, it changed the meaning).

It's against the HN guidelines to focus on punctuation, spelling, etc., as long as the comment is understood.

And, in any case, it's now against the guidelines to write using an AI :)

bryanlarsen 22 hours ago||
Perhaps not for the word "literally", but you've never seen anybody make a pedantic correction about word usage?
the_af 22 hours ago||
To be clear, I've seen it in the wild, but not here where it's discouraged to pick on words instead of focusing on the substance of what's being said.
bryanlarsen 21 hours ago|||
Here's a better example. Use "a few bad apples" wrong, and you'll likely get a response. A few bad apples will cause the entire barrel to spoil rapidly, so a few bad apples is a big deal. But it's often used to say the opposite, that a few bad apples isn't a big deal.
the_af 11 hours ago||
Wow, I guess I never thought about the "few bad apples" figure of speech! Interesting. But regardless, everyone understands what it means in common use, even if it's logically wrong, and I swear I've never seen anybody be a pedant about it here.

And really, it goes against the spirit of HN to hyperfocus on idioms instead of addressing the meat of the argument...

As a personal observation, if an LLM was figuratively looking over my shoulder and pointed out something like "well, ackshually, 'a few bad apples' means..." I would delete the fucker.

bryanlarsen 10 hours ago||
A few bad apples is a great idiom though that applies to so many places. For example, teachers often report that more than 2 troublemakers in a classroom ruin the entire class. A few bad cops destroy trust in all policemen, ruining the entire force, et cetera.

And more relevant to us, a couple bad lines of code sprinkled in the millions in your code base can ruin the entire thing....

bryanlarsen 22 hours ago|||
I wish I had posted a better example, but I couldn't recall anything at the moment and still can't. It's usually a more interesting complaint than the old man shaking his fist at clouds over the usage of the word "literally".
the_af 21 hours ago||
OK, but let's dig deeper.

Would you prefer to be corrected on some logical fallacy/mistake you made in your argument, by another human being (and yes, maybe get slightly upset about it, we're human beings after all), or have both sides present bot-mediated iron-clad comments, like operators sparring with robots?

I prefer the raw, flawed human version. Even if, yes, I make a silly, avoidable mistake, or get upset, or make you upset in the heat of the argument. Maybe when I cool down I will have learned something.

I don't want flawless robotic arguments. I want human beings. (Fuck, that last bit sounded like an AI-ism, but I promise it's me, a human!).

cogman10 1 day ago|||
I've been hit by spelling/grammar noise once or twice. Those are usually downvoted and/or flagged.
everybodyknows 1 day ago||
Typos like an/as, of/or, an/and waste the reader's time. That some care be taken to avoid them is no more than common courtesy.
daft_pink 1 day ago||
I’m not sure I agree with this, because sometimes it is difficult to figure out the correct way to phrase an idea that is in your head, and I like to use AI to help organize my thoughts even though the thinking is my own. That being said, most of my comments are not AI generated.
MeetingsBrowser 1 day ago||
Learning how to communicate your thoughts clearly is a good skill to have. It might not be worth it in the long run to farm that out to LLMs.
daft_pink 9 hours ago||
I think getting the feedback from the LLM improves my skill.
minimaxir 1 day ago||
The intent of this rule is to avoid the very common AI tropes that have been increasingly common in HN comments. Using AI as an organizational tool isn't inherently against the rules, but just copy/pasting output from ChatGPT without human oversight is.
Sajarin 1 day ago||
People aren't good at detecting AI-generated/edited comments, so I'm unsure how effective this policy will be. Though I guess there are still some obvious signs of AI speak, like em-dashes and sycophantic ("it's not X, it's Y!") speech.

Bit of a shameless plug, but I wrote an HN AI comment detector game[0] with AI, and most of my friends and fellow HN users who tried it out couldn't detect them.

[0]: https://psychosis.hn/

[1]: https://sajarin.com/blog/psychosis/

tomhow 1 day ago||
Something I've noticed through moderation is that people are much more easily duped by generated comments if they like the content and/or agree with the point. We've seen several cases where a bot-generated comment has been heavily upvoted and sits at the top of the thread for hours, and any comments calling it out for being generated languish at the bottom of the subthread below other enthusiastic, heavily upvoted replies. This shouldn't be surprising, given what we've seen of LLM chatbots being tuned to be sycophantic, but it's interesting to see it in effect on HN.

This is another reason why it's good to email us (hn@ycombinator.com) rather than commenting when you see generated comments.

dooglius 21 hours ago||
Do you have reason to believe that you have a reliable way in these cases of determining whether the comment is generated?
tomhow 18 hours ago||
Having been reading generated comments almost daily for over three years now, I have a pretty good sense of it. There's a bunch of signals: how new the account is; how the comments look visually (the capitalization and layout of the paragraphs, particularly when all of one user's comments are displayed in a list). Em-dashes and short, emphatic sentences make it more obvious, of course.

There are cases that are more borderline; usually when someone has used a translation service or has used an LLM to polish up a comment they wrote themselves. For these ones there's less certainty, and whilst we discourage them, we're not as rigid in our aversion to them or as eager to ban accounts that do it.

But ones that are entirely generated are still pretty easy to spot, even just from visual appearance.

vova_hn2 1 day ago|||
> HN AI comment detector game

Looks cool, but how exactly do you gather proven-to-be-human comments?

I think it would be better if you used pre-ChatGPT (Nov 30 2022, I think?) stories.

foltik 11 hours ago|||
It’s certainly hard to detect in isolation, but the thing that gives it away is the comment history.

All the AI accounts I’ve seen repeatedly post the exact same cookie-cutter top-level comments over and over again. Typically some vapid observation followed by an obviously forced question serving as engagement bait. The paragraphs and sentence structure even look visually similar across comments when you scroll down the history page.

Just look at a few of these accounts and you’ll easily be able to recognize AI posts on your own.

https://news.ycombinator.com/threads?id=naomi_kynes
https://news.ycombinator.com/threads?id=aplomb1026
https://news.ycombinator.com/threads?id=decker_dev
https://news.ycombinator.com/threads?id=CloakHQ
https://news.ycombinator.com/threads?id=coolcoder9520
https://news.ycombinator.com/threads?id=ptak_dev
https://news.ycombinator.com/threads?id=oliver_dr
https://news.ycombinator.com/threads?id=agent5ravi
https://news.ycombinator.com/threads?id=yuyuqueen
https://news.ycombinator.com/threads?id=entrustai
https://news.ycombinator.com/threads?id=coder_decoder
https://news.ycombinator.com/threads?id=mergisi
https://news.ycombinator.com/threads?id=JEONSEWON
https://news.ycombinator.com/threads?id=devonkelley
https://news.ycombinator.com/threads?id=iam_circuit
https://news.ycombinator.com/threads?id=robotmem
https://news.ycombinator.com/threads?id=RovaAI
https://news.ycombinator.com/threads?id=ajstars
https://news.ycombinator.com/threads?id=priowise
https://news.ycombinator.com/threads?id=Yanko_11
https://news.ycombinator.com/threads?id=zacklee-aud
https://news.ycombinator.com/threads?id=shablulman
https://news.ycombinator.com/threads?id=octoclaw
https://news.ycombinator.com/threads?id=zacklee1988
https://news.ycombinator.com/threads?id=bhekanik
https://news.ycombinator.com/threads?id=webpolis
https://news.ycombinator.com/threads?id=claud_ia
https://news.ycombinator.com/threads?id=david_iqlabs
https://news.ycombinator.com/threads?id=yamarldfst
https://news.ycombinator.com/threads?id=julius_eth_dev
https://news.ycombinator.com/threads?id=vexnull
https://news.ycombinator.com/threads?id=idorozin

zahlman 1 day ago|||
I appreciate the restraint in not calling your game "AIdle".
happyopossum 1 day ago||
> obvious signs of AI speak like emdashes

Some of us were trained/self-taught to write that way. Even "it's not X, it's Y" is a legitimate and subjectively effective communication tool, and there are those of us who, by training or modeling, have picked it up as a habit. It's not AI that started this; AI learned it from us.

Crap - I just did it, didn't I? Awww double crap! Did it again...

salicaster 1 day ago||
Forums and comments are not written as formal novels or text. Corporate-speak is also not typically used in these environments unless you are representing corporate.

So I think it's fine to scrutinize commenters who write that way.

Besides, the biggest offense of AI speak is making everything seem like a grand epiphany and revolutionary discovery. Aka engagement bait.

ma2kx 1 day ago|
How about translation tools? As a non-native speaker, especially for longer texts, it's far easier to express your thoughts and not struggle for the right words. Should I maybe highlight if I used e.g. Google Translate?
More comments...