
Posted by usefulposter 20 hours ago

Don't post generated/AI-edited comments. HN is for conversation between humans (news.ycombinator.com)
3887 points | 1451 comments | page 2
redbell 2 hours ago|
First of all, I suggest that the moderators add this to the comments section of the linked guidelines. It should clearly state that pasting AI-generated replies is discouraged and does not fit the community spirit.

Second, I have to confess that I have committed this sin a couple of times now, but I came to realize that it is good neither for me nor for the HN community. Although I used AI just for rephrasing, I decided never to do it again; I'd rather write my own words, mistakes and all, than post generated words based on my thoughts.

It happened to me once and it hit me like a nuke; I felt truly embarrassed. A couple of months ago I wrote that comment (https://news.ycombinator.com/item?id=42264786), asked ChatGPT to rephrase it, and then mistakenly pasted both versions, the original above and the generated one below, and hit submit. Shortly after, a user read my comment and replied with that embarrassing reply, and honestly, I deserved it. From that moment I realized how quickly things can get messed up when you rely heavily on AI.

tzs 17 hours ago||
How about comments that include AI output, if it's labeled as such?

Earlier today I remembered that there was a Supreme Court case I'd heard about 35 years ago that was relevant to an ongoing HN discussion, but I could not remember the name of the case, nor could I find it by Googling (Google kept finding later cases involving similar issues that were not relevant to what I was looking for).

I asked Perplexity and, given my recollection and when I heard about the case, it suggested a candidate and gave a summary. The summary matched my recollection, and a quick look at the decision itself verified it had found the right case and done a good job summarizing it--probably better than I would have done.

I posted a cite to the case and a link to the decision. I normally would have also linked to the Wikipedia article on the case, since those usually have a good summary, but there was no Wikipedia article for this one.

I thought of pasting in Perplexity's summary, saying it was from Perplexity but that I had checked it and it was a good summary.

Would that be OK or would that count as an AI written comment?

I have also considered, but not yet actually tried, running some of my comments through an AI for suggested improvements. I've noticed I have a tendency to do three things that I probably should do less of:

1. Run-on sentences. (Maybe that's why, of all the people in the 11th-100th spots on the karma list, I have the highest ratio of words/karma, with 42+ words per karma point [1].)

2. Use too many commas.

3. Write "server" when I mean "serve". I think I add "r" to some other words ending in "e" too.

I was thinking those would be something an AI might be good at catching and suggesting minimal fixes for.

[1] https://news.ycombinator.com/item?id=46867167

altairprime 17 hours ago||
You were correct not to post the summary. HN tends to expect readers to invest time in reading and understanding long-form content, and expects the community to step into discussions and offer context and explanations when necessary. One of the most important context statements on this site has been “in mice”, posted as a two-word comment and elevated to top comment on its post. An AI summary will miss that context altogether while busily calculating a CliffsNotes no one wants to read (and could often get you flagged and potentially banned, even before today’s guideline update). If a reader wants an AI summary, they have the same tools you do to generate it by their own hand.

If you have domain familiarity with it, have some personal insight to offer a lens through, or care about the topic deeply enough to write a summary yourself, then go ahead! I almost never post about AI given my loathing of generative ML, but I posted a critical summary in a recent “underlying shared structure” post because it was a truly exciting mathematical insight and the paper made that difficult to see for some people.

Please don’t use AI to reduce the distinctiveness of your writing style. Run-on sentences are how humans speak to each other. Excess commas are only excess when you consider neurotypicals. I’m learning French and I have already started to fuck up some English spelling because of it. None of that matters in the grand scheme of things. Just add “-er” suffix checks to your mental proofreading list and move on with being you.

ASalazarMX 17 hours ago||
I've done research using AI; it does work better than a search engine (when it doesn't hallucinate). But I find copy-pasting verbatim distasteful, and disrespectful of others' time.

What I do is copy the URLs for reference, and summarize the issue myself in as few sentences as possible. Anyone who wants to learn more can follow the reference.

Cthulhu_ 46 minutes ago|||
Yeah, same. It's just like reading out a wiki page or other resource (at length) to other people instead of reading it yourself and summarizing it for them.
altairprime 17 hours ago|||
That’s fine, then! A summary handcrafted for HN is of course fine, though you might find more value in citing what you consider most distinctive about the piece, giving that higher priority than a summary, especially if your summary wouldn't differ from its own opening paragraph / abstract / etc.
topaz0 17 hours ago|||
It sounds like you already know how to improve your comments; how about just doing those things?
tzs 16 hours ago|||
Well, I keep missing the "serve"/"server" thing because spell checkers think "server" is a real word, so they don't flag it. :-)
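(Those slips are the kind of thing a tiny personal checker can catch without any AI. A minimal sketch, with an entirely hypothetical word list standing in for whatever pairs you personally confuse:)

```python
import re

# Hypothetical personal list of confusable pairs: words a spell checker
# accepts but that I sometimes type in place of the word on the right.
CONFUSABLES = {"server": "serve", "ore": "or", "thee": "the"}

def flag_confusables(text):
    """Return (word, suggestion, offset) for each suspect word in text."""
    hits = []
    for match in re.finditer(r"[A-Za-z']+", text):
        word = match.group().lower()
        if word in CONFUSABLES:
            hits.append((word, CONFUSABLES[word], match.start()))
    return hits

print(flag_confusables("I will server the files from this host."))
# Prints [('server', 'serve', 7)]
```

(It only flags candidates; you still decide which hits are real, which keeps the words your own.)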
topaz0 2 hours ago|||
I'm happy to forgive that kind of small typo in a Hacker News comment, but generally it's easy to catch these things by just reading over the thing once. If you're putting any amount of thought into your contribution, it should be much faster to read it over once than it was to write it in the first place.
Hnrobert42 14 hours ago|||
Getting that wrong is a small price to pay. Plus, people know what you mean.
raincole 17 hours ago|||
Too much effort, bruh.
Cthulhu_ 45 minutes ago|||
IMO, if it's too much effort to improve one's comments, then it's too much effort to write them in the first place.
lucumo 5 hours ago||||
There's something viscerally distasteful about a one-liner comment berating the author of a long thoughtful comment for exerting too little effort.
verdverm 17 hours ago|||
Capitalization is apparently too much effort for some now. Who would have thought the Ai would make us so lazy so quickly?

Who cares about people with reading disabilities, let's shift burden onto the reader. My time is better spent managing my Ais.

ASalazarMX 17 hours ago|||
This started years before LLMs, as a way of signaling unconventional thinking. Maybe influenced by the UX of instant messaging.
verdverm 17 hours ago||
That's my general understanding too. More recently people have adopted it as a way to not look like Ai, I've had several cite that as their rationale. There has been a notable uptick since the Ai step function change at the end of last year, along with all the other patterns we see, such as the one that underlies this new HN rule.
charcircuit 17 hours ago|||
>onto the reader

Or the reader's AI who is able to format or translate the text to make it easier to read for the reader.

verdverm 16 hours ago||
I shouldn't have to burn tokens to read. Most input boxes and editors will handle the capitalization for you during auto-correct. It seems like people go out of their way to drop the caps.
duskdozer 12 hours ago||
On mobile, maybe? I haven't had anything like that on any PC I've worked on.
notatoad 14 hours ago|||
Before chatbots, people used to link to Google search result pages as a passive-aggressive way to say “the information is out there, go find it, I don’t care about you enough to explain it to you”.

Pasting a chatGPT response into a comment, and labeling it as such, feels the same to me.

It is more, not less, insulting than trying to pass an AI response off as your own.

Cthulhu_ 44 minutes ago||
Ah, good old lmgtfy links. I googled it just now and it seems to have broken.
nunez 16 hours ago|||
I'd be fine with treating this like snippets from Wikipedia with citations back to the article. This way, people can manually verify the sources if they so choose.
computomatic 17 hours ago|||
> I thought of pasting in Perplexity's summary, saying it was from Perplexity but that I had checked it and it was a good summary.

> Would that be OK or would that count as an AI written comment?

The rule seems written to answer this directly.

Absolutely nobody cares what Perplexity has to say about the case, summary or otherwise. If you mention what the case is, I can ask Claude myself if I’m interested.

Better yet, post a link to an authoritative source on the case (helpful but not required).

At minimum, verify your info via another source. The community deserves that much at least.

An AI-generated summary adds nothing positive and actually detracts from the conversation.

tzs 16 hours ago||
I did post a link to the Supreme Court's decision at Cornell Law School's Legal Information Institute's archive of Supreme Court decisions.

I looked at the decision itself sufficiently to see that it was the case I remembered and that my recollection of the facts and the decision was correct.

I just didn't include a summary because I didn't find a good one I could link to. Normally I'd write a brief one myself but I found that hard to do when Perplexity's summary was sitting right there in the next window and it was embarrassingly better than what I would have written.

bsimpson 17 hours ago|||
This is how I would use/expect AI to be used in HN. I would also like this clarified.
altairprime 17 hours ago||
AI-edited comments are not welcome here. If you’re not able to see and make those changes in your HN writing without AI editing, then you’ll either have to post on HN without those changes, or you’ll have to strive to apply them yourself.
bsimpson 15 hours ago||
This sounds like you're chastising me for something totally distinct from the request for clarity I was supporting.

I'm not asking or advocating for using AI as a copy editor.

The post I replied to asked about using Gemini as if it's Wikipedia - that is, saying "according to Gemini" when citing a fact where one might once have written "according to Wikipedia" or even "according to Google."

This is a forum people hang out in part-time. It's nobody's job to go spend an hour researching primary sources to post a comment. Shallow searches and citations are common and often helpful in pointing someone in the right direction. As AI becomes commonplace, a lot of that is being done with AI.

"Can I have AI write a reply for me?"

is a very different question than

"Can I cite an AI search result?"

This rule change is clear about the former. There's room to clarify the latter.

duskdozer 12 hours ago|||
I don't see how an AI response would have any value. If you aren't familiar enough with the material to make a statement yourself, you aren't familiar enough to validate the response. If you use it as a pointer to verifiable sources, you should instead post the sources themselves and why you think they're relevant.
altairprime 15 hours ago|||
> This sounds like you're chastising me

Nope. (For an example of that, see any comment I posted to this discussion that starts with “Please don’t”.)

> "Can I cite an AI search result?"

Ah. An AI response is neither a primary source nor a reference source, and HN tends to strongly prefer those. Linking to a Google /search?q= isn’t any more welcome here than linking to an AI /search?q=; neither is stable over time, and both may vary wildly based on algorithmic changes. Wikipedia, as a curated reference source, is not classifiable as equivalent to either a search engine or an AI response at this time, and evidences much stronger stability, striving towards that of a classical print encyclopedia (but never reaching it).

Perhaps someday Britannica will release an AI that only provides fully factual replies that are derived in whole from the Britannica encyclopedia, but as of today, AI has not demonstrated the general veracity and reliability that even Wikipedia, the very worst of possible reference sources, has met over the years.

(Note that an Ask-A-Librarian response would be more credible than a Wikipedia page and much more credible than today’s AI attempts to replace that function; but linking such a response would still be quite problematic, not the least of which because the primary value of that response is either directly quotable and/or is citations that should be incorporated into the post itself. But if that veracity differential changes someday once the AI hallucination problem is solved at the underlying level rather than in post-filters, I’m happy to revise my position.)

verdverm 17 hours ago|||
I would still say no; there is something about finding the words for yourself, even if they aren't as elegant as an Ai can make them. It's fine; most humans prefer imperfection.

The point is we don't want to read Ai summaries, we can make one ourselves if we want. Personally, with certainty, I don't want to read one from Perplexity on the basis that they do the Ai for Trump Social. (reverse-kyc if you are not aware)

For some inspiration on why this is meaningful: https://www.npr.org/2025/07/18/g-s1177-78041/what-to-do-when...

tzs 16 hours ago||
> I would still say no; there is something about finding the words for yourself, even if they aren't as elegant as an Ai can make them. It's fine; most humans prefer imperfection.

In this instance the only reason I considered using the AI summary was that there was no Wikipedia article about the case (which surprised me as it is one of the foundational cases in Commerce Clause law...although maybe all the points in it are covered in later cases that do get their own Wikipedia articles?).

Normally I'd just copy Wikipedia's summary into my comment and link to Wikipedia and to the decision itself for people that want the details.

> The point is we don't want to read Ai summaries, we can make one ourselves if we want.

How would you know if you wanted one? Someone mentioned they would like to see a case on this subject but they didn't think it would ever happen. I knew of a case on the subject, found the reference, and posted the link. At that point we are already on a tangent from what most of the thread is about and from what most people reading it care about.

The point of the summary would be to let you know if the case might actually be relevant to anything you cared about in the thread. (The answer would probably be "no" for 95+% of the people reading the comment).

verdverm 13 hours ago||
I have some peer comments that temper and add color to my opinions on this

All of this Ai stuff is new for society and we have a lot to work through. Here on HN, we want to err to the side of keeping as much humanity as possible. It's good to have a place like that, for fresh air and stretching our minds differently and regularly as Ai becomes more ubiquitous in our lives.

ex: https://news.ycombinator.com/item?id=47344064

all: https://news.ycombinator.com/threads?id=verdverm

rzmmm 17 hours ago||
Perplexity supports sharing a URL to the thread. I think it's quite natural to link AI summaries that way.
davorak 17 hours ago|||
I do not want to see links to AI summaries with AIs the way they are now. None I have used so far can cite sources correctly or verify their information. If the poster is not doing that verification, then it is pushing that work onto the readers. If the poster did do the verification, then posting that verification is better than the AI summary.
lossyalgo 17 hours ago||||
How long do those links exist though? Until the author deletes it?
ASalazarMX 16 hours ago|||
> I think it's quite natural to link AI summaries like that.

I think you misspelled "convenient". Against the small effort it takes one person to share generated text, one has to weigh the time of who knows how many humans who will read it.

If an LLM wrote something about a subject you don't know, you're not qualified to judge how accurate it is, so don't post it. If you do know the subject, you can summarize it more succinctly yourself and save your readers many man-hours.

If LLMs evolve to the point where they don't hallucinate, lie, or write verbosely, they will likely be more welcome.

rzmmm 8 hours ago||
I'm a bit confused about these replies. The user was talking about posting AI summaries in HN comments. I suggested that posting a URL may be a better choice.
p0w3n3d 7 hours ago||
It's quite funny how native speakers can recognise the AI voice writing or speaking their tongue.

As a Polish man I am repulsed when I hear an AI-generated Polish voice in a commercial, but I can't see the problems in AI-generated English speech.

larodi 7 hours ago||
Given that the content of the text is of significant importance, the tone it is presented with would matter very little.
nurumaik 7 hours ago||
As a Russian I'm repulsed by both English and Russian slop the same way.
lionkor 4 hours ago||
If you feel the need to fix/edit your own comments with AI, keep in mind that this is not necessary at all. If someone can't figure out what you're saying, and doesn't care to try, they can run their LLM over it and have it summarize with emojis, bullet points, and slightly changed content. You don't need to do that for all of us.
Cthulhu_ 42 minutes ago||
> If someone can't figure out what you're saying, and don't care to try,

This puts the onus of comprehension on the reader, which I don't think is fair. If you can't get your point across in a way that is comprehensible, maybe don't post.

hrmtst93837 4 hours ago|||
One potential use case is individuals who cannot read or write English. They could use automatic translation to read HN and an LLM to translate their comments into English. One possibility would be to forbid such use.
layer8 3 hours ago|||
They wouldn’t know what is lost in translation. Automatic translation is often far from perfect, even more so when translating single comments without context. It’s a crutch when nothing else is available, but it’s not a good way to have a conversation.
hrmtst93837 3 hours ago||
It depends. You could include an entire comment thread along with the article in the context for an LLM. This would significantly improve translation quality.
lionkor 2 hours ago|||
DeepL and other services exist, and they at least aren't slop cannons.
speefers 4 hours ago||
[dead]
abtinf 20 hours ago||
Good. This helps establish it in the HN culture. That’s the purpose of guidelines.

99% of rule enforcement, both IRL and online, comes down to individuals accepting the culture.

Rules aren’t really for adversaries, they are for ordinary situations. Adversaries are dealt with differently.

loeg 18 hours ago||
I mostly agree, although we've seen big shifts in the culture towards rule-deviating norms over time. Look at the guidelines for ideological battles or throwaway accounts, for example. And, as always:

> Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.

gr8tyeah 19 hours ago|||
This is only meaningful if enough people read it and agree
Cthulhu_ 40 minutes ago|||
That's assuming community input / democracy, but especially online there's a good argument to be made for authoritarianism.
abtinf 19 hours ago||||
That’s true. Fortunately, by virtue of it being added to the guidelines, quite a few folks here are prepared to reply to obviously generated comments by simply citing and linking the rule. Just search for “shallow dismissal” to see many examples.

It will take time, but eventually everyone will know about it.

altairprime 18 hours ago|||
> quite a few folks here are prepared to reply to obviously generated comments by simply citing and linking the rule

Note that the guidelines do explicitly say not to post about guidelines violations in comments, and to email them instead. I know this isn’t a well-loved guideline in this modern era, but duly noted: those well-intended comments are themselves breaking the guidelines.

lokar 17 hours ago||
Are you referring to:

> Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.

If so, that seems different. If not, can you clarify?

altairprime 17 hours ago||
That one, yes. “Insinuations” is a less conditional form of “Accusations”, connected by the concept of “Claims”; they’re all synonymous from a general perspective:

- I insinuate that you are a bot (often shortened to “Is this a bot?”)

- I claim that you are a bot. (often shortened to “This is a bot.”)

- I accuse you of being a bot. (often shortened to “Are you a bot?”)

The part where I’m interpreting it to include accusations of bottery and slop is “and the like. It”: the first clause, ‘the like’, refers to the generic category of accusations against posted comments, which historically meant the listed examples but is also defined to include others not listed, such as today’s popular accusations of bot or AI; the second clause, ‘It’, refers to all insinuation-class content. Without the list of examples, this reads:

“Please don’t post insinuations. It degrades discussion.”

Yep, this is true. Accusations, insinuations, claims of bot or AI or astroturf: they all wreck discussions, and I end up having to email the mods to deal with them. A lot of people use the rhetorical device of Discredit The Opposition by invoking this sort of thing, and while that’s less prevalent in ‘reads like AI’ insinuations, they still degrade the site.

Now AI-assisted writing is a violation of the site guidelines, and even before it was, posting AI-assisted writing was a clear ‘abuse’ of the community’s expectation of unassisted human discussion. Aside from expectations, Internet history tells me that ‘violating the guidelines’ is the phrase formerly known as ‘abuse of service’, by which I interpret the above reference to abuse to include breaking the guideline about posting accusations.

The guidelines are not a legal contract or program code, and perhaps this one is clunky enough that it needs to be reworded slightly; hence my intent, once the flames die down here, to let the mods know about the confusion. As I’m not a mod, this is my interpretation alone; you might have to email the mods and ask them to reply here if you want a formal statement on the matter, given how many comments this thread got in a couple of hours.

ps. On ’and is usually mistaken’: I’m not a mod, so I can’t judge how often accusations of AI/bot are mistaken, but I’m also an old human who learned em-dashes in composition class, so I tend to view the modern pitchfork mobs out to get anyone who can compose English as being less accurate in their judgments than they believe they are.

rendleflag 17 hours ago||||
What constitutes “AI edited”? If I throw a block of text into an AI to see if it makes sense (say, a response to a post) and fold the suggestions in, is that “AI edited”?
bigfishrunning 17 hours ago|||
Yes. That's what the rule is about.
yellowapple 15 hours ago||
Then that's a dumb rule. God forbid someone wants to auto-correct their own grammar in a comment before posting it.
duskdozer 12 hours ago|||
If you look at what you wrote and can't identify what rules you've broken, how are you able to validate that the AI output doesn't change the meaning of what you wrote?
yellowapple 12 hours ago||
Knowing whether or not the AI changed the meaning of what you wrote is not reliant on knowing which specific rules you broke. It's only reliant on you actually reading what the AI spat out and deciding “yes, this is what I meant” or “no, this is not what I meant”.

Unless you're arguing that the rule violations are something the author intends to be part of the meaning of what one wrote?

duskdozer 11 hours ago||
>Knowing whether or not the AI changed the meaning of what you wrote is not reliant on knowing which specific rules you broke. It's only reliant on you actually reading what the AI spat out and deciding “yes, this is what I meant” or “no, this is not what I meant”.

That's fair.

>Unless you're arguing that the rule violations are something the author intends to be part of the meaning of what one wrote?

I think what I wanted to get at is more like this:

1. I think that they may be part of the meaning

2. I think that people would be primed to accept changes even if they change the meaning

3. I suspected that it would always correct something and wouldn't just say LGTM even if the input was fine

To check, and at the risk of this being hypocritical, I asked for a grammar correction on part of your post that I thought had no mistakes, and both in context and isolation, it corrected "spat out" to "produced." Now, this isn't a huge deal, but it is a loss of the connotation of "spat out," which is the phrasing you chose.

I think grammatical errors are low-cost, and changes in meaning and intent are high-cost, so with 2. above, running it through an LLM risks more loss than it gains.

bigfishrunning 13 hours ago|||
You're absolutely right! It's not the people correcting their grammar who are the motivation for this rule; it's the people abusing these tools and ruining every online discussion with cookie-cutter comments.

In all seriousness, if you use some tool to make sure you're using the right "there", no one will mind. Just don't generate another boring, predictable comment and everything will be OK.

ASalazarMX 17 hours ago||||
Um, why would you do that instead of waiting for someone more knowledgeable to reply, and learn from? Replies are not mandatory, and experts/insiders participating is one of the best parts of the human Internet. Let them shine.
rendleflag 15 hours ago|||
It can catch things that I might miss or that might be misinterpreted. I sometimes miss simple things, like repeated words, that an AI can point out. Is a spell checker considered "AI"? Is Grammarly? Okay, maybe Grammarly from 5 years ago as opposed to today? If I'm typing on my phone and it pops up the next suggested word, is that AI edited?

And no, I don't have to reply to a post, but when I think it's a bad policy, should I just accept it without discussion? And who determines the "experts/insiders" and which voices should be allowed?

I_dream_of_Geni 13 hours ago||
Yes, these are MY questions and feelings too. In the past, if I just HINTED at asking these kinds of questions, I was downvoted into oblivion (on other accounts; I have to say THAT specifically because some people here dive into my account and get super anal about my age, my previous comments, my moniker, ad nauseam).
nobody9999 12 hours ago|||
>Um, why would you do that instead of waiting for someone more knowledgeable to reply, and learn from? Replies are not mandatory, and experts/insiders participating is one of the best parts of the human Internet. Let them shine.

As Isaac Asimov pointed out[0]:

“Anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'”

This thread runs through many cultures and isn't just a problem on the Internet, although the Internet certainly has accelerated/worsened the problem. And it has created a distrust of experts which (as has been obvious for a long time) has made us, as a whole, dumber and less informed.

I recommend The Death of Expertise[1] by Tom Nichols for a sane and reasonable treatment of this issue. If books aren't your thing, Nichols did a book talk[2] which lays out the main points he makes in the book. During that talk, he also gives the best definition of disinformation I've heard yet.

[0] https://www.goodreads.com/quotes/84250-anti-intellectualism-...

[1] https://en.wikipedia.org/wiki/The_Death_of_Expertise

[2] https://www.c-span.org/program/book-tv/the-death-of-expertis...

rendleflag 2 hours ago||
Again, the question is: who blesses the expert? There’s a difference between having a voice and having your voice taken seriously.

If someone posts a link about a new laptop, who should respond? I am not an expert on the current laptop market, but I have opinions about it. Maybe my English is not the best, so I run it through an AI to clean up ambiguities or wrong wording. Maybe I say “I like to take my laptop from behind” when I meant “I lift my laptop from the back”. An AI could point out this type of error.

bigiain 18 hours ago|||
Sadly, I suspect the rate of generation of AI "everyones" vastly exceeds the community's capacity to teach culture.
bhhaskin 19 hours ago||||
Nah, they are pretty good at banning users that don't follow the guidelines.
abtinf 19 hours ago|||
Yes, and it’s not like they just insta-ban every infraction.

I’ve broken the guidelines on this site before. The mods reply and say “hey, stop doing that, here is the guideline”. I stopped doing it. Life continues.

altairprime 19 hours ago||
(They do react differently if you show a pattern of disregard rather than a one-time event; ‘dang before’ might pull up some of those in a search.)
jbaber 19 hours ago|||
One of the virtues of HN is polite prodding when the rules are broken.
Apofis 17 hours ago||||
When creating an account, there should be a short screen with the salient points from the guidelines to follow.
gus_massa 16 hours ago|||
This https://news.ycombinator.com/newswelcome.html
wombatpm 16 hours ago|||
That will just prompt someone to create a HN account creation agent and post it to Moltbook.
VoodooJuJu 17 hours ago|||
[dead]
wombatpm 16 hours ago||
This discussion reminds me of the Paradigms of Power featured in Adiamante by L.E. Modesitt Jr., about consensus, power, morality, and society. It’s a good read.
schopra909 18 hours ago||
Honest question: why were folks posting AI-generated comments in the first place? There's such high inertia to commenting. I only comment when I have something to contribute OR find something incredibly interesting.

So I'm just baffled why anyone was using AI to generate comments. Like, what was the incentive driving the behavior?

throw10920 14 hours ago||
In addition to "Internet points" mentioned above - influence operations, both from nation states (e.g. the PRC 50 Cent Party, and probably the dozen most powerful nations in general), and from gray/black-market marketing companies.

Influence is valuable, and HN is a place that people who are aware of it trust highly.

(AI generation of random comments helps build "trustworthy" accounts that can then be activated when a relevant issue comes up)

[1] https://en.wikipedia.org/wiki/50_Cent_Party

ngruhn 10 hours ago||
Ok, those are probably not deterred by guidelines though.
throw10920 2 hours ago||
They absolutely are. You ever done any work fighting spam? It's all about making it hard and expensive enough for spam to land that it's no longer economically viable; you don't and can't actually stop all spam. Same thing here.

Sure, the bad actors don't particularly care for the guidelines - until their accounts start losing karma and getting dead'd/banned. Then they do, and that still materially improves the site.

komali2 14 hours ago|||
One trend I noticed here and, annoyingly, in my co-op, is that people will take a really dense and complex topic that's either currently engaged in deep conversation with multiple people or ripe for it, and then post a link to a Chatgpt conversation with a tag like "I didn't have time to get my thoughts together but here's a Chatgpt overview/some suggested solutions!" For me that's the equivalent of "I googled that for you," aka extremely rude.

Thanks, if I wanted Chatgpt's middle-of-the-bellcurve ass response I would have put the five seconds of effort in myself to type the question into its input field.

nunez 16 hours ago|||
Most comments on here are really well-written. I can imagine someone for whom English is a second language (or whose first language it is but who isn't as good at writing as they'd like to be) using an LLM to "keep up." Of course, this works until they decide to post something without those tools.
drtgh 15 hours ago||
Although I'm unsure about their purpose, I am fairly certain it is not an English as a second language matter.
RevEng 13 hours ago||
Several people at my work do use LLMs for this in code, commit messages, and even on Slack. It may not be everyone or even a majority but it is something that some people legitimately do.

While many here are saying "who cares about your spelling and grammar," they have not been the people whose poor English gets them flagged as being somehow less intelligent or credible. Half the problem with LLMs is that they speak eloquently and we use that as a signal of someone's intelligence and trustworthiness. For someone who is otherwise intelligent but doesn't know English well this can be a major setback.

deckar01 17 hours ago|||
Reputation farming -> upvote rings -> black market promotion
micromacrofoot 18 hours ago|||
Same as always: being right about something
patrakov 13 hours ago|||
On HN, I sometimes used AI to change the tone of my comments - e.g., to add sarcasm or extra-polished corporate-speak for comical effect. OK, now I won't.
xxs 8 hours ago||
If you can't do the sarcasm yourself (and be witty enough), it's just not fun or improved in any way. Use of corporate speak is sarcasm in its own right, of course - but it only makes sense if it's something you are exposed to (and people can relate to), instead of being fake.

Also, if you have to mark the sarcasm, then it's proper bad.

apprentice7 17 hours ago||
Internet points.
mike741 16 hours ago||
which can then translate to real-world money points
Cider9986 14 hours ago||
How would karma on HN lead to this?
mike741 13 hours ago||
You need a minimum threshold of karma in order to downvote others on HN. Additionally, accounts with more well received activity are harder to identify as shills. That's why there are black markets where social media accounts are bought and sold and the price is typically proportional to the account's karma.
kittikitti 1 hour ago||
An important distinction I feel is often left out of the conversation about regulating AI-generated content is the psychological effect of negative versus positive consequences or reinforcement.

I think we are overwhelmingly using negative reinforcement for AI-generated content, where there are consequences for engaging in this behavior. Positive reinforcement, on the other hand, would encourage authenticity and more human content. The reality of the situation is that AI-generated content won't go away, and it's become a game of who can hide their artificial content the best. Thus, I believe that positive reinforcement is the solution.

I think we must instead encourage human created content instead of policing AI generation. There are so many rules to follow already that by the time I create the content, I've gone through enough if/then logic that it feels like AI anyway.

Supermancho 14 hours ago||
I use AI for the elements I feel are weak or unclear in the transcription. Sometimes I copy-paste a paragraph into ChatGPT or whatever, to ensure my (aging) thoughts are being communicated in a crystal clear manner. I cannot always point out why I think they are unclear or jumbled.

I don't feel this is an imposition on others. I think it's the opposite. It enhances signal by reducing nitpicking, spelling/grammar errors that might muddle intent, and reminds me of proper sentence structure.

Many of us are guilty of run-ons, fragments, and overly large blocks of text[1] because it's closer to how people often converse verbally. Posts on the internet are not casual conversation between humans. They are exchanges of ideas.

[1] This is a classic example where I had to go back and edit it to ensure it was readable. As you do self-review with any commit ^^

Springtime 13 hours ago||
I get the sense the point of the HN rule is to preserve unique human expression, regardless of where someone's communication skills are at a given point. I periodically see articles on HN with stale turns of phrase and signs of poor LLM use (which become distracting while reading), and the author then sometimes mentions in the HN comments that they used an LLM to 'help' with their post based on some list of points they wanted to communicate. When it's relied on too heavily like that, it smothers the author's own voice.

If an opinion/idea is being communicated in the voice of another, then something unique to that user has been lost. If I had the germ of a premise, told someone else about it, found their expression of it clearer than mine, and then copied how they'd expressed it, I think I'd at least be crediting them. Otherwise our own growth in self-editing and clarity will just atrophy, and the internet will become a soup of homogenized ways of expressing things.

isodev 13 hours ago|||
Your “unclear or jumbled” but authentic comment is always better than “feels like chewing sand”, normalised and calibrated LLM output.
duskdozer 12 hours ago|||
I just wrote a similar comment elsewhere, but I would much rather just read your jumbled or unclear writing than whatever's output from an LLM. At least I know you meant at one point the words that are written. It's not a grammar test in English class or an academic paper; if you use a few fragments or run-ons, it's not a big deal.
Nevermark 11 hours ago|||
There is a tradeoff for sure.

But, even though I think slippery slope arguments should be used very sparingly, there is a good case for one here.

Also, learning how to communicate better, and learning to listen better, is a real value-add of this site. That would get washed out if both writing, and therefore reading, were spoon-fed by models, which are also washing away individuality of expression and nuance of views.

kindkang2024 13 hours ago|||
> Sometimes I copy-paste a paragraph into ChatGPT or whatever, to ensure my (aging) thoughts are being communicated in a crystal clear manner.

Same here. And sometimes I get downvoted and treated as an LLM, in the name of valuing the human.

To me, what matters is the will behind the words. Ideas and words themselves are cheap (this becomes clearer every day in the AI age) — they're almost nothing until they're executed and actually help someone.

> "The Dao can be told, but what is told is not the eternal Dao. The Name can be named, but what is named is not the true Name." — Laozi, Dao De Jing

Like the code we write - it's dead text on a screen until it's running. And what we really care about is the effect of running it, which is exactly the reason, the will, behind why we write the code in the first place.

Murfalo 13 hours ago||
I am choosing to believe this is satire. A+
smusamashah 10 hours ago||
Not satire; this user was the reason I submitted a post asking for a policy, only to find out it was already on the front page today.
kindkang2024 9 hours ago||
> this user was the reason

Feeling sad I am 'the reason'. But that's ok.

> asking for a policy

It is always the same sad story. Someone learns a new name, gets trapped inside it, and tries to escalate conflict. I will not call that an 'open mind'.

The deeper reason is that there is no kindness — many really don't care about others who seem alien to them. They just hide that behind all kinds of names.

smusamashah 7 hours ago||
You don't realize that "talk to the hand" is an insult, which is exactly what you are doing.
nobody9999 12 hours ago|||
>I use AI for the elements I feel are weak or unclear in the transcription. Sometimes I copy-paste a paragraph into ChatGPT or whatever, to ensure my (aging) thoughts are being communicated in a crystal clear manner. I cannot always point out why I think they are unclear or jumbled.

Your point is well taken.[0]

Personally, I take a different approach. I use a 5 minute delay for comments on HN so I can look at the post after I submit it, but before anyone else sees it.

This gives me the opportunity to read over my comment and the comment to which I've replied to make sure my prose is decent, my point is clear and any typos or other inaccuracies can be corrected.

I don't use LLMs as an editor as I've found that I'm probably a better editor than the average internet user, which is what LLMs represent.

Perhaps that's arrogant of me, but I'm much more comfortable standing by what I write when it's me writing and editing.

[0] Please note that this is most certainly not a swipe at you or anyone else who uses LLMs as an editor. I just have a different perspective which pushes me in a different direction.

tigen 13 hours ago||
Do we really need to see your every half-baked thought on here though? It's okay not to post or to set a high bar for yourself.

Frankly, even without AI, most communities degrade as they become more popular and the stream of comments becomes overwhelming. There are over 1000 comments on this story and, let's be honest, most of them aren't adding value. A great many are repeats of other posts, so the posters didn't read other people's comments either.

The solutions seem to boil down to making the karma system more draconian. Instead of focusing on downvoting garbage and upvoting gems, the slush of "mid" posts has to be dealt with somehow. Not sure if rate-limiting accounts would make a noticeable difference. Ironically, perhaps AI is also a solution to the issue, since it can, for example, read all the other comments and could potentially assign some value score in the overall context.

I probably wouldn't post this either, but I'm hitting reply because of the topic at hand...

spzzz 7 hours ago||
Me not native speeker. AI help me too get my point front much more cleanly. It hard not look like dummy.

Im of course exaggerating, but it is so easy just to run the text through an AI to make it sound "better" without changing what im trying to express.

---

I’m not a native speaker, so AI helps me get my point across more clearly. It’s hard not to come across like a dummy otherwise.

Of course I’m exaggerating, but it’s really easy to run the text through AI to make it sound better without changing what I’m trying to say.

GrinningFool 4 hours ago||
The removal of the quotes around "better" discards an entire layer of meaning.

It also loses the voice that was present in the 'before' version, typos/misuses and all.

spzzz 1 hour ago||
I see your point, and I agree the result can feel impersonal and stiff. But I'd say the overall improvement is more important than one possible deterioration. The quotes are easy to put back if I thought they were important (they weren't in this case).

Please reply in Swedish only. Remember to not use any tool to translate to avoid subtle layers of meaning being removed. It's easy! /Native speaker ;)

wiether 6 hours ago||
As a non native speaker, seeing how many natives keep making the "then/than" mistake, I'm comfortable looking dumb.

I only use AI on critical communications, to make sure that the meaning of my message is the right one.

Otherwise I'm fine making mistakes and I encourage people to correct me.

SoKamil 20 hours ago|
Don’t be afraid to make grammar mistakes or misspell stuff. Others will understand. You’re a human after all. It’s okay to make mistakes and to feel uncomfortable about it.
vesrah 18 hours ago||
This is going to sound nuts, but I've noticed comments lately with multiple misspellings that seem intentional - it's almost like they're trying to signal that they're human, rather than LLM written. I've started to think it makes them even more likely to be LLM written than not.
sph 10 hours ago||
Main-fucking-stream LLMs also do not swear, which is nowadays a signal of humanity.
alemwjsl 5 hours ago||
Just tried it:

$ claude

> say fuck

● fuck

Aldipower 20 hours ago|||
Unfortunately a lot of others do not understand (in the double sense).
userbinator 14 hours ago|||
I recently had to tell the same thing to a coworker who ran his text through ChatGPT, changing the meaning subtly (in the wrong direction) and the tone completely. I'd rather read his honest opinion in ESL-grade English than something an LLM "polished".
lifthrasiir 20 hours ago|||
Others will understand, but won't regard that as worthy. That's a difference.
rafaelmn 19 hours ago|||
I don't get where this class/status/worthiness ties into HN comments?

I get decent feedback most of the time, and I read interesting stuff; it's the easiest way I've found to stay in the loop in our industry. What are you guys commenting for?

lifthrasiir 8 hours ago||
Worthy of continuing the discourse, that is. Everyone claims they don't discriminate between badly written English and well-written English, but only because they haven't actually encountered such text. There surely exists a threshold for "badness", and an outright ban on LLMs means you aren't even given a chance to lower that badness. That is discrimination, whether you like it or not.
layer8 3 hours ago||
Nobody will notice if you use LLMs as long as it doesn’t sound like an LLM. But sounding like an LLM is as “bad” as badly written English, so you’ll get looked down upon either way in that case.

It’s not without reason that bad English is taken as a signifier, and for similar reasons LLM-speak is taken as a signifier as well.

SoKamil 19 hours ago|||
And that’s their problem.
tayo42 20 hours ago|||
I make mistakes pretty often thanks to auto complete on my phone and carelessness. I've had threads derail and been attacked by people who freak out over grammar.
pants2 19 hours ago||
This itself is against the rules:

> Please respond to the strongest plausible interpretation of what someone says

> Please don't post shallow dismissals

Personally I've posted comments with glaring typos that everyone thankfully ignores. I only notice much later when I re-read it.

tayo42 19 hours ago||
Oh interesting. Good to know for the next time the they're/their/there police shows up
altairprime 10 hours ago||
Definitely worth emailing the mods a link to the derail — one of their tools that they might use is to autocollapse threads that are too far offtopic for the post.
tonymet 19 hours ago||
Chads never backspace.