Second, I have to confess that I have committed this sin a couple of times now, but I came to realize that it is good neither for me nor for the HN community. Although I used AI just for rephrasing, I have decided never to do it again; I'd rather write my own words, mistakes and all, than post generated words based on my thoughts.
It happened to me once and it hit me like a nuke; I felt truly embarrassed. A couple of months ago I wrote a comment (https://news.ycombinator.com/item?id=42264786), then asked ChatGPT to rephrase it, and then mistakenly pasted both versions, the original above and the generated one below, and hit submit. Shortly after, a user came along, read my comment, and replied with an embarrassing reply, and honestly, I deserved it. From that moment I realized how quickly things can get messed up when you rely heavily on AI.
Earlier today I remembered that there was a Supreme Court case I'd heard about 35 years ago that was relevant to an ongoing HN discussion, but I could not remember the name of the case, nor could I find it by Googling (Google kept finding later cases involving similar issues that were not relevant to what I was looking for).
I asked Perplexity, and given my recollection and roughly when I heard about the case, it suggested a candidate and gave a summary. The summary matched my recollection, and a quick look at the decision itself verified it had found the right case and done a good job summarizing it--probably better than I would have done.
I posted a cite to the case and a link to the decision. I normally would have also linked to the Wikipedia article on the case, since those usually have a good summary, but there was no Wikipedia article for this one.
I thought of pasting in Perplexity's summary, saying it was from Perplexity but that I had checked it and it was a good summary.
Would that be OK or would that count as an AI written comment?
I have also considered, but not yet actually tried, running some of my comments through an AI for suggested improvements. I've noticed I have a tendency to do three things that I probably should do less of:
1. Run-on sentences. (Maybe that's why, of all the people in the 11th-100th spots on the karma list, I have the highest words-to-karma ratio, at 42+ words per karma point [1].)
2. Use too many commas.
3. Write "server" when I mean "serve". I think I add "r" to some other words ending in "e" too.
I was thinking those would be something an AI might be good at catching and suggesting minimal fixes for.
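For what it's worth, here is a minimal sketch of the kind of thing I mean, assuming the OpenAI Python client (the model name, the prompt wording, and the suggest_fixes helper are all illustrative, not a recommendation):

  # A toy copy-editing pass: ask the model for minimal fixes only.
  # Assumes the OpenAI Python client; model and prompt are illustrative.
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  PROMPT = (
      "You are a copy editor. Suggest only minimal fixes for typos, "
      "run-on sentences, and excess commas. Do not rephrase or change "
      "meaning. If the text needs no fixes, reply exactly: LGTM."
  )

  def suggest_fixes(comment: str) -> str:
      response = client.chat.completions.create(
          model="gpt-4o-mini",
          messages=[
              {"role": "system", "content": PROMPT},
              {"role": "user", "content": comment},
          ],
      )
      return response.choices[0].message.content

  print(suggest_fixes("I'll configure the server to server static files."))

Pinning the prompt to minimal fixes, and giving it an explicit LGTM escape hatch, seems to be the important part; left unconstrained, these models tend to "correct" something no matter what.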
If you have domain familiarity with it, have some personal insight to offer as a lens, or care about the topic deeply enough to write a summary yourself, then go ahead! I almost never post about AI, given my loathing of generative ML, but I posted a critical summary in a recent “underlying shared structure” post because it was a truly exciting mathematical insight and the paper made that difficult for some people to see.
Please don’t use AI to reduce the distinctiveness of your writing style. Run-on sentences are how humans speak to each other. Excess commas are only excess by neurotypical standards. I’m learning French and I have already started to fuck up some English spelling because of it. None of that matters in the grand scheme of things. Just add -er suffix checks to your mental proofreading list and move on with being you.
What I do is copy the URLs for reference, and summarize the issue myself in as few sentences as possible. Anyone who wants to learn more can follow the reference.
Who cares about people with reading disabilities; let's just shift the burden onto the reader. My time is better spent managing my AIs.
Or onto the reader's AI, which can format or translate the text to make it easier for the reader to read.
Pasting a chatGPT response into a comment, and labeling it as such, feels the same to me.
It is more, not less, insulting than trying to pass an AI response off as your own.
> Would that be OK or would that count as an AI written comment?
The rule seems written to answer this directly.
Absolutely nobody cares what Perplexity has to say about the case - summary or otherwise. If you mention what the case is, I can ask claude myself if I’m interested.
Better yet, post a link to an authoritative source on the case (helpful but not required).
At minimum, verify your info via another source. The community deserves that much at least.
An AI-generated summary adds nothing positive and actually detracts from the conversation.
I looked at the decision itself sufficiently to see that it was the case I remembered and that my recollection of the facts and the decision was correct.
I just didn't include a summary because I didn't find a good one I could link to. Normally I'd write a brief one myself but I found that hard to do when Perplexity's summary was sitting right there in the next window and it was embarrassingly better than what I would have written.
I'm not asking or advocating for using AI as a copy editor.
The post I replied to asked about using Gemini as if it were Wikipedia; that is, saying "according to Gemini" when citing a fact where one might once have written "according to Wikipedia" or even "according to Google."
This is a forum people hang out in part-time. It's nobody's job to go spend an hour researching primary sources to post a comment. Shallow searches and citations are common and often helpful in pointing someone in the right direction. As AI becomes commonplace, a lot of that is being done with AI.
"Can I have AI write a reply for me?"
is a very different question than
"Can I cite an AI search result?"
This rule change is clear about the former. There's room to clarify the latter.
Nope. (For an example of that, see any comment I posted to this discussion that starts with “Please don’t”.)
> "Can I cite an AI search result?"
Ah. An AI response is neither a primary source nor a reference source, and HN tends to strongly prefer those. Linking to a Google /search?q= isn’t any more welcome here than linking to an AI /search?q=; neither is stable over time, and both may vary wildly based on algorithmic changes. Wikipedia, as a curated reference source, is not classifiable as equivalent to either a search engine or an AI response at this time, and evidences much stronger stability, striving towards that of a classical print encyclopedia (but never reaching it).
Perhaps someday Britannica will release an AI that only provides fully factual replies that are derived in whole from the Britannica encyclopedia, but as of today, AI has not demonstrated the general veracity and reliability that even Wikipedia, the very worst of possible reference sources, has met over the years.
(Note that an Ask-a-Librarian response would be more credible than a Wikipedia page and much more credible than today’s AI attempts to replace that function; but linking such a response would still be quite problematic, not least because the primary value of that response is either directly quotable and/or consists of citations that should be incorporated into the post itself. But if that veracity differential changes someday, once the AI hallucination problem is solved at the underlying level rather than in post-filters, I’m happy to revise my position.)
The point is we don't want to read AI summaries; we can make one ourselves if we want. Personally, with certainty, I don't want to read one from Perplexity, on the basis that they do the AI for Trump Social. (Reverse-KYC them, if you are not aware.)
For some inspiration on why this is meaningful: https://www.npr.org/2025/07/18/g-s1177-78041/what-to-do-when...
In this instance the only reason I considered using the AI summary was that there was no Wikipedia article about the case (which surprised me as it is one of the foundational cases in Commerce Clause law...although maybe all the points in it are covered in later cases that do get their own Wikipedia articles?).
Normally I'd just copy Wikipedia's summary into my comment and link to Wikipedia and to the decision itself for people who want the details.
> The point is we don't want to read AI summaries; we can make one ourselves if we want.
How would you know if you wanted one? Someone mentioned they would like to see a case on this subject but they didn't think it would ever happen. I knew of a case on the subject, found the reference, and posted the link. At that point we are already on a tangent from what most of the thread is about and from what most people reading it care about.
The point of the summary would be to let you know if the case might actually be relevant to anything you cared about in the thread. (The answer would probably be "no" for 95+% of the people reading the comment).
All of this AI stuff is new for society, and we have a lot to work through. Here on HN, we want to err on the side of keeping as much humanity as possible. It's good to have a place like that, for fresh air and for stretching our minds differently and regularly as AI becomes more ubiquitous in our lives.
I think you misspelled "convenient". Beyond the small effort it takes to share generated text, one has to consider the effort of who knows how many humans who will spend their time reading it.
If an LLM wrote something about a subject you don't know, you're not qualified to judge how accurate it is, so don't post it. If you do know the subject, you could summarize it more succinctly yourself and save your readers many man-hours.
If LLMs evolve to the point where they don't hallucinate, lie, or write verbosely, they will likely be more welcome.
As a Polish man, I am repulsed when I hear an AI-generated Polish voice in a commercial, yet somehow I can't see any problem with AI-generated English speech.
This puts the onus of being comprehensible on the reader, which I don't think is fair. If you can't get your point across in a way that is comprehensible, maybe don't post.
99% of rule enforcement, both IRL and online, comes down to individuals accepting the culture.
Rules aren’t really for adversaries, they are for ordinary situations. Adversaries are dealt with differently.
> Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.
It will take time, but eventually everyone will know about it.
Note that the guidelines do explicitly say not to post about guidelines violations in comments, and to email them instead. I know this isn’t a well-loved guideline in this modern era, but duly noted: those well-intended comments are themselves breaking the guidelines.
> Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.
If so, that seems different. If not, can you clarify?
- I insinuate that you are a bot. (often shortened to “Is this a bot?”)
- I claim that you are a bot. (often shortened to “This is a bot.”)
- I accuse you of being a bot. (often shortened to “Are you a bot?”)
The part I’m interpreting to include accusations of bottery and slop is “and the like. It”: the first clause, ‘the like’, refers to the generic category of accusations against posted comments, which historically meant the listed examples but is also defined to include others not listed, such as today’s popular accusations of bot or AI; the second clause, ‘It’, refers to all insinuation-class content. Without the list of examples, this reads:
‘Please don’t post insinuations. It degrades discussion.’
Yep, this is true. Accusations, insinuations, and claims of bot or AI or astroturfing all wreck discussions, and I end up having to email the mods to deal with them. A lot of people use the rhetorical device of Discredit The Opposition by invoking this sort of thing, and while that’s less prevalent in ‘reads like AI’ insinuations, they still degrade the site.
Posting AI-assisted writing is now a violation of the site guidelines, and even before it was, it was a clear ‘abuse’ of the community’s expectation of unassisted human discussion. Aside from expectations, I also recall from Internet history that ‘violating the guidelines’ is the phrase formerly known as ‘abuse of service’, by which I interpret the above reference to abuse to mean breaking the guideline about posting accusations.
The guidelines are not a legal contract or program code, and perhaps this one is clunky enough that it needs to be reworded slightly; hence my intent, once the flames die down here, to let the mods know about the confusion. As I’m not a mod, this is my interpretation alone; you might have to email the mods and ask them to reply here if you want a formal statement on the matter, given how many comments this thread got in a couple of hours.
P.S. On ‘and is usually mistaken’: I’m not a mod, so I can’t judge how often accusations of AI/bot are mistaken, but I’m also an old human who learned em-dashes in composition class, so I tend to view the modern pitchfork mobs out to get anyone who can compose English as being less accurate in their judgments than they believe they are.
Unless you're arguing that the rule violations are something the author intends to be part of the meaning of what one wrote?
That's fair.
>Unless you're arguing that the rule violations are something the author intends to be part of the meaning of what one wrote?
I think what I wanted to get at is more like this:
1. I think that they may be part of the meaning
2. I think that people would be primed to accept changes even if they change the meaning
3. I suspected that it would always correct something and wouldn't just say LGTM even if the input was fine
To check, and at the risk of this being hypocritical, I asked for a grammar correction on part of your post that I thought had no mistakes, and both in context and isolation, it corrected "spat out" to "produced." Now, this isn't a huge deal, but it is a loss of the connotation of "spat out," which is the phrasing you chose.
I think grammatical errors are low-cost, and changes in meaning and intent are high-cost, so with 2. above, running it through an LLM risks more loss than it gains.
In all seriousness, if you use some tool to make sure you're using the right "there", no one will mind. Just don't generate another boring, predictable comment and everything will be OK.
And no, I don't have to reply to a post, but when I think it's a bad policy, should I just accept it without discussion? And who determines the "experts/insiders" and which voices should be allowed?
As Isaac Asimov pointed out[0]:
“Anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'”
This thread runs through many cultures and isn't just a problem on the Internet, although the Internet certainly has accelerated/worsened the problem. And it has created a distrust of experts which (as has been obvious for a long time) has made us, as a whole, dumber and less informed.
I recommend The Death of Expertise[1] by Tom Nichols for a sane and reasonable treatment of this issue. If books aren't your thing, Nichols did a book talk[2] which lays out the main points he makes in the book. During that talk, he also gives the best definition of disinformation I've heard yet.
[0] https://www.goodreads.com/quotes/84250-anti-intellectualism-...
[1] https://en.wikipedia.org/wiki/The_Death_of_Expertise
[2] https://www.c-span.org/program/book-tv/the-death-of-expertis...
If someone posts a link about a new laptop, who should respond? I am not an expert on the current laptop market, but I have opinions about it. Maybe my English is not the best, so I run my comment through an AI to clean up ambiguities or wrong wording. Maybe I say “I like to take my laptop from behind” when I meant “I lift my laptop from the back”. An AI could point out this type of error.
I’ve broken the guidelines on this site before. The mods reply and say “hey, stop doing that, here is the guideline”. I stopped doing it. Life continues.
So I'm just baffled why anyone was using AI to generate comments. Like, what was the incentive driving the behavior?
Influence is valuable, and HN is a place that people who are aware of it trust highly.
(AI generation of random comments helps build "trustworthy" accounts that can then be activated when a relevant issue comes up)
Sure, the bad actors don't particularly care for the guidelines - until their accounts start losing karma and getting dead'd/banned. Then they do, and that still materially improves the site.
Thanks, but if I wanted ChatGPT's middle-of-the-bell-curve-ass response, I would have put in the five seconds of effort myself to type the question into its input field.
While many here are saying "who cares about your spelling and grammar," they have not been the people whose poor English gets them flagged as being somehow less intelligent or credible. Half the problem with LLMs is that they speak eloquently and we use that as a signal of someone's intelligence and trustworthiness. For someone who is otherwise intelligent but doesn't know English well this can be a major setback.
Also, if you have to mark the sarcasm, then it's proper bad.
I think we are overwhelmingly relying on negative reinforcement for AI-generated content, where there are consequences for engaging in the behavior. Positive reinforcement, on the other hand, would encourage authenticity and more human content. The reality is that AI-generated content won't go away, and it has become a game of who can hide their artificial content best. Thus, I believe positive reinforcement is the solution.
I think we should encourage human-created content rather than police AI generation. There are so many rules to follow already that by the time I create the content, I've gone through enough if/then logic that it feels like AI anyway.
I don't feel this is an imposition on others. I think it's the opposite. It enhances signal by reducing nitpicking, spelling/grammar errors that might muddle intent, and reminds me of proper sentence structure.
Many of us are guilty of run-ons, fragments, and overly large blocks of text[1], because that's closer to how people often converse verbally. But posts on the internet are not casual conversation between humans. They are exchanges of ideas.
[1] This comment is a classic example: I had to go back and edit it to ensure it was readable. As you would self-review any commit ^^
If an opinion or idea is communicated in the voice of another, then something unique to that user has been lost. If I had the germ of a premise and told someone else about it, and I found their thoughts and their way of expressing it clearer, and then copied how they'd expressed it, I think I'd at least be crediting them. Otherwise our own growth in self-editing and clarity will simply atrophy, and the internet will become a soup of homogenized ways of expressing things.
But, even though I think slippery slope arguments should be used very sparingly, there is a good case for one here.
Also, learning to communicate better, and learning to listen better, is a real value-add of this site, one that would get washed out if both writing, and therefore reading, were spoon-fed by models, which are also washing away individuality of expression and nuance of views.
Same here. And sometimes I got downvoted and treated as an LLM, in the name of valuing the human.
To me, what matters is the will behind the words. Ideas and words themselves are cheap (this becomes clearer every day in the AI age); they're almost nothing until they're executed and actually help someone.
> "The Dao can be told, but what is told is not the eternal Dao. The Name can be named, but what is named is not the true Name." — Laozi, Dao De Jing
Like the code we write: it's dead text on a screen until it's running. And what we really care about is the running effect, which is exactly the reason, the will, behind why we wrote the code in the first place.
I feel sad that I am 'the reason'. But that's OK.
> asking for a policy
It is always the same sad story. Someone learns a new name, gets trapped inside it, and tries to escalate conflict. I would not call that an 'open mind'.
The deeper reason is that there is no kindness: many people really don't care about others who seem alien to them. They just hide that behind all kinds of names.
Your point is well taken.[0]
Personally, I take a different approach. I use a 5-minute delay for comments on HN so I can look at a post after I submit it, but before anyone else sees it.
This gives me the opportunity to read over my comment and the comment to which I've replied to make sure my prose is decent, my point is clear and any typos or other inaccuracies can be corrected.
I don't use LLMs as an editor as I've found that I'm probably a better editor than the average internet user, which is what LLMs represent.
Perhaps that's arrogant of me, but I'm much more comfortable standing by what I write when it's me writing and editing.
[0] Please note that this is most certainly not a swipe at you or anyone else who uses LLMs as an editor. I just have a different perspective which pushes me in a different direction.
Frankly, even without AI, most communities degrade as they become more popular and the stream of comments becomes overwhelming. There are over 1000 comments on this story, and let's be honest, most of them aren't adding value. A great many are repeats of other posts, which means the poster didn't read other people's comments either.
The solutions seem to boil down to making the karma system more draconian. Instead of focusing on downvoting garbage and upvoting gems, the slush of "mid" posts has to be dealt with somehow. I'm not sure rate-limiting accounts would make a noticeable difference. Ironically, perhaps AI is also a solution to the problem, since it could, for example, take all the other comments into account and assign each one a value score in the overall context (a rough sketch of the redundancy part follows below).
I probably wouldn't post this post either, but I'm hitting reply because of the topic at hand...
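To make that scoring idea concrete, here is a toy sketch and nothing more: score each comment by its similarity to earlier comments in the thread. Plain TF-IDF from scikit-learn stands in for a real model, and the 0.5 threshold and sample thread are made up for illustration:

  # Toy redundancy scoring: each comment's highest similarity to any
  # earlier comment in the thread. TF-IDF stands in for a real model.
  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.metrics.pairwise import cosine_similarity

  def redundancy_scores(comments: list[str]) -> list[float]:
      vectors = TfidfVectorizer(stop_words="english").fit_transform(comments)
      sims = cosine_similarity(vectors)
      # The first comment can't repeat anything, so it scores 0.
      return [0.0] + [float(max(sims[i, :i])) for i in range(1, len(comments))]

  thread = [
      "The new laptop has great battery life.",
      "Battery life on the new laptop is great.",
      "The keyboard layout is a regression.",
  ]
  for comment, score in zip(thread, redundancy_scores(thread)):
      flag = "  <- likely repeat" if score > 0.5 else ""
      print(f"{score:.2f} {comment}{flag}")

Even something this crude would flag the near-verbatim repeats; the hard part is the "mid" posts that restate an idea without reusing its words, which is presumably where an embedding or LLM pass would come in.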
Im of course exaggerating, but it is so easy just to run the text through an AI to make it sound "better" without changing what im trying to express.
---
I’m not a native speaker, so AI helps me get my point across more clearly. It’s hard not to come across like a dummy otherwise.
Of course I’m exaggerating, but it’s really easy to run the text through AI to make it sound better without changing what I’m trying to say.
It also loses the voice that was present in the 'before' version. Typos/misuses and all. More tangibly, an entire layer of meaning was dropped when it removed the quotes around 'better'.
Please reply in Swedish only. Remember not to use any tool to translate, so as to avoid subtle layers of meaning being removed. It's easy! /Native speaker ;)
I only use AI on critical communications, to make sure that the meaning of my message is the right one.
Otherwise I'm fine making mistakes and I encourage people to correct me.
$ claude
> say fuck
● fuck
I get decent feedback most of the time, and I read interesting stuff; it's the easiest way I've found to stay in the loop in our industry. What are you guys commenting for?
It’s not without reason that bad English is taken as a signifier, and for similar reasons LLM-speak is taken as a signifier as well.
> Please respond to the strongest plausible interpretation of what someone says
> Please don't post shallow dismissals
Personally, I've posted comments with glaring typos that everyone thankfully ignores. I only notice them much later when I re-read.