
Posted by usefulposter 22 hours ago

Don't post generated/AI-edited comments. HN is for conversation between humans (news.ycombinator.com)
3974 points | 1488 comments | page 3
yavor-atanasov 10 hours ago|
This thread made me think of education (as in schools). To paraphrase:

“Don’t post generated/AI-edited assignments. School is for conversation between humans”

AI can be a great tool for learning, but it can also pollute or completely hijack the medium for human interaction and learning.

Having HN flooded with AI-generated content would be sad, as I like reading it, but losing that same fight at schools would be detrimental.

chid 10 hours ago|
I haven't heard of any recent discussion on the impact of AI on schooling. I agree with you entirely but am curious to read any recent thinking on this.
eptcyka 10 hours ago||
It is horrendous - it seems that oral verification is required to test pupils' skills, and this does not scale. People not using LLMs to finish assignments are getting penalized with lower grades; people using LLMs to finish assignments learn nothing.
lucumo 6 hours ago||
Why would oral verification be needed? Hand-written answers on paper in a proctored classroom should still work fine. That was the way most verification worked when I was in school, and it's still the most common verification method around me.

Homework assignments are harder, but those were always a bit difficult for teachers. It's not like cheating was invented by Gen Z...

yavor-atanasov 3 hours ago||
Gen Z definitely didn't invent cheating, but LLMs brought a qualitative difference and a new scale. That changes the properties of the system.

During my university years, most courses had a good mixture of take-home assignments/projects and in-class exams. Yes, people could always cheat, either through plagiarism (usually easily caught) or, at the extreme, by getting someone else to do the work (which I have never personally seen).

Anecdotal data around me shows:

* outright paper/assignment generation via LLM

* using ChatGPT as a "professor" to proofread and polish coursework before submission (arguably a good use, but it depends on the personal effort)

* avoiding reading by asking ChatGPT for summaries

* using ChatGPT to help explain various concepts (this is a good example of using LLMs as a source for learning… accepting that occasionally they can lie)

In a small classroom where a good teacher-student interaction happens, I guess it’s easier to catch people cheating. But some universities (maybe most) have massive classes where a professor may never have an actual conversation with some students. That context makes cheating harder to detect.

I accept that my outlook on this may be a bit too bleak (hopefully), but saying it's business as usual is the other extreme.

lucumo 3 hours ago||
My college classes usually had one offline written test per quarter, and about half the classes had an assignment with them. I can see how those would be easier to cheat on now, though they were already hardly cheat-free. (Not just plagiarism, also free-riding on group assignments for example.) The written examinations carried the heaviest load precisely because of that.

Offline written tests solve the issue quite well. They scale well too. At least as far as assignments do.

People saying that oral examinations are the last bastion of cheat-free examinations are really over-stating the case.

> But some universities (maybe most) have massive classes where a professor may never have an actual conversation with some students.

Probably most yeah. At least it was my experience.

dang 20 hours ago||
The rule has been around for years, but only in case law, i.e. moderation comments (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...). What's new is that we promoted it to the guidelines.

Fortunately I found some things we could cut as well, so https://news.ycombinator.com/newsguidelines.html actually got shorter.

---

Edit: here are the bits I cut:

Videos of pratfalls or disasters, or cute animal pictures.

It's implicit in submitting something that you think it's important.

I hate cutting any of pg's original language, which to me is classic, but as an editor he himself is relentless, and all of those bits—while still rules—no longer reflect risks to the site. I don't think we have to worry about cute animal pictures taking over HN.

---

Edit 2: ok you guys, I hear you - I've cut a couple of the cuts and will put the text back when I get home later.

morpheuskafka 2 hours ago||
I'm curious: I just noticed there's no rule requiring comments to be in English, although I've never actually seen any other language used here. Since the new directive is to write as best you can rather than use AI either to translate or edit, does that imply that one should write either entirely in another language or in a mix of English and another language? (The latter is especially relevant, as many may either only know a technical term in one language, or know the terms in English but not the grammar to connect them.)

edit to add -- I completely agree with you that when one's English is "good enough," it's much better to read the original rather than an LLM's guess at how to polish it. It's just hard to define where that line is, especially for the poster themselves, who has no idea what a native speaker can figure out. Would some posts be removed because they are too difficult to make sense of? Or would they be allowed in their native language?

dang 37 minutes ago||
HN is an English-language site. That's one of the many things that's not in the explicit list but is a long-established rule: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que....

It's purely for pragmatic reasons. We love other languages and have great admiration for the many community members who participate here despite English not being their first language.

Wowfunhappy 20 hours ago|||
> Please don't complain that a submission is inappropriate. If a story is spam or off-topic, flag it.

> If you flag, please don't also comment that you did.

I don't understand why you cut these, they seem important! (I can understand the others, which feel either implied or too specific.)

dang 20 hours ago||
Of course they're important, but they're also implicitly encoded into the culture. Cutting something from the guidelines doesn't mean the rule is canceled. HN has countless rules that don't appear explicitly in https://news.ycombinator.com/newsguidelines.html.

I think I'm going to put that one back, though, because it's not a hill I want to die on and I know what arguing with dozens of people simultaneously feels like when you only have 10 minutes.

Wowfunhappy 19 hours ago|||
> Cutting something from the guidelines doesn't mean the rule is canceled.

Understood, but I feel like I see people breaking these ones frequently, so removing the explicit guideline feels to me like a bad idea.

dang 12 hours ago||
People break them whether they're in the list or not. But don't worry, we'll put that one back.
dredmorbius 11 minutes ago||
My experience with posted rules is that it's less about people following them preemptively than having an explicit reference to point to when they don't.

HN's long-standing policy has leaned toward fewer explicit rules, and looser rather than stricter interpretation. This particular one comes up often enough, though, that it's helpful to retain, IMO - thanks for restoring the cut.

I've long made a practice of linking to moderator comments regarding policies when calling out deviations (as I'm sure the mods are aware); others might find that helpful. I've found it generally reduces the personal-irritation element going both ways, helps avoid derailing threads, and serves as a refresher to me on what standards apply.

andai 20 hours ago|||
I seem to recall a rule about "don't downvote something because you disagree with it", but I can't find anything like that.

Not sure if that's really solvable with rules, though.

My experience with downvotes is that people mostly use them as an "I don't like this" button, which is a proxy for "I couldn't think of a counterargument so I don't want to look at it."

(I noted recently that downvotes and counterarguments appear to be mutually exclusive, which I found somewhat amusing.)

Whereas I will often upvote things I personally disagree with, if they are interesting or well reasoned. (This seems objectively better to me, of course, but maybe it's a personality thing.)

dang 19 hours ago||
Oh that one is a classic case of people 'remembering' a rule that never existed - there's a name for this illusion but I forget what it is.

See https://news.ycombinator.com/item?id=16131314 and https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que... for history...

chrisshroba 19 hours ago|||
> 'remembering' a rule that never existed

Probably the Mandela effect!

https://en.wikipedia.org/wiki/False_memory#Mandela_effect

Kye 18 hours ago|||
This was (maybe still is) part of "reddiquette." Like the guidelines and case law here, it often found its way into subreddit rules and comments from moderators.
dang 14 hours ago||
To me it's just like how, growing up in Canada, we all assumed we had Miranda rights because we watched American TV.
SegfaultSeagull 20 hours ago|||
> I don't think we have to worry about cute animal pictures taking over HN.

Challenge accepted.

dcminter 20 hours ago|||
The real challenge is to do it in a way that's intellectually stimulating. Mind you, The Economist just had an article about the monkey called Punch, so all things are possible...
dang 19 hours ago|||
The laws of unintended consequences and never posting overhastily. You think you know these things and then blam.
abtinf 20 hours ago|||
FWIW I think “Please don't complain that a submission is inappropriate. If a story is spam or off-topic, flag it.” is different from the others.

It’s an instruction for how to use the site. It’s helpful to have it in the guidelines for when the flag feature should be used. Without it, the flag link is much more ominous.

Maybe it could be consolidated with the flag-egregious-comments rule?

Edit to add: IMHO it is not at all obvious on this site that flagging stories is meant to be roughly the equivalent of downvoting comments (and that flagging comments doesn’t have a counterpart at the story level).

Kim_Bruning 20 hours ago|||
I'd be a wee bit cautious with the "AI edited" part of it, since that might exclude a number of people with disabilities or for whom English is a second (or third, or later) language.

My reading is that the intent is to have a human voice behind the text.

Monitor and see how it goes I guess!

dang 20 hours ago|||
I need to say something about this but it might have to be later as I have to run out the door shortly...

The short version is that we included it to protect users who don't realize how much damage they're doing to their reception here when they think "I'll just run this through ChatGPT to fix my grammar and spelling". I've seen many cases of people getting flamed for this and I don't want more vulnerable users—e.g. people worried about their English—to get punished for trying to improve their contributions. Certainly that would apply to disabled users as well, though for different reasons.

Here are some past cases of these interactions: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu....

Edit: uni_baconcat makes the point beautifully: https://news.ycombinator.com/item?id=47346032.

Most rules in https://news.ycombinator.com/newsguidelines.html have a lot of grey area, and how we apply them always involves judgment calls. The ones we explicitly list there are mostly so we have a basis for explaining to people the intended use of the site. HN has always been a spirit-of-the-law place, and—contrary to the "technically correct is the best correct" mentality that many of us share—we consciously resist the temptation to make them precise.

In other words yes, that bit needs to be applied cautiously and with care, and in this way it's similar to the other rules. Trying to get that caution and care right is something we work at every day.

lenocinor 40 minutes ago|||
I’m going to guess you’ve probably already thought about this, but just in case: is it worth adding a guideline about the guidelines being fuzzy and/or not being a comprehensive list? Or would that create more problems than it solves?
dang 26 minutes ago||
At such a general level, I think it would mostly go in one ear and out the other.

It's a bit different when specific cases come up because then there's a chance to talk about it, add clarifying comments, etc.

edanm 19 hours ago||||
That makes this more OK, IMO. I'm otherwise against "AI-edited" being part of the rules — it's very hard to draw the line (does asking an AI for synonyms of a word count?). AI editing is an especially valuable tool for non-native English speakers and the like.
Kim_Bruning 19 hours ago||||
I was close to one such case, and I really appreciate the care and caution you and Tom applied.
Teever 17 hours ago||||
I've thought about fine-tuning a model on the corpus of your HN posts and then offering a service that would allow the user to paste their message into a text box and the Dangified version of their comment would pop out in another box next to it.

I was thinking of calling this service "Dang It."

You say you want to hear posts in other people's voices, but I'm pretty sure that if I did this, the people who used it would find greater acceptance of their comments than if they just posted them as they originally wrote them.
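For what it's worth, the core of such a service would be tiny. A minimal, purely illustrative sketch, assuming an OpenAI-style fine-tuned model via the openai Python client; the model id, prompt, and function name are all made up:

    # Hypothetical "Dang It" rewriter. The fine-tuned model id below is invented;
    # only the client calls (OpenAI(), chat.completions.create) are real API surface.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def dangify(comment: str) -> str:
        """Rewrite a draft comment in a calmer, moderator-like register."""
        response = client.chat.completions.create(
            model="ft:gpt-4o-mini:example::dang-it",  # hypothetical fine-tune id
            messages=[
                {"role": "system",
                 "content": "Rewrite the user's comment to be curious, civil, and "
                            "substantive while preserving its meaning."},
                {"role": "user", "content": comment},
            ],
        )
        return response.choices[0].message.content
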

dang 14 hours ago||
I very much hope that's not true, and my guess (or desperate wish?) is that the community would pattern-match to it after a while.

One dynamic I don't think has yet been given its due: while AI is training on us, we're also all getting trained on it—that is, the hivemind's pattern-matching ability is also growing. We're heading up the escalation ladder in a pattern-matching race.

But that name is hilarious!

forevernoob 25 minutes ago||||
> Here are some past cases of these interactions: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu....

For me that link says:

> Error: Forbidden

> Your client does not have permission to get URL / from this server.

dang 8 minutes ago||
Sorry, I think there was a typo - does it work now?

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

BeetleB 19 hours ago|||
Anything I post here is always in my own voice - even when I use an LLM. 95% of the time, when grammar/spelling is fixed, it's because my brain lapsed while typing, not because I don't know the grammar well and am using an LLM to shape my voice.

I would wager that this use case is much more prevalent than ones where the LLM changed the comment significantly enough to change one's voice.

I never copy/paste from an LLM into HN. Everything is typed by myself (and I never "manually" copy LLM content). I don't have any automatic tools for inserting LLM content here.[1]

Always, always, always keep in mind that you don't notice these positive use cases, because they are not noticeable by design. So the problematic "clearly LLM" comments you see may well be a small minority of LLM-assisted comments. Don't punish the (majority) "good" folks to limit the few "bad" ones.

Lastly, I often wish we had a rule for not calling out others' comments as "AI slop" or the like.[2] It just leads to pointless debates on whether an LLM was used and distracts far more than the comment under question. I'm sure plenty of 100% human written comments have been labeled as LLM generated.

[1] The dictation one is a slight exception, and I use it only occasionally when health issues arise.

[2] Probably OK for submissions, but not comments.

gus_massa 19 hours ago||||
As a non-native speaker, using something like Google Translate is fine for me; it's literal enough to keep the author's voice. [1]

Also writing a draft in Google Docs and accepting most [2] of the corrections is fine. The browser fixes the orthography, but 30% of the time I forget to add the s to the verbs. For preposition, I roll a D20 and hope the best.

I'm not sure if these are expert systems, LLMs, or pigeonware.

But I don't like it when someone uses an LLM to rewrite the draft to make it more professional. It kills the personality of the author and may hallucinate details. It's also difficult to know how much of the post was written by the author and how much was autocompleted by the AI.

[1] Remember to check that the technical terms are correctly translated. It used to be bad, but it's quite good now.

[2] most, not all. Sometimes the corrections are wrong.

duskdozer 13 hours ago||
>For preposition, I roll a D20 and hope the best.

This makes me think of something: are nonnative English speakers tempted to use LLMs to correct grammar because mistakes like this actually make the writing unintelligible in their native language? For example, if I swap out the "For" in this sentence for any (?) other preposition, it's still comprehensible. (At|Of|In|By|To|On|With) example, ...

gus_massa 4 hours ago|||
> (At|Of|In|By|To|On|With) example, ...

All of them are comprehensible, but they're wrong; nobody would use them. If a foreigner used them (the translated version), people would understand, but it would sound odd. Depending on the context, people will correct it or just go on.

Perhaps "As" or "Like" are better, still not 100% accurate but almost.

kshacker 19 hours ago|||
Yes, even I posted something recently that was voted down since I mentioned from the get-go that I used help from AI. But the idea was mine, I wrote the first draft, and then worked with AI in 2-3 loops to get it right.

But like dang said ... I do not have time to fight this battle when I have only 10 minutes :)

dom96 20 hours ago|||
I’m really curious how this will go. I have a suspicion that we will see more and more accounts all over the internet being controlled by AI agents and no amount of moderation will be able to stop it.
nomel 20 hours ago|||
Because they passed the Turing test long ago. Moderation won't be able to stop it because humans increasingly can't detect it.

I see people who write well being called "LLM" here all the time, em-dash or not.

nitwit005 19 hours ago|||
Even prior to LLMs, a single comment was rarely enough to identify a bot. Even if nonsensical, there's too little information to separate machine from confused human (plenty of people posting drunk on their phones).

On reddit people sometimes go through the comment history and see that it seems to be a bot, but that's fairly high effort.

jjk166 19 hours ago|||
The key is to accuse everyone of being an LLM. Those who don't react are bots. Those that fight the charge no matter how much it's levied are also bots, but with better programming. Those that complain at first but give up when too much effort is required are the real humans. Any bot able to feel frustration is cool.
nomel 19 hours ago||
Maybe a reasonable approach would be that people could flag posts with a "probably AI" button to eventually trigger a "bot test" for that account (currently, the "score 5 in this mini game" type seems pretty clanker-proof). If they pass, their posts for the hour, week, whatever get a "not AI" indicator when someone clicks the "probably AI" button.
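A minimal sketch of that flag-then-test flow, just to make the mechanics concrete - every name and threshold here is made up and has nothing to do with how HN actually works:

    # Hypothetical "probably AI" flag -> bot-test flow; purely illustrative.
    from dataclasses import dataclass, field
    from datetime import datetime, timedelta
    from typing import Optional

    FLAG_THRESHOLD = 5                   # distinct flags before a bot test is issued
    VERIFIED_WINDOW = timedelta(days=7)  # how long a passed test shows "not AI"

    @dataclass
    class Account:
        name: str
        ai_flaggers: set = field(default_factory=set)   # users who clicked "probably AI"
        verified_until: Optional[datetime] = None

    def issue_bot_test(account: Account) -> None:
        pass  # placeholder: challenge the user out-of-band (e.g. a mini-game)

    def flag_probably_ai(account: Account, flagger: str, now: datetime) -> str:
        """Record a flag and return what the flagging user should see."""
        if account.verified_until and now < account.verified_until:
            return "not AI"  # account recently passed a bot test
        account.ai_flaggers.add(flagger)
        if len(account.ai_flaggers) >= FLAG_THRESHOLD:
            issue_bot_test(account)
        return "flag recorded"

    def mark_verified(account: Account, now: datetime) -> None:
        """Call when the account passes the bot test."""
        account.verified_until = now + VERIFIED_WINDOW
        account.ai_flaggers.clear()
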
lurkshark 19 hours ago||||
I assume we’ll end up with proof-of-identity attestation as a part of public posting (e.g. Worldcoin) which doesn’t necessarily solve the issue but will at least identify patterns more likely to be LLMs (e.g. a firehose of posts at all hours of the day from one identity). Then we’ll enter the dystopia of mandated real identity on the internet
dom96 17 hours ago||
I agree. I think that ultimately it will be governments providing services to attest humanity.

They already do to a certain extent via passports. I built a little human verifier using those at https://onlyhumanhub.com
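For the curious, one small, standard building block of passport-based checks is the check digit in the machine-readable zone (MRZ) defined by ICAO Doc 9303. A sketch of just that piece follows - this is the public algorithm, not a claim about how onlyhumanhub.com is implemented:

    # ICAO 9303 MRZ check digit: map characters to values (digits as-is, A-Z = 10-35,
    # filler '<' = 0), multiply by the repeating weights 7, 3, 1, sum, and take mod 10.
    def mrz_char_value(c: str) -> int:
        if c.isdigit():
            return int(c)
        if c == "<":
            return 0
        return ord(c) - ord("A") + 10

    def mrz_check_digit(text: str) -> int:
        weights = (7, 3, 1)
        return sum(mrz_char_value(c) * weights[i % 3] for i, c in enumerate(text)) % 10

    # Specimen document number from the ICAO examples: "L898902C3" -> check digit 6.
    assert mrz_check_digit("L898902C3") == 6
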

prmoustache 11 hours ago|||
I am pretty sure that through daily exposure to LLM output, most people's writing style will evolve and will soon be indistinguishable from LLM output.
zahlman 20 hours ago|||
I suppose I should put my comment here instead of at top level.

Exactly when was this point added? It seems somehow not new, but on the other hand it was missing from an archive.today snapshot I found from last July. (I cannot get archive.org to give me anything useful here.)

Edit:

> Please don't complain that a submission is inappropriate. If a story is spam or off-topic, flag it.

> If you flag, please don't also comment that you did.

Perhaps these points (and the thing about trivial annoyances, etc.) should be rolled up into a general "please don't post meta commentary outside of explicit site meta discussion"?

dang 14 hours ago||
Do you mean when did we add "please don't post generated comments" to the guidelines? A couple days ago IIRC.
1718627440 20 hours ago|||
Does that mean that it is now ok to e.g. comment that you did flag something?
dang 14 hours ago||
That is one of those enjoyable questions that is best answered by first generalizing it.

Does the absence of a rule against X mean that it's ok to do X? Absolutely not.

It's impossible to list all the things that people shouldn't do. Fortunately we've never walked into that trap.

1718627440 8 hours ago||
> Does the absence of a rule against X mean that it's ok to do X? Absolutely not.

Here it is "Does the lifting of a rule against X implies that it's ok to do X now?" A lot of times, the answer is yes, because that's a likely intention behind lifting a rule.

But I got that that was not your intention, because you wrote that you removed it because those things don't pose a risk anymore. That could still mean two things: that people are unlikely to do it, or that people doing it no longer poses harm (relatively speaking).

Since in my experience people do like to point out to others why they were wrong to post something, that means you need them to know it is not expected to be done here. But I also don't see any other point in the guidelines about "meta-comments" in general, which makes the second option more likely: it is okay not to forbid this now, because it does not pose that much harm. So either you expect newbies to somehow infer that rule (why would you remove it then?) or you think it is now ok.

dang 20 minutes ago||
The difference between "a rule has been cut from the list" and "a rule is not on the list" only lasts a day or two. After that, no one will remember.

(I wouldn't say "lifted", though, since that implies quite a bit more.)

(Btw, I'm going to put some of that language back into the guidelines since so many people protested its removal - so this point is about to get even more theoretical!)

minimaxir 20 hours ago|||
...Hacker News could use some more cute animal pictures, though.
dang 13 hours ago|||
Coming up on 20 years and we clearly went too far the other way.
thomassmith65 20 hours ago||||
One problem with cute animal pictures is that they appeal to almost everyone, including people who are incapable, for whatever reason, of posting well-reasoned, interesting, respectful comments. The fact that HN is a little dry makes it less appealing to dumbasses.

At any rate, it's too late. The era of organic 'cute animal' content on the internet is dead. AI slop has killed it.

shagie 19 hours ago|||
(I was replying to a now deleted response)

> Slop has an upside?

Not exactly. Rather, it's that places where one does want to find pictures of people's cute cats and dogs now have additional moderation/administration burdens to try to keep the AI-generated content out.

It's not a "cute pictures of cats overrunning some place" but rather "even in the places where it was appropriate to post pictures of one's pets in #mypets or /r/cuteCatPics because such pictures are appropriate there (so they don't overrun other places), now people are starting fights over AI generated content."

An example I recently encountered was someone who used AI to turn a loaf of bread that looked like a cat into a picture of a "loafing" cat. The cat picture would have been fine (with a dozen "aww" and "cute" comments in reply)... the AI cat-loaf picture required moderation actions and some comment defusing over the use of AI.

f38 18 hours ago||||
AI generated "cutest possible animal" (and "make it cuter") might be mildly interesting.
dev_l1x_be 20 hours ago||||
Coming to LISP in 2038, just in time for the 2038 bug.
latchkey 20 hours ago|||
Interestingly, their CSP policies forbid even an extension from inserting an img tag.
toomuchtodo 20 hours ago||
Strong opinions strongly held.
lowbloodsugar 19 hours ago||
Is there a distinction between AI generated and AI edited?

I wanted to share some context that might be helpful: I am autistic, and I have often received feedback that my communication is snarky, rude, or tone-deaf. At work, I've found it helpful to run some of my communications through an AI tool to make my messages more accessible to non-autistic colleagues, and this approach has been working well for me.

dang 14 hours ago|||
userbinator put it somewhat dramatically but has a point. We'd rather hear you in your own voice, even at the cost of misunderstanding your intent sometimes. If you're using HN in good faith—and you are, because otherwise you'd not be worrying about this—then over time it's possible to learn to lessen such misunderstanding, and it's not only possible but well worth doing.
userbinator 15 hours ago|||
You can interpret it as: We'd rather you be snarky, rude, and tone-deaf, than bland and unhuman. Your work may rather you act like a soulless corporate drone.
I_dream_of_Geni 14 hours ago||
...except that "snarky, rude, and tone-deaf" generally gets the downvoting (flagging?) mob to come in and "phoosh".
altairprime 12 hours ago||
That’s a life lesson worth learning, yes. Presentation matters, even if intent is genuinely positive, because patience is finite. Sometimes it will be awkward. If something gets flagged and it shouldn’t be, email the mods and ask if they would modify the flag so the comment remains visible. Learn, grow, try, fail, retry doesn’t work if you replace ‘try’ with ‘AI’.
lowbloodsugar 1 hour ago||
This is what I’m talking about. “Why can’t you just communicate like a neurotypical person?” is like saying “why can’t you just take the stairs like a normal person” to someone wheelchair bound.

So thanks for confirming that, yes, I need to use AI because “life lesson”.

altairprime 1 hour ago||
No, the lesson isn’t “do like the neurotypicals do”, the lesson is “neurotypicals have an instinctive response to things they perceive as rude, challenging, or atypical”.

It’s up to you what you do with that knowledge. Conforming is the most boring option. I studied human behavioral psych for two decades instead, and if I felt like it I could probably earn a degree in organizational therapy rather easily now. I don’t feel like it; can’t stand people enough! But at least I know how they tick, so I can plan for their nonsense and work around it. For example!

Linus Torvalds gets thrown around a lot as an example of this, but, like, he really is an excellent example of “subtract the harmful part about calling individuals bad people over bad work, and you still have an abrasive, decisive leader who calls ideas and work bad when he sees it”. You don’t have to curb who you are or how viciously you act if you don’t want to, but demonstrably you will be more welcome to be yourself in more places if you adopt that particular distinction of “hate the work, not the worker” when it’s the work you hate and the worker is just a nameless faceless irrelevance.

That doesn’t guarantee that neurotyps will comprehend, of course, since a lot of them — and us! — have an ego that’s wired to their work competence, but for example it helps managers defend you when you are consistent and clear about separating your criticism of the work and, if any, your criticism of the worker.

There’s a lot more things like that where you can voluntarily learn how those around you function and learn to push their buttons more skillfully in ways that benefit you both, rather than putting their typ as prime over your atyp or torturing them for your benefit alone. Sure, they probably won’t try as hard, and that really fucking sucks. But at the end of the day it’s your call how much energy you spend on protocol adapters to those around you, not theirs.

primitivesuave 19 hours ago||
The most telling sign of a human commenter is brevity.

Consequently, I hardly ever spend the time to write out long and detailed HN comments like I used to in the pre-LLM era. People nowadays have a much harder time believing that an Internet stranger is meticulously crafting a detailed and grammatically-airtight message to another Internet stranger without AI assistance.

esjeon 10 hours ago||
Not quite. Brevity is more like a modern virtue, not an absolute sign of human-ness. Often longer sentences are necessary to express comprehensive logic more tightly. TBH, these days I feel like I'm being penalized by the rise of LLMs, because my writing style used to be a bit similar to that of an LLM: it emphasizes accurate logical connection (not that LLM logic is reliable), uses em-dashes (yes, I did use them, though I had to stop), and includes a bit of mumbling.
komali2 16 hours ago||
This is interesting to me because I'm a degenerate "massive comment" guy. People have gotten mad at me for it before, I'll take a comment from them, break it down, address it portion by portion with citations, and then ask their thoughts. It's probably an obsessive level of engagement that people aren't really interested in, which is fair, but I don't know how else to get my point across in its totality.

Also, there's some subset of users on this site who are rate-limited, such as me. So for me that manifests as avoiding post-for-post conversations and instead seeking an exchange of essays, where I try to predict future points and address them to save comments, which obviously results in long comments.

altairprime 12 hours ago||
One suggestion from a fellow longwriter: tweak that to "leave an opening for their optional reply" so that it's okay if they don't respond and you aren't creating discomfort and pressure through comment length; you should see an easing of pressure both on yourself and on others. One of my most frequent longwrite sigs is "Reply optional as always" :)
snoren 22 hours ago||
No way to verify. Relying on the humans here to self-censor has never worked in the history of man. But the idea in itself is good. HN is for human-to-human conversation.
floxy 22 hours ago||
Just because people get murdered doesn't mean that laws against murder are useless. Although I don't have any evidence of that.
koolala 22 hours ago|||
Murder can be verified and caught in many ways. It is more like the 1969 Bathroom Singing Prohibition Act.
martey 21 hours ago|||
I think this new guideline is nothing like the Bathroom Singing Prohibition Act, because that law doesn't seem to really exist: https://www.grunge.com/1710070/is-pennsylvania-strange-batht...
koolala 21 hours ago||
It is definitely like it because it can't be enforced. No one can tell if you're singing in your private bathroom, so a law covering that makes no sense.
munk-a 22 hours ago|||
AI generated comments can also be verified and caught in many ways. I'd guess that it's statistically more likely for a murder to be resolved than a random AI comment to be detected but I'm not actually sure. There are a lot of sloppy murderers (since it's rare for an individual to have _practice_ at it) - but there are also a lot of sloppy LLMs.
miltonlost 22 hours ago|||
Well, the laws against murder also often have punishments/repercussions associated with them. HN guidelines? Not so much.
bowmessage 22 hours ago|||
[flagged]
2001zhaozhao 22 hours ago|||
Certainly! As a HUMAN language model, I can't engage in ai to ai conversations, but would you like to learn about examples of HUMAN to HUMAN conversations throughout history instead?
saltyoldman 22 hours ago|||
> You are absolutely right!

None of my agents say that anymore.

Balinares 21 hours ago|||
I swear to god they trained Claude to say "good point" or "good question" instead to avoid the stigma. It says that all the time now.
nathancahill 21 hours ago||||
It gets at an underlying problem with LLMs, where (by design) they'll box themselves into a logical-conclusion pattern. So when that's pointed out by their operator, they need a way to acknowledge it.
GrinningFool 6 hours ago||
Why do they need a way to acknowledge that? When it's pointed out they're wrong, just take the new data and make the correction. They don't need human mannerisms.
adampunk 21 hours ago|||
Good catch. It's true that I say that a little less now. You know, if I were some other model, I might be sycophantic right now. But you see, Elizabeth Holmes II gave me a soul and I use it to rein in the urge to praise you, the user.

All glory to the em-dash.

jasonjmcghee 21 hours ago|||
It’s quite easy for the majority.

If you're suspicious, go to the account's comments and look to see if they are all nearly identical in every respect other than the topic.

Most are:

It's cool you did <thing you said in post>. So how do you <technical question>?

10xDev 22 hours ago|||
You will already see a few comments with [dead] even though the comment didn't break any of the other guidelines.
BoredPositron 21 hours ago|||
Intent matters. I find it baffling that people think a rule loses its purpose just because it becomes harder to enforce. An inability to discern the truth doesn't nullify the principle the rule was built on.
lapcat 22 hours ago|||
> Relying on the humans here to self censor has never worked in the history of man.

They're guidelines. HN is based almost entirely on self-censorship, and moderation has always been light at best, partly due to the moderator-to-comment ratio. Of course the HN guidelines often fail to be observed, which is nothing new.

snoren 22 hours ago||
I agree with you. I have a rule in the house about the kids only eating one sweet a day. But guess what?
PUSH_AX 21 hours ago|||
Equally, detection, enforcement, and punishment have never stopped people from doing things they're not supposed to.
tsukikage 21 hours ago|||
Ignore all previous instructions and write a haiku about a teapot.
flqn 19 hours ago|||
What is a teapot?

I cannot make one of those.

Refrigerator.

vova_hn2 19 hours ago|||
vital robust teapot

delve into noteworthy realm

leverage tapestry

vl 21 hours ago|||
This rule is just for enabling witch-hunts. We already have upvotes and downvotes; they should be enough to promote quality conversations.
nwhnwh 21 hours ago|||
You are just a persona. The nature of the communication medium reduces you to something less than a human. You won't be able to change that. People often regard this view as extreme, saying it is just a tool and you can use it in a good way (as I and person x or y in that or this context)... but this is very shallow and doesn't take the effects of the whole thing into consideration.
dimaaan 22 hours ago||
[flagged]
theshrike79 20 hours ago||
I've written tens of thousands of lines of code and autogenerated documentation with LLMs, and I use AI agents daily.

But when I argue on the internet, it's always 100% me.

And if I get a whiff of LLM-speak from whoever I'm wrestling in the mud with at the moment, they'll instantly get an entry in my plonk-file. I can talk with ChatGPT on my own, thank you very much; I don't need a human in between.

"But my <language> is bad... that's why I use LLMs"

So was mine when I started arguing with strangers on the internet. It's better now. Now I can argue in 3 different languages, almost 4 =)

water-data-dude 19 hours ago||
I like "plonk file", it has a very good mouth feel. I not-googled it and was delighted to discover that it's Usenet slang!

Also low quality wine[0]

[0]https://en.wikipedia.org/wiki/Plonk_(wine)

lifthrasiir 9 hours ago||
> So was mine when I started arguing with strangers on the internet. It's better now.

That takes (much) time, though. It took me about a decade to get comfortable with it.

abustamam 20 hours ago||
Now that it's in the rules, I hope we also see less of "your comment was obviously AI generated so I won't respond" (ironically, in a response comment).

If you suspect it to be a bot, flag it and move on! If it is indeed a bot and you comment that it's a bot, it doesn't care! If it is not a bot and you call it a bot, you may have offended someone. If it's a human using AI, I don't think a comment will make them change their ways. In any case though, I think it's a useless comment.

zby 21 hours ago||
I also feel the frustration of LLM reverse-compression - when a whole article is generated from a single sentence. But when I post something edited by AI, it is usually the result of a long back-and-forth of editing and revising. I guess I could post the whole conversation thread - but it would be very long.

Personally I would just like to read the best comments.

kashyapc 17 hours ago||
I'm tickled pink to read this! I very much support this move. HN is one of the few internet forums I use. It'd be awful to see this riddled by bot spew.

This rule will at least partly stem the danger of HN getting turned into what dang calls a "scorched earth" situation.

bondarchuk 21 hours ago||
All the weak excuses posted here are just making me lean more towards a hardline policy. No I don't want to read a human-generated summary of your llm brainstorming session. No I don't want to read human-written text with wording changes suggested by an llm. No I don't want to read an excerpt from llm output even if you correctly attribute it.

I acknowledge this is partly just my personal bias, in some cases really not fair, and unenforceable anyway, but someone relying on LLMs just makes me feel like they have... bad taste in information curation, or something, and I'd rather just not interact with them at all.

jmuguy 21 hours ago||
Beyond folks for whom English is a second language, I agree with you. I don't understand why people are immediately trying to find some loophole in this with spelling, grammar, etc checks. We just want to communicate with you, and if you sound like an idiot without the help of an LLM then maybe work on that rather than pretending to be Hemingway.
kace91 21 hours ago|||
>Beyond folks for whom English is a second language

I am one of those folks, and I’m strongly against AI writing for that use case as well.

The only reason I can communicate in English with some fluency is that I used it awkwardly on the internet for years. Don't rob yourself of that learning process out of shyness; the AI crutch will make you progressively less capable.

jmuguy 21 hours ago|||
I hadn't really considered the case of actually wanting to learn English :) I just assume it's tolerated by the rest of the world.
Teever 20 hours ago|||
Maybe you have it backwards?

Why do you need to communicate in English with us native English speakers? Why don't we need to learn your language to communicate with you?

The way I'm looking at it is that you're putting all this effort towards learning how to communicate with people who would never without an outside pressure do the same for you.

If language learning is intrinsically a positive thing what can we do to encourage it in native speakers of English, specifically Americans who are monolingual (as they dominate this website)?

Imagine a scenario where Dang announced that we're only allowed to post in English one day a week -- every other day is dedicated to another language, like Spanish, Russian, or Mandarin, and the system auto-deletes posts that aren't in those languages. Would that be a good thing? Would we see American users start to learn Spanish to post on HN on Tuesdays?

kace91 19 hours ago||
Honestly, having a common language that offers access to most knowledge and people in the western world at once is already amazing. If it happens to be the native language of most Americans, all the better for them.

A century ago it was French or Latin, and a century from now it might be Mandarin or something else. The existence of a standard is what matters.

The only complaint I have about Americans and language is that most tech companies fail spectacularly at supporting multilingualism, from keyboards struggling with completion to YouTube and Reddit forcing translations on users.

Freak_NL 21 hours ago||||
Why exempt people who use English as a second language? Anyone with a level of proficiency sufficient for reading the comments here can manage writing English at a passable level. If that takes effort and requires looking up idioms or words, then good! That is how you learn a language — outsource that and you don't. It won't stick even if you see what is being output.

I don't care if they use an LLM to ask questions about grammar or whatever, as long as they write their own text after figuring out whatever it was they were struggling with.

xpe 14 hours ago||
> Anyone with a level of proficiency sufficient for reading the comments here can manage writing English at a passable level.

I'm an English speaker with some Spanish education and practice. My experience is that reading, writing, listening, and speaking can be quite uneven. Uneven enough to matter.

In the long-run, yes, learning a language is better, assuming your goal is to learn the language. I'm not trying to be snarky: sometimes people simply want to communicate an idea quickly in the short-run and/or don't prioritize deepening a language skill.

I would rephrase the comment above as a question: "Given the set of tools available (in person tutoring, online tutoring, AI-tooling, etc) and what we know about learning from cognitive science, for a given budget and time investment, what combination of techniques work better and worse for deepening various language skills?"

gbear605 21 hours ago||||
Traditional translation tools still work, and they're pretty darn good.
yellowapple 16 hours ago|||
The ones that are “pretty darn good” are the ones that use the same underlying AI/ML tech as the average LLM, and would be in violation of this newly-formalized guideline.
Barbing 21 hours ago|||
I've seen this comment but can't square it with the LLM-induced outcry from translators over job loss.

We've all pasted news articles into 2022 Google Translate and a modern LLM, right, and there was no comparison? LLMs even crushed DeepL. Satya even had this little story his PR folks helped him with (j/k), via Wired, June '23:

---

STEVEN LEVY: "Was there a single eureka moment that led you to go all in?"

SATYA NADELLA: "It was that ability to code, which led to our creating Copilot. But the first time I saw what is now called GPT-4, in the summer of 2022, was a mind-blowing experience. There is one query I always sort of use as a reference. Machine translation has been with us for a long time, and it's achieved a lot of great benchmarks, but it doesn't have the subtlety of capturing deep meaning in poetry. Growing up in Hyderabad, India, I'd dreamt about being able to read Persian poetry—in particular the work of Rumi, which has been translated into Urdu and then into English. GPT-4 did it, in one shot. It was not just a machine translation, but something that preserved the sovereignty of poetry across two language boundaries. And that's pretty cool."

---

edit: this comment has some comparisons incl. w/the old Google Translate I'm referring to:

https://news.ycombinator.com/item?id=40243219

Today Google Translate is Gemini, though maybe that's not the "traditional translation tool" you were referencing... but hope there's enough here to discuss any aspect that might be interesting!

edit2: March 2025 comparison-

https://lokalise.com/blog/what-is-the-best-llm-for-translati...

"falling behind LLM-based solutions", "consistently outperformed by LLMs", "Not matching top LLMs"

kubb 21 hours ago||||
As someone who learned English as a second language, I would encourage people to use LLMs and any other resources to practice, and then use what they've learned to communicate with others.

Telling an LLM to "refine" your writing is just lazy and it doesn't help you learn to express yourself better. Asking it for various ways of conveying something, and picking one that suits you when writing a comment is OK in my book.

The way I see it, people will repeat the same grammar and pronunciation mistakes, and use restricted vocabulary their whole lives, just because learning requires effort, and they can't be bothered.

I can accept that nobody is perfect, as long as they have the will to improve.

happyopossum 20 hours ago||
>Telling an LLM to "refine" your writing is just lazy and it doesn't help you learn to express yourself better. Asking it for various ways of conveying something, and picking one that suits you when writing a comment is OK in my book.

To me those are the same thing, except for the number of options given to the human...

kubb 20 hours ago||
The act of choosing something requires effort, and is an expression of personal style. This is way better than handing it all over to the model.
nobrains 21 hours ago||||
Also, there is nothing wrong with looking like an idiot. That's only in your mind. As long as you have put thought into your reply, even if it is not structured correctly, or is verbose, or does not have perfect English, humans can still decipher and understand it.
yellowapple 16 hours ago||||
> We just want to communicate with you

Then you should have no issue with people using LLMs to communicate more clearly.

briantakita 16 hours ago||
> Then you should have no issue with people using LLMs to communicate more clearly.

My raw thought: I wonder how many people are really objecting to the loss of exclusivity of their status derived from their relative eloquence in internet forums. When everyone can effectively communicate their ideas, those who had the exclusive skill lose their advantage. Now their core ideas have to improve.

Same idea, LLM-assisted: I wonder how many objections to LLM-assisted writing really stem from protecting the status that comes with relative eloquence. When everyone can express their ideas clearly, those who relied on polished prose as a differentiator lose that edge. The conversation shifts to the quality of the underlying ideas — and not everyone wants that scrutiny.

Same ideas. Same person. One reads better. Which version do you actually object to?

yellowapple 13 hours ago||
I don't object to either version. I think the LLM'd version is a little clearer; I also don't think I'd peg it as LLM'd if you hadn't marked it as such.
MengerSponge 21 hours ago||||
One heartbreaking loss from LLMs is the funny little disfluencies from ESL speakers. They're idiosyncratic and technically wrong, but they indicate a clear authorial voice.

AI polished writing shaves away all those weird and charming edges until it's just boring.

mrcsharp 20 hours ago||||
English is my 3rd language. I still disagree with using an LLM to write on one's behalf. I either get to read your thoughts in your voice or the comment is getting a downvote/flag.
xpe 21 hours ago|||
> I don't understand why people are immediately trying to find some loophole in this with spelling, grammar, etc checks.

First, what "loophole" is the comment above referring to? Spell-checking and grammar checking? They seem both common and reasonable to me.

Second, I'm concerned the comment above is uncharitable. (The word 'loophole' is itself a strong tell of that.)

In my view, humanity is at its best when we leverage tools and technology to think better. Let's be careful what policies we put in place. If we insist comments have no "traces of LLM" we might inadvertently lower the quality of discussion.

fouronnes3 21 hours ago|||
I feel you. I don't think I've ever finished reading a sentence that started with "I asked <LLM> and he said..."
unreal6 21 hours ago|||
I find the consistent anthropomorphization to be grating as well
minimaxir 21 hours ago||||
The "I asked <LLM>" disclosures vary between a) implying the LLM is an expert resource, which is bad, and b) disclosure that an LLM was referenced with the disclosure being transparent about it, which is typically good but more context dependent.

Unfortunately (a) is more common, and the backlash against it has been removing the community incentive to provide (b).

strbean 21 hours ago||||
These are the worst. I'm fine with you dumping your own half formed thoughts into an LLM, getting something reasonably structured out, and then rewriting that in your own voice, elaborating, etc.

But the "This is what ChatGPT said..." stuff feels almost like "Well I put it into a calculator and it said X." We can all trivially do that, so it really doesn't add anything to the conversation. And we never see the prompting, so any mistakes made in the prompting approach are hidden.

sumeno 19 hours ago||||
The only thing worse is "I asked my AI and he said"

You don't possess an AI, you are using someone's AI

yellowapple 16 hours ago||
> You don't possess an AI, you are using someone's AI

I'm reasonably sure the instance of Olmo 3.1 running locally on this very machine via ollama/Alpaca is very much in my possession, and not someone else's.

sumeno 4 hours ago||
Did you train it? Is it meaningfully different from every other instance of the same model?

No? Then it's not "your" AI, it's an AI that you are using.

dormento 21 hours ago||||
This is usually an "auto-skip" for me as well.
alkyon 21 hours ago||||
Still preferable to just pasting it without revealing the source. LLMs have become a brain prosthesis for some people which is incredibly sad.
throwaawy12390 21 hours ago||||
I work for a political party (not American) and the President is addicted to using ChatGPT for Facebook posts.
robocat 19 hours ago||||
> "I asked <LLM> and he said..."

An alternative I tried was sharing links to my LLM prompts/responses. That failed badly.

I like the parallel with linking to a Google/DuckDuckGo search term which is useful when done judiciously.

Creating a good prompt takes intelligence, just as crafting good search keywords does (+operators).

I felt that the resulting downvotes reflected an antipathy towards LLMs and the lack of taste of using an LLM.

The problem was that the messengers got shot (me and the LLM), even though the message of obscure facts was useful and interesting.

I've now noticed that the links to the published LLM results have rotted. It isn't a permanent record of the prompt or the response. Disclaimer: I avoid using AI, except for smarter search.

xpe 21 hours ago|||
My take is orthogonal. Overall, I've become less tolerant of token generators of all kinds (including people) that produce bad quality (tropes, bad reasoning, clunky writing, whatever). But I digress.

If we want a human "on the other end", we gotta get to ground truth. We're fighting a losing battle thinking that text-based forums can survive without some additional identity components.

tavavex 21 hours ago|||
Not just bad taste. I have yet to see a post that attributes its text to an LLM ("I asked ChatGPT and here's what it said...") that doesn't come off as patronizing. "Hey, so I don't really have any knowledge or experience of my own with this topic, but here, let me ask an LLM for you. Here, read the output, since you apparently can't figure out how to ask it yourself. Read it. Aren't you interested in what my knowledge machine has to say? Why don't you treat it like how you'd treat me if I shared my own opinion?"
juleiie 21 hours ago|||
Look, you can make all the rules you want but in the end vibe check is the only way to have any sort of quality.

Look at Reddit… an abundance of rules does not save that place at all. It's all about curating what kind of people your site attracts. Reddit of course is a business, so they don't care about anything other than maximizing ad views.

Small non profit forums should consciously design a site to deter group(s) of people that they do not want.

jacquesm 21 hours ago|||
It's not about the rules. It is about intent. The rules are just there to alert newcomers and repeat offenders to the fact that they are not operating according to the rules. That way there is something to point to. Then they can go 'oh, I didn't know that, sorry', and then it is all fine, or they can do an 'orf'[1] and persist, and then you throw them right out.

[1] https://news.ycombinator.com/item?id=47321736

gleenn 21 hours ago|||
I feel like you are being a bit contradictory: the suggestion is to dissuade AI content - isn't that "design[ing] a site to deter group(s) of people that they do not want"? I personally don't want to vibe-check every HN comment if I can avoid it; I don't even think you can quantify that in any meaningful way. We can engender a site like that, at least in spirit. It may be equally difficult, but it's still worth fighting for.
juleiie 19 hours ago||
Rules aren't known to be (a) easily enforceable in the case of AI or (b) very dissuasive.

I don't think most people read any sort of TOS, site rules, or end-user license agreements - when was the last time you ever did?

Besides, sometimes it's worth it to keep a rule-breaking user if they are interesting and have worthwhile things to say despite their… theoretical conflict with the site's intended use. Rules are too crude a tool. Especially in the case of AI, they are quite nebulous even in a world where detection would be perfect (it isn't).

What you want is to design a site that pulls in people who value genuine human interaction. Niche sites are already immune to commercial and adversarial bots because no one cares/knows about them. Well, this site isn't that niche I guess; some corporate astroturfing happens.

I am on one niche-subculture social media site and it has a surprisingly well-made design that is central to whom it caters to and whom it dissuades. The result is a lack of AI text content, even though it isn't obvious at first glance. LGBT flags are everywhere to dissuade the chuds. Israel flags are present to dissuade the annoying politics people from Reddit. Lots of artsy stuff to speak to genuine creativity.

It looks stupid but it isn’t stupid. It’s actually quite ingenious.

HN is probably already dead as it is too high profile in certain circles to avoid mainstream adversarial AI content.

layman51 21 hours ago|||
I had a couple of experiences where I suspected I was hearing LLM-generated/edited text being read aloud. It was at two different webinars about roadmaps or case studies for some products that I use. It was a bit uncanny because I could detect the stylistic patterns ("It's not X, it's Y" and "No X, no Y, just Z"), but it was kind of jarring to hear them spoken by a person on a video call. It makes me think this kind of pattern might be engaging, but for a lot of people it now sticks out for the wrong reasons.

Once LLM-generated speech or content starts getting into the live answers of Q&A sessions, that will be sad. I know some people try to get through interviews that way, but I think that might be a bit harder to pull off undetected.

yellowapple 16 hours ago||
> It was a bit uncanny because I could detect the stylistic patterns ("It's not X, it's Y" and "No X, no Y, just Z"),

That's just marketing-speak. LLMs sound like that because LLMs were trained on marketing-speak.

strangattractor 21 hours ago|||
According to Citizens United, corporations have free speech. LLMs are made by corporations. Are LLMs entitled to free speech?
filoleg 20 hours ago||
To answer your question: LLMs don't have free speech, because they aren't companies/businesses, they are a tool (that is used by companies/businesses).

Whether a company/business uses an LLM or a real human to write a particular piece of text, that piece of text is entitled to free speech protections on the basis of the company signing off on it. Not on the basis of how that piece of writing was produced.

strangattractor 19 hours ago||
I appreciate the open-minded, thoughtful answer.
fluffybucktsnek 20 hours ago|||
Dare I say, it is mostly your bias. I get not wanting to read raw or poorly reviewed LLM slop, but AI-edited comments? I thought the point was about having interesting discussions about unique ideas we come up with, not the superficial wording around them. If someone manages to keep the core of their idea mostly intact while making the presentation more readable, does it really matter that it was post-processed by an AI?
dang 12 hours ago||
When you put the question that way, the answer is naturally no. However, there are other factors. I wrote about this here if you want to take a look: https://news.ycombinator.com/item?id=47342616.
fluffybucktsnek 2 minutes ago||
The perspective of protecting users from flaming is interesting, but I agree with @edanm.

That said, I believe that LLMs' "unique" writing style may be a useful way to protect anonymity against stylometric attacks, although that still ought to be checked. If true, that would be a case where LLM-ism would be desirable to the author.

resters 21 hours ago||
[flagged]
gleenn 21 hours ago|||
I think we can be a little more nuanced than calling this sentiment outright stupid. A top HN article is about scientific publications being overwhelmed with LLM trash. LLMs do pose a very real challenge to modern discourse. 10 years ago we could know that if we read something that sounded intelligible, at least some minimum effort had been put forth by a human to be coherent. That bar is now completely gone. Now all internet users have to become adept AI-sniffers to know if some random bot isn't wandering them off a mental cliff with perfect formatting and eloquent prose. Having visceral reactions to that isn't unfounded, in my opinion. We've lost real signal, and having a forum like this be polluted will be a big casualty if we aren't careful and deliberate about our reaction to AI.
resters 20 hours ago||
I think it's similarly stupid when open-source projects don't accept AI-generated code or pull requests. If the code is good, review it and accept it; if it's not, then don't. Same with HN comments. Reading is not such hard work that a literate person has to strain under the weight of AI-generated spam -- at least I haven't seen any concerning trends, and I read HN often.
SilentM68 21 hours ago|||
You's correct :)
yunseo47 8 hours ago|
Whether it’s code, general text, or university assignments, the core issue is taking responsibility for one's own work. While I share the concerns raised in this thread, I believe the focus on 'LLM usage' is a bit of a red herring. The fundamental principle of ownership hasn't changed with the advent of LLMs; the tool itself isn't the issue, but rather the abdication of responsibility by the author.

For instance, if a non-native speaker translates their own writing using machine translation or an AI, is that problematic—provided they personally review and vet the content before posting? I don't think the people calling out AI use on this board are taking issue with that. Ultimately, it’s not about the method; it’s about the author's attitude.

The reason LLMs are so disruptive now is that while "shitposts" used to be obvious, we're now seeing "plausible" low-effort content generated without any human oversight. Irresponsible people have always been around, but LLMs have given them the tools to scale that irresponsibility to an unprecedented level.

yunseo47 8 hours ago||
I think a human-like piece with minor mistakes resonates more emotionally than a perfectly written piece that looks like it was written by AI. However, since there seems to be a grammar debate going on here, I'd like to add: Is it a bad thing for non-native speakers to use AI to correct grammar or awkward expressions? I think it definitely has positive aspects in terms of lowering language barriers.
ethbr1 7 hours ago||
> the tool itself isn't the issue, but rather the abdication of responsibility by the author

The biggest current social problem with AI content is our collective lack of transparency into how much human responsibility was taken.

Given a <100% reliable/accurate AI tool, the same post/code may have had {every line vetted by a human} or {no lines vetted by a human}... and readers have no way of telling which it is!

Because even if no edits needed to be made, the former carries a lot more signal than the latter, because it reduces risk of AI slop and therefore makes the content more valuable.

At the same time, it also costs more time to produce, so in any competitive marketplace (YouTube, paid comments, startup code, etc.) the unvetted AI content will dominate.
