Posted by usefulposter 22 hours ago
“Don’t post generated/AI-edited assignments. School is for conversation between humans”
AI can be a great tool for learning, but it can also pollute or completely hijack the medium for human interaction and learning.
Having HN flooded with AI-generated content would be sad, as I like reading it, but losing that same fight at schools would be detrimental.
Homework assignments are harder, but those were always a bit difficult for teachers. It's not like cheating was invented by Gen Z...
During my university years, most courses had a good mixture of take-home assignments/projects and in-class exams. Yes, people could always cheat, either through plagiarism (usually easily caught) or, at the extreme, by getting someone else to do the work (which I have never personally seen).
Anecdotal data around me shows:
* outright paper/assignment generation via LLM
* using ChatGPT as a “professor” to proofread and polish course work before submission (arguably a good use, but it depends on the personal effort)
* avoiding reading by asking ChatGPT for summaries
* using ChatGPT to help explain various concepts (this is a good example of using LLMs as a source for learning…accepting that occasionally they can lie)
In a small classroom where a good teacher-student interaction happens, I guess it’s easier to catch people cheating. But some universities (maybe most) have massive classes where a professor may never have an actual conversation with some students. That context makes cheating harder to detect.
I accept that my outlook on this may be a bit bleaker than warranted (hopefully), but saying it's business as usual is the other extreme.
Offline written tests solve the issue quite well. They scale well too, at least as well as assignments do.
People saying that oral examinations are the last bastion of cheat-free examinations are really over-stating the case.
> But some universities (maybe most) have massive classes where a professor may never have an actual conversation with some students.
Probably most yeah. At least it was my experience.
Fortunately I found some things we could cut as well, so https://news.ycombinator.com/newsguidelines.html actually got shorter.
---
Edit: here are the bits I cut:
Videos of pratfalls or disasters, or cute animal pictures.
It's implicit in submitting something that you think it's important.
I hate cutting any of pg's original language, which to me is classic, but as an editor he himself is relentless, and all of those bits—while still rules—no longer reflect risks to the site. I don't think we have to worry about cute animal pictures taking over HN.
---
Edit 2: ok you guys, I hear you - I've cut a couple of the cuts and will put the text back when I get home later.
edit to add -- I completely agree with you that when one's English is "good enough," it's much better to read the original rather than an LLM's guess at how to polish it. It's just hard to define where that line is, especially for the poster themselves, who has no idea what a native speaker can figure out. Would some posts be removed because they are too difficult to make sense of? Or would they be allowed in their native language?
It's purely for pragmatic reasons. We love other languages and have great admiration for the many community members who participate here despite English not being their first language.
> If you flag, please don't also comment that you did.
I don't understand why you cut these, they seem important! (I can understand the others, which feel either implied or too specific.)
I think I'm going to put that one back, though, because it's not a hill I want to die on and I know what arguing with dozens of people simultaneously feels like when you only have 10 minutes.
Understood, but I feel like I see people breaking these ones frequently, so removing the explicit guideline feels to me like a bad idea.
HN's long-standing policy has been to prefer fewer explicit rules, and looser rather than stricter interpretation. This particular one comes up often enough, though, that it's helpful to retain IMO; thanks for restoring the cut.
I've long made a practice of linking to moderator comments regarding policies when calling out deviations, as I'm sure the mods are aware, others might find that helpful. I've found it generally reduces the personal-irritation element going both ways, helps avoid derailing threads, and serves as a refresher to me on what standards apply.
Not sure if that's really solvable with rules, though.
My experience with downvotes is that people mostly use them as an "I don't like this" button, which is a proxy for "I couldn't think of a counterargument so I don't want to look at it."
(I noted recently that downvotes and counterarguments appear to be mutually exclusive, which I found somewhat amusing.)
Whereas I will often upvote things I personally disagree with, if they are interesting or well reasoned. (This seems objectively better to me, of course, but maybe it's a personality thing.)
See https://news.ycombinator.com/item?id=16131314 and https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que... for history...
Probably the Mandela effect!
Challenge accepted.
It’s an instruction for how to use the site. It’s helpful to have it in the guidelines for when the flag feature should be used. Without it, the flag link is much more ominous.
Maybe it could be consolidated with the flag-egregious-comments rule?
Edit to add: IMHO it is not at all obvious on this site that flagging stories is meant to be roughly the equivalent of downvoting comments (and that flagging comments doesn’t have a counterpart at the story level).
My reading is that the intent is to have a human voice behind the text.
Monitor and see how it goes I guess!
The short version is that we included it to protect users who don't realize how much damage they're doing to their reception here when they think "I'll just run this through ChatGPT to fix my grammar and spelling". I've seen many cases of people getting flamed for this and I don't want more vulnerable users—e.g. people worried about their English—to get punished for trying to improve their contributions. Certainly that would apply to disabled users as well, though for different reasons.
Here are some past cases of these interactions: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu....
Edit: uni_baconcat makes the point beautifully: https://news.ycombinator.com/item?id=47346032.
Most rules in https://news.ycombinator.com/newsguidelines.html have a lot of grey area, and how we apply them always involves judgment calls. The ones we explicitly list there are mostly so we have a basis for explaining to people the intended use of the site. HN has always been a spirit-of-the-law place, and—contrary to the "technically correct is the best correct" mentality that many of us share—we consciously resist the temptation to make them precise.
In other words yes, that bit needs to be applied cautiously and with care, and in this way it's similar to the other rules. Trying to get that caution and care right is something we work at every day.
It's a bit different when specific cases come up because then there's a chance to talk about it, add clarifying comments, etc.
I was thinking of calling this service "Dang It."
You say you want to hear posts in other people's voices, but I'm pretty sure that if I did this, the people who used it would find greater acceptance of their comments than if they just posted them as they originally wrote them.
One dynamic I don't think has yet been given its due: while AI is training on us, we're also all getting trained on it—that is, the hivemind's pattern-matching ability is also growing. We're heading up the escalation ladder in a pattern-matching race.
But that name is hilarious!
For me that link says:
> Error: Forbidden
> Your client does not have permission to get URL / from this server.
https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
I would wager that this use case is much more prevalent than ones where the LLM changed the comment significantly enough to change one's voice.
I never copy/paste from an LLM into HN. Everything is typed by myself (and I never "manually" copy LLM content). I don't have any automatic tools for inserting LLM content here.[1]
Always, always, always keep in mind that you don't notice these positive use cases, because they are not noticeable by design. So the problematic "clearly LLM" comments you see may well be a small minority of LLM-assisted comments. Don't punish the (majority) "good" folks to limit the few "bad" ones.
Lastly, I often wish we had a rule for not calling out others' comments as "AI slop" or the like.[2] It just leads to pointless debates on whether an LLM was used and distracts far more than the comment under question. I'm sure plenty of 100% human written comments have been labeled as LLM generated.
[1] The dictation one is a slight exception, and I use it only occasionally when health issues arise.
[2] Probably OK for submissions, but not comments.
Also, writing a draft in Google Docs and accepting most [2] of the corrections is fine. The browser fixes the spelling, but about 30% of the time I forget to add the s to verbs. For prepositions, I roll a D20 and hope for the best.
I'm not sure if these checkers are expert systems, LLMs, or pigeonware.
But I don't like it when someone uses an LLM to rewrite the draft to make it more professional. It kills the personality of the author and may hallucinate details. It's also difficult to know how much of the post was written by the author and how much was autocompleted by the AI.
[1] Remember to check that the technical terms are correctly translated. It used to be bad, but it's quite good now.
[2] most, not all. Sometimes the corrections are wrong.
This makes me think of something: are nonnative English speakers tempted to use LLMs to correct grammar because mistakes like this actually make the writing unintelligible in their native language? For example, if I swap out the "For" in this sentence for any (?) other preposition, it's still comprehensible. (At|Of|In|By|To|On|With) example, ...
All of them are comprehensible, but they are wrong; nobody would use them. If a foreigner used them (the translated versions), people would understand, but it would sound odd. Depending on the context, people will correct it or just move on.
Perhaps "As" or "Like" would be better; still not 100% accurate, but almost.
But like dang said ... I do not have time to fight this battle when I have only 10 minutes :)
I see people who write well being called "LLM" here all the time, em-dash or not.
On reddit people sometimes go through the comment history and see that it seems to be a bot, but that's fairly high effort.
They already do to a certain extent via passports. I built a little human verifier using those at https://onlyhumanhub.com
Exactly when was this point added? It seems somehow not new, but on the other hand it was missing from an archive.today snapshot I found from last July. (I cannot get archive.org to give me anything useful here.)
Edit:
> Please don't complain that a submission is inappropriate. If a story is spam or off-topic, flag it.
> If you flag, please don't also comment that you did.
Perhaps these points (and the thing about trivial annoyances, etc.) should be rolled up into a general "please don't post meta commentary outside of explicit site meta discussion"?
Does the absence of a rule against X mean that it's ok to do X? Absolutely not.
It's impossible to list all the things that people shouldn't do. Fortunately we've never walked into that trap.
Here it is "Does the lifting of a rule against X implies that it's ok to do X now?" A lot of times, the answer is yes, because that's a likely intention behind lifting a rule.
But I got that that was not your intention, because you wrote, that you removed it because they don't pose a risk anymore. That could still mean two things, that people are unlikely to do it or that people doing it now longer poses harm (relatively speaking).
Since in my experience people do like to point out to people why they were wrong posting something, this means you need them to know it is not expected to be done here. But I also don't see some other point in the guidelines about "meta-comments" in general, so that makes the second option more likely: it is okay to not forbid this now, because it does not pose that much harm. So either you expect newbies to somehow infer that rule (Why would you remove it then?) or you think it is now ok.
(I wouldn't say "lifted", though, since that implies quite a bit more.)
(Btw, I'm going to put some of that language back into the guidelines since so many people protested its removal - so this point is about to get even more theoretical!)
At any rate, it's too late. The era of organic 'cute animal' content on the internet is dead. AI slop has killed it.
> Slop has an upside?
Not exactly. Rather, it's that the places where one does want to find pictures of people's cute cats and dogs now carry additional moderation/administration burdens trying to keep AI-generated content out.
It's not "cute pictures of cats overrunning some place" but rather "even in the places where it was appropriate to post pictures of one's pets, like #mypets or /r/cuteCatPics, precisely so they don't overrun other places, people are now starting fights over AI-generated content."
An example I recently encountered was someone who used AI to replace a cat that was "loafing" with a loaf of bread that looked like a cat. The cat picture would have been fine (with a dozen "aww" and "cute" comments in reply)... the AI cat-loaf picture required moderation actions and some comment defusing over the use of AI.
I wanted to share some context that might be helpful: I am autistic, and I have often received feedback that my communication is snarky, rude, or tone-deaf. At work, I've found it helpful to run some of my communications through an AI tool to make my messages more accessible to non-autistic colleagues, and this approach has been working well for me.
So thanks for confirming that, yes, I need to use AI because “life lesson”.
It’s up to you what you do with that knowledge. Conforming is the most boring option. I studied human behavioral psych for two decades instead, and if I felt like it I could probably earn a degree in organizational therapy rather easily now. I don’t feel like it; can’t stand people enough! But at least I know how they tick, so I can plan for their nonsense and work around it. For example!
Linus Torvalds gets thrown around a lot as an example of this, but, like, he really is an excellent example of “subtract the harmful part about calling individuals bad people over bad work, and you still have an abrasive, decisive leader who calls ideas and work bad when he sees them”. You don’t have to curb who you are or how viciously you act if you don’t want to, but demonstrably you will be more welcome to be yourself in more places if you adopt that particular distinction of “hate the work, not the worker” when it’s the work you hate and the worker is just a nameless faceless irrelevance.
That doesn’t guarantee that neurotyps will comprehend, of course, since a lot of them — and us! — have an ego that’s wired to their work competence, but for example it helps managers defend you when you are consistent and clear about separating your criticism of the work and, if any, your criticism of the worker.
There are a lot more things like that, where you can voluntarily learn how those around you function and learn to push their buttons more skillfully in ways that benefit you both, rather than putting their typ as prime over your atyp or torturing them for your benefit alone. Sure, they probably won’t try as hard, and that really fucking sucks. But at the end of the day it’s your call how much energy you spend on protocol adapters to those around you, not theirs.
Consequently, I hardly ever spend the time to write out long and detailed HN comments like I used to in the pre-LLM era. People nowadays have a much harder time believing that an Internet stranger is meticulously crafting a detailed and grammatically-airtight message to another Internet stranger without AI assistance.
Also, there's some subset of users on this site who are rate-limited, such as me. For me that manifests as avoiding post-for-post conversations and instead seeking an exchange of essays, where I try to predict future points and address them up front to save comments, which obviously results in long comments.
None of my agents say that anymore.
All glory to the em-dash.
If you're suspicious, go to the account's comments and see whether they are all nearly identical in every respect other than the topic.
Most are:
It's cool you did <thing you said in post>. So how do you <technical question>?
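If you want to automate that eyeball test, here is a minimal sketch in Python against the public HN Algolia search API; the username, the 50-comment window, and the 0.8 similarity threshold are placeholder assumptions, not calibrated values:

    import itertools
    from difflib import SequenceMatcher

    import requests

    def fetch_comments(username, limit=50):
        # Pull the account's recent comments from the public HN Algolia API.
        resp = requests.get(
            "https://hn.algolia.com/api/v1/search_by_date",
            params={"tags": f"comment,author_{username}", "hitsPerPage": limit},
            timeout=10,
        )
        resp.raise_for_status()
        return [hit.get("comment_text") or "" for hit in resp.json()["hits"]]

    def near_duplicate_pairs(comments, threshold=0.8):
        # Flag pairs of comments whose text is suspiciously similar.
        return [
            (a, b)
            for a, b in itertools.combinations(comments, 2)
            if SequenceMatcher(None, a, b).ratio() >= threshold
        ]

    comments = fetch_comments("some_suspect_account")  # hypothetical username
    pairs = near_duplicate_pairs(comments)
    print(f"{len(pairs)} near-identical pairs out of {len(comments)} comments")

String similarity only narrows the haystack; a human glance at the flagged pairs is still the deciding step.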
They're guidelines. HN is based almost entirely on self-censorship, and moderation has always been light at best, partly due to the moderator-to-comment ratio. Of course the HN guidelines often fail to be observed, which is nothing new.
I cannot make one of those.
Refrigerator.
delve into noteworthy realm
leverage tapestry
But when I argue on the internet, it's always a 100% me.
And if I get a whiff of LLM-speak from whoever I'm wrestling in the mud with at the moment, they'll instantly get an entry in my plonk file. I can talk with ChatGPT on my own, thank you very much; I don't need a human in between.
"But my <language> is bad... that's why I use LLMs"
So was mine when I started arguing with strangers on the internet. It's better now. Now I can argue in 3 different languages, almost 4 =)
Also low-quality wine[0]
That takes (much) time, though. It took me about a decade to get comfortable with it.
If you suspect it to be a bot, flag it and move on! If it is indeed a bot and you comment that it's a bot, it doesn't care! If it is not a bot and you call it a bot, you may have offended someone. If it's a human using AI, I don't think a comment will make them change their ways. In any case though, I think it's a useless comment.
Personally I would just like to read the best comments.
This rule will at least partly stem the danger of HN getting turned into what dang calls a "scorched earth" situation.
I acknowledge this is partly just my personal bias, in some cases really not fair, and unenforceable anyway, but someone relying on llms just makes me feel like they have... bad taste in information curation, or something, and I'd rather just not interact with them at all.
I am one of those folks, and I’m strongly against AI writing for that use case as well.
The only reason I can communicate in English with some fluency is that I used it awkwardly on the internet for years. Don’t rob yourself of that learning process out of shyness, the AI crutch will make you progressively less capable.
Why do you need to communicate in English with us native English speakers? Why don't we need to learn your language to communicate with you?
The way I'm looking at it is that you're putting all this effort toward learning how to communicate with people who would never, without outside pressure, do the same for you.
If language learning is intrinsically a positive thing what can we do to encourage it in native speakers of English, specifically Americans who are monolingual (as they dominate this website)?
Imagine a scenario where Dang announced that we're only allowed to post in English one day a week, with every other day dedicated to another language, like Spanish, Russian, or Mandarin, and the system auto-deleting posts that weren't in those languages. Would that be a good thing? Would we see American users start to learn Spanish to post on HN on Tuesdays?
A century ago it was French or Latin, and a century from now it might be Mandarin or something else. The existence of a standard is what matters.
The only complaint I have about Americans and language is that most tech companies fail spectacularly at supporting multilingualism, from keyboards struggling with completion to YouTube and Reddit forcing translations on users.
I don't care if they use an LLM to ask questions about grammar or whatever, as long as they write their own text after figuring out whatever it was they were struggling with.
I'm an English speaker with some Spanish education and practice. My experience is that reading, writing, listening, and speaking can be quite uneven. Uneven enough to matter.
In the long run, yes, learning a language is better, assuming your goal is to learn the language. I'm not trying to be snarky: sometimes people simply want to communicate an idea quickly in the short run and/or don't prioritize deepening a language skill.
I would rephrase the comment above as a question: "Given the set of tools available (in person tutoring, online tutoring, AI-tooling, etc) and what we know about learning from cognitive science, for a given budget and time investment, what combination of techniques work better and worse for deepening various language skills?"
We've all pasted news articles into 2022 Google Translate and a modern LLM, right, and there was no comparison? LLMs even crushed DeepL. Satya even had a little story his PR folks helped him with (j/k), via Wired, June '23:
---
STEVEN LEVY: "Was there a single eureka moment that led you to go all in?"
SATYA NADELLA: "It was that ability to code, which led to our creating Copilot. But the first time I saw what is now called GPT-4, in the summer of 2022, was a mind-blowing experience. There is one query I always sort of use as a reference. Machine translation has been with us for a long time, and it's achieved a lot of great benchmarks, but it doesn't have the subtlety of capturing deep meaning in poetry. Growing up in Hyderabad, India, I'd dreamt about being able to read Persian poetry—in particular the work of Rumi, which has been translated into Urdu and then into English. GPT-4 did it, in one shot. It was not just a machine translation, but something that preserved the sovereignty of poetry across two language boundaries. And that's pretty cool."
---
edit: this comment has some comparisons incl. w/the old Google Translate I'm referring to:
https://news.ycombinator.com/item?id=40243219
Today Google Translate is Gemini, though maybe that's not the "traditional translation tool" you were referencing... but hope there's enough here to discuss any aspect that might be interesting!
edit2: March 2025 comparison-
https://lokalise.com/blog/what-is-the-best-llm-for-translati...
"falling behind LLM-based solutions", "consistently outperformed by LLMs", "Not matching top LLMs"
Telling an LLM to "refine" your writing is just lazy and it doesn't help you learn to express yourself better. Asking it for various ways of conveying something, and picking one that suits you when writing a comment is OK in my book.
The way I see it, people will repeat the same grammar and pronunciation mistakes, and use restricted vocabulary their whole lives, just because learning requires effort, and they can't be bothered.
I can accept that nobody is perfect, as long as they have the will to improve.
To me those are the same thing, except for the number of options given to the human...
Then you should have no issue with people using LLMs to communicate more clearly.
My raw thought: I wonder how many people are really objecting to the loss of exclusivity of their status derived from their relative eloquence in internet forums. When everyone can effectively communicate their ideas, those who had the exclusive skill lose their advantage. Now their core ideas have to improve.
Same idea, LLM-assisted: I wonder how many objections to LLM-assisted writing really stem from protecting the status that comes with relative eloquence. When everyone can express their ideas clearly, those who relied on polished prose as a differentiator lose that edge. The conversation shifts to the quality of the underlying ideas — and not everyone wants that scrutiny.
Same ideas. Same person. One reads better. Which version do you actually object to?
AI polished writing shaves away all those weird and charming edges until it's just boring.
First, what "loophole" is the comment above referring to? Spell-checking and grammar checking? They seem both common and reasonable to me.
Second, I'm concerned the comment above is uncharitable. (The word 'loophole' is itself a strong tell of that.)
In my view, humanity is at its best when we leverage tools and technology to think better. Let's be careful what policies we put in place. If we insist comments have no "traces of LLM" we might inadvertently lower the quality of discussion.
Unfortunately (a) is more common, and the backlash against it has been removing the community incentive to provide (b).
But the "This is what ChatGPT said..." stuff feels almost like "Well I put it into a calculator and it said X." We can all trivially do that, so it really doesn't add anything to the conversation. And we never see the prompting, so any mistakes made in the prompting approach are hidden.
You don't possess an AI, you are using someone's AI
I'm reasonably sure the instance of Olmo 3.1 running locally on this very machine via ollama/Alpaca is very much in my possession, and not someone else's.
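As a minimal sketch of what "in my possession" means in practice, assuming the ollama daemon is serving a locally pulled model (the model tag below is a stand-in, since I don't know the exact local Olmo tag):

    import ollama  # pip install ollama; talks to the local daemon on localhost:11434

    # The weights live on this machine; once pulled, nothing in this
    # round-trip leaves the box.
    response = ollama.chat(
        model="olmo2",  # stand-in tag; substitute whatever model you've pulled
        messages=[{"role": "user", "content": "Whose AI are you?"}],
    )
    print(response.message.content)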
No? Then it's not "your" AI, it's an AI that you are using.
An alternative I tried was sharing links to my LLM prompts/responses. That failed badly.
I like the parallel with linking to a Google/DuckDuckGo search term which is useful when done judiciously.
Creating a good prompt takes intelligence, just as crafting good search keywords does (+operators).
I felt that the resulting downvotes reflected an antipathy towards LLMs and towards the perceived lack of taste in using an LLM.
The problem was that the messengers got shot (me and the LLM), even though the message of obscure facts was useful and interesting.
I've now noticed that the links to the published LLM results have rotted. It isn't a permanent record of the prompt or the response. Disclaimer: I avoid using AI, except for smarter search.
If we want a human "on the other end", we gotta get to ground truth. We're fighting a losing battle thinking that text-based forums can survive without some additional identity components.
Look at Reddit… an abundance of rules does not save that place at all. It’s all about curating what kind of people your site attracts. Reddit of course is a business, so they don’t care about anything other than maximizing ad views.
Small non-profit forums should consciously design the site to deter the group(s) of people they do not want.
I don’t think most people read any sort of TOS, site rules, or end-user license agreements. When was the last time you ever did?
Besides, sometimes it’s worth keeping a rule-breaking user if they are interesting and have worthwhile things to say despite their… theoretical conflict with the site’s intended use. Rules are too crude a tool. Especially in the case of AI, they are quite nebulous, even in a world where detection would be perfect (it isn’t).
What you want is to design a site that pulls in people who value genuine human interaction. Niche sites are already immune to commerce and adversarial bots because no one cares about or knows about them. Well, this site isn’t that niche, I guess; some corporate astroturfing happens.
I am on one niche subculture social media site, and it has a surprisingly well-made design that is carefully tuned to whom it caters and whom it dissuades. The result is a lack of AI text content, even though it isn’t obvious at first glance. LGBT flags are everywhere to dissuade the chuds. Israel flags are present to dissuade the annoying politics ppl from Reddit. Lots of artsy stuff to speak to genuine creativity.
It looks stupid but it isn’t stupid. It’s actually quite ingenious.
HN is probably already dead as it is too high profile in certain circles to avoid mainstream adversarial AI content.
Once LLM-generated speech or content starts getting into the live answers of Q&A sessions, that would be sad. I know some people try to use it to get through interviews, but I think that might be a bit harder to hide.
That's just marketing-speak. LLMs sound like that because LLMs were trained on marketing-speak.
Whether a company/business uses an LLM or a real human to write a particular piece of text, that piece of text is entitled to free speech protections on the basis of the company signing off on it. Not on the basis of how that piece of writing was produced.
That said, I believe that LLMs' "unique" writing style may be useful for protecting anonymity against stylometric attacks, although that still ought to be verified. If true, that would be a case where LLMism would be desirable to the author.
For instance, if a non-native speaker translates their own writing using machine translation or an AI, is that problematic—provided they personally review and vet the content before posting? I don't think the people calling out AI use on this board are taking issue with that. Ultimately, it’s not about the method; it’s about the author's attitude.
The reason LLMs are so disruptive now is that while "shitposts" used to be obvious, we're now seeing "plausible" low-effort content generated without any human oversight. Irresponsible people have always been around, but LLMs have given them the tools to scale that irresponsibility to an unprecedented level.
The biggest current social problem with AI content is our collective lack of transparency into how much human responsibility was taken.
Given a <100% reliable/accurate AI tool, the same post/code may have had {every line vetted by a human} or {no lines vetted by a human}... and readers have no way of telling which it is!
Even if no edits needed to be made, the former carries a lot more signal than the latter: it reduces the risk of AI slop and therefore makes the content more valuable.
At the same time, it also costs more time to produce, so in any competitive marketplace (YouTube, paid comments, startup code, etc.) the unvetted AI content will dominate.