Posted by usefulposter 23 hours ago

Don't post generated/AI-edited comments. HN is for conversation between humans (news.ycombinator.com)
4051 points | 1535 comments | page 4
bikamonki 23 hours ago|
My words:

This feels like don't buy at Walmart, support the local small shop. We passed the no return sign miles ago.

Gemini's:

This is like advocating for artisanal blacksmithing in the age of industrial steel. It sounds great in theory, but we passed the point of no return miles back.

Yeah, we can tell the difference :)

vova_hn2 40 minutes ago||
> We passed the no return sign miles ago

> we passed the point of no return miles back

Unrolling a metaphor into its literal meaning is one of the most annoying features of the "AI voice", IMO

GuinansEyebrows 22 hours ago|||
leave it to Gemini to dismiss artisanal craft when the community of discussion is primarily one of craftspeople :)
bondarchuk 23 hours ago||
All the weak excuses posted here are just making me lean more towards a hardline policy. No I don't want to read a human-generated summary of your llm brainstorming session. No I don't want to read human-written text with wording changes suggested by an llm. No I don't want to read an excerpt from llm output even if you correctly attribute it.

I acknowledge this is partly just my personal bias, in some cases really not fair, and unenforceable anyway, but someone relying on llms just makes me feel like they have... bad taste in information curation, or something, and I'd rather just not interact with them at all.

jmuguy 22 hours ago||
Beyond folks for whom English is a second language, I agree with you. I don't understand why people are immediately trying to find some loophole in this with spelling, grammar, etc checks. We just want to communicate with you, and if you sound like an idiot without the help of an LLM then maybe work on that rather than pretending to be Hemingway.
kace91 22 hours ago|||
>Beyond folks for whom English is a second language

I am one of those folks, and I’m strongly against AI writing for that use case as well.

The only reason I can communicate in English with some fluency is that I used it awkwardly on the internet for years. Don’t rob yourself of that learning process out of shyness, the AI crutch will make you progressively less capable.

jmuguy 22 hours ago|||
I hadn't really considered the case of actually wanting to learn English :) I just assume it's tolerated by the rest of the world.
Teever 22 hours ago|||
Maybe you have it backwards?

Why do you need to communicate in English with us native English speakers? Why don't we need to learn your language to communicate with you?

The way I'm looking at it is that you're putting all this effort towards learning how to communicate with people who would never without an outside pressure do the same for you.

If language learning is intrinsically a positive thing what can we do to encourage it in native speakers of English, specifically Americans who are monolingual (as they dominate this website)?

Imagine a scenario where Dang announced that we're only allowed to post in English one day a week -- every other day is dedicated to another language, like Spanish, Russian, or Mandarin, and the system auto-deleted posts that weren't in those languages. Would that be a good thing? Would we see American users start to learn Spanish to post on HN on Tuesdays?

kace91 21 hours ago||
Honestly, having a common language that offers access to most knowledge and people in the western world at once is already amazing. If it happens to be the native language of most Americans, all the better for them.

A century ago it was French or Latin, and a century from now it might be Mandarin or something else. The existence of a standard is what matters.

The only complaint I have about Americans and language is that most tech companies fail spectacularly at supporting multilingualism, from keyboards struggling with completion to YouTube and Reddit forcing translations on users.

Freak_NL 22 hours ago||||
Why exempt people who use English as a second language? Anyone with a level of proficiency sufficient for reading the comments here can manage writing English at a passable level. If that takes effort and requires looking up idioms or words, then good! That is how you learn a language — outsource that and you don't. It won't stick even if you see what is being output.

I don't care if they use an LLM to ask questions about grammar or whatever, as long as they write their own text after figuring out whatever it was they were struggling with.

xpe 16 hours ago||
> Anyone with a level of proficiency sufficient for reading the comments here can manage writing English at a passable level.

I'm an English speaker with some Spanish education and practice. My experience is that reading, writing, listening, and speaking can be quite uneven. Uneven enough to matter.

In the long-run, yes, learning a language is better, assuming your goal is to learn the language. I'm not trying to be snarky: sometimes people simply want to communicate an idea quickly in the short-run and/or don't prioritize deepening a language skill.

I would rephrase the comment above as a question: "Given the set of tools available (in person tutoring, online tutoring, AI-tooling, etc) and what we know about learning from cognitive science, for a given budget and time investment, what combination of techniques work better and worse for deepening various language skills?"

gbear605 22 hours ago||||
Traditional translation tools still work, and they're pretty darn good still.
yellowapple 17 hours ago|||
The ones that are “pretty darn good” are the ones that use the same underlying AI/ML tech as the average LLM, and would be in violation of this newly-formalized guideline.
Barbing 22 hours ago|||
I've seen this comment but can't square it with the LLM-induced outcry from translators over job loss.

We've all pasted news articles into 2022 Google Translate and a modern LLM, right, and there was no comparison? LLMs even crushed DeepL. Satya had this little story his PR folks helped him with (j/k) even, via Wired June '23:

---

STEVEN LEVY: "Was there a single eureka moment that led you to go all in?"

SATYA NADELLA: "It was that ability to code, which led to our creating Copilot. But the first time I saw what is now called GPT-4, in the summer of 2022, was a mind-blowing experience. There is one query I always sort of use as a reference. Machine translation has been with us for a long time, and it's achieved a lot of great benchmarks, but it doesn't have the subtlety of capturing deep meaning in poetry. Growing up in Hyderabad, India, I'd dreamt about being able to read Persian poetry—in particular the work of Rumi, which has been translated into Urdu and then into English. GPT-4 did it, in one shot. It was not just a machine translation, but something that preserved the sovereignty of poetry across two language boundaries. And that's pretty cool."

---

edit: this comment has some comparisons incl. w/the old Google Translate I'm referring to:

https://news.ycombinator.com/item?id=40243219

Today Google Translate is Gemini, though maybe that's not the "traditional translation tool" you were referencing... but hope there's enough here to discuss any aspect that might be interesting!

edit2: March 2025 comparison-

https://lokalise.com/blog/what-is-the-best-llm-for-translati...

"falling behind LLM-based solutions", "consistently outperformed by LLMs", "Not matching top LLMs"

kubb 22 hours ago||||
As someone who learned English as a second language, I would encourage people to use LLMs and any other resources to practice, and then use what they've learned to communicate with others.

Telling an LLM to "refine" your writing is just lazy and it doesn't help you learn to express yourself better. Asking it for various ways of conveying something, and picking one that suits you when writing a comment is OK in my book.

The way I see it, people will repeat the same grammar and pronunciation mistakes, and use restricted vocabulary their whole lives, just because learning requires effort, and they can't be bothered.

I can accept that nobody is perfect, as long as they have the will to improve.

happyopossum 22 hours ago||
>Telling an LLM to "refine" your writing is just lazy and it doesn't help you learn to express yourself better. Asking it for various ways of conveying something, and picking one that suits you when writing a comment is OK in my book.

To me those are the same thing excepting the number of options given to the human...

kubb 22 hours ago||
The act of choosing something requires effort, and is an expression of personal style. This is way better than handing it all over to the model.
nobrains 22 hours ago||||
Also, there is nothing wrong with looking like an idiot. That's only in your mind. As long as you have put thought into your reply, even if it is not structured correctly, or is verbose, or does not have perfect English, humans can still decipher it and understand it.
yellowapple 17 hours ago||||
> We just want to communicate with you

Then you should have no issue with people using LLMs to communicate more clearly.

briantakita 17 hours ago||
> Then you should have no issue with people using LLMs to communicate more clearly.

My raw thought: I wonder how many people are really objecting to the loss of exclusivity of their status derived from their relative eloquence in internet forums. When everyone can effectively communicate their ideas, those who had the exclusive skill lose their advantage. Now their core ideas have to improve.

Same idea, LLM-assisted: I wonder how many objections to LLM-assisted writing really stem from protecting the status that comes with relative eloquence. When everyone can express their ideas clearly, those who relied on polished prose as a differentiator lose that edge. The conversation shifts to the quality of the underlying ideas — and not everyone wants that scrutiny.

Same ideas. Same person. One reads better. Which version do you actually object to?

yellowapple 14 hours ago||
I don't object to either version. I think the LLM'd version is a little clearer; I also don't think I'd peg it as LLM'd if you hadn't marked it as such.
MengerSponge 22 hours ago||||
One heartbreaking loss from LLMs is the funny little disfluencies from ESL speakers. They're idiosyncratic and technically wrong, but they indicate a clear authorial voice.

AI polished writing shaves away all those weird and charming edges until it's just boring.

mrcsharp 22 hours ago||||
English is my 3rd language. I still disagree with using an LLM to write on one's behalf. I either get to read your thoughts in your voice or the comment is getting a downvote/flag.
xpe 22 hours ago|||
> I don't understand why people are immediately trying to find some loophole in this with spelling, grammar, etc checks.

First, what "loophole" is the comment above referring to? Spell-checking and grammar checking? They seem both common and reasonable to me.

Second, I'm concerned the comment above is uncharitable. (The word 'loophole' is itself a strong tell of that.)

In my view, humanity is at its best when we leverage tools and technology to think better. Let's be careful what policies we put in place. If we insist comments have no "traces of LLM" we might inadvertently lower the quality of discussion.

fouronnes3 22 hours ago|||
I feel you. I don't think I've ever finished reading a sentence that started with "I asked <LLM> and he said..."
unreal6 22 hours ago|||
I find the consistent anthropomorphization to be grating as well
minimaxir 22 hours ago||||
The "I asked <LLM>" disclosures vary between a) implying the LLM is an expert resource, which is bad, and b) disclosure that an LLM was referenced with the disclosure being transparent about it, which is typically good but more context dependent.

Unfortunately (a) is more common, and the backlash against it has been removing the community incentive to provide (b).

strbean 22 hours ago||||
These are the worst. I'm fine with you dumping your own half formed thoughts into an LLM, getting something reasonably structured out, and then rewriting that in your own voice, elaborating, etc.

But the "This is what ChatGPT said..." stuff feels almost like "Well I put it into a calculator and it said X." We can all trivially do that, so it really doesn't add anything to the conversation. And we never see the prompting, so any mistakes made in the prompting approach are hidden.

sumeno 20 hours ago||||
The only thing worse is "I asked my AI and he said"

You don't possess an AI, you are using someone's AI

yellowapple 17 hours ago||
> You don't possess an AI, you are using someone's AI

I'm reasonably sure the instance of Olmo 3.1 running locally on this very machine via ollama/Alpaca is very much in my possession, and not someone else's.

sumeno 5 hours ago||
Did you train it? Is it meaningfully different from every other instance of the same model?

No? Then it's not "your" AI, it's an AI that you are using.

dormento 22 hours ago||||
This is usually an "auto-skip" for me as well.
alkyon 22 hours ago||||
Still preferable to just pasting it without revealing the source. LLMs have become a brain prosthesis for some people which is incredibly sad.
throwaawy12390 22 hours ago||||
I work for a political party (not American) and the President is addicted to using ChatGPT for Facebook posts.
robocat 20 hours ago||||
> "I asked <LLM> and he said..."

An alternative I tried was sharing links to my LLM prompts/responses. That failed badly.

I like the parallel with linking to a Google/DuckDuckGo search term which is useful when done judiciously.

Creating a good prompt takes intelligence, just as crafting good search keywords does (+operators).

I felt that the resulting downvotes reflected an antipathy towards LLMs and the lack of taste of using an LLM.

The problem was that the messengers got shot (me and the LLM), even though the message of obscure facts was useful and interesting.

I've now noticed that the links to the published LLM results have rotted. It isn't a permanent record of the prompt or the response. Disclaimer: I avoid using AI, except for smarter search.

xpe 22 hours ago|||
My take is orthogonal. Overall, I've become less tolerant of bad-quality output from token-generators of all kinds (including people): tropes, bad reasoning, clunky writing, whatever. But I digress.

If we want human "on the other end" we gotta get to ground truth. We're fighting a losing battle thinking that text-based forums can survive without some additional identity components.

tavavex 22 hours ago|||
Not just bad taste. I have yet to see a post that attributes its text to an LLM ("I asked ChatGPT and here's what it said...") that doesn't come off as patronizing. "Hey, so I don't really have any knowledge or experience of my own with this topic, but here, let me ask an LLM for you. Here, read the output, since you apparently can't figure out how to ask it yourself. Read it. Aren't you interested in what my knowledge machine has to say? Why don't you treat it like how you'd treat me if I shared my own opinion?"
juleiie 22 hours ago|||
Look, you can make all the rules you want but in the end vibe check is the only way to have any sort of quality.

Look at Reddit… an abundance of rules does not save that place at all. It's all about curating what kind of people your site attracts. Reddit of course is a business, so they don't care about anything other than the max number of ad views.

Small non profit forums should consciously design a site to deter group(s) of people that they do not want.

jacquesm 22 hours ago|||
It's not about the rules. It is about intent. The rules are just there to alert newcomers and repeat offenders to the fact that they are in fact not operating according to the rules. That way there is something to point to. Then they can go 'oh, I didn't know that, sorry', and then it is all fine or they can do an 'orf'[1] and persist and then you throw them right out.

[1] https://news.ycombinator.com/item?id=47321736

gleenn 22 hours ago|||
I feel like you are being a bit contradictory: the suggestion is to dissuade AI content - isn't that "design[ing] a site to deter group(s) of people that they don't want"? I personally don't want to vibe check every HN comment if I can avoid it, I don't even think you can quantify that in any meaningful way. We can engender a site like that at least in spirit. It may be equally as difficult but it's still worth fighting for.
juleiie 21 hours ago||
Rules aren’t known to be a. Easily enforceable in case of AI b. Very dissuading

I don’t think most people read any sort of TOS, site rules, end license agreements, when was the last time you ever did?

Besides, sometimes it’s worth it to keep a rule breaking user if they are interesting and have worthwhile things to say despite their… theoretical conflict with the site intended use. Rules are too crude of a tool. Especially in case of AI they are quite nebulous even in a world where detection would be perfect (it isn’t).

What you want is to design a site that pulls people that value genuine human interaction. Niche sites are already immune to commerce and adversary bots because no one cares/knows about them. Well this site isn’t that niche I guess, some corporate astroturfing happens.

I am on one niche subculture social media site and it has a surprisingly well made design that is paramount to who it caters to and who it dissuades. The result is a lack of AI text content even though it isn't obvious at first glance. LGBT flags are everywhere to dissuade the chuds. Israel flags are present to dissuade the annoying politics ppl from Reddit. Lots of artsy stuff to speak to genuine creativity.

It looks stupid but it isn’t stupid. It’s actually quite ingenious.

HN is probably already dead as it is too high profile in certain circles to avoid mainstream adversarial AI content.

layman51 22 hours ago|||
I had a couple of experiences where I suspected I was hearing LLM-generated/edited text being read aloud. It was at two different webinars that were about roadmaps or case studies for some products that I use. It was a bit uncanny because I could detect the stylistic patterns ("It's not X, it's Y" and "No X, no Y, just Z"), but it was kind of jarring to hear them spoken by a person on a video call. It makes me think this kind of pattern might be engaging, but for a lot of people, it now sticks out for the wrong reasons.

Once LLM-generated speech or content starts getting into the live answers of Q&A sessions, that would be sad. I know some people try to use it to get through interviews, but I think that might be a bit harder to keep undetected.

yellowapple 17 hours ago||
> It was a bit uncanny because I could detect the stylistic patterns ("It's not X, it's Y" and "No X, no Y, just Z"),

That's just marketing-speak. LLMs sound like that because LLMs were trained on marketing-speak.

strangattractor 22 hours ago|||
According to Citizens United corporations have free speech. LLMs are made by corporations. Are LLMs entitled to free speech?
filoleg 22 hours ago||
To answer your question: LLMs don't have free speech, because they aren't companies/businesses, they are a tool (that is used by companies/businesses).

Whether a company/business uses an LLM or a real human to write a particular piece of text, that piece of text is entitled to free speech protections on the basis of the company signing off on it. Not on the basis of how that piece of writing was produced.

strangattractor 21 hours ago||
I appreciate the open-minded, thoughtful answer.
fluffybucktsnek 22 hours ago|||
Dare I say, it is mostly your bias. I get not wanting to read raw or poorly reviewed LLM slop, but AI-edited comments? I thought the point was about having interesting discussions about unique ideas we come up with, not the superficial wording around them. If someone manages to keep the core of their idea mostly intact while making the presentation more readable, does it really matter that it was post-processed by an AI?
dang 14 hours ago||
When you put the question that way, the answer is naturally no. However, there are other factors. I wrote about this here if you want to take a look: https://news.ycombinator.com/item?id=47342616.
fluffybucktsnek 1 hour ago||
The perspective of protecting user from flaming is interesting, but I agree with @edanm.

That said, I believe that LLMs' "unique" writing style may be useful for protecting anonymity against stylometric attacks, although that still ought to be checked. If true, that would be a case where LLMism would be desirable to the author.

resters 23 hours ago||
[flagged]
gleenn 22 hours ago|||
I think we can be a little more nuanced than calling this sentiment outright stupid. A top HN article is about scientific publications being overwhelmed with LLM trash. LLMs do pose a very real challenge to modern discourse. 10 years ago we could know that if we read something that sounded intelligible, at least some minimum effort had been put forth by a human to be coherent. That bar is now completely gone. Now all internet users have to become adept AI-sniffers to know if some random bot isn't wandering them off a mental cliff with perfect formatting and eloquent prose. Visceral reactions to that aren't unfounded, in my opinion. We've lost real signal, and having a forum like this be polluted will be a big casualty if we aren't careful and deliberate about our reaction to AI.
resters 22 hours ago||
I think it's similarly stupid when open source projects refuse to accept AI-generated code or pull requests. If the code is good, review it and accept it; if it's not, then don't. Same with HN comments. Reading is not such hard work that a literate person has to strain under the weight of AI-generated spam -- at least I haven't seen any concerning trends, and I read HN often.
SilentM68 22 hours ago|||
You're correct :)
GMoromisato 22 hours ago||
I'm here to read what actual humans think. If I wanted to read what an LLM thinks, I could just ask it.

But here's where it gets tricky: Do I prefer low-effort, off-the-top-of-my-head reactions, as long as it is human? Or do I want an insightful, well-thought-out response, even if it is LLM-enhanced?

Am I here to read authentic humans because I value authenticity for its own sake (like preferring Champagne instead of sparkling wine)? Or do I value authentic human output because I expect it to be of higher quality?

I confess that it is a little of both. But it wouldn't surprise me if someday LLM-enhanced output becomes sufficiently superior to average human output that the choice to stick with authentic human output will be more painful.

altairprime 22 hours ago||
> Do I prefer low-effort, off-the-top-of-my-head reactions, as long as it is human? Or do I want an insightful, well-thought-out response, even if it is LLM-enhanced?

This is an artificial dichotomy. HN’s guidelines specify thoughtful, curious discussion as a specific goal. One-off / pithy / sarcastic throwaway comments are generally unwelcome, however popular they are. Insightful responses can be three words, ten seconds to write and submit, and still be absolutely invaluable. Well-thought-out responses are also always appreciated, even if they tend to attract fewer upvotes than a generic rabble-rousing sentiment about DRM or GPL or Apple that’s been copy-pasted to the past hundred posts about that topic. But LLM-enhanced responses are not only unwelcome but now outright prohibited.

Better an HN with fewer words than an HN with more AI-written words. We've already been drowned in Show HN by quantity as proof of why.

GMoromisato 21 hours ago||
But what if it turns out that human+LLM can produce more "thoughtful, curious discussion" than human alone?

That's the dichotomy: Do we prefer text with the right "provenance" over higher quality text?

[Perhaps you'll say that human+LLM text will never be as high-quality as human alone. But I'm pretty sure we've seen that movie before and we know how it ends.]

That said, you're right that because human+LLM is so much more efficient, we'll be drowning in material--and the average quality might even go down, even if the absolute quantity of high-quality content goes up.

I think, in the long term, we will have to come up with more sophisticated criteria for posting rather than just "must be unenhanced human".

Avicebron 20 hours ago|||
I think "must be unenhanced human" is probably the most sophisticated criterion even if it's simple. I don't think there's much value in trying to optimize for the perfect "thoughtful, curious discussion" -- why would there be? It implies some ideal state of "thoughtful and curious" versus the reality that discussion between living, breathing people is interesting by default, as long as folks loosely follow some guidelines.
altairprime 20 hours ago||||
> what if it turns out that

HN need not offer itself up as a Petri dish for AI writing experimentation. There are startups in that space, and at least one must be YC-funded, statistically speaking. Come back with the outcomes of the experiment you describe and make a case that they should change the rule. Maybe they will! As of today, though, they are apparently unconvinced.

> the average quality might even go down

We have a recent concrete analysis of Show HN indicating support for this possibility, resulting in the mods banning new users for posting to Show HN (something they’ve probably been resisting for close to twenty years, I imagine, given how frequent a spam vector that must be).

> Perhaps you’ll say that human+LLM text will never be as high-quality as human alone

Please don’t put words in my mouth, insinuating the tone of my reply before I’ve made it, and then use that rhetorical device to introduce a flamebait tangent to discredit me. I’ve made no claims about future capabilities here and I’m not going to address this irrelevance further.

> in the long term, we will have to come up with more sophisticated criteria

Our current criteria seem sophisticated already. Perhaps you could make a case that AI-assisted writing helps avoid guideline violations. This one tends to be especially difficult for us all today:

”Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith. Eschew flamebait. Avoid generic tangents.”

GMoromisato 20 hours ago||
> Please don’t put words in my mouth, insinuating the tone of my reply before I’ve made it, and then use that rhetorical device to introduce a flamebait tangent to discredit me. I’ve made no claims about future capabilities here and I’m not going to address this irrelevance further.

I apologize--the "you" I meant was the person currently reading my post, not the person I was replying to. I was merely trying to answer a common objection that I've heard.

> HN need not offer itself up as a Petri dish for AI writing experimentation.

I'm not sure HN has a choice. I don't think we can prevent posters from experimenting with LLMs to post on HN--even if they adhere to the guidelines. For example, can I ask the LLM to come up with the strongest argument it can and then re-write it in my own words? That seems to be allowed by the guidelines. Would someone even be able to tell that's what I did? [NOTE: I did not do that.]

I think you're arguing that we should not encourage even more use of LLMs on HN. I get that. But I feel like that this community is uniquely qualified to search for better solutions.

> Our current criteria seem sophisticated already.

I hope you're right! That implies that you believe the current guidelines are sufficient to keep HN as the place we all love despite the assault from LLMs. I'm skeptical, but I've been wrong plenty of times!

altairprime 19 hours ago||
> I don't think we can prevent posters from experimenting with LLMs to post on HN

And yet, she persisted, we will still set guidelines; so that people know they’re unwelcome to do so when they do, so that they can’t argue that they didn’t know, so that we as a social club can strive towards the standards we argue about and accept from the organizers. The point of guidelines is not that they prevent malicious intent; the point is that they inhibit those behaviors that exceed the defined boundaries, however vague or precise they may be. Prevention of malice is an impossibility in all human social affairs, whether guidelines are defined or not; one must find other reasons for rules than prevention to understand why rules are at all.

GMoromisato 19 hours ago||
> And yet, she persisted, we will still set guidelines

I'm not sure if you're including or excluding me from the "we". If you're excluding me, then I feel our conversation has come to an end.

But if you're including me, then I think the guidelines need to evolve to deal with LLMs. Maybe not right now--maybe the current guidelines are sufficient for the next year or two or three. But I think we as a community are uniquely qualified to design and influence the future of internet social clubs in the face of LLMs.

altairprime 19 hours ago||
> I'm not sure if you're including or excluding me from the "we".

“We” here refers to individual human beings that are members of the human social-entity constructs (‘social clubs’) that precipitate naturally out of human groups, both in general to all such groups and in specific to the group under discussion here today, HN participants.

Whether or not you’re a member of “we” HN participants is conditional on whether or not you are honoring the policy of no AI-assisted writing at HN that is in effect as of whenever you saw this post or the new guidelines. I have no judgment to offer you in that regard, and in any case you’re readily able to decide that for yourself. Separately, I’m not engaging with discussion about future policy; perhaps you should start a top-level thread about it, or write a blog post and submit it (after a few days have passed, so it doesn’t get topic-duped and so that passions have cooled somewhat).

davebranton 20 hours ago|||
It doesn't matter.

The guidelines are perfectly clear, no matter the outcome of your thought experiment. Hacker News wants intelligent conversation between human beings, and that's the beginning and the end of it.

If you want LLM-enhanced conversation then I'm sure you will find places to have that desire met, and then some. Hacker News is not that place, and I pray that it will never become that place. In short, in answer to "Do we prefer text with the right 'provenance' over higher quality text?":

Yes. Yes, we do.

customguy 18 hours ago|||
> Do I prefer low-effort, off-the-top-of-my-head reactions, as long as it is human? Or do I want an insightful, well-thought-out response, even if it is LLM-enhanced?

For me it's the first one, every time. If only because LLMs don't learn from responses to them (much less so when the response is to a paste of their output). It's just not communication. From that perspective, the quality of even the most brilliant LLM output is zero, because it's (whatever high value) multiplied by zero.

Even a real person saying something really horrible and too dense to learn from any response at least gives me a signal about what humans exist. An LLM doesn't tell me anything, and if I wanted the reply of an LLM, I would simply feed my own posts into one. A human doing that "for me" is very creepy and, to my sensibilities, boundary-violating. Okay, that may be too strong a word, but it feels gross in a way I can't quite put my finger on, but reject wholeheartedly.

alpha_squared 22 hours ago|||
> Or do I want an insightful, well-thought-out response, even if it is LLM-enhanced?

I'd argue that anything insightful or well-thought-out doesn't use LLMs at all. We can quibble over whether discussions with an LLM lead to insightful responses, but that still isn't your own personal thought. Just type what's on your mind, it's not that hard, and nitpicking over this is just looking for ways to open up unnecessary opportunities for abuse.

rozal 22 hours ago|||
Often I think of a novel idea or solution to a problem, but use AI to communicate or adjust what I already wrote out so it's more comprehensible. Sometimes when I write, it's hard to understand.
davebranton 20 hours ago|||
The more you write, the less this will be true. The more you write, the better you will become at it. Using an LLM to write is like sending a robot to the gym for you.

The more you use an LLM to write for you, the worse you will become at writing yourself. There is simply no other possible outcome. It's even true of spellcheck - the more you use a spellcheck the worse you become at spelling. I know this for a fact because I can no longer spell for shit. However, spelling is to writing as arithmetic is to mathematics. I also can't add up, but I have a degree in pure mathematics.

LLMs are a cancer on human thought and expression.

briantakita 18 hours ago||
> LLMs are a cancer on human thought and expression.

LLMs help to express what many people don't have the energy or ability to express. They also have a broader-scoped view of protocol... and they do not have the emotions that often lead to less-than-optimal discourse.

In many ways, they help those who are challenged in discourse to better express themselves... rather than keeping silent or being misunderstood.

rozal 15 hours ago||
[dead]
jamiek88 19 hours ago||||
How do you expect to get better at it then if you avoid the hard work and emotional weight of fixing it?
yellowapple 18 hours ago||
So if you want to reply to a comment you read today, and you don't feel like your writing skill is up to snuff, you should be content with expecting to wait the requisite weeks or months or years of practice before even considering replying to it?

This seems especially relevant for non-English-fluent commenters, who are increasingly using LLMs to be able to communicate more effectively on an English-only site like Hacker News than they'd otherwise be able to do.

rukuu001 17 hours ago||
I've noticed a considerable drop-off in HN commenters who are unable to deal with the substance of a comment if it contains errors in spelling or grammar, so I don't think this is the issue it used to be.

It's still daunting posting in a second language, and LLMs are an attractive solution to that (depending on your definition of 'solution').

yellowapple 14 hours ago||
Is that an actual drop-off in commenters, or in comments? The latter is readily explainable by “commenters who would previously call out the errors now choose to not engage with those comments/posts at all”.

In any case, I don't think it's a bad thing to want to communicate as clearly as possible, and if an LLM helps you do that, I ain't one to judge. Sure, ideally I'd want to read folks' thoughts without the LLM-induced layer of vaseline smoothing them over, but even that's better than not reading them at all :)

sharken 21 hours ago|||
In that sense AI is a tool much like a dictionary: it enhances and, I'd say, improves the end result.
verdverm 19 hours ago||
The difference is that I will retain what I drew out of the dictionary the next time. If people use AI this way for writing, great! What many of the "enhanced-by-AI" arguments sound like is that this will be an indefinite outsourcing.

Use them to get better, like how reading good writing directly (not summarized) will also make you a much better writer. Learn from the before and after so that next time there isn't a need to reach for AI.

RhodesianHunter 22 hours ago|||
There are many obvious ways in which this may not be true.

Anyone learning the language and some people with learning disabilities, for example, may communicate better via an LLM.

bonoboTP 22 hours ago|||
There is a sliding scale from that, to it being the LLM that communicates, not the person. LLMs can really reshuffle and change priorities and modify emphasis in a text. All the missing pieces will be filled in and rounded out and sandpapered off by the inner-average-corporate-HR-Redditor of the LLM.
postalcoder 22 hours ago|||
I promise you, after this past year, you don’t know how happy I am to read issues and PRs in broken English.
bittercynic 22 hours ago|||
I like to read human comments because I'd like to know what my fellow humans think. I'd prefer not to read low-effort, throw away comments, but other than that I want to know what people think about different topics.
GMoromisato 20 hours ago||
I read HN both because I want to read what humans think, and because I want to read insightful discussion.

The tension is that as insightful discussion becomes easier/better with LLMs, there is less need to read HN. All I'm left with is provenance: reading because a human wrote it, not because it is uniquely insightful.

jmull 21 hours ago|||
If the goal is to read what actual humans think, it's hard to see how an LLM filter can do anything but obscure and degrade the content.

LLMs, as we know them, express things using the patterns they've been developed to prefer. There's a flattening, genericizing effect built in.

If there are people who find an LLM filter to be an enhancement, they can run everything through their favorite LLM themselves.

GMoromisato 20 hours ago||
I think it's a spectrum:

1. I enter "Describe the C++ language" into an LLM and post the response on HN. This is obviously useless--I might as well just talk to an LLM directly.

2. I enter "Why did Stroustrup allow diamond inheritance? What scenario was he trying to solve" then I distill the response into my own words so that it's relevant to the specific post. This may or may not be insightful, but it's hardly worse than consulting Google before posting.

3. I spend a week creating a test language with a different trade-off for multiple inheritance. Then I ask an LLM to summarize the unique features of the language into a couple of paragraphs, and I post that to HN. This could be a genuinely novel idea, and the fact that it is summarized by an LLM does not diminish the novelty.

My point is that human+LLM can sometimes be better than human alone, just as human+hammer, human+calculator, human+Wikipedia can be better than human alone. Using a tool doesn't guarantee better results, but claiming that LLMs never help seems silly at this point.

Avicebron 20 hours ago|||
> 3. I spend a week creating a test language with a different trade-off for multiple inheritance. Then I ask an LLM to summarize the unique features of the language into a couple of paragraphs, and I post that to HN

I think where you are getting hung up is the idea of "better results". We as a community don't need to strive for "better results" we can easily say, hey we just want HN to be between people, if you have the LLM generate this hypothetical test, just tell people in your own words. Maybe forcing yourself to go through that exercise is better in the long run for your own understanding.

GMoromisato 20 hours ago|||
My example was not great.

But my point is that I read HN partly because people here are insightful in a way I can't get in other places. If LLMs turn out to ultimately be just as insightful, then my incentive to read HN is reduced to just, "read what other people like me are thinking." That's not nothing, but I can get that by just talking with my friends.

Unless, of course, we could get human+LLM insightfulness in HN and then I'd get the best of both worlds.

xenophonf 19 hours ago|||
If someone can't explain something in their own words, then they don't _really_ understand it. The process of taking time to think through a topic and check one's understanding, even if only for oneself and the rubber duck, will reveal mistakes or points of confusion.
Avicebron 19 hours ago||
Which gets to the core of the issue nicely, I want to go on to HN and talk to people who know things or have thought about things to the degree that they don't need a cheat sheet off to the side to discuss them.
jmull 20 hours ago|||
How is it not better, in your third scenario, if you described what you think are the important and interesting aspects of your idea/demo?

And what motivated you to make it -- probably the most interesting thing to readers, and not something an LLM would know.

Believe me, I don't care what an LLM has to say about your thing. I care about what you have to say about your thing.

caconym_ 22 hours ago|||
What is the value of this "output"? If I want to know what LLMs think about something, I can go ask an LLM any question I want. For a comment on [a site like] HN, either the substantive content of the comment originated inside a human mind, or there is no substantive content that I couldn't reproduce by feeding the comment's context into an LLM. At the extreme, I don't have any interest in reading or participating in a conversation between a bunch of LLMs.
neutronicus 22 hours ago||
They’re referencing LLM-enhanced output.

The value proposition is that someone who is a lousy writer (perhaps only in English) with deep domain knowledge is going back and forth with the LLM to express some insight or communicate some information that the LLM would not produce on its own.

caconym_ 21 hours ago|||
> perhaps only in English

Wouldn't it work better to just write the thing in whatever language they can actually write in and then do a straightforward translation in a single pass?

> someone who is a lousy writer with deep domain knowledge going back and forth with the LLM to express some insight or communicate some information that the LLM would not produce on its own

This sounds reasonable on its face, but how often does it actually come up that somebody can't clearly express an idea in writing on their own but can somehow get an LLM to clearly express it by writing a series of prompts to the LLM?

And, if it does come up, why don't they just have that conversation with me, instead?

zajio1am 19 hours ago|||
> Wouldn't it work better to just write the thing in whatever language they can actually write in and then do a straightforward translation in a single pass?

Nontrivial translation tools are AI (neural net) based tools (although not necessarily LLMs). The whole transformer neural-net architecture was originally designed for translation.

caconym_ 19 hours ago||
I don't have a problem with people using these tools to translate their writing into languages they aren't fluent/literate in. It's a completely different dynamic vs. having them write for you.
neutronicus 9 hours ago|||
> And, if it does come up, why don't they just have that conversation with me, instead?

Because (the royal) you will be argumentative and shitty, and sour this person on their desire to communicate their knowledge at all.

caconym_ 53 minutes ago||
This also seems mostly made up. In decades of using the internet, I can't remember ever seeing someone trying to share deep domain knowledge and getting mocked/shouted down just because they had a language gap or otherwise weren't a great communicator. In spaces where substantive discussion happens, people generally seem willing to engage in good faith and help close that particular gap.
GMoromisato 19 hours ago|||
Exactly!

Just as Google-enhanced output and Wikipedia-enhanced output have helped my writing/thinking, I believe LLM-enhanced output also helps me.

Plus, I personally gain more benefit from using an LLM as a researcher than as a writer.

caconym_ 19 hours ago||
Using LLMs for research is completely different from using them to write for you. And if you're using them to write about the results of research, you're almost certainly getting a lot less value out of the whole exercise.
abtinf 22 hours ago|||
By this logic, you might consider vibe coding a browser plugin that takes any HN comment less than 50 words and auto-expands it into an “insightful, well thought-out response.”
zahlman 22 hours ago|||
Length is not insight. I understand this to be a community oriented towards people who are not impressed by such superficial things.
_se 21 hours ago||
That's the point :)
telotortium 19 hours ago|||
Delivered: https://github.com/telotortium/dotfiles/tree/27c11efd967eebc...
kelnos 21 hours ago|||
> Do I prefer low-effort, off-the-top-of-my-head reactions, as long as it is human? Or do I want an insightful, well-thought-out response, even if it is LLM-enhanced?

Neither. I want insightful, well-thought-out, human comments.

It's a little sad that this might be too much to ask sometimes...

munificent 17 hours ago|||
> But it wouldn't surprise me if someday LLM-enhanced output becomes sufficiently superior to average human output that the choice to stick with authentic human output will be more painful.

If your definition of "superior" includes some amount of "provides a meaningful connection to another living being", then LLM output will rarely be superior even when it's factually and grammatically correct.

jedahan 22 hours ago|||
I prefer low effort human thought to low effort llm output.
gkfasdfasdf 21 hours ago|||
> But here's where it gets tricky

Pretty sure this comment is AI

GMoromisato 19 hours ago||
Now I know how the Salem witches felt. How can I prove that it's not AI?
yellowapple 18 hours ago||
You can't. Nobody can. False positives are the inherent danger of these sorts of policies — especially when the LLMs were trained on the exact writing styles that have dominated online conversations and publications for decades.
amarble 21 hours ago|||
The point of a discussion site is to hear what other people think and get different perspectives. Just getting an LLM's insightful, well-thought-out response isn't really a big draw; if one is looking for that, there's a pretty obvious way to get it. I posted this the other day (ignore the title; I realized later it's too clickbaity), but this is why IMO LLMs won't replace the workforce: people aren't looking for answers to things, they're looking for other people's takes: https://news.ycombinator.com/item?id=47299988
Ensorceled 21 hours ago|||
> If I wanted to read what an LLM thinks, I could just ask it.

and

> Or do I want an insightful, well-thought-out response, even if it is LLM-enhanced?

What is the difference? What's the line between these two?

The prompt "Analyze <opinion> and respond" is pretty clearly "I would just ask it," and the prompt "here's my comment, please ONLY check the grammar and spelling" would probably be ok.

What about prompt:"I disagree with using LLMs for commenting at all for <reasons>. Please expound on this and provide references and examples". That would explode the word count for this site.

GMoromisato 19 hours ago||
What about:

1. "Here is my answer to a comment. Give me the strongest argument against it."

2. "I think xyz. What are some arguments for and against that I may not have thought of."

3. "Is it defensible for me to say that xyz happened because of abc?"

All of these would help me to think through an issue. Is there a difference between asking a friend the above vs. an LLM? Do we care about provenance or do we care about quality?

verdverm 19 hours ago||
The difference is in the journey to find the answer, rather than outsourcing it to man or machine. Spending more time reflecting before first post will often answer the easy questions so you can formulate more thoughtful questions.
js8 16 hours ago|||
I agree there is a dichotomy. I personally think AIs are better debaters than humans, at the very least in their ability to make fewer logical mistakes and in their wider knowledge. I would suggest everyone run their thoughts through an AI to get a constructive critique; it would certainly reduce a lot of wasted time.

And I find the decision to "ban" AI slightly ironic, when HN has a disdain (unlike its predecessor Slashdot) for funny or sarcastic comments, which require the reader to think more, rather than having a clear argument handed on a silver platter. I mean, it is what truly human communication is like - deliberately not always crystal clear.

I suspect that HN will eventually be replaced by an AI-moderated site, because it will have more quality content.

GMoromisato 15 hours ago||
There are huge advantages to AI-moderation. TBD what the unintended consequences are. But I think it's worth trying.

I believe banning AI is a temporary solution. Even today it is very hard to tell human from AI. In the future it will be impossible. We are in the Philip K. Dick future of "Do Androids Dream" (the book, not the movie). Does it matter if we can't tell human from AI? The book proposes that how we feel about the piece we're reading is the only thing that matters. How the piece got created is irrelevant.

js8 10 hours ago||
I think what would be nice (but won't happen until cost of AI somewhat decreases):

1. Pre-moderation - AI looks at your comment before you submit it, and suggests changes for clarity, factuality and argumentative strength. You can decide whether to accept these (individual) changes or not. It will also automatically flag if it breaks moderation guidelines too much.

2. Discussion summary - AI will periodically edit main debate points and supporting sources into a comprehensive document, which you can further add to with your comment. This will help to steer the discussion and make it easier to consume in the future. It can also make discussions less ephemeral, which is a huge problem.

bonoboTP 22 hours ago|||
Humans have more variability and "edge". If a person is passionately arguing for some point of view (perhaps somewhat outside the usual), it signals to me that they probably thought about this and it is a distillation of a long thought process and real-life experience. One could say that the logical argument should stand alone, but reality doesn't work that way. There are many things you have to implicitly trust and believe when you read. Of course lying and bullshitting already existed before ("nobody knows you're a dog" etc., etc.). But LLMs will really eloquently defend X, not-X, X*0.5 and anything in between. There is no information content in it; it doesn't refer to an actual human life experience and opinion that someone wants to stand behind. It just means that someone made the LLM output a thing.
unsui 21 hours ago|||
Gonna put out a blanket assertion about my preferences, to get a read on whether these are shared or not:

As humans, we have directives (genetic, cultural, societal, etc.) to prioritize humanistic endeavors (and output) above all else.

History has shown that humans are overwhelmingly chauvinistic in regards to their relationship to other animals in the animal kingdom, even to the point of structuring our moral/ethical/legal systems to prioritize human wellbeing over that of other animals (however correct/ethical that may ultimately be, e.g., given recent findings in animal cognition, such as recent attempts to outlaw boiling lobsters alive as per culinary tradition).

But it seems that some parties/actors are willing to subvert (i.e., are benefiting from subverting) this long-standing convention of prioritizing human interests in the face of AI (even to the point of the now-farcical quote by Sam Altman that humans take far more nurturing than LLMs...)

So: should we be neglecting our historical and genetic directives, to instead prioritize AI over human interests? Or should we be unashamedly anthropic (pun intended), even at the cost of creating arbitrary barriers (i.e., the equivalent of guilds) intended to protect human interests over those of AI actors?

I strongly recommend the latter, particularly if the disruptions to human-centric conventions/culture/output are indeed as significant (and catastrophic) as they will likely be if unchecked.

paganel 21 hours ago|||
> well-thought-out response, even if it is LLM-enhanced?

There's no insight nor well-thought-out response once a person decides to "LLM-enhance" their response. The only insight is that the person using the LLM is too limited to have a decent conversation with.

verdverm 19 hours ago|||
> But it wouldn't surprise me if someday LLM-enhanced output becomes sufficiently superior to average human output that the choice to stick with authentic human output will be more painful.

My ideal vision is that instead of outsourcing indefinitely, we learn from the enhanced versions and become better independent writers.

relaxing 22 hours ago|||
If you like reading LLM output, just talk directly to an LLM. Problem solved.
TacticalCoder 22 hours ago|||
> Am I here to read authentic humans because I value authenticity for its own sake (like preferring Champagne instead of sparkling wine)?

Mate, Champagne is a sparkling wine. In French you can even at times hear people asking for "un vin mousseux de Champagne" meaning "a sparkling wine from Champagne" instead of the short form (just saying "un Champagne" or "du Champagne").

Now, granted, not all sparkling wine is Champagne.

The Wikipedia entry begins with: "Champagne is a sparkling wine originated and produced in the Champagne wine region of France...".

I drank enough of it to be stating my case, of which I'm certain!

P.S: and btw, yup, authentic humans content only here, even if it's of "low quality". If I want LLM, I've got my LLMs.

sireat 21 hours ago||
Basically you have Crémant-type sparkling wines, which are produced in regions of France other than Champagne. It is just like Champagne, except that other French regions like the Loire, Alsace, Bordeaux etc. are not allowed to call it Champagne.

So just as Armagnacs are like Cognacs for a lower price, a good Crémant will be cheaper and more enjoyable than a cheap Champagne (I've not had any really expensive Champagne).

Then you have Cava from Spain, which follows a similar process to Crémants and Champagne. The difference would be in the type of grapes used. A friend of mine swears by Cavas just like I swear by Crémants from the Loire region. However, my wife hates Cava.

Then Proseccos from Italy again are similar, but quality varies more.

After that we get into more questionable, cheaper sparkling wines, which usually means some sort of out-of-bottle insertion of CO2; even worse versions include other modifications such as added sugar.

In general, to avoid literal headaches you want Bruts. Anything semi-sweet or sweet is suspicious.

Again, I am not a full wine expert, but this is mostly years of, ahem, experience.

browningstreet 22 hours ago||
I keep wishing for a public place to put a formatted version of my LLM threads. I have long conversations with LLMs that usually result in some kind of documentation, tutorial, or dataset. Many of them are relatively novel, but I haven't created a place for them yet.

And no, I wouldn't think an HN post is it either.. I'm just saying, there should be a good place to post the output of good questions asked iteratively.

vova_hn2 21 hours ago|||
Have you ever read someone else's conversation with an LLM?
abustamam 21 hours ago|||
Not the op but I barely even read my own conversations with an LLM. ChatGPT was always so verbose even when I told it to be succinct.

Claude is a bit better but still prone to rambling.

browningstreet 21 hours ago|||
I hinted at "formatted" and "good".. add the words "curated" or "edited".
vova_hn2 16 hours ago|||
Well, you haven't really answered the question.

I think that if you actually try reading someone else's conversation with LLM, you'll find out that it's less exciting than it seems.

For the one who has the conversation, the excitement comes mostly from the ability to steer it the way you want. The reader doesn't have this ability, so they are just forced to endure the excessive wordiness that is so typical of most LLMs.

If you learned something interesting, then why not express this knowledge in a normal article/blog post? What advantage does a conversation between you and an LLM have over just normal text or, perhaps, text with pictures, diagrams, maybe some interactive illustrations, etc.?

jamiek88 19 hours ago|||
Make a blog? Hardly a hard problem there mate.

If you can’t even be arsed doing that how much value is there, really?

Personally the only thing less interesting to me than someone else’s conversations with an LLM is hearing about someone else’s dream they had last night but you never know, some people may be interested.

browningstreet 19 hours ago||
Thanks for slagging.

But I was thinking less blog and more like an LLM research notebook, à la Jupyter. Jupyter for LLM prompts, outputs, refinements.

jamiek88 19 hours ago||
No slagging meant, sorry. Reading back, it does seem a bit like that; you are right.
verdverm 19 hours ago|||
Simon Willison published something for turning Claude Convos into something publishable. [1] I haven't tried it, so cannot speak to the ergonomics.

Where to post it? Any blog site, probably a good few Show HN too. Will anyone read it? I haven't read anyone else's, I'm more inclined to dock them reputation for suggesting I read their Ai session. Snippets of weird things shared on socials were interesting to me early on, but I'm over that now too.

[1] https://simonwillison.net/2025/Dec/25/claude-code-transcript...

Someone1234 23 hours ago||
"AI-edited comments" is a very interesting one. Where is the line between a spelling/grammar/tone checker like Grammarly, that at minimum use N-Grams behind the scenes, and something that is "AI" edited? What I am asking is, is "AI" in this context fully featured LLMs, or anything that improves communication via an automated system. I think many people have used these "advanced" spellcheckers for years before Chatgpt et al came on the scene.

I think "generated comments" is a pretty hard line in the sand, but "AI-edited" is anything but clear-cut.

PS - I think the idea behind these policies is positive and needed. I'm simply clarifying where it begins and ends.

dang 21 hours ago||
You're touching on an important point. More here: https://news.ycombinator.com/item?id=47342616.

All this stuff is in flux. I thought a lot about whether to add the "edited" bit - but it may change. What I deliberately left out was anything about the articles and projects that get submitted here. There's a lot of turbulence in that area too, but we don't yet have clarity, or even an inkling, of how to settle that one.

Edit: what I mean is this: while most of those submissions aren't very interesting, some really are. Here's an example from earlier today:

Show HN: Vanilla JavaScript refinery simulator built to explain job to my kids - https://news.ycombinator.com/item?id=47338091

How do we close the aperture for the lame stuff while opening wider for the good stuff? That is far from clear.

dataflow 19 hours ago|||
Do the guidelines also disallow comments along the lines of "according to <AI>, <blah>"? (I ask this given that "according to a Google search, <blah>" is allowed, AFAIK.)
BeetleB 18 hours ago|||
I would lean towards disallowing those. With "According to a Google search ...", someone can ask for specific links (and indeed, people often say to link to those sources to begin with instead of invoking Google). With "According to AI ... " - why would most readers care what the AI thinks? It's not a reliable source! You might as well say "According to a stranger I just met and don't know ..."

If you're going to say that the AI said X, Y, Z, provide a rationale on why it is relevant. If you merely found X, Y and Z compelling, feel free to talk about it without mentioning AI.

dataflow 16 hours ago||
For reference, the point here isn't to say "what AI thinks," but what you found with the help of AI. The majority of the cases where I would say "according to AI, <blah>" are where <blah> actually does cite sources that I feel appear plausible. Sometimes they're links; sometimes they would be other publications not necessarily a click away. Sometimes I could independently verify them by spending half an hour researching; sometimes I can't do that, but they still seem worthwhile.

> If you merely found X, Y and Z compelling, feel free to talk about it without mentioning AI.

I think you're seeing this as too black-and-white, and missing the heart of the issue.

The purpose of mentioning AI is to convey the level of (un)certainty as accurately as possible. The most accurate way to do that would often be to mention any use of AI, rather than hiding it.

If AI tells me that it believes X is true because of links A and B that it cites, and I find those links compelling, then I absolutely want to mention that AI gave me those links because I have no clue whether the model had any reason to bias itself toward those sources, or whether alternate links may have existed that stated otherwise.

Whereas if a normal web search just gives links that mention terms from my query, then I get a chance to see the other links too, and I end up being the one who actually compare the contents of the different pages and figure out which one is most convincing.

Depending on various factors, such as the nature of the question and the level of background knowledge I have on the topic myself, one of these can provide a more useful response than the other -- but only if I convey the uncertainty around it accurately.

BeetleB 16 hours ago||
> The majority of the cases where I would say "according to AI, <blah>" are where <blah> actually does cite sources that I feel appear plausible. Sometimes they're links; sometimes they would be other publications not necessarily a click away. Sometimes I could independently verify them by spending half an hour researching; sometimes I can't.

In my experience, LLMs hallucinate citations like crazy. Over 50% of the times I've checked, the citation either didn't exist, or it did but didn't support the LLM's assertions.

This is true not just from the chat, but for Google AI summaries.

When the references are more often wrong than not, you can understand why many will simply downvote you for bringing LLM citations into the conversation. Why quote a habitual liar?

(If you look at my other comments, I'm actually in favor of using LLMs in some capacity for HN comments. Just not in this case.)

dataflow 16 hours ago||
>> actually does cite sources that I feel appear plausible.

> In my experience, LLMs hallucinate citations like crazy. Over 50% of the times I've checked, the citation either didn't exist, or it did but didn't support the LLM's assertions.

Note that those are specifically not the cases where the AI is citing "sources that I feel appear plausible."

(I also don't find over 50% hallucination to be accurate for Google AI summaries in my experience, but that depends on your queries, and in any case, I digress...)

> When the references are more often wrong than not, you can understand why many will simply downvote you for bringing LLM citations into the conversation. Why quote a habitual liar?

To be clear, I do understand both sides of the argument, and I don't think either side is unreasonable. I've also had the experience of being on both sides of this myself, and I don't think there's a clear-cut answer. I'm just hoping to get clarity on what the new policy is as far as this goes. I'm sure it'll be reevaluated either way as time goes on.

BeetleB 4 hours ago||
> (I also don't find over 50% hallucination to be accurate for Google AI summaries in my experience, but that depends on your queries, and in any case, I digress...)

I should point out that I'm not saying 50% of the AI summaries have an error. Merely that the references it provides me don't state what the summary is claiming. The summary may still be accurate, while the references incorrect.

MetaWhirledPeas 17 hours ago||||
I don't have a problem with that. First off it's not very common. Second off it can add to a conversation, just as it can with in-person discussions. If you feel like it doesn't, don't upvote and don't reply. There's no value in pretending we're Woodward and Bernstein every time we leave a comment.
yellowapple 18 hours ago||||
I think those should be allowed iff the nature of being AI-generated is relevant to the topic of discussion — e.g. if we're talking about whether some model or other can accurately respond to some prompt and people feel inclined to try it themselves.
lossyalgo 18 hours ago||
I constantly read those comments and I personally have conflicting opinions about them. On one hand, it's interesting to compare what comes out of models, but on the other hand, LLMs are all non-deterministic, so results will be fairly random. On top of that, everybody has a different "skill" level when prompting. In addition, models are constantly changing, so "I asked ChatGPT and it said..." means nothing when there is a new version every few months, not to mention you can often pick one of 10+ flavors from every provider, and even those aren't guaranteed to remain unchanged under the hood over time.
crossroadsguy 17 hours ago||||
I'd rather ask AI to provide a source and then cite the source. But if the source itself is AI backed, then it's a bit different :)
dataflow 16 hours ago||
I explained this in a bit more depth in an adjacent reply (feel free to take a look) but obtaining the source from AI doesn't achieve the same thing. For example, there might be other links that contradict that source, which the AI wouldn't cite. Knowing that AI picked the "best" one vs. a human is incredibly relevant when assigning and weighing credibility.
snowwrestler 18 hours ago||||
Citations can be helpful. But AI summaries and Google searches are poor citations because they are not primary sources.
dang 14 hours ago||||
We don't want people copy-pasting in comments generally. Summary comments, onlyquote comments (i.e. consisting of a quote and nothing else), duplicate comments are other examples of this. It's not specific to LLMs.

However, that's probably not critical enough to formally add to the explicit guidelines, so it's probably fine to leave it in the "case law" realm—especially because downvoters tend to go after such comments.

dataflow 13 hours ago||
Great, thanks for clarifying.
dfxm12 17 hours ago|||
AI is not a source. A Google search result page is not a source. Hopefully, these things help you find a source. If you're posting something you feel the need to source, post the source along with your comment! For example, don't say "according to a Google search, x"... say something like "according to Microsoft's documentation, x" and provide a link to the Microsoft Learn page...
crossroadsguy 17 hours ago||||
I wasn't sure whether it was an intentional omission or an unintended gap, as the guideline specifically points to "comments". So it seems AI-generated/edited posts are fine. Strange, because both can be flagged/downvoted if it were left at that.
dang 14 hours ago||
I'm not saying they're all fine, I'm saying we don't yet have any idea of where to make a cut.

The comments thing is a lot more intimate in the sense that anyone posting comments is inside the house.

schappim 20 hours ago|||
Please rethink the “edited” bit on accessibility grounds.

I have a kid with severe written language issues, and the utilisation of speech to text with a LLM-powered edit has unlocked a whole world that was previously inaccessible.

I would hate to see a culture that discourages AI assistance.

dang 14 hours ago|||
That's totally legit and your kid, should they ever take an interest in Hacker News, is welcome here.

These rules are always fuzzy and there's always a long tail of exceptions. All the more so under turbulent conditions like right now. I wrote more about this elsewhere in the thread, in case it's useful: https://news.ycombinator.com/item?id=47342616.

davorak 19 hours ago||||
Are you up for sharing details?

> I would hate to see a culture that discourages AI assistance.

Mostly I think the pushback is about AI assistance in its current form. It can get in the way of communicating rather than assisting. The cost, though, is mostly borne by the readers and those not using the AI for assistance. I have seen this happen when the AI adds info and thoughts that were tangential to the original author's, and I think, but cannot verify, there have been times where an author seems to try to dig down on the details but seemingly cannot.

BeetleB 20 hours ago||||
Oh wow. I did not anticipate that, which is embarrassing given that I wrote this just recently:

https://news.ycombinator.com/item?id=47326351

Yes, please at least have a carveout for accessibility. I definitely have dictated HN comments in the past, and my flow uses LLMs to clean it up. It works, and is awesome when you're in pain.

happytoexplain 19 hours ago||||
Since it's mostly a good-faith rule to begin with, it seems easy to add something like, "unless you are using it as an assistive technology for accessibility reasons".
dang 14 hours ago||
Yes, and that's the case with all the rules. I don't want to say "you should break them when it makes sense" because if I do, someone will post "Tell HN: dang says break the rules". But the rules are there to serve the intended spirit of the site, not the other way around. If you're posting in that spirit, I would hope we would recognize and welcome that, not tut-tut it with rules.
pesfandiar 20 hours ago|||
Hear hear. And like many other aspects of accessibility, it will help a huge number of people who may not have any severe issues. e.g. non-native English speakers using LLM-powered edits.
jaysonelliot 23 hours ago|||
You should use your own words. It might seem that a tool like Grammarly is just an advanced spellcheck, but what it's really doing is replacing your personal style of writing with its own.

It's better to communicate as an individual, warts and all, than to replace your expression with a sanitized one just because it seems "better." Language is an incredibly nuanced thing, it's best for people's own thoughts to come through exactly as they have written them.

bruckie 22 hours ago|||
My elementary school kid came home yesterday and showed me a piece of writing that he was really proud of. It seemed more sophisticated than his typical writing (like, for example, it used the word "sophisticated"). He can be precocious and reads a ton, though, so it was still plausible that he wrote it.

I asked him some questions about the writing process to try to tease out what happened, and he said (seemingly credibly) that he hadn't copied it from anywhere or referenced anything. He also said he didn't use any AI tools. After further discussion, I found out that Google Docs Smart Compose (suggested-next-few-words feature) is enabled by default on his school-issued Chromebook, and he had been using it. The structure of the writing was all his, but he said he sometimes used the Smart Compose suggestions (and sometimes didn't). He liked a lot of the suggestions and pressed tab to accept them, which probably bumped up the word choice by several grade levels in some places.

So yeah, it can change the character of your writing, even if it's just relatively subtle nudges here or there.

edit: we suggested that he disable that feature to help him learn to write independently, and he happily agreed.

Terr_ 22 hours ago|||
To rationalize my gut-feelings on this, I think it comes down to the spectrum between:

1. A system that suggests words, the child learns the word, determines whether it matches their intent, and proceeds if they like the result.

2. A system that suggests words, and the child almost-blindly accepts them to get the task over with ASAP.

The end-results may look the same for any single short document, but in the long run... Well, I fear #2 is going to be way more common.

zahlman 21 hours ago|||
The analogy with tab-completion of code seems apt. At first you blindly accept something because it has at least as good a chance of working as what you would have typed. Then you start to pay attention, and critically evaluate suggestions. Then you quickly if not blindly accept most suggestions, because they're clearly what you would have written anyway (or close enough to not care).

The phenomenon was observed in religious philosophy over a millennium ago (https://terebess.hu/zen/qingyuan.html).

abustamam 21 hours ago||
Tab completion was so novel back when full e2e AI tooling was not really effective.

Now that it is, I just turn tab completion off totally when I write code by hand. It's almost never right.

skydhash 20 hours ago||
Emacs has completion (but you can bind it to tab). The nice thing is that you can change the algorithm to select what options come up. I've not set it to auto, but by the time I press the shortcut, it's either only one option or a small set.
bruckie 21 hours ago||||
From his description, it sounded like this was more of #1. He cared a lot about the topic he was writing about, and has high standards for himself, so it's very likely that he would have considered and rejected poor suggestions.

I have mixed feelings about it. On the one hand, you're right: carefully considering suggestions can be a learning opportunity. On the other hand, approval is easier than generation, and I suspect that without flexing the "come up with it from scratch" muscle frequently, his mind won't develop as much.

yellowapple 18 hours ago|||
#1 would be a net improvement over the status quo IMO. Seems like a great way for people to expand their vocabularies organically.
lossyalgo 18 hours ago||
That reminds me of one of the biggest (IMO) missing features of Wordle: they never give a definition of the word after the game is finished! I usually do end up googling words I don't know (which is quite often), but I'm guessing I'm one of the few who goes to the trouble. I've even written to The New York Times a couple of times to suggest adding a short definition at the end, as I honestly feel like a ton of people could totally up their vocabulary game, and it surely could be added with minimal effort (considering they even added a Discord multiplayer mode).
Terr_ 13 hours ago|||
Is Wordle really the best vehicle for that, though? I mean, it tends towards a subset of 5-letters words the audience is more likely to know in advance, excluding a lot of the more-surprising words.

A "click to see more about why this answer fits" crossword, on the other hand...

lossyalgo 9 hours ago||
How often have you played Wordle? I've played well over 1000 games, and at least 1/5th of those were words I had to look up. They seem to enjoy picking obscure words in order to make the game more challenging.
Terr_ 2 hours ago||
Perhaps the unusual outcomes are just more memorable, and so seem more frequent? Here's a representative sample of 30 that were used very recently.

    Shoal, Hasty, Lobby, Vogue, Gunky, 
    Sheep, Theft, Linen, Slime, Fluke, 
    Hydra, Dizzy, Lance, Shred, Buyer, 
    Attic, Guava, Awake, Stank, Hoist,
    Mogul, Squad, Roost, Skull, Bloom,
    Mooch, Surge, Vegan, Scene, Cello,
None of those stand out as "WTF does that even mean", but maybe I'm the weird one if we adjust for age-demographics or book-reading.

If I had to guess at a riskier 20%... Guava, a fruit some people may not have had; Gunky because it's slang; Mogul, Vogue, and Mooch were borrowed from other languages; Cello is something people may have heard more than read; Hoist.

lossyalgo 2 hours ago||
> Perhaps the unusual outcomes are just more memorable, and so seem more frequent?

That's a good point and could very well be true. I just know I've played plenty of games where I was mad that they didn't show the meaning. So let's say it's 5% for native speakers, and up to 20% for non-native speakers - that's still a golden opportunity to expand vocabularies. And honestly it can't be a lot of work to add a couple lines of static text. At worst it would be ignored, and at best it would help people learn more interesting words.

yellowapple 14 hours ago|||
That's a brilliant idea and now that you've mentioned it it seems like a rather glaring omission.
lossyalgo 9 hours ago||
Please write to the NY Times and suggest it! I still play and it still irks me when I have to go google a word.
comboy 22 hours ago||||
Oh how I despise these suggestions. You sometimes look for a way to express something and you are on the verge of giving the world something truly original, but as soon as your brain sees the suggestion it goes "oh yeah that fits"
SchemaLoad 21 hours ago|||
I disabled them immediately, it feels like the tech version of the ADHD person who keeps interrupting you with what they think you are trying to say. Even if the suggestion is correct, it saves you at most 2 seconds at the cost of interrupting you constantly.
Terr_ 22 hours ago||||
True! There's an important cybernetic aspect to all this, where an automatic suggestion can be an interruption, sometimes worse if the suggestion is decent.

A certain amount of friction is necessary, at least if the goal is to help the person learn or make something original.

lossyalgo 18 hours ago||||
I look forward to reading studies in 10 years how we all became stupider thanks to this "feature". One step closer to the movie Idiocracy.
TimTheTinker 22 hours ago||||
GK Chesterton would have something brilliant to say about the inauthenticity of it all or something.
jrockway 22 hours ago||||
I see the suggestions and then choose something different anyway. I don't want to use one of the top 3 most popular responses to an email from a friend. Even if it's something transactional.
JumpCrisscross 22 hours ago|||
> I despise these suggestions

As an adult, I do too. As a middle schooler, we absolutely used word processors’ thesaurus features to add big words to our essays because the teachers liked them.

Gibbon1 22 hours ago||
A friend of mine was an English teacher. She quit because she wasn't going to waste her time 'grading' 30 essays written by AI.

Anyway, before that she HATED the thesaurus. And she could tell when students were using it to make their writing more fancy pants.

zahlman 21 hours ago|||
One problem I see is that LLMs have a more nuanced... well, model of how words and their meanings relate to each other than a dead-tree thesaurus could ever present, what with its simplified "synonym" and "antonym" categories. Online versions try to give some similarity metrics, but don't get into the nuance. (It's not as if someone who takes either approach would want to spend the time reading and understanding that, anyway.)
JumpCrisscross 22 hours ago||||
> she could tell when students were using it to make their writing more fancy pants

I had two teachers who called us out on this, and actually coached us on our writing, and I remember them fondly. (They were also fans of in-class essaying.)

The others wanted to count big words.

tigen 16 hours ago|||
In-class essays impossible? Pencil to paper?
ma2kx 21 hours ago||||
As a non-native English speaker, my own words wouldn't be in English. If I express myself in English, I soon struggle for the right words. On the other hand, I think when I read some English text I'm quite capable of sensing the nuances. So it feels like when I auto-translate my text to English and then read it again and make some corrections, I can express my thoughts much better.
comboy 22 hours ago||||
My broken english now officially bumps my comments up instead of down. Sweet.
zahlman 21 hours ago||
For what it's worth, I had a quick look through your comment history and your English seems just fine to me as a native speaker (at least for informal communication).
ziml77 20 hours ago||
People who don't have English as their first language often seem to underestimate how good their English actually is. I wonder if it's because their reference point is formal English rather than the much more forgiving English we use in casual day-to-day conversation.
NewsaHackO 22 hours ago||||
>It's better to communicate as an individual, warts and all, than to replace your expression with a sanitized one just because it seems "better."

It is definitely not true that it is better for a poster to communicate like an individual when it comes to spelling and grammar. People ignore posts that have poor grammar or spelling mistakes, and communications with poor grammar are seen as unprofessional. Even I do it at a semi-subconscious level. The more difficult your post is to understand, or the more attention someone has to pay to understand it, the fewer people will be willing to put in that effort.

RevEng 16 hours ago||
Exactly. Tell that to whoever is grading your next paper, or reviewing your resume, or watching your presentation. People are judged by their linguistic ability even in cases where it shouldn't matter. It's a well known heuristic bias. It's no surprise that many of the people here denying it are themselves quite literate.
lamontcg 23 hours ago||||
Books and newspapers have had editors for centuries. It is just code review for the written word.

[It looks like MS Word 97 had the ability to detect passive voice as well, so we're talking 30-year-old technology that predates LLMs -- how far down the Butlerian Jihad are we going with this?]

MeetingsBrowser 22 hours ago||
Editors are mostly tasked with maintaining a consistent style and standard.

There is no need for that here beyond maybe spellcheck. Use your own thoughts, voice, and words.

lamontcg 22 hours ago||
I don't personally use AI/LLMs for any informal writing here or on reddit, etc. But I think it is pretty weird to be overly concerned around people, particularly ESL, who use tools to clean up their writing. The only thing I really care about is when someone posts LLM regurgitated information on topics they personally don't know anything about. If the information is coming from the human but the style and tone is being tweaked by a machine to make it more acceptable/receptive and fix the bugs in it, then I don't understand why you're telling me I need to care and gatekeeping it. It also is unlikely to be very detectable, and this thread seems to only serve a performative use for people to get offended about it.
pseudalopex 22 hours ago||
Other tools to clean up writing are allowed. They did not tell you you must care. You told them they must not. The submission's use was to tell you and others that LLM-generated tone is not more acceptable.
lamontcg 21 hours ago||
Well good luck detecting it.
davorak 19 hours ago||
If it never gets in the way of how humans communicate, it probably won't be an issue. That is my reading of the rule and dang's comments:

> HN is for conversation between humans.

If it is enhancing that instead of detracting and wasting people's time, it does not seem to be against the spirit of the rules.

yellowapple 18 hours ago||
Except the letter of the rule makes it verboten even "if it never gets in the way of how humans communicate".
davorak 17 hours ago||
> HN has always been a spirit-of-the-law place, and—contrary to the "technically correct is the best correct" mentality that many of us share—we consciously resist the temptation to make them too precise.

That is from dang's post in: https://news.ycombinator.com/item?id=47342616

That whole post is clarifying for the intent of the new rule(s).

yellowapple 14 hours ago||
The problem with “spirit-of-the-law” is that having rules be subject to discretion is a pretty clear avenue for discrimination and abuse. Not as big of a deal for an Internet forum as it would be for, say, a country's legal code and the enforcement thereof, but the lack of a clear standard for a rule makes that rule hard to follow and harder to enforce impartially.
davorak 12 hours ago||
The typical problem with trying to create clear standards with no spirit of the law is that the 1st, 2nd, etc. iterations of developing the clear standards never match the intentions. At least when trying to deal with something nuanced. It can get to the point that it takes more time and effort to follow the clear standards than to think through it fresh each time. The rules can also eat up time and effort to maintain and distract from the original purpose.

"Don't post generated comments or AI-edited comments."

What about non-native speakers? Can they no longer use translation software like Google Translate?

"Don't post generated comments or AI-edited comments, except for translating to english"

What about cases of disabilities?

"Don't post generated comments or AI-edited comments, except for translating to english and when used as assistive technologies."

Some translation tools and assistive technologies are still going to cause the same issues that we have right now, so maybe limit the technologies used:

"Don't post generated comments or AI-edited comments, except for translating to english and when used as assistive technologies. Technologies x, y, z are not allowed a and b and similar can be used for translation c and d as assistive technologies"

But we do not want to spend time/effort on filtering technologies and/or people into the above categories.

In the long run we likely will come up with technologies that most everyone is satisfied with using in different use cases, spelling grammar, assistive, maybe even tone, and others.

In the meantime we cannot let the perfect be the enemy of the good. If there are clear standards that achieve the goals, great; if not, we have to do something until everything shakes out.

lamontcg 1 hour ago||
This thread is literally doing nothing.

Nobody is going to stop using grammarly extensions to post to HN, nobody is going to be able to detect its usage.

This thread just lets a certain kind of people put on their best condescending hall-monitor voice and lecture other people about how they should behave.

And the rule is arguably less useful than speed limits and will be broken about as often (at least speed limits have a very real link to physical safety via kinetic energy).

mjg2 23 hours ago||||
I was just re-reading the passage from Plato's "The Phaedrus" on writing & the "art" of the letter for an essay I'm working on, and your remark is salient for this discussion on LLM-style AI and social media at large.
dbacar 22 hours ago||
RIP Robert M.Pirsig.
llbbdd 21 hours ago||
Oof, I haven't finished Zen yet. I didn't know he was gone. RIP
davebranton 20 hours ago||||
Precisely. As I wrote in my assessment of AI for my workplace:

"Your unique human voice is more valuable than a thousand prompt-driven LLM doggerels."

Aldipower 23 hours ago||||
That's true, but on the flip side I regularly get downvoted because my English is not the best, to say it mildly. So now I need to be really careful either to a) write in good English or b) not be recognised as an LLM-corrected version of my English. Where is the line? I shouldn't be downvoted for my English, I think, but that is the reality.

Edit: I already got downvoted. :-) Sure, no one can tell exactly why. Maybe the combination of bad English _and_ talking sh*ce isn't ideal at all. :-D Anyways, I have enough karma, so I can last quite a while..

ssl-3 23 hours ago|||
It goes both ways.

The quality of my writing varies (based on my mood as much as anything else, I suppose), but when it is particularly good and error-free then I often get accused of being a bot.

Which is absurd, since I don't use the bot for writing at all.

colpabar 22 hours ago||||
> I shouldn't be downvoted for my English I think, but that is the reality.

How do you know? Is it possible the downvoters just didn't like what you said?

phs318u 22 hours ago||
It’s possible of course but reading all the comments from various non-native English speakers here it seems like a common story. It may indicate a subliminal bias in readers (most of whom are presumably American).
yorwba 22 hours ago||
Note that those comments are written in perfectly understandable English. Further note how often you come across comments written in perfectly understandable English, but they're downvoted anyway.

It suggests a bias in writers to assume that people would agree with them if only they could express their thoughts accurately.

Teever 22 hours ago||||
But the problem is that people with poor written language / english skills are 'competing' with people who have superb skills in this domain.

There are people here who sit at a desk all day banging out multipage emails for work who decide to write posts of a similar linguistic calibre for funsies.

Meanwhile you have someone in a developing country who just got off a brutal twelve hour shift doing manual labour in the sun who wants to participate in the conversation with an insightful message that they bang-out on a shitty little cellphone onscreen keyboard while riding on bumpy public transit.

You could have a great idea, express it poorly, and be penalized for doing so here, while someone could have a blah idea expressed excellently and see it showered in replies, despite it being worse by some metrics (the ones I think are most important) than the other post.

What's the solution for that?

magicalist 22 hours ago|||
> What's the solution for that?

Remember that you're on a message board and you're not actually 'competing' for anything?

Teever 22 hours ago||
This is a perfect example of what I'm talking about.

I knew someone was going to comment on my use of the word there despite me putting it in quotes, which was intended to let the reader know that I meant that word as an approximation of what I meant.

When I say competing I mean competing in the space of ideas here. There is a ranking system here that raises or lowers the visibility and prominence of your comments, and it's based on upvotes by other users. For better or worse, people penalize comments with grammatical errors relative to ones without, and that affects how much exposure other users have to the ideas that people write and how much interaction they get from them.

If that's the case why would somebody who has good ideas but poor expressive capability bother posting here if their comments are just going to get ignored over relatively vapid comments that are grammatically correct?

davorak 19 hours ago|||
> If that's the case why would somebody who has good ideas but poor expressive capability bother posting here if their comments are just going to get ignored over relatively vapid comments that are grammatically correct?

The main problem is that AI consistently seems to make things worse. Take a look at the examples in dang's link in their comment: https://news.ycombinator.com/item?id=47342616

In the ones I read the AI editing is either hurting or needs to be much, much better to help.

NewsaHackO 21 hours ago|||
No, I get your point. Unfortunately, a lot of people here try to act high and mighty, like they are posting here for some altruistic reason. The reason why I, you, and everyone else posts here is the human reason that we want others to engage with our posts. In order to do that, you have to put your best foot forward, which includes making sure the spelling and grammar of your posts are correct. While I do not use an LLM for this, I think that it is valid to use these tools to make sure nothing gets in the way of whatever point you are trying to make.
Teever 21 hours ago||
> In order to do that, you have to put your best foot forward

In English. You have to put your best foot forward in English. And in your environment with the resources you have at your disposal.

For example, I'm currently engaging with you between steps in a chemistry process that's happening under the fume hood next to me, while wearing a respirator, a muggy plastic chemical-resistant gown, and disposable nitrile gloves.

I am absolutely certain that these conditions are different from the ones I would need to 'put my best foot forward' in this discussion. I'm also quite certain that you and I would both absolutely stumble if we were obligated to participate in this forum in a language that we're not proficient in, as many users often attempt to do and are unfairly penalized for by other members of the community.

I'm with you on the LLM usage for grammatical issues for non-native speakers. I bet more in this community would feel the same way if Dang whimsically mandated that people had to use a language other than English on certain days of the week.

fragmede 19 hours ago||
Oh shit that would be fun. Tuesday, we're going to do it in Mongolian, see how that goes.
12_throw_away 21 hours ago|||
> You could have a great idea and express it poorly and be penalized for doing so here while someone could have a blah idea expressed excellently and it's showered in replies despite being in some metrics (the ones I think are most important) worse than the other post.

I absolutely do not understand this comment. Are you saying that posting is competitive and that comments have "metrics"?

fragmede 19 hours ago||
Yes! If my comment is above yours in a thread, it means I got more upvotes than you did, which means I get special bonuses and more to eat and you go hungry in Internet land. Also it means I'm better than you (obviously) and I get to go to this secret club with all the pretty people and you're not invited. Isn't that how this all works?
fragmede 22 hours ago||||
I disagree. HN is going to bury my raw unedited tirade of a comment about those fucking morons that couldn't code their way out of a paper bag. If I send a comment to ChatGPT and open up the prompt with "this poster is a fucking dumbass, how do I tell them this" and use that to get to a well reasoned response because that's the tool we have available today, we're all better off.

The guidelines state:

> Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.

> Don't be curmudgeonly.

On the best of days I manage to follow the rules, but I'm only human. If I run my comment through ChatGPT to try and help me edit out swipes on the bad days, that's not ok?

I'm not using ChatGPT to generate comments, but I've got the -4 comments to show that my "thoughts exactly as they have written them" isn't a winning move.

zahlman 21 hours ago|||
If you see an incompetent coder and wish to communicate that the person responsible is a "fucking moron/dumbass", the tone with which you do so is not the problem. Tell us what is wrong with the code, as objectively as possible. That's what the guidelines are trying to convey.
yorwba 22 hours ago|||
The guidelines don't say anything about not posting something because an LLM told you that you shouldn't...
jjk166 21 hours ago||||
> It's better to communicate as an individual, warts and all, than to replace your expression with a sanitized one just because it seems "better." Language is an incredibly nuanced thing, it's best for people's own thoughts to come through exactly as they have written them.

This is the opposite of how language works. You want people to understand the idea you're trying to communicate, not fixate on the semantics of how you communicated. Language is like fashion - you only want to break the rules deliberately. If AI or an editor or whatever changes your writing to be more clear and correct, and you don't look at it and say "no, I chose that phrasing for a reason" then the editor's version is much more likely to be understood correctly by the recipient.

drusepth 23 hours ago|||
I'm not sure I agree with this. I don't really want to see someone else's stylistic "warts".

I just want clean, easy-to-read content, and I don't care about the person who wrote it. A tool like Grammarly is the difference between readable and unreadable (or understandable and incomprehensible) for many people.

timeinput 23 hours ago||
You could run the comments everyone else posts through an AI tool and ask it to rephrase it so that it is clean, and easy-to-read.

You could even write a plugin for your favorite web browser to do that to every site you visit.

It seems hard to achieve the inverse, that is (would you rather I use i.e.?), to rewrite this paragraph as the original author did before they had an AI rewrite it to make it clean (do you like Oxford commas, and em/en dashes? Just prompt your AI) and easier to read.

phs318u 22 hours ago|||
> You could run the comments everyone else posts through an AI tool and ask it to rephrase it so that it is clean, and easy-to-read.

For those coming from a language other than English, you are more likely to lose information by using a tool to “reconstruct” meaning from poorly phrased English as an input, as opposed to the poster using a tool to generate meaningful English from their (presumably) well-written native language.

kazinator 22 hours ago||||
> You could run the comments everyone else posts through an AI tool and ask it to rephrase it so that it is clean, and easy-to-read.

But that creates a private version of the text which the original poster didn't sign off on. You could have fixed something contrary to their intent.

tempestn 22 hours ago|||
There's a big difference between me running a filter on other people's words, and those people themselves choosing to run one and then approving the results.

I personally don't see a problem with someone using a grammar checker as long as they aren't just blindly accepting its suggestions. That said, if someone actually is using it in that way, it shouldn't be detectable anyway, so it probably doesn't matter all that much whether or not it's included in the letter of the rule.

Mordisquitos 23 hours ago|||
I think that the line between A"I" editing to fix grammar or to translate from a different native language and A"I" editing by using an LLM is one of those things that's very hard to unambiguously encode in written guidelines, but easy to intuitively understand using common sense, in the vein of I know it when I see it.

https://en.wikipedia.org/wiki/I_know_it_when_I_see_it

tsukikage 23 hours ago|||
> Where is the line between a spelling/grammar/tone checker like Grammarly

For me, the line is precisely at the point where a human has something they want to say. IMO - use the tools you need to say the thing you want to say; it's fine. The thing I, and many others here, object to is being asked to read reams of text that no-one could be bothered to write.

observationist 22 hours ago|||
On a technical level, you can really only guard against changing your semantics and voice - if you're letting software alter the meaning, or meanings, you intend, and use words you don't normally use, it's probably too far.

This is probably ok:

>> On a technical level, you can really only guard against software that changes your semantics or voice. If you're letting it alter the meaning (or meanings) you intend, or if it starts using words you would never normally use, then it's gone too far.

This is probably too far:

>>> On a technical level, it's important to recognize that the only robust guardrail we can realistically implement is one that prevents modifications to core semantics or authorial voice. If you're comfortable allowing the system to refine or rephrase the precise meanings you originally intended — or if it begins incorporating vocabulary that doesn't align with your typical linguistic patterns — then you've likely crossed a meaningful threshold where the output no longer fully represents your authentic intent.

Something to consider is that you can analyze your own stylometric patterns over a large collection of your writing, and distill that into a system of rules and patterns to follow which AI can readily handle. It is technically possible, albeit tedious, to clone your style such that it's indistinguishable from your actual human writing, and can even include spelling mistakes you've made before at a rate matching your actual writing.

AI editing is weird, though. Not seeing a need, unless English isn't your native language.
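For what it's worth, the stylometric profiling described above really is mechanical at its simplest: tally the frequencies of common function words, which authors use distinctively but rarely think about. A toy sketch in Python follows; the word list is an arbitrary illustration, not a vetted feature set from any real attribution tool.

```python
import math
import re
from collections import Counter

# Function words are a classic stylometric fingerprint: their rates
# vary by author but are hard to fake consciously.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is",
                  "it", "for", "but", "with", "not", "on", "i"]

def fingerprint(text: str) -> list[float]:
    """Relative frequency of each function word in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def similarity(a: str, b: str) -> float:
    """Cosine similarity between two texts' function-word profiles."""
    va, vb = fingerprint(a), fingerprint(b)
    dot = sum(x * y for x, y in zip(va, vb))
    na = math.sqrt(sum(x * x for x in va))
    nb = math.sqrt(sum(x * x for x in vb))
    return dot / (na * nb) if na and nb else 0.0
```

Real attribution systems add sentence-length distributions, punctuation habits, and character n-grams, but the principle is the same, which is why a style profile fed to an LLM can imitate you uncomfortably well.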

happytoexplain 23 hours ago|||
I think there's a pretty clear gap between editing for grammar/spelling and editing for tone.
RevEng 16 hours ago||
How so and why? I know plenty of people whose writing naturally carries a tone that they don't intend. I often help them to change their wording to be less confrontational or seemingly sarcastic when it isn't meant to be. Would you say it is wrong for them to get assistance to get the tone they intend rather than the one they would tend to write?
happytoexplain 6 hours ago||
It's the difference between correctness and tone/character/semantics (tone and character do affect semantics). We need to do things we don't quite mean in subjective spaces, to learn. Developing yourself is wonderful, but presenting a writing style that does not yet represent your learned tone feels disingenuous to the reader and harms the tone of the whole conversation. Using LLMs to iterate might help you learn, but use that tool privately, or with friends/family/mentors. With others, simply make your mistakes.

To be clear, I also think you shouldn't rely on auto-correction or LLMs for correctness (they are great for identifying your mistakes, but I think you should then fix the mistakes yourself, to develop your brain). It's just that "assisted" correctness isn't misleading/harmful in the way that "assisted" tone/character/semantics are.

jacquesm 23 hours ago|||
Trying to lawyer this is the wrong approach. When in doubt: don't.
Someone1234 22 hours ago||
That feels very uncharitable.

When a policy is introduced to seemingly guard against new problems, but happens to be inadvertently targeting preexisting and common technology, I don't feel like it is "lawyering" it to want clarity on that line.

For example, it could be argued this forbids all spellcheckers. I don't think that is the implied intent, but the spectrum is huge in the spellchecker space. From simple substitutions + rule-based grammar engines through to n-grams, edit-distance algorithms, statistical machine translation, and transformer-based NLP models.
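To make the low end of that spectrum concrete, here is a toy edit-distance spellchecker in Python; the `suggest` helper and its word list are illustrative inventions, not any real checker's API.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic edit distance: minimum number of single-character
    insertions, deletions, and substitutions turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def suggest(word: str, dictionary: list[str], max_dist: int = 2) -> list[str]:
    """Rank dictionary words by edit distance to a (possibly misspelled) word."""
    scored = sorted((levenshtein(word, w), w) for w in dictionary)
    return [w for d, w in scored if d <= max_dist]

print(suggest("grammer", ["grammar", "hammer", "gamer", "grimmer", "glamour"]))
# → ['grammar', 'grimmer', 'gamer', 'hammer']
```

Transformer-based checkers sit at the other extreme of the same spectrum; everything in between is a judgment call, which is exactly the point.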

unsignedint 22 hours ago|||
I think the only practical litmus test here is whether you can stand by the text as your own words. It’s not like we have someone looking over commenters’ shoulders as they type.

Ultimately, this comes down to people making a good-faith judgment about how much AI was involved, whether it was just minor grammatical fixes or something more substantial. The reality is that there isn’t really a shared consensus on exactly where that line should be drawn.

altairprime 22 hours ago|||
Grammarly use is outright prohibited by this; AI-edited writing is no longer writing that you hold personal and exclusive responsibility for having written. Consider Stephen Hawking’s voice box generator. While the sounds produced were machine-assisted, the writing was his alone. If you find yourself unable to participate in this web forum without paying a proofreader (in time, money, or cycles) to copy-edit your writing, then you’re not welcome on HN as a participant.
phs318u 22 hours ago||
> If you find yourself unable to participate in this web forum without paying a proofreader (in time, money, or cycles) to copy-edit your writing, then you’re not welcome on HN as a participant.

You forgot the /s ?

altairprime 22 hours ago||
It’s not sarcasm. If you feel I have misunderstood the intent of the guideline we’re discussing — “Don’t post generated/AI-edited comments”, as the title currently reads, then I’m happy to discuss further. (I often make logical negation errors that I miss in proofing, so it’s possible I slipped up, too!)
phs318u 22 hours ago||
I thought it was sarcasm given you are asking people to “pay a proofreader”. This sounds ludicrous. Could you clarify what you meant by that line if it’s not sarcasm? Because I’m having a hard time thinking that it’s meant to be taken at face value.
altairprime 21 hours ago||
No worries. The post I replied to was asking if use of ‘grammar improvement services’ (my paraphrase) qualified as AI-assisted writing at HN. All such services cost something; Grammarly makes a lot of money charging businesses, AI consumes watts of power that someone pays for, and even Microsoft Word’s grammar checker spins up the CPU fans on an old Intel laptop with a long enough document. I took from that the generic point that one “pays” for machine-assisted proofreading by one means or another, whether it’s trading personal data for services (Google) or watts of power for services (MSWord et al.) or donating writing samples to a for-profit training corpus (Grammarly free tier) or paying for evaluations where your data is not retained for training (Grammarly paid enterprise tier with a carefully-redlined service contract) and generalized to “pay for machine proofreading”.

Then, I considered whether HN would appreciate posts/comments by a human where they’d had a PR team or a hired editor come in and review/modify/distort their original words in order to make them more whatever. I think that this probably is most likely to have occurred on the HN jobs posts, and I’ve pointed out especially egregious instances to the mods over the years — but in general, the people who post on HN tend to do so from their own voice’s viewpoint, as reaffirmed by the no-AI-writing guideline above. So I decided instead to say “pay a proofreader” because, bluntly, if the community found out that someone was paying a wage to a worker to proofread their HN comments, the response would plausibly be the same mob of laughing mockery, disgusted outrage, and blatant dismissal that we see today towards AI writing here. “You hired someone to tone-edit your HN comments?!” is no different than “You used Grammarly to tone-edit your HN comments?!” to me, and so it passed the veracity test and I posted it.

czhu12 22 hours ago|||
I find it more refreshing these days to read text with broken grammar, incorrect use of pronouns, etc. Especially on HN, the human connection is more palpable. It’s rarely so bad that it’s not understandable.
glitch13 23 hours ago|||
I saw a similar conversation somewhere about some project saying they don't allow AI generated code.

It was asked: if "AI Generated Code" is just code suggested to you by a computer program, where does using the code that your IDE suggests in a dropdown fall? That's been around for decades. Is it LLM or "Gen AI" specific? If so, what specific aspect makes one use case good and one use case bad, and what exactly separates them?

It's one of those situations where it seems easy to point at examples and say "this one's good and this one's bad", but when you need to write policy you start drowning in minutia.

kazinator 22 hours ago|||
Projects cannot allow AI generated code if they require everything to have a clear author, with a copyright notice and license.

IDE code suggestions come from the database of information built about your code base, like what classes have what methods. Each such suggestion is a derived work of the thing being worked on.

RevEng 15 hours ago||
That is not correct because it hasn't been tested in court. In past decisions about who owns the output generated by a computer program the owner has been the operator of the program. You own your Word documents and Photoshopped images. There is good reason to believe that LLM output where you provided the prompt would also fit under that umbrella. We are still waiting for that to be tested in court.
sumeno 20 hours ago|||
Nobody is actually confused about what AI generated code means in those cases, they're just trying to be argumentative because they don't like the rules
raw_anon_1111 22 hours ago|||
There is no need to use any of it. Just use your own words.
RevEng 16 hours ago|||
I agree on the editing. We use these things all the time - chances are many of you are using it right now as you type on your phone and it checks your spelling for you.

By the same token, what if I have a human editor help me out? What if we go back and forth on how to write something, including spelling, grammar, tone, etc. For example, my wife occasionally asks me to review her messages before sending them because she thinks I speak well and wants to be understood correctly.

The problem is that we are punishing the technology, not the result. Whether it's a human or an LLM that acts as your editor should be irrelevant; what matters is that you are posting your own work and not someone else's. My wife having me write all of her messages for her would be just as dishonest as her having an LLM write all of her messages for her if she always presented them as her own writing. But if she writes the copy and I provide suggestions for changes, what's the harm in that? And why should it matter if it's a human or an LLM that provides that assistance?

ern 17 hours ago|||
I caught myself structuring a comment like an LLM on another site. It's expected that people who chat heavily to LLMs will start to mirror their styles.
thousand_nights 23 hours ago|||
i don't care if someone has bad grammar, i want to hear their thoughts as they came up with them, we're all intelligent beings and can parse the meaning behind what you write.

i type my comments without capitalization like i'm typing into some terminal because i'm lazy and people might hate it but i'm sure they prefer this to if i asked an LLM to rewrite what i type

your writing style is your personality, don't let a robot take it away from you

tempestn 22 hours ago||
I, on the other hand, find incorrect grammar mildly annoying, especially when it's due to laziness. It distracts from the thoughts being conveyed. I appreciate when people take the time to format comments as correctly as they're able.

In fact, I'd argue that lazy commenting is the real problem, which has now been supercharged by LLMs.

asadotzler 17 hours ago|||
ML-based word or phrase editing is hardly a problem, any more than pre-AI spellcheckers were. AI sentence and paragraph manufacturing is the problem, and everyone knows the difference between that slop and a spellchecker. No one cares if your editor does inline spellchecking or even word autocomplete; word-at-a-time spelling and phrase-level grammar checking are harmless. What people care about is slop.
skywhopper 23 hours ago|||
I don’t think it’s really necessary to play Captain Nitpick over spell-check or whatever. You know what is meant.
SecretDreams 23 hours ago||
Your comment is one of semantics. Worth discussing if we're talking a truly hard line rule rather than the spirit of the rule.

I benefit from my phone flagging spelling errors/typos for me. Maybe it uses AI or maybe it uses a simple dictionary for me. Maybe it might even catch a string of words when the conjunction isn't correct. That's all fair game, IMO. But it shouldn't be rewriting the sentence for me. And it shouldn't be automatically cleaning up my typos for me after I've hit "reply". That's on me.

theshrike79 22 hours ago||
I've written tens of thousands of lines of code, autogenerated documentation with LLMs and use AI Agents daily.

But when I argue on the internet, it's always 100% me.

And if I get a whiff of LLM-speak from whoever I'm wrestling in the mud with at the moment, they'll instantly get an entry in my plonk-file. I can talk with ChatGPT on my own thank you very much, I don't need a human in between.

"But my <language> is bad... that's why I use LLMs"

So was mine when I started arguing with strangers on the internet. It's better now. Now I can argue in 3 different languages, almost 4 =)

water-data-dude 21 hours ago||
I like "plonk file", it has a very good mouth feel. I not-googled it and was delighted to discover that it's Usenet slang!

Also low quality wine[0]

[0]https://en.wikipedia.org/wiki/Plonk_(wine)

lifthrasiir 10 hours ago||
> So was mine when I started arguing with strangers on the internet. It's better now.

That takes (much) time, though. It took me about a decade to get comfortable with that.

yunseo47 9 hours ago||
Whether it’s code, general text, or university assignments, the core issue is taking responsibility for one's own work. While I share the concerns raised in this thread, I believe the focus on 'LLM usage' is a bit of a red herring. The fundamental principle of ownership hasn't changed with the advent of LLMs; the tool itself isn't the issue, but rather the abdication of responsibility by the author.

For instance, if a non-native speaker translates their own writing using machine translation or an AI, is that problematic—provided they personally review and vet the content before posting? I don't think the people calling out AI use on this board are taking issue with that. Ultimately, it’s not about the method; it’s about the author's attitude.

The reason LLMs are so disruptive now is that while "shitposts" used to be obvious, we're now seeing "plausible" low-effort content generated without any human oversight. Irresponsible people have always been around, but LLMs have given them the tools to scale that irresponsibility to an unprecedented level.

yunseo47 9 hours ago||
I think a human-like piece with minor mistakes resonates more emotionally than a perfectly written piece that looks like it was written by AI. However, since there seems to be a grammar debate going on here, I'd like to add: Is it a bad thing for non-native speakers to use AI to correct grammar or awkward expressions? I think it definitely has positive aspects in terms of lowering language barriers.
ethbr1 9 hours ago||
> the tool itself isn't the issue, but rather the abdication of responsibility by the author

The biggest current social problem with AI content is our collective lack of transparency into how much human responsibility was taken.

Given a <100% reliable/accurate AI tool, the same post/code may have had {every line vetted by a human} or {no lines vetted by a human}... and readers have no way of telling which it is!

Because even if no edits needed to be made, the former carries a lot more signal than the latter, because it reduces risk of AI slop and therefore makes the content more valuable.

At the same time, it also costs more time to produce, so in any competitive marketplace (YouTube, paid comments, startup code, etc.) the unvetted AI content will dominate.

0xbadcafebee 22 hours ago||
I wish more people would filter their comments through AI. It has so many benefits. If you're being emotional, it can detect that and rewrite your comment to be less confrontational and more constructive. If you're positing a position out of ignorance or as an armchair expert, it can verify your claims before posting. Most of the mods' problems would be solved if every comment were filtered through the HN guidelines before posting.

AI is a tool. You can use it constructively, like Grammarly, or spellcheck. You don't need to be afraid of it.

darkwater 7 hours ago||
> If you're being emotional, it can detect that and rewrite your comment to be less confrontational and more constructive

Are you learning something in the process? Does it have your full emotional context, besides the full conversation context? There would probably be many bad side-effects if people actually started doing what you mention at scale.

One thing is computer code, which is an intermediate product to an end (instruct the computer what it needs to do) and another is YOUR direct output to some other human being, which is the end game in human-to-human communication.

salicaster 22 hours ago||
> If you're being emotional, it can...

It can't. It will rewrite anything you give it.

> it can verify your claims before posting

It can't.

> You don't need to be afraid of it

Nobody is afraid of it. It's annoying. General population cannot be trusted to use it in whatever idealistic way you are imagining.

sebringj 20 hours ago||
I do care about this too, but I say it given the reality we're in. This reminds me of those signs "no shirt, no shoes, no service", except it's much worse: only sentient beings will actually care about it, while non-sentients will simply trample over the sign while token-predicted laughter erupts from their token-predicted sense of humor artifact.

Elon said it well, there must be some disincentive to do this.

fidotron 23 hours ago||
The only question is whether the entity is interesting and/or correct. Those properties are in the eye of the beholder. Whether they're human or not is beside the point.

After all, no one knows I'm a dog.

LeifCarrotson 23 hours ago||
No, those properties are tied to the state of mind and experiences of the human, dog, or LLM behind any given comment.

When someone posts:

> You could use Redis for that, sure, I've run it and it wasn't as hard as some people seem to fear, but in hindsight I'd prefer some good hardware and a Postgres server: that can scale to several million daily users with your workload, and is much easier to design around at this stage of your site.

then the beholder is trusting not just the correctness of that one sentence but all of the experiences and insights from the author. You can't know whether that's good advice or not without being the author, and if that's posted by someone you trust it has value.

An LLM could be prompted to pretend they're an experienced DBA and to comment on a thread, and might produce that sentence, or if the temperature is a little different it might just say that you should start with Redis because then you don't have to redesign your whole business when Postgres won't scale anymore.

eikenberry 22 hours ago|||
> then the beholder is trusting not just the correctness of that one sentence but all of the experiences and insights from the author.

This implies they know the author and can trust them. If they don't know the author then there is no trust to break and they are only relying on the collective intelligence which could be reflected by the AI.

That is to say that trusting a known human author is very different from trusting any human author and trusting any human author is not that much different from trusting an AI.

yellowapple 17 hours ago||||
For all you know that LLM could've indeed actually run an actual Redis, given the increasing use of AI agents for digital infrastructure provisioning.
fidotron 22 hours ago|||
> then the beholder is trusting not just the correctness of that one sentence but all of the experiences and insights from the author.

This is my point.

There is no sane endgame here that doesn't end up with each user effectively declaring who they do and don't care to hear, and possibly transitively extend that relationship n steps into the graph. For example you might trust all humans vetted by the German government but distrust HN commenters.

For now HN and others are free to do as they will (and the current AI situation has been intolerable), however, I suspect in the near future governments will attempt to impose their own version of it on to ever less significant forums, and as a tech community we need to be thinking more clearly about where this goes before we lose all choice in the matter.

AlecSchueler 23 hours ago|||
> The only question is is the entity interesting and/or correct.

This already falls apart though. There are whole categories of things which I find "incorrect" and would take up as an argument with a fellow human. But trying to change the mind of an LLM just feels like a waste of my time.

throwaway2027 22 hours ago|||
>But trying to change the mind of an LLM just feels like a waste of my time.

It often is with humans as well.

AlecSchueler 22 hours ago||
Indeed it is, and there are often times I choose not to engage with my fellow humans. But the exceptions are valuable to me and to others. With an LLM I don't feel there would be any exception, that's the difference.
skeledrew 23 hours ago||||
Instead of wanting to change the mind of the other entity, how about focusing on coming to a mutual understanding of what is "correct"? That way it shouldn't matter much if said entity is human, LLM or dog. Unless you're just arguing to push your "correct" on other humans, with little care about their "correct".
AlecSchueler 22 hours ago||
It feels like you've loaded quite a lot in a way that seems unfair: "pushing" and "little care" etc. Maybe I should have used a term like "discuss" rather than the more loaded "argue."

Look, I'll give you a loose example: it's not uncommon to see a post making an "error" I know from experience. I might take the time to share what helped me get out of that mistaken line of thought, so someone can learn it more quickly. If it's an LLM, why would I care? There are thousands of other people, even other LLMs, that I could be talking to instead.

You've set up a framework here where "mutual understanding" is the end goal but that's just not always what's on the line.

yellowapple 17 hours ago|||
Arguing for the sake of convincing the other person is doomed to inevitable failure, even without the possibility of that person being an LLM.

Arguing for the sake of convincing onlookers reading the conversation is more likely to be effective, and in that case it doesn't matter if the other person is an LLM.

craftkiller 23 hours ago||
Not necessarily. Using AI you can trivially perform astroturfing campaigns to influence public perception. That doesn't really fall on the interesting or correctness spectrums. For example, if 90% of the comments online are claiming birds aren't real with a serious tone, you might convince people to fall into that delusion. It becomes "common knowledge" rather than a fringe theory. But if comments reflect reality then only a tiny portion of people have learned the truth about birds, so people will read those claims with more skepticism.

(naturally "birds aren't real" is a correct vs not correct thing, but the same can be applied to many less-objective things like the best mechanical keyboard or the morality of a war)

Nevermark 14 hours ago|
This is a wonderful rule.

It also points out the need for AI writing tools that very strictly just:

1. Point out misspellings and typos.

2. Point out grammar mistakes, if they confuse the point.

3. Point out weaknesses of argument, without injecting their own reasoning.

I.e. help "prompt" humans to improve their writing, without doing the improvement for them.

In fact, I would like a reliable version of that approach for many types of tasks where my creativity or thought processes are the point, and quality-control feedback (but not assistance) is helpful.

This is a mode where models could push humans to work harder, think deeper, without enabling us to slack off.

cobbzilla 14 hours ago|
I don’t want to read AI slop, but how do you feel about translations?

I don’t mind when non-native speakers use it to express themselves, especially if disclaimed (but I give a pass even if not). Does it bother you?

thezipcreator 14 hours ago|||
We've had machine translation for a while and I don't think anybody particularly thinks of it as a bad thing? Writing something and then having a machine directly translate it (possibly imperfectly) is a lot different than a machine writing the thing.

Personally I would like people to try learning other languages more (it's hard but rewarding) but you can't learn every language ever, and it is really hard to learn a language to fluency.

lifthrasiir 10 hours ago||
> We've had machine translation for a while and I don't think anybody particularly thinks of it as a bad thing?

Not all, but some machine translators can be comically (if not horrifically) bad sometimes. Search Twitter-become-X for examples. Native writers can't pick a working machine translator unless they are explicitly allowed to do so themselves.

Nevermark 14 hours ago|||
I think it makes perfect sense.

But that a site might still want to discourage it, to avoid general degradation. It is a tradeoff.

If someone can write in the target language, just not well, a model could be asked to point out problems for the writer to fix. Rewrite a difficult sentence.

cobbzilla 13 hours ago||
I suppose for me, it is the difference between a true “translation” and having an LLM reinterpret intent and state “its” words.

Ideally, I want the speaker’s words translated “verbatim” to English, to the extent possible.

More comments...