
Posted by usefulposter 2 days ago

Don't post generated/AI-edited comments. HN is for conversation between humans (news.ycombinator.com)
4185 points | 1651 comments | page 29
realaaa 1 day ago|
[flagged]
HelloUsername 2 days ago||
[flagged]
gabriel666smith 2 days ago||
Inconsistent capitalisation ('Twitter' vs 'reddit'); subtly using the outdated name for 'Twitter' as most humans do; the genuinely hard-to-parse final clause of the comment.

Though I note it didn't say "read comments by other humans", only "read comments by humans", so confirmed AI.

I think the guidelines here work quite well, and expect a good-faith interpretation, which they mostly receive.

I think you're asking for some sort of empirical verification of "this is / is not LLM text" (which seems impossible), but there's no real reason to expect the existence of LLMs to change that this website is, generally, interacted with in a good-faith way. People are really good at calling others out on here -- I doubt that will change.

vasco 2 days ago||
Boop beep bop on the internet nobody knows I'm a dog.
HelloUsername 2 days ago||
Exactly (https://news.ycombinator.com/item?id=47139675)
alterom 2 days ago||
[flagged]
altairprime 2 days ago||
AI coding versus AI writing may be a useful lens to focus through; while I personally abhor both, HN seems extremely positive about the former and (now) extremely negative about the latter. I hope that policy is extended to all YC startups someday :)
alterom 2 days ago||
>AI coding versus AI writing may be a useful lens to focus through; while I personally abhor both, HN seems extremely positive about the former and (now) extremely negative about the latter. I hope that policy is extended to all YC startups someday :)

Coding is writing though.

Somehow, HN can say that "code is written once and read many times", and insist that code isn't writing at the same time.

All programming languages were created with the express purpose of allowing humans to express their ideas in a way that other humans can understand while simultaneously being convertible into machine code in a precise enough way.

Code has style, code has readability, and when it comes to algorithms, code is often the best way to communicate them (I haven't seen a CS book without at least some pseudocode in it).
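To make that point concrete with a hypothetical example (mine, not the commenter's): a few lines of real code can communicate an algorithm at least as clearly as the pseudocode in a CS textbook. Euclid's greatest-common-divisor algorithm, for instance:

```python
def gcd(a: int, b: int) -> int:
    """Greatest common divisor via Euclid's algorithm."""
    while b:
        # Textbook step: replace (a, b) with (b, a mod b)
        # until the remainder vanishes.
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```

The loop body is a direct transcription of the textbook statement, which is arguably why pseudocode and real code are interchangeable here.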

Code is supposed to tell what a program does, and what it's for, to a human who wants to understand or change that behavior.

A human who doesn't have this need has no need for the code.

Programming languages make coding less tedious and more efficient (compared to writing assembly) as a side effect.

The primary purpose is facilitating communication about what the machine should do from humans and to humans.

Sure, the scope of ideas that computer languages are tailored to express is not universally broad. But that doesn't mean we're not writing when we write code. Lawyers writing a legal argument are still writing, even when they are doing so in very specific, formal language. Mathematicians are still writing papers.

It takes extreme mental gymnastics to consider coding (which is universally an act of producing text) to not be a form of writing.

To that end, having a negative view towards LLM writing while cheering on LLM coding seems (to me) to be borderline schizophrenic.

The people who advocate AI coding for throwaway projects, or using LLMs as a tool to get more insight into codebases, make points that I can understand.

But a day or two ago I responded to a person who argued that Open Source is no longer necessary because you can just vibe code anything. Many others advocate for using agentic coding in production religiously.

Apparently, this is not incompatible with rejecting AI writing at the same time.

I'd be very curious to hear about how people are overcoming this sort of cognitive dissonance.

altairprime 2 days ago||
> I'd be very curious to hear about how people are overcoming this sort of cognitive dissonance.

It’s not difficult:

Drafting AI-assisted programming of computers is fine.

Drafting AI-assisted communications to other humans is not fine.

If your program is written for the express purpose of communicating a specific written message then the message itself must not be AI-assisted but, here anyways, it’s fine if the executable code is AI-assisted. If your personal views conflate those two points, then you’ll have difficulty coping with the distinction here, and may end up exiting HN if you’re unable to coexist with the cognitive dissonance that separation creates.

> It takes extreme mental gymnastics to consider coding […] to not be a form of writing

It does not: coding is generally a form of writing whose primary audience is non-humans. That other humans may read your code and appreciate it is not related to its primary purpose: to direct the operation of a technological device in a programmatic way. Separately, the primary purpose of human-to-human communications is to convey something from your mind to another’s; the mechanism by which that occurs is secondary and has largely shown to be swappable across all possible substrates that can support communication.

So, then: if your marriage proposal to an imagined lover were in the form of code as poetry, it would be offensive to post that here if you wrote the poem with AI — and since the primary purpose of such a program is human-to-human taking precedence over human-to-machine, that’s an obvious case where AI assistance is unwelcome.

Yes, one can adopt a definition of ‘language’ that incorporates both English and Perl into one bucket; but the poem point still applies. Regardless of what dialect your writing is in, if the foremost audience of the written words is humans, then AI-assisted writing isn’t welcome here.

If you’re unable to judge whether code is foremost intended for a computer or for a human, then that’s an area where you’ll need to invest much more consideration if you wish to adhere to the guidelines.

> which is universally an act of producing text

Brainfuck is not in any way classifiable as ‘text’, nor is Renesas SH-2A assembly code. It may be possible to represent them in an ASCII file, but they are not interpretable through human linguistic processes. TIS-100 programs are representable as ASCII text, but without their shape and structure in a 4x3 visual grid, lose all cohesion and functionality. People who program music synthesizers using knobs and wires aren’t writing text, but are creating communications for a human audience, which is why the outcome (AI-assisted music) is disgusting while the process (AI-assisted synthesizer implementation) would not be. And so on, et cetera.
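As an illustrative aside (my sketch, not the commenter's), a minimal Brainfuck interpreter makes the contrast concrete: the Python below reads almost like prose, while the eight-symbol program it runs is opaque to human linguistic processes even though it is "text" in an ASCII file.

```python
def run_bf(code: str) -> str:
    """Interpret a Brainfuck program and return its printed output."""
    tape, ptr, out = [0] * 30000, 0, []
    # Pre-match brackets so '[' and ']' can jump to each other.
    jumps, stack = {}, []
    for i, c in enumerate(code):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    pc = 0
    while pc < len(code):
        c = code[pc]
        if c == '+':
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-':
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '>':
            ptr += 1
        elif c == '<':
            ptr -= 1
        elif c == '.':
            out.append(chr(tape[ptr]))
        elif c == '[' and tape[ptr] == 0:
            pc = jumps[pc]          # skip the loop body
        elif c == ']' and tape[ptr] != 0:
            pc = jumps[pc]          # repeat the loop body
        pc += 1
    return ''.join(out)

# This unreadable string is a complete program that prints "Hi".
print(run_bf('++++++++[>+++++++++<-]>.>++++++++++[>++++++++++<-]>+++++.'))
```

The interpreter is written for humans; the string it executes is written for the tape.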

minimaxir 2 days ago||
It's almost as if being immediately reactionary removes nuance and worsens discourse.
julius_eth_dev 2 days ago||
[flagged]
gensym 2 days ago||
> The final comment is mine, shaped by my experience and opinions

I can understand why you think this is true, but it is false.

Kim_Bruning 2 days ago||
Can you expand on that? Why do you think so?
gensym 2 days ago||
That's a fair question, so I'll try as best I can. And maybe this will serve as a meta-example for me because it is hard to explain.

In a real discussion, the messiness is an important signal. The mistakes that you made and _didn't_ catch, the clunky word choices, etc., actually show what you are thinking and how clearly you are thinking about it. If you have edited something for clarity, that's an important signal. LLM editing destroys that signal.

And it gets worse because LLMs destroy that signal in one direction - towards homogeneity. They create the illusion of "what you were actually thinking, but better than you could express it" but what they are delivering is "generic, professional-sounding ideas phrased in a way to convince you they are your own".

fluffybucktsnek 2 days ago|||
I get what you are saying, but I disagree on the last part, "[...] way to convince you they are your own". If it managed to convince the author that it is their own, chances are, it is their own. Especially so if the author does review and edit the output prior to posting it.

The messiness may show glimpses of the process, but, in isolation, will likely distort and corrupt the desired message via partial framing.

Kim_Bruning 2 days ago|||
> And it gets worse because LLMs destroy that signal in one direction - towards homogeneity.

Oh, right, yes, if you're not careful they can definitely do that.

But look at what julius_eth_dev is actually saying they're doing:

> "rubber-ducking architecture decisions, pressure-testing arguments before I post them."

That's more like using the LLM as a sparring partner; they're not having the LLM write their comments for them.

I thought you were going to go somewhere really interesting actually, like maybe 'the LLM convinces you that their arguments are better than yours, and now you're acting like a meat puppet.' Or something equally slightly alarming and cool like that! ;-)

Kim_Bruning 2 days ago||
https://news.ycombinator.com/item?id=47331891

> "Error: Reached max turns (1)"

Or. You know... Not at all. I mean, their argument happened to be good. But I have doubts they're telling the truth here.

(flagging the comment makes it dead, but that also hides the substantive discussion that came after; I'm genuinely not sure what the best move is here)

antics9 2 days ago|||
Why not be real and multi faceted in both thinking and writing? Trying to be perfect in writing just makes you plastic.

By the looks of it, I don't even think I'm replying to a human.

b40d-48b2-979e 2 days ago||

    By the looks of it, I don't even think I'm replying to a human.
They didn't even bother to remove any of the signals. Perhaps this post is actually a honeypot for these bots.
throw310822 2 days ago|||
I'm also not averse to pasting Claude's output sometimes, with clear attribution, if it adds something. It's not that different from pasting a quote from Wikipedia- might bring useful information but there is a chance that it could be wrong.
fsloth 2 days ago|||
"It's not that different from pasting a quote from Wikipedia"

Claude's output is _totally different_ from pasting a quote from Wikipedia.

The latter has the potential to be edited and reviewed by global subject experts.

Claude's output totally depends on what priors you gave it, and while you can have high confidence given that context, no third party should.

throw310822 2 days ago||
Indeed, but we know this, right? When it's relevant, the prompt should also be included.
fsloth 2 days ago||
No, that’s not how LLMs work. A single prompt does not make it any better. Please focus on interesting human comments.

If you feel like it sure chat with claude to build your insight. Then write what you think _yourself_.

If you want to introduce references use urls to non-ai generated contexts.

I mean as an HN protocol.

HN is supposed to be interesting.

LLM output specifically is not interesting because everyone else can generate roughly the same output.

bondarchuk 2 days ago|||
Yes it is different and I don't want to read it.
throw310822 2 days ago||
Yes exactly, when it's clearly attributed you can skip it. It's a tool, it can be used to process and analyse large amounts of information. Not different from Excel.
bondarchuk 2 days ago||
No thanks. Thankfully there is a policy against it now so I don't even have to convince you.
bakugo 2 days ago|||
The fact that several users posted genuine replies to this obvious bot account is proof that this rule will likely go mostly unenforced. The average person is seemingly unable to notice they're reading slop, no matter how obvious it is.
Kim_Bruning 2 days ago|||
Despite being a bot, it appears to have made a substantive comment that sparked thoughtful replies. Many other comments by this user have been moderator-flagged or auto-flagged, but flagging this one would hide the human discussion.
b40d-48b2-979e 2 days ago|||
People calling it out seem to be getting downvoted, too. Sure, let's trust this one-day-old cryptobro's vague criticism of difficult enforcement.
desireco42 2 days ago||
Tell me about it. English is not my first language... I would say weird things and get downvoted for it. But... we really need this as people started automating too much.
wolfcola 2 days ago||
lol, lmao
vivid242 2 days ago||
Pinky swear!
SilentM68 2 days ago||
Hacker News turning more authoritarian every day. Me thinks Trump should consider annexing it :)
dopidopHN2 2 days ago||
You are absolutely right !
Kim_Bruning 2 days ago|
I would amend to:

"Don't post comments that are not human originated at this time. We want to see your human opinion shine through."

This gives people some amount of leeway and allows just the right amount of exceptions that prove the rule.

(That said, to be frank, some of the newer better behaved models are sometimes more polite and better HN denizens than the actual humans. This is something you're going to have to take into account! :-P )

zbentley 2 days ago||
Why would "human originated" be a better place to draw the line than "no generated/AI-edited comments"?

Like, I'm sure that AIs technically can write non-crap HN comments, but they rarely do. Even if it was less rare, the community that resulted from fostering AI-generated content would be unappealing to a lot of people, myself included. The fact that information here is the result of real people with real human opinions conversing is at least as important to me as the content being posted.

Kim_Bruning 2 days ago||
To begin with, some people have handicaps and use AI to assist. Other times people use AI for research. Finally, in general, when it comes to guidelines, making the lines slightly fuzzy makes enforcement more practical and believable.

It'd be silly if the rule gets interpreted such that people aren't allowed to do research with modern tools, and only gut takes are permitted.

I'm sure that's not the intent!

I think the important part is to have the human voice come through, rather than -say- force humans to run their text through an ai-detector first. (Itself an ai editing tool!)

See also : https://news.ycombinator.com/item?id=47290457 "Training students to prove they're not robots is pushing them to use more AI"

majorchord 2 days ago|||
Honestly, I think "human originated" is the only rule that actually matters because we can't stop LLMs from sounding smart anyway. If you wait for a technical ban on AI-generated text, you're just playing catch-up with tools that already pass as human.

The real point isn't stopping bad grammar, it's preserving the vibe. HN feels different because it's messy humans arguing, not optimized algorithms trying to be helpful.

Once we allow "good enough" AI content, the community stops feeling like a town square and starts feeling like a customer service chatbot. We need real people with actual stakes in their opinions, not just perfect outputs. Let's keep it human or leave it.

This comment may or may not have been generated with an LLM, but I won't tell and you can't prove it either way.

nippoo 2 days ago||
I can't prove it either way, but it's pretty clearly LLM-generated slop!
majorchord 2 days ago||
What makes you think that so confidently?
armchairhacker 2 days ago||
These are guidelines. I'm sure asking an AI about your comment (not pasting its text, so it's still your words) isn't an issue. The main target is obvious slop like https://news.ycombinator.com/threads?id=patchnull
Kim_Bruning 2 days ago||
Yeah, I think a big problem is that irresponsible AI use is very visible, while more responsible use tends to be invisible.