
Posted by usefulposter 1 day ago

Don't post generated/AI-edited comments. HN is for conversation between humans (news.ycombinator.com)
4164 points | 1635 comments
CrzyLngPwd 1 day ago|
How will this be policed?
tomhow 1 day ago|
Same as all the other guidelines. Moderators look at the threads and act on what we see. We also look at lists of flagged comments, and emails sent to hn@ycombinator.com by community members. One-off offending comments are flagged+killed, and a warning given. Repeat offenders/obvious bots are banned.
zekenie 1 day ago||
You’re absolutely right!
tedggh 1 day ago||
If a comment sucks it gets downvoted anyway. If it’s thoughtful, the drafting tool and process is kind of beside the point.

Plenty of people already use search engines, editors, translators, etc. when writing. An LLM is just another tool in that box.

The practical approach is the one HN has always used: judge the content.

Btw, this was co-written with ChatGPT. Does that make any difference to anyone?

J/K, actually it was not co-written by ChatGPT.

Or maybe it was…

minimaxir 1 day ago|
The blatantly LLM-generated comments do get downvoted/flagged; it's just still noise.
robotswantdata 1 day ago||
Welcome change, there is enough AI slop on the internet already.

I come here for thoughtful discussion, a break from the relentlessly growing proportion of AI slop emails I get from people who are clearly vibe working.

Not edits for tone or clarity, but 400+ word emails full of LLM BS that they clearly haven't checked or even understood before sending. Annoyingly, this vibe slop is currently seen as a good KPI.

alansaber 1 day ago||
Reddit is absolutely infested with AI-generated comments. Good to see a site taking a stance against it. That being said, my main gripe on HN isn't the comments; it's the volume of shitty AI-generated submissions.
kittikitti 1 day ago||
An important distinction that I feel is often left out of conversations about regulating AI-generated content is the psychological effect of negative versus positive reinforcement.

I think we are overwhelmingly relying on negative reinforcement for AI-generated content: there are consequences for engaging in the behavior. Positive reinforcement, on the other hand, would encourage authenticity and more human content. The reality is that AI-generated content won't go away, and it's become a game of who can hide their artificial content best. Thus, I believe positive reinforcement is the solution.

I think we must instead encourage human-created content rather than policing AI generation. There are so many rules to follow already that by the time I create the content, I've gone through enough if/then logic that it feels like AI anyway.

nunez 1 day ago||
I hate how easy AI has made outsourcing thinking. You can literally type fragments of a thought into $CHAT_ASSISTANT and get a super polished response back that gets you 99% of the way there. It's almost like we, collectively, looked at the final scene of WALL-E and decided "Yes! Gimme that!"
skeeter2020 1 day ago|
Is this true for you? How often do you get 99% of a complete, valuable thought?

My experience is that it is quite rare. Occasionally high 90's for simple things of low value, 60's or less for things that approximate "thinking". At best it feels like a new search channel that amalgamates data better, and hasn't been thoroughly polluted by ads and SEO - yet.

wellpast 1 day ago||
One way to potentially discourage or curb AI-edited/written comments is to integrate AI into HN itself, so that your submissions get recommendations based on the HN posting guidelines, such as "consider tone," "substance," etc.

Then there's less motivation to jump out to an external LLM just to get feedback on your comment, which can temptingly lead to editing/generation.

insin 1 day ago||
Am I imagining things, or has HN become even more noticeably overrun with green usernames spewing LLM-generated comments since this guideline was added? Spiteclaws?
Bender 1 day ago|
At some point, might internet text just be recognized as meaningless drivel by both bots and humans? a.k.a. dead internet theory... I am curious which organizations would benefit from this. i.e., who lost legitimacy when the internet became a popular way for people to communicate ideas?