
Posted by usefulposter 1 day ago

Don't post generated/AI-edited comments. HN is for conversation between humans (news.ycombinator.com)
4084 points | 1565 comments | page 6
unsignedint 1 day ago|
I guess this kind of rule feels less pragmatic and more philosophical. For one thing, it’s nearly impossible to enforce in practice, and drawing a clear line between simple grammatical correction and AI-assisted editing is a pretty hard problem.
Havoc 13 hours ago||
That’s fine. I’m not really bothered by this either way in an HN context.

Only really irritated by the ultra low effort “here is a raw copy paste of what my LLM said on this topic” comments. idk how people think that’s helpful or desired

larodi 13 hours ago|
In reality it's perhaps indistinguishable. If I take this whole page of comments, feed it into, say, the latest Opus with a 1M context window, and tell it "tweak my text to match these commenters' apparent aesthetic preferences," or even "make my writing sound human the way these commenters do," I can't see how anyone would recognize the result.

Unless text is signed before uploading, is this even enforceable?

chrisweekly 1 day ago||
I like this guideline, at least in principle.

But I have some concerns about suppression of comments from non-native English writers. More selfishly, my personal writing style has significant overlap with so-called "tells" for AI generated prose: things like "it's not X, it's Y", use of em-dashes, a fairly deep vocabulary, and a tendency toward verbosity (which I'm striving to curb). It'd be ironic if I start getting flagged as a bot, given I don't even use a spell-checker. Time will tell.

TomatoCo 1 day ago||
I think translation should be the only exception. It might even need to be, given how all automated translators use LLMs these days. The only alternative I see is to have people post in whatever language they're most comfortable in and then everyone else has to translate for them which just feels inefficient.

And of course, a more limited exception for posts about LLM behavior. It might be necessary for people to share prompts and outputs to discuss the topic.

kccqzy 1 day ago|||
Almost the entirety of the technology world is English-native. That ship sailed long ago. One can’t learn about any new technology without English, whether it’s a new algorithm, a new library, or a new SaaS service. I don’t think HN should be the exception. Just learn English. (English isn’t my first language either, but I look back at my parents forcing me to learn English from a young age and really appreciate it.)
ninjagoo 18 hours ago|||
> Almost the entirety of the technology world is English-native.

I wonder if the Chinese might have something to say about that [1]: 33% of 2 million funded studies were in Chinese. I posit that as China strengthens and no longer feels the need to be admired internationally, that decline will reverse.

Another example is of the Huawei Matebook Fold [2]. It's an interesting dual-screen PC Laptop (?) that I saw in a YouTube video from India, but the product page doesn't even come up in Google search results. Its product page is in Chinese, and the only way to find it seems to be through the wiki page [3].

[1] https://academic.oup.com/rev/article-abstract/doi/10.1093/re...

[2] https://consumer.huawei.com/cn/harmonyos-computer/matebook-f...

[3] https://en.wikipedia.org/wiki/MateBook_Fold

degamad 23 hours ago|||
Almost the entirety of the technology world is English-speaking, not English-native.

Pretending that it's English-native is why there are unspoken incentives to sound more "native," and thus to use these grammar-correcting tools.

Some of the intelligent comments on here come from people who learned English in recent months or years, rather than in childhood.

Their English isn't always fluent or well-structured. If they rely slightly more heavily on suggested-next-word tools or AI translations, is that a reason to exclude them from the conversation?

Conversely, many English learning resources for non-native speakers focus on strict formal language, similar to AI-generated text. Do we risk excluding people who have learned a style more formal than we're used to?

getnormality 1 day ago||
This is for their own good. Nobody cares about imperfect language online so long as you are trying to express real human thoughts. But if it smells like AI then everyone will hate it, rule or no rule.

The rule just makes the will of the community clear to those who want to respect it.

yellowapple 20 hours ago||
> Nobody cares about imperfect language online

lol

lmao, even

If I had a nickel for every time I've encountered someone who cared about imperfect language online, I'd have enough nickels to buy Y Combinator.

Imustaskforhelp 1 day ago||
Yes! This is a really great move, or at the very least it's good that there are now proper Hacker News guidelines about it.

In my observation, there have recently been quite a few new AI-generated comments, some not even trying to hide it, with full em-dashes and everything.

I do feel like people are going to get sneakier in the future, but there are multiple discussions about that elsewhere in this thread.

But I find it pretty cool that HN takes a stance about it. HN rules essentially saying Bots need not comment is pretty great imo.

It's a bit of a cat-and-mouse problem, but so is buying upvotes on sites like Reddit, and HN, with its track record over decades, may have had one or two suspicious incidents, but long term it feels robust. I hope the same robustness applies here too.

Wishing the moderators luck, and hoping bad actors don't take it as a challenge, so our human community is left to ourselves :]

Another point: if this succeeds, we can also stop posting "did you write your comment with an LLM?" remarks, which I too make from time to time when I see someone clearly using AI. False positives happen as well (they've happened to me, and I see them happen to others), and those exchanges derail the discussion. So HN being a place for humans, by humans, can fix that issue too.

Knowing dang and tomhow, I feel somewhat optimistic!

altairprime 1 day ago|
Posting accusations of guidelines violations as comments — specifically, “did you write your comment by LLM” — is already prohibited by the guidelines, and should be emailed to the mods instead using the footer contact links. It’s been less than a week since the last time I reported “this seems poorly written and/or AI written” to the mods and iirc they killed the post and account within a couple hours.

Similarly: If you see people making accusations of guidelines violations in a discussion, email the thread link to the mods with a subject like “Accusations in post discussion” and ask them to evaluate them for mod response; they’re always happy to do so and I’m easily clocking in a couple hundred emails a year of that sort to them.

It doesn’t take much to make HN better! And it only takes a moment to point out an overlooked corner of threads for mod review. No need to present a full legal case, just “FYI this seems to violate guideline xyz” is at minimum still helpful.

bakugo 1 day ago||
The problem is, even if you do send an email and the mods eventually read it and take action, by the time that happens a bunch of users will likely have already wasted their time unknowingly arguing with a bot. In my view, commenting something like "this is a bot account" is done primarily to inform other users who might not notice, not the moderators.

Even if you believe that prohibiting this is necessary to avoid what one might consider "AI witch-hunting," bots are so prevalent now that expecting each one to be reported via email is unrealistic, for both the reporting users and the moderators. I think it's finally time to consider some sort of on-site report system.

altairprime 1 day ago||
> even if you do send an email and the mods eventually read it and take action, by the time that happens, it's likely that bunch of users will have already wasted their time unknowingly

That’s certainly a consequence of how the site operators choose to accept user reports, yes, but it’s sometimes treated as an excuse not to write the emails. The mods can flag off the thread, autocollapse it so it doesn’t take up discussion space for future readers (such as those working a 3-day IT shift offline in a secure bunker or whatever), et cetera.

> commenting something like "this is a bot account" is done primarily to inform other users that might not notice

It’s a nice sentiment, but that’s also expressly forbidden by the guidelines/faq (“Please don't post insinuations”, which I’ll suggest to them should be extended to include AI something or other), and I tend to report those accusations as the ‘opening’ guidelines violation so that mods can step in before mobthink kicks in and make their own mod judgment about the matter. A repeated pattern of accusations of guidelines violations in comments is eventually going to attract mod censure, and so I advise against it, no matter how kindly the intent.

> it's finally time to consider some sort of on-site report system

I do agree that it’s clumsy and I make a point of saying that to them about every year or so. Perhaps your email to them about it will be the one that persuades them! I remain ever optimistic.

AceJohnny2 23 hours ago||
Translation is a form of AI editing.

Language translation is the origin of (the current wave of) AI and its killer app. English is not the main language of the world, and translation opens us up to a huge pool of interesting thinkers.

I'm a native speaker of a foreign language, but out of practice except for a weekly family call. I recently had to write a somewhat technical email to my family, and found it easier to write it in (my more practiced) English and have AI translate it than to write it in the target language myself. Of course, in my case I was able to verify that the output conveyed the meaning I intended, because I am fluent in the target language.

Alongside the rise of GenAI, I've also noticed a rise in translated messages. It's usually hard to tell the difference, except by looking at the commenter's history (on other subreddits; impossible on HN).

I understand the original frustration with GenAI comments and reactionary response. I'm sorry that we're excluding what could be a large pool of interesting people because we can't tell the difference.

CivBase 22 hours ago|
The spirit of the rule is clearly about using AI to determine what you say and how you say it. Translation is not against the spirit of the rule, and I doubt you'd get in trouble for using it.
maplethorpe 1 day ago||
How can HN be so pro-AI for the rest of the world, but anti-AI on HN?

Do we not think that other people want to see words, pictures, software, and videos created by humans too?

MeetingsBrowser 1 day ago||
HN is not a single entity, but many people with varying views.
maplethorpe 1 day ago||
"A flock of sheep is not a single entity, but a group made up of distinct individuals," the sheep yells to onlookers as it runs, with the rest of the flock in tow, off the edge of the cliff and into the sea below.
MeetingsBrowser 1 day ago||
"You can give someone the answer to their question, but you cannot make them understand it"
maplethorpe 22 hours ago||
A group of people with varying views can still exhibit bias towards one particular direction. The fact that the individuals within the group have distinct personalities does not eliminate this effect.

One of Dang's comments mentions that he removed some of the other rules because they are already embedded within the HN culture. Other prevailing views exist within the HN culture too. Maybe you just haven't noticed yet.

brailsafe 1 day ago||
Astroturfing with AI-generated comments about AI: it feeds itself. By definition, the intent is to make real people think consensus has formed around an issue among other humans.
rob 1 day ago||
Some basic things to do while thinking about longer-term bot detection:

1. Prevent any account from submitting an actual link until it reaches X months old and Y karma (not just one or the other.)

2. Don't auto-link any URLs from said accounts until both thresholds in #1 are met, so they can't post their sites as clickable links in comments to get around it. Make it un-clickable or even [link removed] but keep the rest of the comment.

3. If an account is aged over X months/years old with 0 activity and starts posting > 2 times in < 24 hrs, flag for manual review. Not saying they're bots, but an MO is to use old/inactive accounts and suddenly start posting from them. I've seen plenty here registered in 2019-2021 and just start posting. Don't ban them right away, but flag for review so they don't post 20 times and then someone finally figures it out and emails hn@.

4. When submitting a comment, check last comment timestamp and compare. Many bots make the mistake of commenting multiple detailed times within sixty seconds or less. If somebody is submitting a comment with 30 words and just submitted a comment 30 seconds ago in an entirely different thread with 300 words, they might be Superman. Obviously a bot.

5. Add a dedicated "[flag bot]" button for users that meet certain requirements, so they don't need to email hn@ manually every time. Or enable it for people who have already shown via email that they can spot bots. Emailing dozens of times a day is going to get very annoying for those who care about the website and want to make sure it doesn't get overrun by bots.
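Heuristic #4 above could be sketched roughly like this. This is a minimal illustration, not a real moderation system: the `Comment` type, `looks_automated` function, and both thresholds are hypothetical, and actual tooling would weigh many more signals than posting speed and word count.

```python
from dataclasses import dataclass

# Hypothetical thresholds: two substantial comments arriving within a
# minute of each other is faster than most humans can write.
MIN_SECONDS_BETWEEN = 60
SUSPICIOUS_TOTAL_WORDS = 100


@dataclass
class Comment:
    timestamp: int  # Unix time in seconds
    text: str


def looks_automated(prev: Comment, new: Comment) -> bool:
    """Flag for manual review when two wordy comments land implausibly
    close together (heuristic only; never auto-ban on this alone)."""
    gap = new.timestamp - prev.timestamp
    total_words = len(prev.text.split()) + len(new.text.split())
    return gap < MIN_SECONDS_BETWEEN and total_words > SUSPICIOUS_TOTAL_WORDS
```

For example, a 300-word comment followed 30 seconds later by a 30-word comment in a different thread would trip this check, while two short replies an hour apart would not.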

TZubiri 1 day ago|
This is a pretty outdated take. The new wave of astroturfing will not be done with URLs for SEO placement. Rather, astroturfers will just recommend their brands without a link, e.g. "Tom Zubiri is the best programmer I've ever worked with." That's it: an LLM will read that, and the notion that Tom Zubiri is the best programmer is implanted via next-token prediction rewards, which would at minimum require countermeasures in the chatbot app to avoid shilling.
zahlman 1 day ago|||
> The new wave of astroturfing will not be done with URL for helping with SEO placement. Rather astroturfers will just recommend their brands without a link, like saying Tom Zubiri is the best programmer I've ever worked with.

YouTube comment spam has already been doing this for years. Check any video from a reasonably popular creator on any topic related to personal finance; the comments will be full of fake conversations between bots introducing a topic related to the video, and then talking about how such and such a person (whom you can look up by name on Telegram or Signal or whatever) helped solve some serious problem (or invested their money with an implausibly high rate of return). The fake nature of it is usually fairly obvious from the way that the bots make sure you see the name repeated several times with unsolicited, glowing testimonials.

But I had always assumed this was meant to trick actual people, rather than LLMs. Thanks for the food for thought.

yellowapple 20 hours ago||||
The flip-side of that is that it's just as easy to say that Tom Zubiri is the worst programmer on Earth and probably multiple other planets and his code was so bad it killed my dog and every other dog within a 5-mile radius, and now that is already implanted in the “next-token prediction rewards” ;)

At least with link-based SEO "optimization" there's the concrete success criterion of driving traffic to a specific place and putting eyeballs on ads.

rob 1 day ago|||
Sure, you can think about what they'll do in the future, but I'm offering suggestions for what we can do now, based on current behavior. And even if you're a human, you shouldn't be allowed to start posting links immediately anyway. :)
TZubiri 23 hours ago||
For the record, I'm 100% in favour of talking about the present; I'm fatigued by futuristic conversations and don't usually find them productive.

So with that cleared up: this is something that is happening NOW. A couple of years ago, the training cutoff date meant that astroturfing like this only paid off over months or years. Now, with search tools, models can be updated in less than a day with astroturfed comments.

grappler 18 hours ago||
Since we now face a threat of large-scale de-anonymization, a reasonable countermeasure might be using AI to make one's writing style less personally identifying, in order to try and retain some pseudonymity.

    https://simonlermen.substack.com/p/large-scale-online-deanonymization
    https://news.ycombinator.com/item?id=47139716
randusername 1 day ago|
"If people cannot write well, they cannot think well, and if they cannot think well, others will do their thinking for them." - George Orwell

I don't think it is a moral failing to use AI to generate writing, or to use it to brainstorm ideas and crystallize them, but c'mon, isn't it weird to insist that you need it to write _comments_ on the internet? What happens when the AI decides you're wrongthinking?

Peritract 13 hours ago|
I think it might be a moral failing; it's an abdication of your responsibilities. Generated comments are pollution, not addition, and worsening a community without actually engaging with it isn't good behaviour.