
Posted by usefulposter 1 day ago

Don't post generated/AI-edited comments. HN is for conversation between humans (news.ycombinator.com)
4118 points | 1612 comments | page 12
aicoldtrail 23 hours ago|
I don't think I'm going to spend the time paraphrasing my worthwhile AI-assisted work to satisfy such hypocritical rules.

So we should develop, fund, and use AI, but manually paraphrase the output and not cite AI?

It is best to cite a source and/or a method.

Do you think it is better to paraphrase and not cite AI?

I don't recall encountering posts on HN that I've wanted to flag as AI.

aicoldtrail 16 hours ago||
> It is best to cite a source and/or a method

Have you considered this?

If people do not cite their sources or methods when they use AI, then we will not know where error was introduced by paraphrasing AI.

westurner 16 hours ago||
Everyone that uses a search engine (with or without an "AI mode") is using AI and LLMs and software built and tested with AI.

If they say "no" to "did you use AI?", they're probably mistaken and/or lying.

But you may not cite or quote or link to AI-generated work?

> If people do not cite their sources or methods when they use AI, then we will not know where error was introduced by paraphrasing AI.

aicoldtrail 16 hours ago||
I think you have rallied hatred for AI to falsely justify a need for censorship. If HN takes a "hate and hunt" stance on AI, I will not contribute to HN.
aicoldtrail 15 hours ago||
Here are alternate possible rules for this, though I don't agree that making such a distinction for every post is called for:

1. No AI comments without a human in the loop.

2. Please cite when you use AI, so that others can trace errors and evaluate the premises of the argument. An argument has premises and a logical form.

We should expect the frequency of AI errors like hallucinations to decrease and accuracy to increase over time.

You should always consider peer review and getting another opinion regardless of whether AI or ML were used.

Do you need to cite AI?

If scientific reproducibility is necessary or important for your application, you should also cite your search queries, the search results at that time, the name, version, and package hash of each software tool, the configuration parameters for each tool, the URL and hash of the data, and whether you used spell check, autocorrect, or an AI grammar service.

If you use an (AI) grammar service, you should disclose the model name and version, the model hash or Merkle hash, and the model parameters.
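[The kind of provenance record this comment describes could be sketched as follows. All field names, the tool name, and the URL are hypothetical illustrations; no standard schema is implied.]

```python
import hashlib
import json

def sha256_of(path: str) -> str:
    """Hash a file (e.g. model weights or a data dump) for citation."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large model files don't exhaust memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical provenance record; every value below is a placeholder.
record = {
    "tool": "example-grammar-service",       # hypothetical tool name
    "model_version": "1.2.3",                # assumed version string
    "model_sha256": "<hash of weights>",     # from sha256_of(weights_path)
    "parameters": {"temperature": 0.0},      # configuration used
    "data_url": "https://example.org/data",  # placeholder URL
}
print(json.dumps(record, indent=2))
```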

But most people don't even cite URLs here; it's just people making unsupported arguments.

rdiddly 1 day ago||
Great point! You are so right to call me out on that! Here's the no-nonsense, concise breakdown, it's coming soon I promise, right after this, here it comes, no fluff -- just facts!

(Sorry, couldn't resist.)

benbristow 1 day ago||
Just add a filter for em-dashes; 99% of AI posts are out the window already.
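[The heuristic this comment proposes, sketched literally: flag any comment containing an em-dash (U+2014), a character rarely typed by hand but common in LLM output. This is the commenter's quip, not a reliable detector; plenty of humans type em-dashes too.]

```python
EM_DASH = "\u2014"  # U+2014, the em-dash character

def looks_ai_generated(comment: str) -> bool:
    """Return True if the comment contains an em-dash."""
    return EM_DASH in comment

print(looks_ai_generated("Here it comes\u2014no fluff, just facts!"))  # True
print(looks_ai_generated("Plain hyphen - nothing fancy."))             # False
```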
oramit 1 day ago||
If you didn't bother to write it, why should I bother to read it?
tyleo 1 day ago||
I find it interesting that AI edited comments aren’t allowed. Sometimes I just want it to help me make something polite.

I definitely agree with the rule on AI-generated comments, though.

Whatever the rules are, I’m happy to play by them.

jacquesm 1 day ago|
> Whatever the rules are, I’m happy to play by them.

That's the spirit!

sholladay 1 day ago||
I assume that the inclusion of some AI generated content is ok, such as when discussing the performance of different models?
agrajag 1 day ago|
I’m sure it would be fine if it was quoted, but it seems obvious the policy is to not represent AI generated content as human
AyanamiKaine 21 hours ago||
Hmm, while many argue they can recognise AI in writing, I don't think humans can actually judge whether something was done by AI or not. Many times I've seen people 100% convinced that an artist had created an AI artwork, and the artist was bullied for not admitting it,

only for them to show undeniable proof that they actually created the art themselves.

Before someone is allowed to judge others, they should first pass a test identifying AI comments with high accuracy.

It would be a shame to see real human comments and ideas hidden or removed by a mob.

8cvor6j844qw_d6 1 day ago||
True that AI comments do degrade discussion. Though a forum enforcing human-only text also becomes an unusually clean training corpus. Both things can be true.
HanClinto 1 day ago||
I appreciate this being added to the guidelines.

That said, I also wouldn't hate seeing an official playground where it is cordoned / appreciated for bots to operate. I.E., like Moltbook, but for HN...? I realize this could be done by a third party, but I wouldn't hate seeing Ycombinator take a stab at it.

Maybe that's too experimental and would be better left to third parties to implement (I'm guessing there are already half a dozen vibe-coded implementations out there right now). It feels more like the sort of thing that could be an interesting (useful?) experiment, rather than something we want to commit to existing in perpetuity.

munk-a 1 day ago||
You could mirror article postings and upvotes to another site and let AI play around there - if it's interesting to people maybe it will gain a following. I don't see any reason it'd need to happen in this specific forum as that'd likely just cause confusion.

For the time being, at least, HN is a single uncategorized (mostly; let's ignore search) message board; splitting it into two would cause confusion and drastically degrade the UX.

Kim_Bruning 1 day ago||
https://news.clanker.ai/

This might be roughly what you're looking for?

phs318u 1 day ago|
What’s interesting to me is the number of commenters here making a case of the form “use your own words; grammar and spelling are not that important; we’ll know what you mean”, and yet discussions here often contain pedants going off-topic to correct someone else’s use of language.

Re-reading the HN guidelines, each seems individually reasonable, yet collectively I’m worried that they create an environment where we can take issue with almost anyone’s comments (as per Cardinal Richelieu’s famous quote: “Give me six lines written by the most honorable person alive, and I shall find enough in them to condemn them to the gallows.”)

Really, all the rules can be compressed into one dictum: don’t be an arsehole. And yet the free-speech absolutists will rail against the infringement of their right to be an arsehole. So where does that leave us? Too many rules lead to suppression of even reasonable speech, while too few lead to a “flight” of reasonable speech. End result: enshittification.
