Posted by usefulposter 1 day ago
So we should develop, fund, and use AI, but manually paraphrase its output and not cite it?
It is best to cite a source and/or a method.
Do you think it is better to paraphrase and not cite AI?
I don't recall encountering posts on HN that I've wanted to flag as AI.
Have you considered this?
If people do not cite their sources or methods when they use AI, then we will not know where errors were introduced by paraphrasing AI.
If they say "no" to "did you use AI?", they're probably either mistaken or lying.
But are you not allowed to cite, quote, or link to AI-generated work?
> If people do not cite their sources or methods when they use AI, then we will not know where errors were introduced by paraphrasing AI.
1. No AI comments without a human in the loop.
2. Please cite when you use AI, so that others can trace the errors and evaluate the premises of the argument. An argument has premises and a logical form.
We should expect the frequency of AI errors like hallucinations to decrease and accuracy to increase over time.
You should always consider peer review and getting another opinion, regardless of whether AI or ML was used.
Do you need to cite AI?
If scientific reproducibility is necessary or important for your application, you should also cite your search queries, the search results at that time, the name, version, and package hash of each software tool, the configuration parameters for each tool, the URL and hash of the data, and whether you used spell check, autocorrect, or an AI grammar service.
If you use an (AI) grammar service, you should disclose the model name and version, model hash or Merkle hash, and the model parameters.
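For what it's worth, here is a minimal sketch (Python) of what such a provenance record could look like. Every field name and value below is an illustrative placeholder, not any established standard:

    # Minimal sketch of a provenance record for reproducibility.
    # All field names and values are illustrative placeholders.
    import hashlib
    import json

    def sha256_of_file(path: str) -> str:
        """Return the SHA-256 hex digest of a file (e.g. a dataset or model weights)."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    provenance = {
        "search_queries": ["example query"],            # queries as issued
        "search_results_snapshot": "results.json",      # saved results at query time
        "tools": [
            {"name": "pandas", "version": "2.2.2", "package_hash": "<package sha256>"},
        ],
        "tool_config": {"pandas": {"engine": "c"}},
        "data": {"url": "https://example.org/data.csv", "sha256": "<sha256 of download>"},
        "ai_services": [
            {
                "kind": "grammar",
                "model_name": "example-model",
                "model_version": "1.0",
                "model_hash": "<model or Merkle hash>",
                "parameters": {"temperature": 0.0},
            }
        ],
    }

    print(json.dumps(provenance, indent=2))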
But most people don't even cite URLs here; it's just people making unsupported arguments.
(Sorry, couldn't resist.)
I definitely agree with AI-generated comments.
Whatever the rules are, I’m happy to play by them.
That's the spirit!
Only for them to show undeniable proof that they actually did create their art themselves.
For someone to be allowed to judge another, he should first pass a test where he can identify AI comments with high accuracy.
It would be a pain to see real human comments and ideas hidden or removed by a mob.
That said, I also wouldn't hate seeing an official playground cordoned off where bots are welcome to operate, e.g. like Moltbook, but for HN...? I realize this could be done by a third party, but I wouldn't hate seeing Ycombinator take a stab at it.
Maybe that's too experimental and better left to third parties to implement (I'm guessing there are already half a dozen vibe-coded implementations of this out there right now) -- it feels more like the sort of thing that could be an interesting (useful?) experiment, rather than something we want to commit to existing in perpetuity.
For the time being, at least, HN is a single uncategorized message board (mostly; let's ignore search) - splitting it into two would cause confusion and drastically degrade the UX.
This might be roughly what you're looking for?
Re-reading the HN guidelines, I find each individually reasonable, yet collectively I'm worried that they create an environment where we can take issue with almost anyone's comments (as per Cardinal Richelieu's famous quote: "Give me six lines written by the most honorable person alive, and I shall find enough in them to condemn them to the gallows.").
Really, all the rules can be compressed into one dictum: don't be an arsehole. And yet the free speech absolutists will rail against the infringement upon their right to be an arsehole. So where does that leave us? Too many rules lead to suppression of even reasonable speech, while too few lead to a "flight" of reasonable speech. End result: enshittification.