
Posted by usefulposter 1 day ago

Don't post generated/AI-edited comments. HN is for conversation between humans (news.ycombinator.com)
4102 points | 1582 comments | page 8
fudged71 1 day ago|
What I think would actually be useful is a version of what was implemented on /r/ClaudeAI: an official bot which summarizes the discussion (and updates after x number of comments have been added). I think this level of synthesis has a compounding effect on discussion quality, pruning redundant arguments/topics.

Example: https://www.reddit.com/r/ClaudeAI/s/BJKLxzJA16
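The Reddit bot linked above presumably uses an LLM, but the shape of a thread summarizer can be sketched with nothing more than word-frequency scoring. A toy extractive version (everything here is illustrative, not how the actual bot works):

```python
import re
from collections import Counter

def summarize(comments, max_sentences=3):
    """Score each sentence by the corpus-wide frequency of its
    non-trivial words and return the top scorers in original order."""
    text = " ".join(comments)
    words = re.findall(r"[a-z']+", text.lower())
    stop = {"the", "a", "an", "and", "or", "is", "are", "to", "of",
            "in", "it", "that", "this", "i", "you", "for", "on", "not"}
    freq = Counter(w for w in words if w not in stop)
    sentences = re.split(r"(?<=[.!?])\s+", text)
    top = sorted(
        range(len(sentences)),
        key=lambda i: -sum(freq[w] for w in
                           re.findall(r"[a-z']+", sentences[i].lower())),
    )[:max_sentences]
    return [sentences[i] for i in sorted(top)]
```

A real bot would re-run something like this (or an LLM pass) after every N new comments, which is the "updates after x comments" behavior described above.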

dddgghhbbfblk 1 day ago||
I don't spend much time on that subreddit, but I've seen that bot on a couple posts I've read and have been pleasantly surprised by how useful it seemed. I may eat my words on this later, but to me this is exactly the kind of application of AI that I have always thought was the most promising.
sumeno 1 day ago||
Just read the posts instead of an AI slop summary
randomNumber7 9 hours ago||
The problem is that there is no way to distinguish AI-generated content from something a human has written.
chrystianpl 1 day ago||
English is my second language and I have dyslexia, so I was just wondering: what do you mean by "AI-edited comments"? Can't I ask an LLM to check whether my grammar is correct and fix it? On another account I was downvoted because of my styling/grammar, not because of the content.
tartoran 1 day ago||
You could always tell your LLMs to just fix your grammar but not embellish, add new ideas, etc.
shnpln 1 day ago|||
This is what I do when using AI to review anything I write. Some prompt like: "I am going to share with you something I have written and I don't want you to change my voice at all. Can you look for structural issues, grammar or punctuation errors, and things like that?" Claude is an amazing editor, and doing this I never feel like my writing has been taken from me.
giancarlostoro 1 day ago||||
I usually tell it not to rewrite my words; my words are my own. If it has suggestions, it should tell me what those are, but otherwise only fix or show me grammar fixes.
113 1 day ago|||
Does that work?
simonw 1 day ago|||
It works really well. I've been using this prompt to find spelling and grammar errors for about a year now: https://simonwillison.net/guides/agentic-engineering-pattern...
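The "show me the fixes, don't rewrite me" workflow in this sub-thread can be enforced mechanically: diff the model's edit against your draft and review each change before accepting anything. A sketch using Python's stdlib difflib (the review_edits helper is hypothetical, not from the linked guide):

```python
import difflib

def review_edits(original, edited):
    """Word-level diff between your draft and the model's edit, so you
    can accept spelling/grammar fixes and reject voice rewrites."""
    changes = []
    for token in difflib.ndiff(original.split(), edited.split()):
        if token.startswith("- "):
            changes.append(("removed", token[2:]))
        elif token.startswith("+ "):
            changes.append(("added", token[2:]))
    return changes
```

If the diff is small and mechanical (agreement, articles, spelling), the edit kept your voice; a long diff is a sign the model rewrote you.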
nablaone 1 day ago|||
"fix english" is the prompt i wish to turn into a button
surround 1 day ago|||
Trust your own style, even if you aren't a native English speaker. Here's an example where a non-native speaker used an LLM to polish his post. The general consensus was that his own writing was preferable to the LLM's edited version.

https://news.ycombinator.com/item?id=45591707

For dyslexia, use a spell-checker. For grammar, use a basic grammar checker, like the kind of grammar checker that has come with MS word since the 1990s. But don't let a style-checker or an LLM rob you of your own voice.

yellowapple 22 hours ago||
> The general consensus was that his own writing was preferable to the LLM's edited version.

I don't believe a single one of those people.

> For grammar, use a basic grammar checker, like the kind of grammar checker that has come with MS word since the 1990s.

Those are notorious for false-positives, false-negatives, and generally nonsensical advice. Not that the LLM-based alternatives are much better (looking at you, Grammarly), but still.

nottorp 1 day ago|||
"Please don't post shallow dismissals, especially of other people's work."

I wonder if an explicit expansion of that rule would help. Maybe in all caps. Saying "picking on grammar is a shallow dismissal".

rdiddly 1 day ago|||
I don't believe that's always true, and I suspect it was left out of the guidelines deliberately, and I wish people receiving suggestions would stop interpreting it that way. Of course people suggesting grammar corrections and treating it like they just demolished and eviscerated your argument are part of the problem. But what about people out here just trying to help? Grammar is important, as it's the syntax of the programming language we all use with each other. People act as if bad grammar is something you're born with, and can't change. Like learning grammar is impossible, and those who don't bother should be a protected class. I'm just trying to help man. Or I was anyway, before I stopped. But if I'm trying to engage with someone's main point, it should be obvious. Whereas a quick grammar correction is just that. But it's a tangent, and not interesting (especially if you already know), and supposedly grammar is "not a technical topic" (despite daily use) so it ends up deemed a "low value comment" and gets downvoted to oblivion.
nottorp 1 day ago||
> I wish people receiving suggestions would stop interpreting it that way

The specific problem here was that the poster was being downvoted for grammar. Of course, that's how he could have read it.

yellowapple 22 hours ago|||
Picking on LLM use is a shallow dismissal, too.
nottorp 2 hours ago||
LLM use is what LLMs are best at: spam.
johndough 1 day ago|||
Likewise, I sometimes use https://www.deepl.com/en/write to fix my unidiomatic sentences.

But I can see why the HN guideline is formulated that way. My students often use the excuse "I did not use AI for writing! I wrote it myself! I only used AI to translate it!" Simply disallowing all kinds of AI usage is much easier than discussing for the thousandth time whether the student actually understands what they have written.

Adiqq 1 day ago||
Isn't the whole point to understand? If the task is to write and you expect only the final result, but you question whether it looks legitimate enough, how is that a fair judgement? People can deliver partial results and show progress as well; you won't see that in some comments on the internet, but if something is expected to take many days, it's easy to show different stages of work. It's easy to accuse people of plagiarism or of not thinking for themselves, and of course there are indicators when someone uses AI, but the problem is that you can't reliably distinguish whether something was created by AI or not.

Like, there is this computer game whose authors used some AI-generated models or something like that, but only during prototyping; later they were replaced by proper models. No one would know about that if the authors had not mentioned it. So, if someone writes in their own words what AI generated for them, is it still an argument made by a human or by AI? What if someone uses AI only as a placeholder and replaces all that content, so you never actually see any AI usage, but it was used in the process?

For me, the premise that using AI in any form invalidates your work starts with a logical fallacy, so such arguments against using AI are weak. It's like saying that your work is wrong because you used a calculator, so your calculations can't be right if done by machine, because it must have made a mistake, or that it's wrong for ethical reasons, or whatever.

Work generated by AI can easily be poor, because these models make mistakes and like to repeat themselves in certain ways, but is it wrong that I'm writing this comment with a keyboard instead of writing letters with a pen? Is it wrong when I use an IDE or some CLI to write code with AI, instead of using vim and typing everything on my own? Is it wrong that someone uses spell-checking?

In the end it doesn't matter who seems smarter when you're expected to use AI at work. Reality shows you the actual expectations.

johndough 1 day ago||
I am not saying that completely disallowing AI is the right decision. But if you see text that is clearly generated by AI and does not make any sense, it sure would be nice if you could just tell the students to actually read their sources instead of having to argue with them why they should do so. Similarly, I can see why HN moderators do not want to argue with the 100s of spam posters per day on /newest.

Anyway, my university did not ban AI, and now most students have degraded to proxies between teaching assistants and ChatGPT.

Adiqq 1 day ago||
On the other hand, you can make a good but controversial argument, and if you use AI in any way it might be rejected by a moderator just because some places have strict rules on AI. In some cases it might be rejected even if no AI was involved, if any fragment of your text looks like it was not written by a human and they don't like your text.

At a certain point it's no longer about AI specifically, but about power and showing who makes decisions.

I agree that there might be some threshold for obvious spam, but if you're making an argument in good faith and you don't claim to have authority on some matter, there will always be people who think differently or disagree with you, because they have a different interpretation or need better sources and more evidence. That's actually typical, because different people use different perspectives, different assumptions, different tools. I don't believe that rules should be used to silence people who have different opinions, and that's the biggest risk I see, because a penalty for not following such rules, which are hard to measure correctly, creates a power imbalance.

At some point it becomes dogma, not fair debate, and not everyone likes to stick to dogma. It's hard to do creative or innovative work if your work has to meet strict but subjective, possibly incomplete criteria to be considered valid work at all.

chorkpop 1 day ago|||
Dyslexia was my first thought as well. The intent is great, but I don't know if this is in keeping with the social model of disability. Disability is created when you remove access, and this is exactly that.
3rodents 1 day ago||
The internet has been full of brilliant dyslexics since the start, just as it has been full of brilliant blind people. Dyslexic people feeling that they must use AI to produce perfect prose lest they burden the lexics with clumsy spelling or grammar is far more hostile. We didn’t have slop machines 5 years ago.
yellowapple 22 hours ago||
> The internet has been full of brilliant dyslexics since the start

And they've been nitpicked to death for just as long. Now they have better tools to preempt that nitpicking, only to now be nitpicked over choosing to use those tools. Go figure.

Adiqq 1 day ago|||
I don't really see the issue, as long as there's human thought behind whatever anyone posts. It's frustrating to argue against someone who lazily uses AI, but if the argument is fair, then I don't care whether it was written by AI or a human; what difference does it make? It's frustrating if someone is incoherent and makes a dumb argument, but again, I don't care if it's a dumb argument from a human or a machine.

To me it sounds like yet another form of gatekeeping: either you sound human or you're not good enough to post/comment. Like, really? How isn't that a genetic fallacy? It doesn't matter what someone thinks, because they used AI to make their thought clearer, so their whole argument is trash? Does it have to hurt to read and write if you're not using English perfectly, with your work seen as inferior based on superficial factors like proper grammar and style?

It's a dumb crusade. I did not use AI to write this comment, but I hate when people try to monopolize the truth and decide who is "better, smarter" based on irrelevant facts. Not using AI doesn't make anyone superior. Using AI also doesn't make you superior. Focus on what you mean, because that's what matters.

throwpoaster 1 day ago|||
No worries, it’s unenforceable.
desireco42 1 day ago|||
I don't have dyslexia but I feel your pain. I mean, it is what it is. I would rather have it raw than have to use AI to filter to the comments that make sense.
jonathrg 1 day ago|||
How do you know what you were downvoted for?
whynotmaybe 1 day ago|||
I guess he was told, because otherwise you don't know whether you said something inherently wrong or misleading or hurt someone's feelings.

That's the richness behind the upvote/downvote that also tends to create echo chambers, because you soon learn what causes downvotes.

I've personally noticed downvotes whenever I mention Apple negatively.

Imustaskforhelp 1 day ago|||
Oof, I feel this pain a lot. What I like to do is respond to them politely when someone brings such a thing up, although it takes time and does sometimes make you want to disengage.

But at some point, the rationale behind it is that your comments are your words, and I find that liberating. Some people won't appreciate it and some people will, but the same goes for AI-edited posts too.

(If you are still worried, I would recommend mentioning in your Hacker News profile that you have dyslexia, as people might be much more forgiving when they have more context. We are all humans after all, and I would like to think that we understand each other's struggles.)

nonameiguess 1 day ago|||
I don't see how you can know why you were downvoted. Even if one person says something, they won't all. Your comment right here has some rough patches, but I can tell what you're saying. Humans are terrific at extracting signal from noise. I would say be who you are, tough as it may be, and it'll encourage the rest of the world in the future to do the same. We're all unique in some way or another and have flaws, and we'd be better off if we knew others had them too, because they weren't constantly trying to hide them and we wouldn't feel so bad thinking we're the only ones. I hope this doesn't sound unsympathetic. I understand where you're coming from intellectually, but I don't have any real experience being ridiculed or bullied. I know kids can be brutal and probably scarred you, and unfortunately adults aren't much better, but we should be, and I think at least Hacker News is better than most places full of human adults. We know there's a huge world out there. I think I'm reasonably well-spoken in English but can't speak a lick of any other language at all. The fact that you can produce intelligible English already puts you above me in my book. You're a person. I can respect you, esteem you, potentially love you, not in spite of your flaws, but because they don't matter. Every single person on the planet has them, and if they're not moral flaws, nobody should give a shit. I can't respect or love a machine any more than I can a rock. And I don't want to talk to one, either.
nsxwolf 1 day ago|||
I have never downvoted for this, and I hope no one else would do that either. If anyone here does that, please stop.
wetpaws 1 day ago|||
[dead]
metalman 1 day ago||
[flagged]
jacquesm 1 day ago||
> boooooooo, hu, baby

> stump along, cut your own path, or fuck right off

> real life will eat you otherwise

> I mean holly shit, you actualy want to hide behind an automated echoing device so that you wont get, well, what is happening to my post as sooooon as I press↓

You deserve a ban for this.

adamgordonbell 1 day ago||
This list of Do and Don'ts now reads like a bad Claude.md file to me.

   Don't insinuate that someone else must have broken that. It was you. 
   Do run the linter
   Don't commit throw-away code
   Do write a test case
   Don't write a comment describing every single function
   Seriously, run the linter. And fix the issues. 
   It is your fault.
stalfie 12 hours ago||
Without a technical means to enforce this, the only result of this policy will be a culture of paranoia and a lot of false positives.
bayindirh 12 hours ago|
I'll kindly disagree; even I, as someone who doesn't use any "Chat" tools from the big three, can feel when something is AI-generated. We're slowly being educated into detecting it. This is why the human brain is awesome.

Every model, every computer generation has a subtle signature, and we (as in humans) can understand it.

Moreover, this is a very human-enforced place. Many of us already don't like being answered by a bot here, so the community is also a deterrent. Plus, having an official guideline will multiply that deterrent.

Not everything is lost. Have some faith in your fellow humans.

ddtaylor 1 day ago||
This is a welcome change, and I will update Ethos [1] in the future with an AI sentiment score. I created a separate project called LLaMaudit [2] that attempts to detect whether an LLM was used to generate text, but it needs to be improved.

[1]: https://ethos.devrupt.io [2]: https://github.com/devrupt-io/LLaMAudit
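For context on why such detectors "need to be improved": the crudest baseline just counts stock phrases that LLM output tends to overuse. A toy sketch (the phrase list and scoring are my own assumptions, not LLaMaudit's actual method):

```python
def ai_phrase_score(text):
    """Toy heuristic: count stock phrases that LLM output tends to
    overuse. A high score is a weak signal, never proof of AI use."""
    tells = [
        "delve into", "it's important to note", "as an ai",
        "in today's fast-paced world", "let's explore",
        "rich tapestry", "in conclusion,",
    ]
    lowered = text.lower()
    hits = [p for p in tells if p in lowered]
    return len(hits), hits
```

The obvious weakness is the false-positive/false-negative problem raised elsewhere in this thread: humans use these phrases too, and a lightly prompted model avoids them entirely.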

capricio_one 1 day ago||
Real talk: who is this guideline going to stop? People are already doing this and they will continue. Even if you find them, they’ll just make more accounts and continue.
nwhnwh 1 day ago|
So? Say it. Go ahead a few steps further.
capricio_one 1 day ago||
Say what? It’s a genuine question. What is the actual repercussion for not following this?

It came up a few weeks ago. Show HN is already disabled for new accounts as of this week, I think(?), but IMHO stricter measures need to be placed on account creation, otherwise there's no real enforcement.

nwhnwh 1 day ago||
> Say what?

Say what it means. I know it is a genuine question.

There is no solution, and that means something about the web is dead now, whether we like it or not.

dathinab 12 hours ago|
What is meant by "AI-edited"?

AI can do a great job at grammar, spelling, and formulation checking/fixing without changing any content, i.e. just acting as a fancy version of extended spell checking.

While I currently do not use it like that, there shouldn't be any reason to ban it.

And tbh, given some recent comments, I have really been wondering if I should use it, because either there are quite a bunch of people with lacking reading comprehension or quite a bunch of people with prejudice against people struggling with English spelling and grammar.

Either way, using AI as an extended spell checker would help with getting the message through to both groups, as

- it helps with spelling and grammar in ways where traditional spell checkers fail hard

- it tends to recommend very easy-to-read sentence structure and information density

layer8 12 hours ago||
> without changing any content

It absolutely will change content if you ask it to reformulate or fix language style.

dathinab 11 hours ago||
There are tools out there which you can use in ways where they normally won't change the content. And it's not as if you are blindly posting their output.

It's also about fixing grammar, spelling, and formulation issues. It's not about giving it bullet points and having it write the text for you.

gosub100 12 hours ago||
It doesn't help anyone. The user just depends on it to fix their English. And it makes a monoculture where every ESL user sounds exactly the same.
dathinab 11 hours ago||
Except you can nudge LLMs to use different styles more similar to your writing.

They aren't good at it, but it's viable.

And more importantly, this is about LLMs fixing grammar and spelling and pointing out bad formulations with change recommendations. This is not about giving them bullet points and telling them to write text for you.
