Posted by usefulposter 1 day ago
https://news.ycombinator.com/item?id=45591707
For dyslexia, use a spell-checker. For grammar, use a basic grammar checker, like the kind that has come with MS Word since the 1990s. But don't let a style-checker or an LLM rob you of your own voice.
I don't believe a single one of those people.
> For grammar, use a basic grammar checker, like the kind that has come with MS Word since the 1990s.
Those are notorious for false positives, false negatives, and generally nonsensical advice. Not that the LLM-based alternatives are much better (looking at you, Grammarly), but still.
I wonder if an explicit expansion of that rule would help, maybe in all caps, saying "picking on grammar is a shallow dismissal".
The specific problem here was that the poster was being downvoted for grammar. Of course that's how he would have read it.
But I can see why the HN guideline is formulated that way. My students often use the excuse "I did not use AI for writing! I wrote it myself! I only used AI to translate it!" Simply disallowing all kinds of AI usage is much easier than discussing for the thousandth time whether the student actually understands what they have written.
Like, there is this computer game where the authors used some AI-generated models or something like that, but only during prototyping; later they were replaced by proper models. No one would know about that if the authors hadn't mentioned it. So if someone rewrites in their own words what an AI generated for them, is it still an argument made by a human or by the AI? What if someone uses AI output only as a placeholder and replaces all of that content, so you never actually see any AI usage even though it was part of the process?
For me, the premise that using AI in any form invalidates your work starts with a logical fallacy, so such arguments against using AI are weak. It's like saying your calculations can't be right because you used a calculator: the machine must have made a mistake, or it's wrong for ethical reasons, or whatever.
Work generated by AI can easily be poor, because these models make mistakes and tend to repeat themselves in certain ways. But is it wrong that I'm writing this comment with a keyboard instead of writing letters with a pen? Is it wrong to use an IDE or some CLI to write code with AI, instead of using vim and typing everything on my own? Is it wrong that someone uses spell-checking?
In the end it doesn't matter who seems smarter when you're expected to use AI at work. Reality shows you the actual expectations.
Anyway, my university did not ban AI, and now most students have degraded to proxies between teaching assistants and ChatGPT.
At a certain point it's no longer about AI specifically, but about power and showing who makes the decisions.
I agree that there might be some threshold for obvious spam. But if you're making an argument in good faith and you don't claim authority on the matter, there will always be people who think differently or disagree with you, because they have a different interpretation or want better sources and more evidence. That's actually typical, because different people use different perspectives, different assumptions, different tools. I don't believe that rules should be used to silence people with different opinions, and that's the biggest risk I see: penalties for breaking rules that are hard to apply consistently create a power imbalance.
At some point it becomes dogma, not fair debate, and not everyone likes to stick to dogma. It's hard to do creative or innovative work if it has to meet strict but subjective, possibly incomplete criteria just to be considered valid work at all.
And they've been nitpicked to death for just as long. Now they have better tools to preempt that nitpicking, only to be nitpicked over choosing to use those tools. Go figure.
To me it sounds like yet another form of gatekeeping: either you sound human or you're not good enough to post or comment. Like, really? How is that not a genetic fallacy? It doesn't matter what someone thinks, because they used AI to make their thought clearer, so their whole argument is trash? Does it have to hurt to read and write if your English isn't perfect, with your work seen as inferior based on superficial factors like proper grammar and style?
It's a dumb crusade. I did not use AI to write this comment, but I hate when people try to monopolize the truth and decide who is "better, smarter" based on irrelevant facts. Not using AI doesn't make anyone superior. Using AI also doesn't make you superior. Focus on what you mean, because that's what matters.
That's the richness behind the upvote/downvote, which also tends to create echo chambers, because you soon learn what causes downvotes.
I've personally noticed downvotes whenever I mention Apple negatively.
But at some point, the rationale behind it is that your comments are your words, and I find that liberating. Some people won't appreciate it and some people will, but the same goes for AI-edited posts too.
(I would also recommend that if you are still worried, you mention your dyslexia in your Hacker News profile; people might be much more forgiving when they have more context. We are all human, after all, and I would like to think that we understand each other's struggles.)
> stump along, cut your own path, or fuck right off
> real life will eat you otherwise
> I mean holly shit, you actualy want to hide behind an automated echoing device so that you wont get, well, what is happening to my post as sooooon as I press↓
You deserve a ban for this.
Don't insinuate that someone else must have broken that. It was you.
Do run the linter
Don't commit throw-away code
Do write a test case
Don't write a comment describing every single function
Seriously, run the linter. And fix the issues.
It is your fault. Every model, every generation of computer, has a subtle signature, and we (as in humans) can recognize it.
Moreover, this is a very human-moderated place. Many of us already don't like being answered by a bot here, so the community itself is a deterrent. Plus, having an official guideline will multiply that deterrent.
Not all is lost. Have some faith in your fellow humans.
[1]: https://ethos.devrupt.io
[2]: https://github.com/devrupt-io/LLaMAudit
It came up a few weeks ago. Show HN is already disabled for new accounts as of this week, I think(?), but IMHO stricter measures need to be placed on account creation, otherwise there's no real enforcement.
Say what it means. I know it is a genuine question.
There is no solution, and that means something about the web is dead now, whether we like it or not.
AI can do a great job of checking and fixing grammar, spelling, and formulation without changing any content, i.e., acting as a fancy version of extended spell checking.
While I currently don't use it like that, there shouldn't be any reason to ban it.
And tbh, given some recent comments, I have really been wondering if I should use it, because either there are quite a few people with poor reading comprehension or quite a few people with prejudice against those struggling with English spelling and grammar.
Either way, using AI as an extended spell checker would help get the message through to both groups, as:
- it helps with spelling and grammar in ways where traditional spell checkers fail hard
- it tends to recommend very readable sentence structure and information density
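A minimal sketch of how such an "extended spell checker" workflow could guard against unwanted content changes (the `content_changed` helper is hypothetical, not any real tool): compare the word sets of the original text and the AI rewrite, and flag the rewrite when they drift too far apart, since pure spelling and grammar fixes leave most words intact.

```python
import re


def content_changed(original: str, rewritten: str, threshold: float = 0.3) -> bool:
    """Rough guardrail: flag a rewrite whose vocabulary drifts too far
    from the original. A spelling fix swaps a word or two; a rewrite
    that introduces or drops many words is probably changing content."""
    def words(text: str) -> set[str]:
        return set(re.findall(r"[a-z']+", text.lower()))

    a, b = words(original), words(rewritten)
    if not a and not b:
        return False
    # Jaccard distance between the two word sets: 0 = identical, 1 = disjoint.
    drift = 1 - len(a & b) / len(a | b)
    return drift > threshold


# A pure spelling fix barely moves the word set:
print(content_changed("AI can do a grate job", "AI can do a great job"))      # → False
# A full rewrite moves it a lot:
print(content_changed("AI can do a grate job", "LLMs excel at copyediting"))  # → True
```

This is only a heuristic (synonyms would trip it, and word order is ignored), but it illustrates the point of the comment: keep the AI pass constrained to surface fixes and reject anything that rewrites the substance.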
It absolutely will change content if you ask it to reformulate or fix language style.
It's also about fixing grammar, spelling, and formulation issues. It's not about giving it bullet points and having it write the text for you.
They aren't great at it, but they're viable.
And more importantly, this is about LLMs fixing grammar and spelling and pointing out bad formulations with suggested changes. This is not about giving them bullet points and telling them to write the text for you.