Posted by usefulposter 1 day ago
Plenty of people already use search engines, editors, translators, etc. when writing. An LLM is just another tool in that box.
The practical approach is the one HN has always used: judge the content.
Btw, this was co-written with ChatGPT. Does that make any difference to anyone?
J/K, it actually was not co-written by ChatGPT.
Or maybe it was…
I come here for thoughtful discussion, a break from the relentlessly growing proportion of AI slop emails I get from people who are clearly vibe working.
Not edits for tone or clarity, but 400+ word emails full of LLM BS that they clearly haven't checked, or even understood, before sending. Annoyingly, this vibe slop is currently seen as a good KPI.
I think we rely overwhelmingly on negative reinforcement against AI generated content: consequences for engaging in the behavior. Positive reinforcement, on the other hand, would encourage authenticity and more human content. The reality is that AI generated content won't go away, and it has become a game of who can hide their artificial content best. So I believe positive reinforcement is the solution.
I think we should encourage human created content rather than policing AI generation. There are already so many rules to follow that by the time I create the content, I've gone through enough if/then logic that it feels like AI anyway.
My experience is that it is quite rare: occasionally success rates in the high 90s for simple, low-value things, 60s or less for anything that approximates "thinking". At best it feels like a new search channel that amalgamates data better and hasn't been thoroughly polluted by ads and SEO - yet.
Then there's less motivation to jump out to an external LLM even to get comments on your content, which can temptingly lead to editing or outright generation.