
Posted by usefulposter 1 day ago

Don't post generated/AI-edited comments. HN is for conversation between humans (news.ycombinator.com)
4067 points | 1552 comments | page 5
Normal_gaussian 1 day ago|
This rule is very important. Like many of the other rules, it is open to interpretation, but it is a line in the sand that defines allowable behaviour and disallowable behaviour.

This rule will have an effect on the behaviour of the 'good players', and make the 'bad players' a lot easier to spot. Moderation needs this. I see this as stopping a race-to-the-bottom on value extraction from HN as a platform.

Nevermark 16 hours ago||
This is a wonderful rule.

It also points out the need for AI writing tools that very strictly just:

1. Point out misspellings and typos.

2. Point out grammar mistakes, if they confuse the point.

3. Point out weaknesses of argument, without injecting their own reasoning.

I.e. help "prompt" humans to improve their writing, without doing the improvement for them.

In fact, I would like a reliable version of that approach for many types of tasks where my creativity or thought processes are the point, and quality-control feedback (but not assistance), is helpful.

This is a mode where models could push humans to work harder, think deeper, without enabling us to slack off.

cobbzilla 16 hours ago|
I don’t want to read AI slop, but how do you feel about translations?

I don’t mind when non-native speakers use it to express themselves, especially if disclaimed (but I give a pass even if not). Does it bother you?

thezipcreator 15 hours ago|||
We've had machine translation for a while and I don't think anybody particularly thinks of it as a bad thing? Writing something and then having a machine directly translate it (possibly imperfectly) is a lot different than a machine writing the thing.

Personally I would like people to try learning other languages more (it's hard but rewarding) but you can't learn every language ever, and it is really hard to learn a language to fluency.

lifthrasiir 12 hours ago||
> We've had machine translation for a while and I don't think anybody particularly thinks of it as a bad thing?

Not all, but some machine translators can be comically (if not horrifically) bad sometimes. Search Twitter-become-X for examples. Native writers can't pick a working machine translator unless they are explicitly allowed to do so themselves.

Nevermark 15 hours ago|||
I think it makes perfect sense.

But that a site might still want to discourage it, to avoid general degradation. It is a tradeoff.

If someone can write in the target language, just not well, a model could be asked to point out problems for the writer to fix, or to rewrite a difficult sentence.

cobbzilla 15 hours ago||
I suppose for me, it is the difference between a true “translation” and having an LLM reinterpret intent and state “its” words.

Ideally, I want the speaker’s words translated “verbatim” to English, to the extent possible.

kcguyu 1 day ago||
Absolutely love this. If people are relying on AI for a 30-45 word comment, I don’t want to waste my time reading it. And everyone using AI for discussions will end up coming to the same conclusion. Use your own ideas!
iammjm 1 day ago||
I believe the issue of proving who is and who isn't really human on the Internet will be a really important issue in the coming years, especially without sacrificing people's right to privacy and anonymity in the process.
safog 1 day ago||
I hope I'm wrong but I don't think a privacy-friendly alternative is going to exist. It's going to go the way of "show me your driver's license to use my site".
throwaway2027 1 day ago|||
Why wouldn't criminals just use stolen identities, like they do now? If someone verifies they are a person, that doesn't mean they're not leaving their PC on with some AI that uses their credentials.
kace91 1 day ago||
The point of these systems is not to ban any possibility of fake accounts. The point is to add friction so that creating accounts is harder than banning them, so criminals can’t recreate them at scale. Otherwise bans take seconds to overcome and a single person can run 10000 automated identities.
OkayPhysicist 1 day ago||||
Invite trees approximately solve this problem. I don't need to know who you are to know that someone in good standing in the community invited you.
jacquesm 23 hours ago||
And that if you misbehave you get booted out and whoever invited you gets dinged. If they get dinged enough they become a leaf rather than a branch.
iamnafets 1 day ago||||
No credential will be sufficient; this is basically an unsolvable enforcement problem. That doesn't negate the utility of rules and norms, but there's no airtight system that will hold back AI-generated content.
Karrot_Kream 1 day ago||
Verifiable credentials have been an idea for a long time now. It wouldn't be that hard to solve. Sign everything you post with a verifiable credential. Implement support on all social media sites. The question is whether the forum implementers, governing bodies, and social media site owners want to try to build a solution like this or not.
degamad 1 day ago||
How will a verifiable credential stop people posting AI slop? You can already give the AI agents access to your digital identities to interact with?
JimDabell 18 hours ago|||
It doesn’t stop people posting AI slop, it stops people from posting AI slop more than once. If you ban somebody for spamming today, they just create a new account and keep on spamming. If you can determine they are the same person you banned before using verifiable credentials, it makes the ban actually effective.
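The point above is easy to sketch in a few lines: key the ban to a stable verified credential rather than to the throwaway account name. The sha256 fingerprint store below is an illustrative stand-in for a real verifiable-credential check, not any actual scheme.

```python
# Toy sketch: bans keyed to a verified credential survive account
# re-creation. Everything here (names, the plain sha256 fingerprint)
# is illustrative, not a real identity system.
import hashlib

banned_credentials = set()
accounts = {}  # username -> credential fingerprint

def fingerprint(credential: str) -> str:
    return hashlib.sha256(credential.encode()).hexdigest()

def sign_up(username: str, credential: str) -> bool:
    fp = fingerprint(credential)
    if fp in banned_credentials:
        return False  # same human, new username: still banned
    accounts[username] = fp
    return True

def ban(username: str):
    banned_credentials.add(accounts.pop(username))

sign_up("spammer", "gov-id-123")
ban("spammer")
print(sign_up("spammer2", "gov-id-123"))  # False — new account, same credential
print(sign_up("newuser", "gov-id-456"))   # True
```

The design choice is that moderation state attaches to the fingerprint, so banning once is enough; without it, the ban dies with the account.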
Karrot_Kream 23 hours ago|||
Layer on captchas. It won't completely stop slop but it's an incentive against slop flooding. And I mean, nothing is stopping a human from just going into ChatGPT by hand and asking for output and copy/pasting that into an HN post box.
rlt 22 hours ago||||
I feel like we need a distributed system/protocol that allows people to have pseudonyms not linked to their real identity, but with a shared reputation/trust score, so if you’re a bad actor using a pseudonym your real identity and all your other sock puppets are penalized too.

I know very little about this but sense that some combination of buzzwords like homomorphic encryption, zk-snarks, and yes, blockchains could be useful.

Of course this would present problems if any of your identities were ever compromised and your reputation destroyed.

nacozarina 22 hours ago||
Driving everything by reputation-weighted identities just creates echo-chambers you then cannot escape.

The most useful time for the blowhard spout off at me is at the moment it makes me most uncomfortable. Because the blowhard probably has a valid point at some level, he’s just being an ass about it.

When we meet that moment with discipline, are able to identify and respond to the kernels of truth and ignore the chaff belted out, focus on the merits of the argument irrespective of the source of an adversarial viewpoint, we thrive.

I like the blowhards just the way they are, unruly and insolent.

cindyllm 21 hours ago||
[dead]
morkalork 1 day ago||||
Problem is, if a token is anonymous, then it follows that it can be bought and sold. Which breaks the original use case of the token, right?
k33n 1 day ago|||
That is exactly what will happen. The sad thing is, it needs to happen. I've found myself advocating for this lately, when 10 years ago, I wouldn't have even considered taking that position.

If Web3-like session-signing had taken off enough to become OS or even browser-native, we would have had a fighting chance of remaining mostly anonymous. But that just didn't happen, and isn't going to happen. Mostly because fraud ruined Web3.

MaKey 1 day ago||
>The sad thing is, it needs to happen.

No, it doesn't.

k33n 21 hours ago||
There's literally no other way to combat rampant botting, child abuse, and nation-state originating disinformation campaigns and the intentional creation of public discord.
aprentic 1 day ago|||
I think we're going to have to make some choices.

A completely anonymous stranger has no way to prove that they're human that can't be imitated by an AI. We've even seen that, in some cases, AIs can look more human to humans than real humans do.

The only solution I can think of to that problem is some sort of provenance system. Even before AI, if some random person told me a thing, I'd ignore them; If my most trusted friend told me something, I'd believe them.

We're going to need a digital equivalent. If I see a post/article/comment I need my tech to automatically check the author and rank it based on their position in my trust network. I don't necessarily need to know their identity, but I do need to know their identity relative to me.

OkayPhysicist 1 day ago||
Reputation tracking is the key. The most simple option is open-invite invite-only spaces: Any user can invite more users, but only users with an invite can participate. Most Discord servers work like this, secret societies like the Oddfellows do, as does the other site.

If you keep track of the invite tree, you can "prune" it as needed to reduce moderation load: low-quality users don't tend to be the source of high-quality users, and in the cases where they are, those high-quality users tend to find other people willing to vouch for them faster than their inviter catches a ban.
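The invite-tree pruning described above can be sketched in miniature. The ding threshold, and the rule of dinging only the direct inviter, are illustrative assumptions, not any site's actual policy.

```python
# Toy invite tree: each user records who invited them. Banning a user
# "dings" their inviter; a heavily dinged user loses invite privileges
# (becomes a leaf). Threshold and names are illustrative.

DING_LIMIT = 3  # strikes before a user may no longer invite

class User:
    def __init__(self, name, inviter=None):
        self.name = name
        self.inviter = inviter  # the User who vouched for this account
        self.dings = 0
        self.banned = False

    def can_invite(self):
        return not self.banned and self.dings < DING_LIMIT

    def invite(self, name):
        if not self.can_invite():
            raise PermissionError(f"{self.name} may not invite")
        return User(name, inviter=self)

def ban(user):
    """Ban a user and ding the person who invited them."""
    user.banned = True
    if user.inviter is not None:
        user.inviter.dings += 1

# Usage: root invites alice, alice invites three spammers; after the
# third ban alice can no longer invite, but root is unaffected.
root = User("root")
alice = root.invite("alice")
for i in range(3):
    ban(alice.invite(f"spammer{i}"))
print(alice.can_invite())  # False — alice is now a leaf
print(root.can_invite())   # True — root's record is clean
```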

aprentic 23 hours ago|||
The open-invite system works well in many cases. It works particularly well in-person but even there you can get drift over time. Our fraternity unanimously agreed on every single initiate who joined; the cohort today is still very different from the one 20 years ago.

In online systems the scales quickly get too big for open-invite. There needs to be a way to automatically update the trust network at a fine grain.

The one that jumps to mind is an inference system; when I +/- a comment, I'm really noting that I trust or distrust the author. It can be general or on a specific topic (eg I trust the author to tell the truth or I trust the author to make me laugh). I could also infer that other people with similar trust patterns are likely trustworthy. And I could likely infer that people who are trusted by people I trust are trustworthy.
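The inference system described above could start as simply as this: votes accumulate into direct trust scores, and trust in a stranger is propagated one hop through people you already trust. The one-hop limit and the 0.5 decay factor are arbitrary illustrative choices.

```python
# Sketch of vote-derived trust with one hop of transitive inference.
# A +1 on someone's comment nudges my direct trust in them up; my
# inferred trust in a stranger is a weighted, decayed average of what
# my trusted contacts think of them.
from collections import defaultdict

DECAY = 0.5  # second-hand trust counts for less than first-hand

class TrustNet:
    def __init__(self):
        # direct[a][b] = a's accumulated trust in b, from a's votes
        self.direct = defaultdict(lambda: defaultdict(float))

    def vote(self, voter, author, delta):
        """A +1/-1 on a comment updates the voter's trust in the author."""
        self.direct[voter][author] += delta

    def trust(self, me, them):
        """Direct trust if any, else trust propagated one hop through
        people I directly trust."""
        if them in self.direct[me]:
            return self.direct[me][them]
        total = weight = 0.0
        for friend, t in self.direct[me].items():
            if t > 0 and them in self.direct[friend]:
                total += t * self.direct[friend][them]
                weight += t
        return DECAY * total / weight if weight else 0.0

net = TrustNet()
net.vote("me", "alice", +1)
net.vote("me", "alice", +1)   # direct trust in alice: 2.0
net.vote("alice", "bob", +1)  # alice trusts bob
print(net.trust("me", "alice"))  # 2.0 (direct)
print(net.trust("me", "bob"))    # 0.5 (inferred, decayed one hop)
```

A real system would need to handle sybils and vote-ring collusion, which this toy deliberately ignores.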

avadodin 22 hours ago||||
reputable ugly bags of mostly water society
Barrin92 21 hours ago|||
>secret societies like the Oddfellows do

yes and they're all full of suckers. In the best case which is already bad you get a pretentious online night club like Clubhouse, in the worst case you get Epstein's island.

These walled-off societies always attract people who are drawn to exclusivity, are run like dystopian island communities or high school cliques, and tend to, in a William Gibson 'anti-marketing' way, be paradoxically even more vapid.

No you need actual open access and reputation systems. A good blueprint is something like well functioning academic communities. It's a combination of eliminating commercial motives, strict rules, high importance on reputation and correctness, peer review, and arguably also real identities and faces.

wvenable 1 day ago|||
I don't think the real issue is LLM posts. The issue with low quality on the Internet has always been quantity. The problem always has been humans who post too much, humans that use software to post too much, and now it's humans who use LLMs to post too much.

The problem with a medium that is completely free and unrestricted is that whoever posts the most sort of wins. I could post this opinion 30-40 times in this thread, using bots and alternative accounts, and completely move the discussion to be only this.

Someone using an LLM to craft a reply is not a problem on its own. Using it to craft a low-effort reply in 3 seconds just to get it out is the problem.

bigstrat2003 22 hours ago|||
> Someone using an LLM to craft a reply is not a problem on its own.

No, someone using an LLM to craft a reply is a problem on its own. I want to hear what a human has to say, not a human filtered through a computer program. No grammar editing, nothing. Give me your actual writing or I'm not interested.

wvenable 22 hours ago||
Do you though? Like what real difference does it make to you? Can you even tell if this has been passed through an LLM or not? If you can't tell, why does it matter?

I don't want to be robo-slopped at en masse or be fed complete fabrications but neither of those actually require an LLM. If you're going to use an LLM to gather your thoughts, I don't see a problem with that.

Barrin92 21 hours ago||
>Like what real difference does it make to you?

the difference is that you get to see the unfiltered, unique perspective of a real human being. Just like I don't want to talk to anyone through an instagram or tiktok beauty filter or accent remover. If your thoughts are unordered, it's okay I'll take your unordered thoughts over some smoothed over crap.

Do people have really such a low opinion of themselves that they have to push every single thing through some kind of layer of artifice?

wvenable 20 hours ago||
> the difference is that you get to see the unfiltered, unique perspective of a real human being.

The implicit unfounded assumption is whether that's actually worth more than a well written orderly response. Most comments are kind of crap.

Not everyone is good at writing. In some cases, it might even be a disability aid. And if their comments aren't good, we have a system in place to rank them accordingly. Again, I think the only problem is quantity. If we're overrun with low-effort posts, no amount of ranking will help that.

munificent 19 hours ago||
> The implicit unfounded assumption is whether that's actually worth more than a well written orderly response.

It's not implicit or unfounded. The parent comment is explicitly saying that's what they prefer. And, as an actual human, their preference is intrinsically valid for them.

If I like my kid's crappy cooking over a Michelin-star meal made by a robot... then I get to like my kid's crappy cooking more. I have that right. There is no social consensus when it comes to what I want. You can't argue whether my preference is correct or not, it's my preference.

wvenable 15 hours ago||
As a software developer and human being, I know people often say they prefer one thing while actually preferring something else. That's human nature.

People have strong feelings about AI in general and that can definitely cloud what they will say about it. Everybody hates AI but, like CGI in movies, they only likely hate the AI or CGI that they notice.

munificent 4 hours ago||
Believing that, say, the use of AI will primarily enrich billionaires that are already doing societal harm is not clouding one's view of AI. It is one's view of AI.

To say otherwise is to say that worrying about lung cancer is clouding one's view of smoking.

> they only likely hate the AI or CGI that they notice.

No, this is simply not true at all. I dislike use of AI even more when I don't notice it. My goal getting on the Internet is to connect with other actual people and their creativity. I want actual people to be more connected to each other, and AI makes that worse, especially when it's good enough that people don't even realize they are being intermediated by corporations pumping out simulated humanity.

wvenable 2 hours ago||
> Believing that, say, the use of AI will primarily enrich billionaires that are already doing societal harm is not clouding one's view of AI. It is one's view of AI.

That's fine. Nobody is forcing you to use AI. I dislike it when people force their ideas onto others.

> My goal getting on the Internet is to connect with other actual people and their creativity.

It's too bad your goal doesn't include interacting with people who don't speak your language and use AI to translate for them. Or people who struggle with writing in general. I don't think it's as black and white as you make it out to be.

ffsm8 1 day ago||||
If you had the LLM write the comment, then it wasn't your thoughts.

I sometimes wonder if people aren't forgetting why we're on this platform.

The goal is to have an interesting discourse and maybe grow as a human by broadening your horizon. The likelihood of that happening with LLMs talking for you is basically nil, hence... why even go through the motions at that point? It's not like you get anything for upvotes on HN.

wvenable 23 hours ago||
> If you had the LLM write the comment, then it wasn't your thoughts.

But what if I provided the LLM my thoughts? That's actually how I use LLMs in my life -- I provide it with my thoughts and it generates things from those thoughts.

Now if I'm just giving it your comment and asking it to reply, then yes, those aren't my thoughts. Why would I do that? I think the answer goes back to my original point.

If I'm telling you my thoughts and then you go and tell a friend those thoughts, would you say those are still my thoughts even though I wasn't the one expressing them directly to your friend?

meatmanek 22 hours ago||
I like to think about it in terms of output-to-prompt ratio. For HN comments, I think an output ratio of 1 or less is _probably_ fine. Examples:

    - translating (relatively) literally from one language to another would be ~1:1.
    - automatic spelling/grammar correction is ~1:1
    - Using an LLM to help you find a concise way of expressing what you mean, i.e. giving it extra content to help it suggest a way of phrasing something that has the connotation you want, would be <1:1
Expansion (output > prompt) is where it gets problematic, at least for HN comments: if you give it an 8 word prompt and it expands it to 50, you've just wasted the reader's time -- they could've read the prompt and gotten the same information.

(expansion is perfectly fine in a coding context -- it often takes way fewer words to express what you want the program to do than the generated code will contain.)
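The ratio heuristic above is trivial to operationalize as a word-count check. The 1.0 threshold mirrors the "ratio of 1 or less is probably fine" rule of thumb; the sample strings are made up for illustration.

```python
# Toy check of the output-to-prompt heuristic: flag a comment when the
# LLM output is much longer than the prompt that produced it.

def expansion_ratio(prompt: str, output: str) -> float:
    """Words of output per word of prompt (word count is a crude proxy)."""
    return len(output.split()) / max(len(prompt.split()), 1)

def looks_like_padding(prompt: str, output: str, limit: float = 1.0) -> bool:
    return expansion_ratio(prompt, output) > limit

prompt = "AI comments waste reader time when padded"
output = ("It is worth noting that comments generated by artificial "
          "intelligence can, in many cases, consume a reader's valuable "
          "time, particularly when they are padded with filler.")
print(looks_like_padding(prompt, output))  # True — a 7-word idea, inflated
```

Of course, word counts say nothing about whether the expansion added genuine insight, which is the caveat the replies below raise.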

wvenable 22 hours ago||
I think all your examples are all perfectly fine.

As for expansion, that might just be the risk we take. I've been downvoted on reddit for being "too verbose" in my replies and I'm a human. And perhaps just reading the prompt in that case wouldn't give you more information; the LLM might actually have some insight that is relevant to the conversation. What's the difference between that and googling for something and pasting it in?

ffsm8 14 hours ago||
The linked rule does not make such a distinction, and I don't see how this rule could be enforced with such a caveat, either.

Hence no, none of these examples should be okay, even though pure translation and grammar checking are going to be effectively impossible to detect, and so likely pointless to talk about.

And the last one is often detectable and very clearly against the rule - I'm not sure how you can come to any other conclusion.

wvenable 14 hours ago||
> I don't see how this rule could be enforced with such a caveat

I don't see how this rule is going to be enforced anyway. Many people posting with AI help won't get noticed at all, and about 100 times as many people are going to be accused of using AI because they use proper grammar.

malfist 23 hours ago|||
Amusingly, your comment carries some of the tropes of AI authorship ("is not a problem on its own... is the problem"), but that it's not shaped like a profound insight being discovered in every line is what makes it human.

How much of AI writing will pass under the radar when the big companies aren't all maximizing to generate the most engagement hacking content in a chatbot UI? Maybe it'll still stand out for being low quality, but I'm not sure. There's lots of low quality human authored content.

Not sure where my comment is going, I just kinda rambled.

wvenable 23 hours ago||
> Amusingly your comment carries some of the tropes of AI authorship

It was trained on 30 years of my posts on the Internet, I'm sure some part of it sounds just like me.

munk-a 1 day ago|||
I'm going to guess we'll eventually settle on a pseudo-anonymous cert system like HTTPS, where some companies are entrusted with verification, and if such a company says "that's definitely a human" it'll fly. Not a great solution, of course, but I really can't see a non-chain-of-custody/trust based approach to the problem, and those might only slightly compromise anonymity in optimal scenarios, but some compromise is inevitable.
WD-42 1 day ago|||
Will it be? Or is the solution to move to smaller, trusted networks where there's less need for proof. Unfortunately I think the age of large scale open discussion forums like HN is coming to an end.
thewebguyd 1 day ago|||
I think this is the most likely and best path. There's no stopping the flood of bots, the dead internet theory is beyond just a theory at this point.

Best we can do, for the internet and ourselves, is to move away from it and into smaller networks that can be more effectively moderated, and where there is still a level of "human verification" before someone gets invited to participate.

I don't like what that will do to being able to find information publicly, though. The big advantage of internet forums (that have all but disappeared into private discords) is search ability/discoverability. Ran into a problem, or have a question about some super niche project or hobby? Good chance someone else on the net also has it and made a post about it somewhere, and the post & answers are public.

Moving more and more into private communities removes that, and that is a great loss IMO.

bluefirebrand 23 hours ago||
> Moving more and more into private communities removes that, and that is a great loss IMO

It is a great loss. Unfortunately this is a result of unchecked greed and an attitude of technological progress at any cost. Frankly we enabled this abuse by naively trying to maintain a free and open internet for people. Maybe we should have been much more aggressively closed off from the start, and not used the internet to share so freely.

gdulli 1 day ago|||
The utility of those larger sites is coming to an end, but most people aren't discerning or ambitious enough to leave and seek out the smaller places you mentioned. Places like this will remain but will join Facebook, Reddit, and Twitter as shadows of their prior useful selves. The smaller, better sites won't have to worry about attracting the masses and therefore worsening, because the masses have finally settled.
agile-gift0262 1 day ago|||
just scan your eye in this orb to prove you are human. I'll give you some sh*tcoins in exchange
jsheard 1 day ago|||
Sam Altman would love to sell you a solution to the fire that he dumped gasoline on.

https://en.wikipedia.org/wiki/World_(blockchain)

shit_game 1 day ago|||
This issue (human attestation) is the product of these AI companies. They are poisoning the well, only to sell the cure. This may not have initially been the plan of many of these companies, but it is the eventual end goal of all of them. Very similar to war profiteers: selling both the problem and the solution simultaneously has yet to be outlawed, but has long been masterfully capitalized on, and will continue to be, because nobody will stop it.

Years ago (around 2020, when GPT-2 and 3 became publicly available) I noticed and was incredibly critical of how prevalent LLM-generated content was on reddit. I was permanently banned for "abusing reports" for reporting AI-generated comments as spam. Before that, I had posted about how I believed that the fight against bots was over because the uncanny valley of text generation had been crossed; prior to the public availability of LLMs, most spam/bot comments were either shotgunned scripts that are easily blockable by the most rudimentary of spam filters, generated gibberish created by markov chains, or simply old scraped comments being reposted. The landscape of bot operation at the time largely relied on gaming human interaction, which required carefully gaming temporal relevance of text content, coherence of text content (in relation to comment chains), and the most basic attempt at appearing to be organic.

After LLMs became publicly available, text content that was temporally, contextually, and coherently relevant could be generated instantly for free. This removed practically every non-platform-imposed friction for a bot to be successful on reddit (and to generalize, anywhere that people interact). Now the onus of determining what is and isn't organic interaction is squarely on the platform, which is a difficult problem because now bot operators have had much of their work freed up, and can solely focus on gaming platform heuristics instead of also having to game human perception.

This is where AI companies come in to monetize the disaster they have created; by offering fingerprinting services for content they generate, detection services for content made by themselves and others, and estimations of human authenticity for content of any form. All while they continue to sell their services that contradict these objectives, and after having stolen literally everything that has ever been on the internet to accomplish this.

These people are evil. Not these companies - they are legal constructions that don't think or feel or act. These people are evil.

pear01 1 day ago||||
One should highlight the best part of this: https://www.toolsforhumanity.com/orb

An orb that scans your eyeballs for "proof of human".

rationalist 1 day ago|||
You just need to pay someone 1 cent every time they scan their eye for you. You will have people sitting at home and giving their eye scans to AIs to use.
SchemaLoad 22 hours ago||
You'd still burn through IDs. Eventually the people selling their ID would just end up blacklisted from signing up for new accounts.
antonvs 1 day ago||||
Negative, I am a meat popsicle
tomalbrc 1 day ago|||
I fully expected this to be a meme. Eerie
levkk 1 day ago||||
It's not clear to me how this is verifiable without constant hardware supervision. Even that'll get cracked, just like DVD encryption back in the day.

You almost need dedicated hardware that can't run any other software, just a mechanical keyboard, and make it communicate over an analog medium; something terribly expensive and inconvenient for AI farms to duplicate.

intrasight 1 day ago|||
I started promoting the idea of hardware verification about 6 years ago. Didn't get any traction and I doubt I ever will.

I think Apple is the only company that would even be able to do that. You have to control the full stack to the pixels or speaker.

degamad 23 hours ago|||
One physical robot with four wheels, a camera, and 101 up/down "fingers" to match the keyboard can roll between physical machines and type on mechanical hardware keyboards. This brings the ceiling of how many accounts you can control down to the number of computers you have, but that's not a high price to pay.
wasmitnetzen 22 hours ago|||
We will just have to fucking swear all the time. The corporate-speak LLM won't do that.
SchemaLoad 22 hours ago||
Grok will post CP on twitter, you think it won't swear?
apitman 1 day ago|||
Maybe it will push people to seek out more in-person interactions, which would be a good thing.
Asmod4n 1 day ago|||
You could sell physical items at any store where you have to show your ID, and you get one for the age group you are in.

That kills two birds with one stone: you can then show everywhere online that you are human and how old you are without the services needing any personal information about you, and the sellers don't know what you use that ID tag for.

lich_king 1 day ago|||
People who are posting AI comments or setting up AI bots are... people. They can show their ID. If a website owner doesn't have a way to ban that specific human and the bad guy can always get another voucher, it's sort of meaningless.

In fact, even if you can ban the human for life, I'm not sure it solves anything. There are billions of people out there and there's money to be made by monetizing attention. AI-generated content is a way to do that, so there's plenty of takers who don't mind the risk of getting booted from some platform once in a blue moon if it makes them $5k/month without requiring any effort or skill.

djeastm 23 hours ago||||
Perhaps not only just show your id to get your "Over age X verification object", but your ID also gets irreversibly altered (like a punch card) that makes it one-time-use only.

That might make it less likely someone would ever sell it because to get a new one might take a very long "cool-down" time and it'd severely hamper the seller.

stetrain 1 day ago||||
I'll sell you my proof-of-human-age badge for $1,000.
Dylan16807 1 day ago||
I would be overjoyed if a human-level amount of spam cost $1000 per year-or-until-caught.
MattRix 1 day ago||||
what’s to keep people from selling or giving away those id tags? seems like a nefarious entity could buy them in bulk
vova_hn2 1 day ago|||
It's already sorta happening with SIM-cards/phone numbers that are sometimes used for similar purposes.
close04 1 day ago||||
Same thing that keeps me from letting my agent do the online talking for me. That is to say… nothing.
Asmod4n 1 day ago|||
law enforcement.
LoomyBunny 1 day ago|||
[dead]
sebastiennight 1 day ago|||
> especially without sacrificing people's right to privacy and anonymity in the process

I'm afraid the ship has sailed on this one. What other solutions have you heard of apart from the dystopian eyeball-scanning, ID-uploading, biometrics-profiling obvious ones?

(knowing that of course, neither of those actually solve the problem)

TacticalCoder 23 hours ago|||
> I believe the issue of proving who is and who isn't really human on the Internet will be a really important issue in the coming years

On a site like HN it's kinda easy to vet for at least those that already had thousands of karma before ChatGPT had its breakthrough moment a few years ago.

Now an AI could be asked to "Use my HN account and only write in my style" and probably fool people but I take it old-timers (HN account wise) wouldn't, for the most part, bother doing something that low. Especially not if the community says it's against the guidelines.

shadowgovt 1 day ago|||
If it becomes one, then that will be the end of sites like Hacker News.

This site, at its core, is fundamentally too low-bandwidth, too text-only, and too hands-off-moderated to be able to shoulder the burden of distinguishing real human-sourced dialog from text generated by machines that are optimized to generate dialog that looks human-sourced. Expect the consequence to be that the experience you are having right now will drastically shift.

My personal guess: sites like this will slop up and human beings will ship out, going to sites where they have some mechanism for trust establishment, even if that mechanism is as simple and lo-fi as "The only people who can connect to this site are ones the admin, who is Steve and we all know Steve, personally set up an account for." This has, of course, sacrificed anonymity. But I fundamentally don't see an attestation-of-humanity model that doesn't sacrifice anonymity at some layer; the whole point of anonymity on the Internet was that nobody knew you were a dog (or, in this case, a lobster), and if we now care deeply about a commenter's nephropid (or canid) qualities, we'll probably have to sacrifice that feature.

I'd rather keep the feature, personally.

toomuchtodo 1 day ago|||
I like Mitchell's Vouch idea. At the end of the day, it's all about trust. Anything else is an abstraction attempting to replicate some spectrum of trust.

https://news.ycombinator.com/item?id=46930961

https://github.com/mitchellh/vouch

grufkork 1 day ago||
I think we’ll see a return to smaller groups and implementing a lot of systems the way we do it IRL. I think you could definitely do a more fine-grained system that progressively adds less score to contacts the further away they are. In combination with some type of accumulating reputation system, you’d have both a force to keep out unknown IDs, but also a reason for one to stick to their current ID even though it’s anonymous.

Adding this type of rep system would destroy a lot of what is so cool about the internet though. There’d probably be segregation based on rep if it’s very visible, new IDs drowning in a sea of noise. Being anonymous but with a record isn’t the same as posting for the very first time as a completely blank identity and still being given an audience. Making online comms more like real life would alleviate some problems but would also lose part of the reason they’re used in the first place. I don’t see much any other way to do it besides maybe a state-provided anonymous identity provider (though that’s risky for a number of reasons), but it’s going to be sad to see things go.
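The "progressively adds less score to contacts the further away they are" idea above can be sketched as a distance-decayed walk over a vouch graph. This is purely illustrative: the names, the decay factor, and the hop limit are all assumptions, not anything Vouch or HN actually implements.

```python
from collections import deque

def trust_score(vouches, me, target, decay=0.5, max_hops=4):
    """Breadth-first walk from `me` through the vouch graph; each hop
    multiplies the contributed trust by `decay`, and contributions from
    multiple vouch paths are summed."""
    if target == me:
        return 1.0
    seen = {me}
    queue = deque([(me, 1.0, 0)])
    score = 0.0
    while queue:
        node, weight, hops = queue.popleft()
        if hops >= max_hops:
            continue
        for friend in vouches.get(node, []):
            contribution = weight * decay
            if friend == target:
                score += contribution
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, contribution, hops + 1))
    return score

# Hypothetical vouch graph: Steve vouches for Alice and Bob,
# both of whom vouch for newcomer Carol.
vouches = {
    "steve": ["alice", "bob"],
    "alice": ["carol"],
    "bob": ["carol"],
}
print(trust_score(vouches, "steve", "alice"))  # one direct vouch: 0.5
print(trust_score(vouches, "steve", "carol"))  # two 2-hop paths: 0.25 + 0.25
```

The segregation worry above falls out of this directly: anyone outside your component of the graph scores exactly zero, no matter how good their first post is.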

khazhoux 1 day ago||
[flagged]
vova_hn2 1 day ago|||
People seem yo be unable to read your irony...
floxy 1 day ago|||
Yo! Apparently not enough em-dashes or bullet points.
blast 23 hours ago|||
The joke has been old for a while already.
khazhoux 23 hours ago||
I like to think mine brought a certain je ne sais quoi to the public discourse.
skeledrew 1 day ago|||
Why?
sam345 17 hours ago||
Good addition but to be fair HN guidelines have become so quaint particularly as they are now rarely enforced or even acknowledged. E.g. "Eschew flamebait. Avoid generic tangents. Omit internet tropes. " And " Off-Topic: Most stories about politics, or crime, or sports, or celebrities, unless they're evidence of some interesting new phenomenon. If they'd cover it on TV news, it's probably off-topic. " These are violated every day without consequence.
altairprime 15 hours ago|
How often do you report the violations you see every day to the mods? (The ‘flag’ button is not yet suitable for that purpose.)
ezst 1 day ago||
Does that extend to generated/AI-edited articles? I don't see why the same rationale wouldn't apply.
travisgriggs 18 hours ago||
TIL: definition fulminate

fulminated, fulminating: to explode with a loud noise; detonate. To issue denunciations or the like (usually followed by "against").

(Because “don’t fulminate” is the rule that follows the referenced one :) )

caditinpiscinam 18 hours ago|
Same. I vaguely remembered "fulmen" from Latin class but I didn't know there was a derived English word.

> from Latin fulminatus, past participle of fulminare "hurl lightning, lighten," figuratively "to thunder," from fulmen (genitive fulminis) "lightning flash," -- from etymonline.com

CactusBlue 1 day ago||
Slightly tangential, but this paragraph is the only one on the rules page with an "id" attr set, so you can link to this specific rule.
resiros 1 day ago||
Not sure I agree with the AI edited comments. Using AI to improve the readability and clarity is fine. Sometimes a well structured comment is much better than a braindump that reads like ramblings. And AI is quite good at it (and probably will get better). To make the point, here is how this comment would have looked if edited:

"I don't fully agree with banning AI-edited comments. Using AI to improve readability and clarity is a reasonable thing to do. A well-structured comment is often much better than a braindump that reads like rambling. AI is quite good at this, and it will probably get better. To illustrate the point, here is how this comment would have looked if edited"

dustycyanide 1 day ago||
I prefer your non-edited version. My brain automatically starts to zone out with the AI edited version, side effect of having read way too much AI text
danbrooks 1 day ago||
I also prefer the original version - the AI version has a strange vibe.
data-ottawa 1 day ago|||
Not to take away from your point, but I like your original one better.
cityofdelusion 1 day ago|||
Non-edited is better. It flows and reads faster. The AI sentences feel clinical and sterile. They feel, well, like AI.
a_victorp 1 day ago||
I had never noticed the flow of AI text. They do make the flow of reading feel weird with a lot of pauses! Thanks for pointing it out
xxs 1 day ago|||
The edited version is an example of a sterile/canned response. No one talks like that.

While I do edit my comments to fix typos, certain spelling oddities and other peculiarities would be present.

yellowapple 19 hours ago|||
For all the people saying they prefer the non-edited version: would y'all be saying that if you didn't already know which one was the non-edited version? Be honest.
yesfitz 1 day ago|||
It's a matter of taste, but your original writing is way better. Your writing has your voice. Like dropping the "I am" from your first sentence, using parentheticals, couching your point in understatement (e.g. "sometimes" meaning often instead of just saying "often").

The AI comment might be clear, but it sounds like a press release, not a person, and there's nothing to engage with.

Sharlin 1 day ago||
There's nothing inherently better about the edited version. It's just saying the same thing with synonyms substituted, at a slightly more formal but less personal register. HN comments are not academic text, colloquial turns of phrase are perfectly fine and expected.
BeetleB 1 day ago||
> There's nothing inherently better about the edited version.

Easier to read ==> More likely to be read.

No, it's not saying the same thing, especially if the tool is telling you that your statement is ambiguous and should be rephrased.

xxs 1 day ago|||
Easier to read is mostly related to the predictability of the text. Any time the brain mispredicts the next word, you have to go back and re-read.

Unless you purposely train on that specific way of expression, it ain't easier to read.

BeetleB 23 hours ago||
I don't know why this is confusing. If I forget to put the "not" qualifier in a sentence, do we agree that it can confuse (or worse, mislead) the reader?
xxs 12 hours ago||
I never said confusing, just not easier to read, in relative terms.
mkl 1 day ago||||
I don't think the edited version is easier to read.
BeetleB 23 hours ago||
I'll ask the same question I asked someone else:

https://news.ycombinator.com/item?id=47342324

You're saying removing ambiguity does not make it easier to read? You're saying using a word that means nothing like what you meant to say is easier to read than using the correct word?

Really?

Sharlin 21 hours ago||
What are you referring to? What word did the GP use that means nothing like what they meant to say?
BeetleB 20 hours ago||
OK. My brain farted, and I misunderstood the top post to be saying something else, and your and others' criticisms were misinterpreted by me.

Now here's the thing. I wrote all my prior comments on a machine with no LLM access. On my personal machine, I had a while ago installed a TamperMonkey script that sends my draft, along with all the parents (to the root) to an LLM for feedback (with a specific prompt). All it does is give feedback (logical errors, etc). So I tried again with one of my comments, and its feedback found several flaws with my comment, and ended it with this suggestion:

"Considering all this, it might be BETTER to either not reply ..."

Had I had this advice when I was writing those comments, it would have saved me and others a fair amount of time.

This is (mildly) useful. It'd be sad to ban such use.
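The feedback-only reviewer described above is straightforward to sketch. The actual userscript is JavaScript running in TamperMonkey; this Python version only shows the prompt-construction idea, and the prompt wording and the `thread` shape are my own assumptions, not the commenter's script.

```python
# Instruction that keeps the model in feedback mode: it may critique
# the draft, but must never rewrite it.
FEEDBACK_PROMPT = (
    "You are reviewing a draft forum reply. Do NOT rewrite it. "
    "Only point out logical errors, ambiguity, and misreadings of the "
    "parent comments. If replying seems unwise, say so."
)

def build_review_request(thread, draft):
    """Concatenate the ancestor comments (root first) and the draft
    into a single review request for the model."""
    context = "\n\n".join(f"[{c['author']}] {c['text']}" for c in thread)
    return (
        f"{FEEDBACK_PROMPT}\n\n"
        f"Thread so far:\n{context}\n\n"
        f"Draft reply:\n{draft}"
    )

# Hypothetical thread from root to the comment being replied to.
thread = [
    {"author": "op", "text": "AI-edited comments should be banned."},
    {"author": "parent", "text": "Editing for clarity seems fine to me."},
]
request = build_review_request(thread, "You never addressed clarity!")
print(request)
```

Because the model only returns critique, the words that end up posted are still the human's, which is arguably the line the new guideline is trying to draw.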

Sharlin 1 day ago|||
More formal register doesn’t mean easier to read or understand. To many people the exact opposite is the case.
BeetleB 23 hours ago||
> More formal register doesn’t mean easier to read or understand.

And who is advocating for a more formal register?

Havoc 12 hours ago|
That’s fine. I’m not really bothered by this either way in hn context

Only really irritated by the ultra low effort “here is a raw copy paste of what my LLM said on this topic” comments. idk how people think that’s helpful or desired

larodi 12 hours ago|
in reality, it is perhaps indistinguishable. like - if I take this whole page of comments, feed it into... say Opus latest 1M, and tell it "have my text tweaked in a way to please these guys' apparent aesthetic preferences", or even "make my writing sound human in the sense all these guys do", then I cannot see how anyone would recognize it.

unless it's signed before uploading, like, is this even enforceable?

More comments...