
Posted by chwtutha 23 hours ago

Vouch (github.com)
https://x.com/mitchellh/status/2020252149117313349

https://nitter.net/mitchellh/status/2020252149117313349

https://github.com/ghostty-org/ghostty/pull/10559

560 points | 246 comments
max_ 3 hours ago|
If you like this, you may love Robin Hanson's similar idea of vouching [0].

[0]: https://www.youtube.com/watch?v=rPdHXw05SvU

1a527dd5 7 hours ago||
I think denouncing is an incredibly bad idea, especially as the foundation of Vouch seems to be a web of trust.

If you get denounced on a popular repo and everyone "inherits" that repo as a source of trust, you're stuck (think email providers: Google decides you are bad, good luck).

Couple that with the fact that new contributors usually take some time to find their feet.

I've only been at this game (SWE) for ~10 years, so not a long time. But I can tell you my first few contributions were clumsy and perhaps would have earned me a denouncement.

I'm not sure I would have contributed to the AWS SDK, Sendgrid, NUnit, or New Relic (easily my best experience), and my attempted contribution to Npgsql (easily my worst experience) would definitely have earned me a denouncement.

The concept is good, but I would omit denouncement entirely.

acjohnson55 7 hours ago||
I'm guessing denounce is for bad faith behavior, not just low quality contributions. I think it's actually critical to have a way to represent this in a reputation system. It can be abused, but abuse of denouncement is grounds for denouncement, and being denounced by someone who is denounced by trusted people should carry little weight.
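One way to make "being denounced by someone who is denounced by trusted people should carry little weight" concrete is iterative discounting. A minimal sketch in Python; the function and scoring model are hypothetical illustrations, not the actual Vouch semantics:

```python
# Hypothetical sketch: weight each user's denouncements by their own
# standing. Standing starts at 1.0 and is pushed down by denouncements
# from users in good standing; a few iterations propagate the effect.

def standings(users, denouncements, rounds=5):
    """denouncements: list of (denouncer, target) pairs."""
    score = {u: 1.0 for u in users}
    for _ in range(rounds):
        new = {}
        for u in users:
            # Total standing of everyone who denounced u.
            weight = sum(score[d] for d, t in denouncements if t == u)
            new[u] = 1.0 / (1.0 + weight)
        score = new
    return score

s = standings(
    ["alice", "mallory", "bob"],
    [("mallory", "bob"),      # mallory denounces bob in bad faith
     ("alice", "mallory")],   # but trusted alice denounces mallory
)
# mallory's standing drops, so her denouncement of bob counts for little
assert s["mallory"] < s["bob"]
```

This also illustrates the point about abuse: a denouncer whose own standing has collapsed cannot meaningfully drag anyone else down.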
ncr100 7 hours ago||
IDK about this implementation ...

Over-denouncing ought to be tracked, too, as part of a user's trustworthiness profile.

acjohnson55 22 minutes ago||
I'm pretty sure this project just does the storage model. It's up to communities that use it to determine the semantics and derive reputation and other higher level concepts from the data.
Rapzid 6 hours ago|||
Off topic, but why was contributing to Npgsql a bad experience for you? I've contributed (admittedly minor stuff) to that ecosystem and it was pretty smooth.
mmooss 3 hours ago|||
Denounce also creates liability: you are defaming someone, explicitly harming their reputation and possibly their career.

I'd hesitate to create the denounce function without speaking to an attorney; when someone's reputation and career are torpedoed by the chain reaction you created - with the intent of torpedoing reputations - they may name you in a lawsuit for damages and/or to compel you to undo the 'denounce'.

Not vouching for someone seems safe. No reason to get negative.

mjr00 7 hours ago||
What value would this provide without the denouncement feature? The core purpose of the project, from what I can tell, is being able to stop the flood of AI slop coming from particular accounts, and the means to accomplish that is denouncing those accounts. Without denouncement you go from three states (vouched, neutral, denounced) to two (vouched and neutral). You could just make everyone who isn't vouched be put into the same bucket, but that seems counterproductive.
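The three states can be modeled as a small resolution function where a trusted denouncement overrides a vouch. A hypothetical sketch, not the project's actual API or semantics:

```python
from enum import Enum

class Trust(Enum):
    VOUCHED = "vouched"
    NEUTRAL = "neutral"
    DENOUNCED = "denounced"

def resolve(vouchers, denouncers, trusted):
    """Resolve a contributor's state from the sets of users who vouched
    for or denounced them, counting only signals from trusted users."""
    if trusted & denouncers:
        return Trust.DENOUNCED  # a trusted denouncement wins
    if trusted & vouchers:
        return Trust.VOUCHED
    return Trust.NEUTRAL

assert resolve({"alice"}, set(), {"alice"}) == Trust.VOUCHED
assert resolve({"alice"}, {"bob"}, {"alice", "bob"}) == Trust.DENOUNCED
assert resolve({"mallory"}, set(), {"alice"}) == Trust.NEUTRAL
```

Dropping DENOUNCED collapses this to a two-state check, which is the point being made: without it, a known slop account is indistinguishable from a first-time contributor.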
ashton314 19 hours ago||
Fediverse link: https://fosstodon.org/@mitchellh@hachyderm.io/11603152931120...
nmstoker 5 hours ago||
Interesting idea.

It spreads the effort of maintaining the list of trusted people, which is helpful. However, I still see a potential firehose of randoms requesting to be vouched for. There are various ways one might manage that, perhaps even a modest-effort preceding step that demonstrates understanding of the project and willingness to help, such as A/B triaging of several pairs of issues - a kind of directed, project-relevant CAPTCHA?

alexjurkiewicz 21 hours ago||
The Web of Trust failed for PGP 30 years ago. Why will it work here?

For a single organisation, a list of vouched users sounds great. GitHub permissions already support this.

My concern is with the "web" part. Once you have orgs trusting the vouch lists of other orgs, you end up with the classic problems of decentralised trust:

1. The level of trust is only as high as the lax-est person in your network.

2. Nobody is particularly interested in vetting new users.

3. Updating trust rarely happens.

There _is_ a problem with AI slop overrunning public repositories. But WoT has failed once; we don't need to try it again.
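The "lax-est person" failure mode is easy to see in a toy multiplicative model, where an org's effective trust in a contributor is the best available delegation path. All names and numbers here are illustrative, not how any particular system computes trust:

```python
import math

def chain_trust(links):
    """Multiplicative model: each hop's trust level discounts the chain."""
    return math.prod(links)

def effective_trust(paths):
    """An org's effective trust in a user: the best available path."""
    return max(chain_trust(p) for p in paths)

# Two paths to the same contributor: via careful vouchers and via a lax one.
careful = [0.9, 0.9]       # two careful hops -> ~0.81
lax = [0.9, 1.0, 1.0]      # one over-trusting chain -> 0.9
assert effective_trust([careful, lax]) == 0.9
```

Because the maximum over paths governs, a single fully-trusting member lets anyone through, so the network's effective standard sinks to its most lax voucher.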

Animats 18 hours ago||
> The Web of Trust failed for PGP 30 years ago. Why will it work here?

It didn't work for links as reputation for search once "SEO" people started creating link farms. It's worse now. With LLMs, you can create fake identities with plausible backstories.

This idea won't work with anonymity. It's been tried.

ibrahima 7 hours ago||
I guess this is why Sam Altman wants to scan everyone's eyeballs.
javascripthater 20 hours ago|||
Web of Trust failed? If you saw that a close friend had signed someone else's PGP key, you would be pretty sure it was really that person.
BugsJustFindMe 8 hours ago||
Identity is a lot easier than forward trustworthiness. It can succeed for the former and fail for the latter.
chickensong 6 hours ago|||
I'm not convinced that just because something didn't work 30 years ago, there's no point in revisiting it.

There's likely no perfect solution, only layers and data points. Even if one of the layers only provides a level of trust as high as the most lax person in the network, it's still a signal of something. The internet will continue to evolve and fracture into segments with different requirements IMHO.

mijoharas 5 hours ago||
> The idea is based on the already successful system used by @badlogicgames in Pi. Thank you Mario.

This is from the twitter post referenced above, and he says the same thing in the ghostty issue. Can anyone link to discussion on that or elaborate?

(I briefly looked at the pi repo, and have looked around in the past but don't see any references to this vouching system.)

sebastianconcpt 3 hours ago||
https://www.lewissociety.org/innerring/
davidkwast 22 hours ago||
I think LLMs are accelerating us toward a Dune-like universe, where humans come before AI.
sph 19 hours ago||
You say that as if it’s a bad thing. The bad thing is that to get there we’ll have to go through the bloody revolution to topple the AIs that have been put before the humans. That is, unless the machines prevail.

You might think this is science fiction, but the companies that brought you LLMs have always had the goal of pursuing AGI and all its consequences. They haven’t achieved it yet, but that has always been the end game.

ashton314 20 hours ago||
Got to go through the Butlerian Jihad first… not looking forward to that bit.

(EDIT: Thanks sparky_z for the correction of my spelling!)

sparky_z 19 hours ago|||
Close, but it's "Butlerian". Easy to remember if you know it's named after Samuel Butler.

https://en.wikipedia.org/wiki/Erewhon

Rumple22Stilk 6 hours ago|||
The alternative is far far worse.
skeptrune 17 hours ago||
I have a hard time poking holes in this. It seems objectively good, and it seems like it, or some very similar version of it, will work long term.
nabilsaikaly 5 hours ago|
I believe interviewing devs before allowing them to contribute is a good strategy for the coming years. Let’s treat future OSS contributors the same way companies/startups treat new devs they want to hire.
tedk-42 4 hours ago|
This adds friction, disincentivizes legitimate and high-quality code contributions, and demands even more human effort.
otterley 4 hours ago||
The entire point is to add friction. Accepting code into public projects used to be a high-friction process. RMS and Linus Torvalds weren't just accepting anyone's code when they developed GNU and Linux; to even be considered, you had to submit patches in the right way to a mailing list. And you had to write the code yourself!

GitHub and LLMs have reduced the friction to the point where it's overwhelming human reviewers. Removing that friction would be nice if it didn't cause problems of its own. It turns out that friction had some useful benefits, and that's why you're seeing the pendulum swing the other way.
