Posted by chwtutha 19 hours ago
Perhaps that is the plan?
I get that the spirit of this project is to increase safety, but if the above social contract actually becomes prevalent, this seems like a net loss. It establishes an exploitable path for supply-chain attacks: an attacker "proves" themselves trustworthy on any project by behaving in an entirely helpful and innocuous manner, then leverages that to gain trust in the target project (possibly through multiple intermediary projects). If this sort of cross-project trust ever becomes automated, then any account that was ever trusted anywhere suddenly becomes an attractive target for account-takeover attacks. I think a pure distrust list would be a much safer place to start.
It's just a layer to minimize noise.
Thing is, this system isn't supposed to be perfect. It is supposed to be better enough to be worth the hassle.
I doubt I'll get vouched anywhere (tho IMO it depends on context), but I firmly believe humanity (including me) will benefit from this system. And if you aren't a bad actor with bad intentions, I believe you will, too.
The only side effect is that genuine contributors who aren't popular / in the know need to put in a little bit more effort. But again, that is part of "worth the hassle." I'll take that trade.
Think of this like a spam filter, not a "I met this person live and we signed each other's PGP keys" -level of trust.
It's not there to prevent long-con supply chain attacks by state level actors, it's there to keep Mr Slopinator 9000 from creating thousands of overly verbose useless pull requests on projects.
If PR is good, maintainer refunds you ;)
I noticed the same thing in communication. Communication is now so frictionless that almost all the communication I receive is low quality. If it cost more to communicate, the quality would increase.
But the value of low-quality communication is not zero: it is negative, because it actively eats your time.
In that world there's a process called "staking" where you lock some tokens with a default lock expiry action and a method to unlock based on the signature from both participants.
It would work like this: the repo has a public key. The submitter uses a smart contract to stake some tokens along with a signature over the commit. If the repo merges the PR, the smart contract returns the stake to the submitter. Otherwise it goes to the repo.
It's technically quite elegant, and the infrastructure is all there (with some UX issues).
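The escrow mechanics described above could be sketched like this (a toy model, not any real smart-contract API; the class name, amounts, and settlement shape are all made up for illustration):

```python
# Hypothetical sketch of the staking escrow: a submitter locks a stake,
# and settlement depends on whether the maintainer merges the PR.

class StakeEscrow:
    """Holds a submitter's stake until the maintainer merges or rejects."""

    def __init__(self, stake: int):
        self.stake = stake
        self.settled = False

    def settle(self, merged: bool) -> dict:
        # On merge the stake returns to the submitter; otherwise the
        # repo keeps it (the default lock-expiry action).
        if self.settled:
            raise RuntimeError("stake already settled")
        self.settled = True
        recipient = "submitter" if merged else "repo"
        return {"recipient": recipient, "amount": self.stake}

# A merged PR refunds the stake; a rejected one forfeits it:
assert StakeEscrow(10).settle(merged=True) == {"recipient": "submitter", "amount": 10}
assert StakeEscrow(10).settle(merged=False) == {"recipient": "repo", "amount": 10}
```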
But don't do this!!!!
I did some work in crypto. It's made me realize that the love of money corrupts, and because crypto brings money so close to engineering it corrupts good product design.
We've seen it everywhere, in communication, in globalised manufacturing, now in code generation.
It takes nothing to throw something out there now; we're at a scale that there's no longer even a cost to personal reputation - everyone does it.
But a non-zero cost of communication can obviously also have negative effects. It's interesting to think about where the sweet spot would be. But it's probably very context specific. I'm okay with close people engaging in "low quality" communication with me. I'd love, on the other hand, if politicians would stop communicating via Twitter.
A poorly-thought-out hypothetical, just to illustrate: Make a connection at a dinner party? Sure, technically it costs 10¢ to make that initial text message/phone call, then the next 5 messages are 1¢ each, but thereafter all the messages are free. Existing relationships: free. New relationships: extremely cheap. Spamming at scale: more expensive.
I have no idea if that's a good idea or not, but I think that's an ok representation of the idea.
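That tiered pricing is simple enough to write down (using the assumed numbers from the hypothetical: 10¢ for the first message, 1¢ for the next five, free afterwards):

```python
# Toy tiered message pricing: cost depends only on how many messages
# you've already sent to this person.

def message_cost_cents(n_prior_messages: int) -> int:
    if n_prior_messages == 0:
        return 10          # first contact
    if n_prior_messages <= 5:
        return 1           # messages 2 through 6
    return 0               # established relationship: free

# First ten messages to one new contact cost 10 + 1*5 + 0*4 = 15 cents;
# a spammer blasting ten new contacts pays 10 cents apiece.
total = sum(message_cost_cents(i) for i in range(10))
assert total == 15
```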
I was specifically thinking about general communication. Comparing the quality of communication in physical letters (from a time when that was the only affordable way to communicate) to messages we send each other nowadays.
You can also integrate it in clients by adding payment/reward claim headers.
Let's say you're a one-of-a-kind kid who's already making useful contributions, but $1 is a lot of money for you. Does your work suddenly become useless?
It feels weird to pay for providing work anyway. Even if it's LLM gunk, you're paying to work (let alone paying for your LLM).
ie, if you want to contribute code, you must also contribute financially.
That would make not-refunding culturally crass unless it was warranted.
With manual options for:
0. (Default, refund)
1. (Default refund) + Auto-send discouragement response. (But allow it.)
2. (Default refund) + Block.
3. Do not refund
4. Do not refund + Auto-send discouragement response.
5. Do not refund + Block.
6. Do not refund + Block + Report SPAM
And typically use a $1 fee, to discourage spam.
And a $10 fee for important, open, but high-frequency addresses, as that covers the cost of reviewing high-throughput email, so useful email gets identified and reviewed.
The latter would be very useful in enabling in-demand contact doors to remain completely open without being overwhelmed. Think of a CEO or other well-known person who ideally does want an open channel of feedback from anyone, but is going to have to have someone vet feedback for the most impactful comments and summarize any important trend in the rest. $10 strongly disincentivizes low-quality communication, and covers the cost of getting value out of communication (for everyone).
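The option list and fee tiers above could be modeled as follows (every name here is hypothetical; the $1/$10 amounts are the ones proposed above):

```python
# Sketch of the manual handling options (0-6) plus the two fee tiers:
# $1 default, $10 for high-traffic open addresses.

from enum import Enum

class Action(Enum):
    REFUND = 0                # default
    REFUND_DISCOURAGE = 1     # refund + auto-send discouragement
    REFUND_BLOCK = 2          # refund + block
    KEEP = 3                  # do not refund
    KEEP_DISCOURAGE = 4
    KEEP_BLOCK = 5
    KEEP_BLOCK_REPORT = 6     # do not refund + block + report spam

FEE_CENTS = {"default": 100, "high_traffic": 1000}

def settle(action: Action, tier: str = "default") -> int:
    """Cents the recipient keeps after handling one message."""
    refunds = {Action.REFUND, Action.REFUND_DISCOURAGE, Action.REFUND_BLOCK}
    return 0 if action in refunds else FEE_CENTS[tier]

assert settle(Action.REFUND) == 0                         # good-faith mail refunded
assert settle(Action.KEEP_BLOCK_REPORT, "high_traffic") == 1000  # spam forfeits $10
```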
I get that AI is creating a ton of toil to maintainers but this is not the solution.
FOSS has turned into an exercise in scammer hunting.
Think denying access to production. But allowing changes to staging. Prove yourself in the lower environments (other repos, unlocked code paths) in order to get access to higher envs.
Hell, we already do this in the ops world.
Alternatively they might keep some things open (issues, discussions) while requiring a vouch for PRs. Then, if folks want to get vouched, they can ask for that in discussions. Or maybe you need to ask via email. Or contact maintainers via Discord. It could be anything. Linux isn't developed on GitHub, so how do you submit changes there? Well you do so by following the norms and channels which the project makes visible. Same with Vouch.
I even see people hopping on chat servers begging to 'contribute' just to get github clout. It's really annoying.
Not sure about the trust part. Ideally, you can evaluate the change on its own.
In my experience, I immediately know whether I want to close or merge a PR within a few seconds, and the hard part is writing the response to close it such that they don't come back again with the same stuff.
(I review a lot of PRs for openpilot - https://github.com/commaai/openpilot)
Even if I trust you, I still need to review your work before merging it.
Good people still make mistakes.
If you had left it at knowing you want to reject a PR within a few seconds, that'd be fine.
Although with safety critical systems I'd probably want each contributor to have some experience in the field too.
1. What’s the goal of this PR and how does it further our project’s goals?
2. Is this vaguely the correct implementation?
Evaluating those two takes a few seconds. Beyond that, yes it takes a while to review and merge even a few line diff.
You look at the PR and you know just by looking at it for a few seconds if it looks off or not.
Looks off -> "Want to close"
Write a polite response and close the issue.
Doesn't look off -> "Want to merge"
If we want to merge it, then of course you look at it more closely. Or label it and move on with the triage.
This is similar to real life: if you vouch for someone (in business, for example), and they scam the other party, your own reputation suffers. So vouching carries risk. Similarly, if you go around saying someone is unreliable, but people find out they actually aren't, your reputation also suffers. If vouching or denouncing become free, it will become too easy to weaponize.
Then again, if this is the case, why would you risk your own reputation to vouch for anyone anyway.
Good reason to be careful. Maybe there's a bit of an upside too: if you vouch for someone who does good work, then you get a little boost as well. It's how personal relationships work anyway.
----------
I'm pretty skeptical of all things cryptocurrency, but I've wondered if something like this would be an actually good use case of blockchain tech…
So the really funny thing here is the first bitcoin exchange had a Web of Trust system, and while it had its flaws, IT WORKED PRETTY WELL. It used GPG and later on bitcoin signatures. Nobody talks about it unless they were there, but the system is still online. Keep in mind, this was used before centralized exchanges and regulation. It did not use a blockchain to store ratings.
As a new trader, you basically could not do trades in their OTC channel without going through traders who specialized in new people coming in. Sock accounts could rate each other, but when you checked whether one of those scammers was trustworthy, they would have no level-2 trust, since none of the regular traders had positive ratings of them.
Here's a link to the system: https://bitcoin-otc.com/trust.php (on IRC, you would use a bot called gribble to authenticate)
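That level-2 check is easy to sketch: a newcomer only counts as trusted if someone the viewer already rates positively has rated them. (All names and ratings below are made up for illustration, not bitcoin-otc data.)

```python
# Minimal level-2 trust check: trust the target only if a trusted
# intermediary rates them positively. Sock rings rating each other
# never connect to the viewer's circle.

ratings = {
    "regular_a": {"regular_b", "specialist"},
    "regular_b": {"regular_a"},
    "specialist": {"newcomer"},   # specialist onboards new traders
    "sock1": {"sock2"},           # sock accounts rating each other
    "sock2": {"sock1"},
}

def level2_trusted(viewer: str, target: str) -> bool:
    # True if anyone the viewer rates positively also rates the target.
    return any(target in ratings.get(mid, set())
               for mid in ratings.get(viewer, set()))

assert level2_trusted("regular_a", "newcomer")   # via the specialist
assert not level2_trusted("regular_a", "sock2")  # socks have no inroad
```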
Not easily, but I could imagine a project deciding to trust (to some degree) people vouched for by another project whose judgement they trust. Or, conversely, denouncing those endorsed by a project whose judgement they don't trust.
In general, it seems like a web of trust could cross projects in various ways.
- a problem already solved in TFA (having vouched for someone who is eventually denounced doesn't prevent you from denouncing them yourself; you can totally do it)
- a per-repo, or worse, global, blockchain to solve incrementing and decrementing integers (vouch vs. denounce)
- a lack of understanding that automated global scoring systems are an abuse vector and something people will avoid. (c.f. Black Mirror and social credit scores in China)
The same as when you vouch for your company to hire someone - because you will benefit from their help.
I think your suggestion is a good one.
Maybe your own vouch score goes up when someone you vouched for contributes to a project?
Then you have introverts that can be good but have no connections and won’t be able to get in.
So you’re kind of selecting for connected and good people.
Even with that risk I think a reputation based WoT is preferable to most alternatives. Put another way: in the current Wild West, there’s no way to identify, or track, or impose opportunity costs on transacting with (committing or using commits by) “Epstein but in code”.
This is a graph search. If the person you’re evaluating vouches for people those you vouch for denounce, then even if they aren’t denounced per se, you have gained information about how trustworthy you would find that person. (Same in reverse. If they vouch for people who your vouchers vouch for, that indirectly suggests trust even if they aren’t directly vouched for.)
One of my (admittedly half-baked) ideas was a vouching scheme similar to this one, but with real-world or physical incentives. Basically, signing up requires someone vouching, with actual physical interaction between the two. But I want to take it even further -- when you sign up, your real-life details are "escrowed" in the system (somehow), and when you do something bad enough for a permaban+, you get doxxed.
...or spam "RBL" lists which were often shared. https://en.wikipedia.org/wiki/Domain_Name_System_blocklist
Why not use AI to help with the AI problem? Why prefer this extra coordination effort and implementation?
I certainly have dropped off when projects have burdensome rules, even before the AI slop fest.