Posted by todsacerdoti 4/19/2025
This is yet another reason why we need to be wary of popular apps, add-ons, extensions, and so forth changing hands, whether by legitimate sale or more nefarious methods. Initially innocent utilities can be quickly co-opted into parts of this sort of scheme.
Google could do this. I'm sure Apple could as well. Third parties could, for a small set of apps.
But it just doesn't scale to internet size, so I'm fucked if I know how we should fix it. We all have that cousin or dude in our high school class who would do anything for a bit of money, introducing his 'friend' Paul who is in fact a bot whose owner paid for the lie. And not enough money to make it a moral dilemma, either, just drinking money or enough for a new video game. So once you get past about 10,000 people you're pretty much back where we are right now.
At least, that's the way I've always imagined it working. Maybe I need to read up.
Binary "X trusts Y" statements, plus transitive closure, can lead to long trust paths that we probably shouldn't actually trust the endpoints of. Could we not instead assign probabilities like "X trusts Y 95%", multiply probabilities along paths starting from our own identity, and take the max at each vertex? We could then decide whether to finally trust some Z if its percentage is more than some threshold T%. (Other ways of combining in-edges may be more suitable than max(); it's just a simple and conservative choice.)
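The multiply-along-paths, max-at-each-vertex scheme described above is exactly a Dijkstra variant: since trust values lie in (0, 1], products along a path can only shrink, so the first time a node is settled it already has its best score. A minimal sketch (all names, trust values, and the threshold are made-up illustrations, not a real protocol):

```python
import heapq

def trust_scores(graph, source):
    """Max-product trust from `source` to every reachable node.

    `graph` maps each node to {neighbor: trust}, where trust is a
    probability in (0, 1] ("X trusts Y 95%" -> 0.95). Scores only
    decrease along edges, so a max-heap Dijkstra is correct: the
    first pop of a node yields its best achievable score.
    """
    best = {source: 1.0}
    heap = [(-1.0, source)]  # negate scores to get a max-heap
    settled = set()
    while heap:
        neg_score, node = heapq.heappop(heap)
        if node in settled:
            continue
        settled.add(node)
        score = -neg_score
        for nbr, trust in graph.get(node, {}).items():
            cand = score * trust
            if cand > best.get(nbr, 0.0):
                best[nbr] = cand
                heapq.heappush(heap, (-cand, nbr))
    return best

# Hypothetical web of trust; the numbers are arbitrary.
web = {
    "me":    {"alice": 0.95, "bob": 0.80},
    "alice": {"carol": 0.90},
    "bob":   {"carol": 0.50, "dave": 0.95},
    "carol": {"dave": 0.99},
}
scores = trust_scores(web, "me")
T = 0.85  # the threshold T% from the comment, here 85%
trusted = {n for n, s in scores.items() if s >= T}
```

Note how the max() combination rejects the weak direct path ("bob" vouches for "carol" at only 50%) in favor of the stronger indirect one through "alice" (0.95 × 0.90 = 0.855), which is precisely the conservative behavior the comment is after.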
Perhaps a variant of backprop could be used to automatically update either (a) all or (b) just our own weights, given new information ("V has been discovered to be fraudulent").
How about restricting them to everyone-knows-everyone sized groups, of like a couple hundred people?
You can be a member of multiple groups, so you're not actually limited. But the groups will be small enough to self-regulate.
If you want to chat with a Dunbar number of people, get yourself a private Discord or Slack channel.
I'm hypothesising that any such large-scale structure will be perverted by commercial interests, while having multiple Dunbar-sized structures will have a chance of staying useful.
Maybe it's less convenient and more expensive and onerous. Do good things require hard work? Or did we expect everyone to ignore incentives forever while the trillion-dollar hyperscalers fought for an open and noble internet and then wrapped it in affordable consumer products to our delight?
It reminds me of the post here a few weeks ago about how Netflix used to be good and "maybe I want a faster horse": we want things built for us, easily, cheaply, conveniently, by companies, and we want those companies not to succumb to enshittification. But somehow when the companies just follow the game theory and turn everything into a TikToky neural-networks-maximizing-engagement-infinite-scroll experience, it's their fault, and not ours for going with the easy path while hoping the corporations would not take the easy path.
Do nothing, win.
They are the primary buyers of this data, since they are the largest AI players.