Posted by nafnlj 12/12/2025
I'm annoyed that mine even shows up on Google.
No more Google. No more websites. A distributed swarm of ephemeral signed posts. Shared, rebroadcasted.
When you find someone like James and you like them, you follow them. Your local algorithm then prioritizes finding new content from them. You bookmark their author signature.
Like RSS but better. Fully distributed.
Your own local interest graph, but also the power of your peers' interest graphs.
Content is ephemeral but can also live forever if any nodes keep rebroadcasting it. Every post has a unique ID, so you can search for it later in the swarm or some persistent index utility.
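Concretely, a post in such a swarm could be little more than a signed, content-addressed blob. Here's a minimal sketch in Python, assuming ed25519 keys via the `cryptography` package (the function and field names are hypothetical, just to make the idea concrete):

  import json, hashlib, time
  from cryptography.hazmat.primitives.asymmetric.ed25519 import (
      Ed25519PrivateKey, Ed25519PublicKey,
  )
  from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

  def make_post(author_key, body):
      # The payload is what gets signed and rebroadcast by peers.
      payload = json.dumps({"body": body, "ts": int(time.time())}).encode()
      return {
          "id": hashlib.sha256(payload).hexdigest(),  # unique, content-derived ID you can search for later
          "author": author_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw).hex(),  # the "author signature" you bookmark
          "payload": payload.hex(),
          "sig": author_key.sign(payload).hex(),
      }

  def verify_post(post):
      # Any node receiving a rebroadcast can check the post really came from that author.
      pub = Ed25519PublicKey.from_public_bytes(bytes.fromhex(post["author"]))
      try:
          pub.verify(bytes.fromhex(post["sig"]), bytes.fromhex(post["payload"]))
          return True
      except Exception:
          return False

  key = Ed25519PrivateKey.generate()
  post = make_post(key, "hello, swarm")
  assert verify_post(post)

Following someone is then just remembering their public key and asking peers for posts whose author field matches it.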
The Internet should have become fully p2p. That would have been magical. But platforms stole the limelight just as the majority of the rest of the world got online.
If we nerds had but a few more years...
YouTube should get split out and then broken up. Google Search should get split out and broken up. etc.
This is not a problem you solve with code. This is a problem you solve with law.
When the DMCA was a bill, people were saying that the anti-circumvention provision was going to be used to monopolize playback devices. They were ignored, it was passed, and now it's being used to monopolize not just playback devices but also phones.
Here's the test for "can you rely on the government here": Have they repealed it yet? The answer is still no, so how can you expect them to do something about it when they're still actively making it worse?
Now try to imagine the world where the Free Software Foundation never existed, Berkeley never released the source code to BSD and Netscape was bought by Oracle instead of being forked into Firefox. As if the code doesn't matter.
Isn't what you're describing something like Mastodon or Usenet?
On the other side of the same coin, there are already governments that will make you legally responsible for what your page's visitors write in the comments. This makes any p2p internet legally untenable (e.g., someone visits your page, posts something illegal, and you get jailed). So far they say "it's only for big companies", but that's a lie; it's just boiling the frog.
"cannot do anything" is relative. Google did something about it (at least for the first 10-15 years) but I am sure that was not their primary intention nor they were sure it will work. So "we have no clue what will work to reduce it" is more appropriate.
Now I think everybody has tools to build stuff more easily (you could not make a television channel or a newspaper 50 years ago). That is just an observation of possibility, not a guarantee of success.
You know what else we need? We need food to be free. We need medicine to be free, especially medicines which end epidemics and transmissible disease. We need education to be free. We need to end homelessness. We need to end pollution. We need to end nationalism, racism, xenophobia, sexism. We need freedom of speech, religion, print, association. We need to end war.
There are a lot of things we as a society need. But we can't even make "p2p internet" work, and we already have it. (And please just forget the word 'distributed', because it's misleading you into thinking it's a transformative idea, when it's not)
Every family should be provided with a UBI that covers food and rent (not in the city). That is a more attainable goal and would solve the same problems (better, in fact).
(Not saying that UBI is a panacea, but I've lived in countries that have experimented with such and it seems the best of the alternatives)
I would settle for simpler, attainable things. Equal opportunity for next generation. Quality education for everybody. Focus on merit not other characteristics. Personal freedom if it does not infringe on the freedom of people around you (ex: there can't be such thing as a "freedom to pollute").
In my view the Internet as p2p worked pretty well to improve on the previous status quo in many areas (not all). But there will never be a "stable solution"; life and humans are dynamic. We do have some good and free stuff on the Internet today because of the groundwork laid 30 years ago by the open source movement. Any plan started today will only show noticeable effects many years from now. So "we can't even make" sounds more like an excuse not to start than an honest take.
I feel that saying "we have the resources" ignores the difficulty of allocating them better, which is the hardest part. Compared to 20 years ago we have amazing software tools and hardware capabilities, and still many large projects fail - it's not because they don't have the resources...
What does this mean? I suppose it can't literally mean equal opportunity, because people aren't equal, and their circumstances aren't equal; but then, what does this mean?
Currently, in many countries, I know of multiple measures/rules/policies that affect these 3 things in ways I find damaging for society overall in the long term. Companies complain they can't find workers, governments complain that the birth rate is low, yet there are many obstacles to raising a child. Financial incentives for parents do not seem to work (for example: https://www.bbc.com/news/world-europe-47192612)
Most efficient = cheaper. A lot of times cheaper sacrifices quality, and sometimes safety.
How do you think Google or Cloudflare actually work? One big server in San Francisco that runs the whole world, or lots of servers distributed all over?
Why do you think they're a monopoly in the first place? Obviously because they were more efficient than the competition and network effects took care of the rest. Having to make choices is a cost for the consumer - IOW consumers are lazy - so winners have staying power, too. It's a perfect storm for winner-takes-all centralization, since a good centralized service is the most efficient for consumers both utility-wise ('I know I'm getting what I need') and decision-cost-wise ('I don't need to search for alternatives'), until it switches to rent seeking, which is where the anti-monopoly laws should kick in.
In other words, open source decentralized systems are the most efficient because you don't have to duplicate a competitor's effort when you can just use the same code.
> Obviously because they were more efficient than the competition and network effects took care of the rest.
In most cases it's just the network effect, and whether it was a proprietary or open system in any given case is no more than the historical accident of which one happened to gain traction first.
> Having to make choices is a cost for the consumer
If you want an email address you can choose between a few huge providers and a thousand smaller ones, but that doesn't seem to prevent anyone from using it.
> until it switches to rent seeking
If it wasn't an open system from the beginning then that was always the end state and there is no point in waiting for someone to lock the door before trying to remove yourself from the cage.
This is the great lie. Approximately zero end consumers care about code, the product they consume is the service, and if the marginal cost of switching the service provider is zero, it's enough to be 1% better to take 99% of the market.
Most people don't care about reading it. They very much care about what it does.
Also, it's not "approximately zero" at all. It's millions or tens of millions of people out of billions, and when a small minority of people improve the code -- because they have the ability to -- it improves the code for all the rest too. Which is why they should have a preference for the ability to do it even if they're not going to be the one to exercise it themselves.
> if the marginal cost of switching the service provider is zero, it's enough to be 1% better to take 99% of the market.
Except that you'd then need to be 1% better across all dimensions for different people to not have different preferences, and everyone else is trying to carve out a share of the market too. Meanwhile if you were doing something that actually did cause 99% of people to prefer a service that does that then everybody else would start doing it.
There are two main things that cause monopolies. The first is a network effect, which is why those things all need to be open systems. The second is that one company gets a monopoly somewhere (often because of the first, sometimes through anti-competitive practices like dumping) and then leverages it in order to monopolize the supply chain before and after that thing, so that competing with them now requires not just competing with the original monopoly but also reproducing the rest of the supply chain which is now no longer available as independent commodities.
This is why we need antitrust laws, but also why we need to recognize that antitrust laws are never perfect and do everything possible to stamp out anything that starts to look like one of those markets through development of open systems and promoting consumer aversion to products that are inevitably going to ensnare them.
"People don't want X" based on observed behavior is a bunch of nonsense. People's preferences depend on their level of information. If they don't realize they're walking into a trap then they're going to step right into it. That isn't the same thing as "people prefer walking into a trap". They need to be educated about what a trap looks like so they don't keep ending up hanging upside down by their ankles as all the money gets shaken out of their pockets.
From what you've described, you've just re-invented webrings.
Suggestion: Remember that many large companies are emergently shitty, with shitty processes, and individuals motivated to act in shitty ways.
When a company is so powerful, this might be a time to think about solidarity.
When you're feeling an unexplained injustice from them, sometimes saying "but you let X do it" could just throw X under the bus.
Whether that's because a fickle process or a petty actor simply missed X before, or because they now have a new reason to double down and also punish the others (for CYA consistency, or, if petty, to assert their power now that you've questioned it).
Request URL: https://journal.james-zhan.com/google-de-indexed-my-entire-b...
Request Method: GET
Status Code: 304 Not Modified
So maybe it's the status code? Shouldn't that page return a 200 ok?
When I go to blog.james..., I first get a 301 moved permanently, and then journal.james... loads, but it returns a 304 not modified, even if I then reload the page.
Only when I fully submit the URL again in the URL bar does it respond with a 200.
Maybe crawling also returns a 304, and Google won't index that?
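One way to check that theory, assuming Python's urllib (the If-Modified-Since value below is made up, just to force a revalidation): compare a plain GET, like a crawler with an empty cache would send, against a conditional GET, like a browser revalidating its cache would send.

  import urllib.request, urllib.error

  url = "https://journal.james-zhan.com/google-de-indexed-my-entire-bear-blog-and-i-dont-know-why/"

  # Plain GET, no conditional headers (crawler with an empty cache).
  plain = urllib.request.Request(url, headers={"User-Agent": "status-check"})
  print(urllib.request.urlopen(plain).status)  # a healthy page should return 200 here

  # Conditional GET (browser revalidating its cache).
  conditional = urllib.request.Request(url, headers={
      "User-Agent": "status-check",
      "If-Modified-Since": "Thu, 01 Jan 2026 00:00:00 GMT",  # arbitrary date, just to trigger revalidation
  })
  try:
      print(urllib.request.urlopen(conditional).status)
  except urllib.error.HTTPError as e:
      print(e.code)  # 304 here is normal; 304 on the plain GET would not be

If the plain GET already comes back 200, the 304 you saw is probably just your browser revalidating its own cache, not something a crawler with nothing cached would ever see.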
Maybe prompt: "why would a 301 redirect lead to a 304 not modified instead of a 200 ok?", "would this 'break' Google's crawler?"
> When Google's crawler follows the 301 to the new URL and receives a 304, it gets no content body. The 304 response basically says "use what you cached"—but the crawler's cache might be empty or stale for that specific URL location, leaving Google with nothing to index.
Your LLM prompt and response are worthless.
Request URL: https://news.ycombinator.com/item?id=46196076
Request Method: GET
Status Code: 200 OK (from disk cache)
I just thought that it would be worthwhile investigating in that direction.
UPD: Sorry, never mind, I inspected a wrong response.
And no <meta name="robots"> in the HTML either.
What URL are you seeing that on? And what tool are you using to detect that?
Edit: cURL similarly shows no such header for me:
curl -s -D - -o /dev/null https://journal.james-zhan.com/google-de-indexed-my-entire-bear-blog-and-i-dont-know-why/
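For a slightly fuller check, here's a rough Python sketch that looks at both the X-Robots-Tag header and a robots meta tag in one go (the substring match is crude, not a real HTML parse):

  import urllib.request

  url = "https://journal.james-zhan.com/google-de-indexed-my-entire-bear-blog-and-i-dont-know-why/"
  resp = urllib.request.urlopen(urllib.request.Request(url, headers={"User-Agent": "noindex-check"}))
  print("X-Robots-Tag:", resp.headers.get("X-Robots-Tag"))  # header-level noindex, if any
  html = resp.read().decode("utf-8", errors="replace").lower()
  print("meta robots present:", '<meta name="robots"' in html)  # tag-level noindex, if any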