
Posted by speckx 8 hours ago

A simple web we own (rsdoiel.github.io)
154 points | 99 comments | page 2
born-jre 8 hours ago|
I resonate with a lot of the points in the article. My own view is that we should make hosting stuff vastly simpler; that's one of the goals of my project, or at least my attempt at one (self-promo):

https://github.com/blue-monads/potatoverse

nine_k 7 hours ago||
Potatoverse is a great name :)) BTW do you remember Sandstorm.io?
born-jre 7 hours ago||
Thanks cap'n-py. Yeah, I love Sandstorm. My goal is to be more portable and lighter, a 'download a binary and run it' kind of tool. There are also other attempts built around what I call the 'packaging with Docker' approach (Coolify, etc.), which mostly repackage existing apps. My approach, the platform, gives you a bunch of stuff to build apps faster, but you have to bend to its idiosyncrasies. In turn, you don't need a beefy home lab to run it (not everyone is a tinkerer). It's more focused, so it will be easier on the end user running it than on the developer.
thefounder 8 hours ago||
I think the main issue with federated apps is identity and moderation. Without identity verification it's hard to moderate, so you end up with closed systems where some big company does the moderation at an acceptable level.
sowbug 7 hours ago|
This is only half a thought.

The current wave of AI agents is diminishing the value of identity as a DDoS or content-moderation signal. The formula until now included bot = bad, but unless your service wants to exclude everyone using OpenClaw and friends, that's no longer a valid heuristic.

If identity is no longer a strong signal, then the internet must move away from CAPTCHAs and logins and reputation, and focus more on the proposed content or action instead. Which might not be so bad. After all, if I read a thought-provoking, original, enriching comment on HN, do I really care if it was actually written by a dog?

We might finally be getting close to https://xkcd.com/810/.

One more half thought: what if the solution to the Sybil problem is deciding that it's not a problem? Go ahead and spin up your bot network, join the party. If we can design systems that assign zero value to uniqueness and require originality or creativity for a contribution to matter, then successful Sybil "attacks" are no longer attacks, but free work donated by the attacker.

caconym_ 6 hours ago||
> if I read a thought-provoking, original, enriching comment on HN, do I really care if it was actually written by a dog?

I would rather read the thought as it was originally expressed by a human somewhere in the AI's training data than a version of it that's been laundered through AI and deployed according to the separate, hidden intent of the AI's operator.

RajT88 6 hours ago||
The only way we own a web of our own is to develop much more of a culture of leaving smallish machines online all the time. Imagine something like Tor or BitTorrent, but everyone has a very simple way of running their own node for content hosting.

That always-on device? To get critical mass beyond just the nerds, you'd need it to ship with devices that are already always-on: routers/gateways, smart TVs. Then you're back to being at the mercy of centralized companies who also don't love patching their security vulnerabilities.

nine_k 6 hours ago||
This is very right. There are two obstacles.

(1) Security. An always-on, externally accessible device will always be a target for break-ins. You want the device to be bulletproof, with defense in depth, so that breaking into one service doesn't affect anything else. Something like Proxmox that runs on low-end hardware and is as easy to administer as a mobile phone would do; we're still far from that. A very limited thing like a static site could be made both easy and bulletproof, though.

(2) Connectivity providers should allow that. Most home routers don't get a static IP, or even a globally routable IPv4 at all. Or even a stable IPv6. This complicates the DNS setup, and without DNS such resources are basically invisible.
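
The usual workaround for the unstable-IP half of this is a dynamic-DNS updater. A minimal sketch, with the caveat that the payload shape and helper names here are made up, since every DNS provider defines its own API schema:

```python
import urllib.request

def build_dns_update(record_name, current_record_ip, observed_ip, ttl=300):
    """Return an update payload for an A record, or None if the record
    already matches. The payload shape is hypothetical; real providers
    (Cloudflare, deSEC, etc.) each have their own API format."""
    if observed_ip == current_record_ip:
        return None
    return {"type": "A", "name": record_name, "content": observed_ip, "ttl": ttl}

def observed_public_ip():
    """Ask an external what-is-my-IP service for our current public IPv4."""
    with urllib.request.urlopen("https://api.ipify.org") as resp:
        return resp.read().decode().strip()
```

Run something like this from cron every few minutes; whenever the payload is non-None, push it to the provider's API.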

From the pure resilience POV, it seems more important to keep control of your domain, and have an automated way to deploy your site / app on whatever new host, which is regularly tested. Then use free or cheap DNS and VM hosting of convenience. It takes some technical chops, but can likely be simplified and made relatively error-proof with a concerted effort.

swiftcoder 6 hours ago||
Both of those are solved by having a tunnel and a cache hosted in the cloud. Something like Tailscale or Cloudflare provides this pretty much out of the box, but WireGuard + nginx on a cheap VPS would accomplish much the same if you're serious about avoiding the big guys.
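
For a rough idea of the shape of that setup, a sketch with placeholder keys, addresses, and ports (not a hardened config):

```
# /etc/wireguard/wg0.conf on the VPS
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

[Peer]
# the always-on box at home
PublicKey = <home-box-public-key>
AllowedIPs = 10.0.0.2/32

# /etc/nginx/conf.d/home.conf -- proxy public traffic over the tunnel
server {
    listen 80;
    server_name example.com;
    location / {
        proxy_pass http://10.0.0.2:8080;
        proxy_set_header Host $host;
    }
}
```
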
nine_k 2 hours ago||
If you already pay for a cheap VPS, why not host the whole thing there? It's the simple Web. (As has been noted in comments elsewhere.)
jejeyyy77 6 hours ago||
if only we all had a little device that was always on and connected…
amarant 6 hours ago||
If I'm reading the implication right, that's a pretty terrible idea. Glossing over what running a server would do to your battery, it would never work because of the routing issues you'd run into.

With IPv6 it would theoretically be possible, but today, with IPv4 and NATs everywhere, your website would almost never be reachable, even with fancy workarounds like DynDNS.

ted537 8 hours ago||
Unfortunately, the transparency of the IP stack means that unless you want the whole world to know where you live via one DNS query, you'd need a service to proxy traffic back to yourself. And if you're paying for remote compute anyway, you could probably just host your stuff there: any machine that can proxy traffic back to you is just as capable of hosting your static content.
nickorlow 8 hours ago|
It only gives a pretty rough estimate, not a street address. I don't think many self-hosters have run into issues with this.
evanevan 6 hours ago||
I really like this model for individual services.

The challenge I've always felt is shared services: if I'm running infra myself, I can depend on it, but if someone else is running it, I'm never really sure whether I can, which makes external services really hard to rely on and invest in.

Maybe you can get further than expected with individual services? But shared services at some point seem really useful.

I think web2 solved that in an unfortunate way, where you know the corporations operating the services / networks are aligned in some ways but not in others.

But it would be great to have shared services that do come with better guarantees. Disclaimer: we're working on something in that direction, but I'm really curious what others have seen or are thinking in this area.

asim 6 hours ago||
We got here iteratively, not all at once, so the path back is iterative too. I shouldn't even say 'back'; we're not going back, we have to go in a new direction, and again it's evolutionary. Ultimately, a lot of these big systems and big tech companies aren't going anywhere, and they will be integral to all infrastructure for the foreseeable future, whether technical, financial, or related to public services. But as individuals we can slowly shift some of our efforts elsewhere, in ways that might matter.

Here's my small contribution to that. https://github.com/micro/mu - an app platform without ads, algorithms or tracking.

liveoneggs 7 hours ago||
This guy has been around long enough to know about NNTP, which is the original distributed people-focused web, but talks about how HTML is some kind of barrier to entry.

> HTTP requires always-on + always-discoverable infrastructure

It's all over the place.

sagaro 7 hours ago||
I agree with the point that big companies have persuaded people that only they can offer ease of publishing. Most of my friends publish on Facebook, X, Instagram, etc.

I have tried to get them to publish markdown sites using GitHub pages, but the pain of having to git commit and do it via desktop was the blocker.

So I recently made them a mobile app called JekyllPress [0] with which they can publish their posts, similar to the WordPress mobile app. And now a bunch of them regularly publish on GitHub Pages. I think with more tools to simplify the publishing process, more people will start using GitHub Pages (my app still requires some painful onboarding, like creating a repo, enabling GitHub Pages, and getting a PAT; no OAuth, since I don't have a server).

[0] https://www.gapp.in/projects/jekyllpress/
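
For the curious, the core of that publishing flow is small. A sketch of building the request body for GitHub's "create or update file contents" endpoint (a PUT to /repos/{owner}/{repo}/contents/{path}, authenticated with the PAT); the helper name is mine, and this is my guess at the flow rather than what JekyllPress actually does:

```python
import base64
import json

def build_pages_commit(markdown_text, message="new post"):
    """Request body for GitHub's contents API: the file content must be
    base64-encoded. Send as a PUT with an 'Authorization: Bearer <PAT>'
    header to /repos/{owner}/{repo}/contents/{path}."""
    return {
        "message": message,
        "content": base64.b64encode(markdown_text.encode("utf-8")).decode("ascii"),
    }

post = "---\ntitle: Hello\n---\n\nFirst post from my phone.\n"
body = json.dumps(build_pages_commit(post))
```

GitHub Pages then rebuilds the Jekyll site on its own after the commit lands.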

righthand 6 hours ago|
Isn’t publishing on GitHub Pages still posting to a corporate, centrally owned entity, and not a solution to the problem described?
sagaro 6 hours ago||
But it is portable: it's essentially Markdown files. You can download your repo, compile the Jekyll source to static pages, and publish them anywhere.

When you publish to Facebook, WordPress, etc., you can't easily get your stuff out. Even if they let you download your content as a zip folder, you still have to process it: the images will be broken, links between pages won't work, and so on.

righthand 5 hours ago||
Facebook provides a data-export service which gives you a zip file with a web version of all your content. I’m not sure what the difference is, then, between that and a GitHub-hosted repository of all your content as a webpage.
sagaro 5 hours ago||
The main difference is the data structure and the intent of the export. Facebook's tool is built for data compliance and local offline viewing, not web portability. If you open that Facebook zip file, the HTML version is just a massive dump of proprietary markup. To actually migrate those posts to a new blog, you'd have to write a custom scraper just to extract your own text from their messy div tags. If you use their JSON export, you still have to write a custom script to parse their specific schema and remap all the hardcoded local image paths so they work on a live server.

With a GitHub Pages repo, your content is already sitting there as raw, standardized Markdown. You can just take that folder of .md files, drop it into Hugo, 11ty, or any other static site generator, and it just works. No scraping or data-wrangling required.
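
To make the contrast concrete, a sketch of that data-wrangling step for a single post; the input field names are illustrative, since export schemas vary from dump to dump:

```python
from datetime import datetime, timezone

def post_to_markdown(post):
    """Convert one post from a Facebook-style JSON export into a Jekyll
    Markdown file. The 'timestamp' and 'data'->'post' field names are
    illustrative, not a documented schema -- discovering them is exactly
    the custom-script work described above."""
    ts = datetime.fromtimestamp(post["timestamp"], tz=timezone.utc)
    body = post["data"][0]["post"]
    front_matter = "\n".join([
        "---",
        f"date: {ts.date().isoformat()}",
        "layout: post",
        "---",
    ])
    return front_matter + "\n\n" + body + "\n"
```
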
pyrolistical 3 hours ago||
This is all fine and dandy for websites but what we’ve really been locked out of is email.

You can’t run your own email server: all the large email providers will consider your self-hosted mail spam by default. It’s understandable why they took this stance (because of actual spam), but it’s also awfully convenient that it increases their market power.

We are now at the whim of large corps even if we get a custom domain with them.
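
For anyone who does try anyway: much of the "spam by default" treatment is tied to sender authentication. A sketch of the DNS records a self-hosted server needs, with the domain, IP, and selector as placeholders and the DKIM key elided:

```
; SPF: only this host may send mail for the domain
example.com.               IN TXT "v=spf1 ip4:203.0.113.5 -all"

; DKIM: public key receivers use to verify signatures (key elided)
s1._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=..."

; DMARC: what receivers should do with failures, and where to report
_dmarc.example.com.        IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```

Even with all three in place, mail from residential or cheap-VPS IP ranges is often blocklisted wholesale, which is the real lock-in.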

themacguffinman 6 hours ago|
I think this mostly misses the biggest reason writers choose big tech platforms (or other big platforms): discovery and aggregation. If you want to speak in order to be heard, and not just for its own sake, you want to go where the people are hanging out and where they can actually find your content.

This is like talking about how book authors don't need Amazon when you have a printer and glue at home.
