What stops me is security. I simply do not know enough about securing a self-hosted site on real hardware in my home, and despite actively continuing to learn, it seems like the more I learn about it, the more questions I have. My identity is fairly public at this point, so if I say the wrong thing to the wrong person on HN or whatever, do I need to worry about someone much smarter than me setting up camp on my home network and ruining my life? That may sound really stupid to many of you, but this is the type of anxiety that stops the under-informed from trying stuff like this and pushes them toward services like Akamai/Linode or DO that make things fairly painless in terms of setup, monitoring and protection.
That said, I'm 110% open to reading/watching any resources people have that help teach newbies how to protect their assets when self-hosting.
[1] https://www.bbb.org/article/news-releases/20509-amazon-brush... [2] https://www.reddit.com/r/tulsa/comments/hpe8s1/just_got_a_su...
The package came from a US company in Texas, not China. Not directly, anyway: the mask could have been made anywhere, but the package didn't carry any of the extra mail labels you see when something ships from China. It had never happened before, it never happened again, and it was literally only a single mask.
Still, it seems to fit anyway, since the brushing descriptions do vary a little in the details.
Or maybe it really was the HN guy, and this was just the method they used because they knew about it.
Anyway, thank you.
Without getting too deep into it, there are some things I know how to do with computers that I probably shouldn't, so my thought is this: if I, a random idiot who just happened to learn a few things, can do X, then someone smarter than me who has learned how to attack a target in an organized way probably has methods I can't even conceive of, can do it more easily, and possibly without me even knowing. It's this weird vacillation between paranoia and prudence.
For me, it's really about acknowledging what I know I don't know. I do some free courses, muck about with security puzzles, etc, even try my own experiments on my own machines, but the more I learn, the more I realize I don't know. I suppose that's the draw! The problem is when you learn these things in an unstructured way, it's hard to piece it all together and feel certain that you have covered all your vulnerable spots.
My current setup is to rent a cheap $5/month VPS running nginx. I then reverse ssh from my home to the VPS, with each app on a different port. It works great until the power goes out; when it comes back on, the apps become unavailable. I haven't gotten the restart script to work 100% of the time.
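What I keep meaning to do is let systemd supervise the tunnel instead of my script. A rough sketch (hostname, user and ports are placeholders; it assumes key-based auth to the VPS is already set up):

    # /etc/systemd/system/reverse-tunnel.service
    [Unit]
    Description=Reverse SSH tunnel to the VPS
    After=network-online.target
    Wants=network-online.target

    [Service]
    # -N: no remote command, -T: no tty; exit if the forward can't be established
    # (runs as root here; add User= and a dedicated key if you prefer)
    ExecStart=/usr/bin/ssh -NT \
        -o ServerAliveInterval=30 -o ServerAliveCountMax=3 \
        -o ExitOnForwardFailure=yes \
        -R 127.0.0.1:8080:127.0.0.1:8080 tunnel@vps.example.com
    Restart=always
    RestartSec=10

    [Install]
    WantedBy=multi-user.target

Enable it with `systemctl enable --now reverse-tunnel` and it should re-establish itself after a power cycle or a dropped connection.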
But, I'd love to hear thoughts on security of reverse SSH from those that know.
Nginx handles proxying and TLSing all HTTP traffic. It also enforces access rules: my services can only be reached from my home subnet or VPN subnet. Everywhere else gets a 403.
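A minimal sketch of that rule set (the subnets, hostname, upstream port and cert paths here are placeholders, not my real ones):

    server {
        listen 443 ssl;
        server_name app.example.com;

        ssl_certificate     /etc/letsencrypt/live/app.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

        # only the home and VPN subnets get in; everyone else gets 403
        allow 203.0.113.0/24;  # home subnet
        allow 10.8.0.0/24;     # VPN subnet
        deny  all;

        location / {
            proxy_pass http://127.0.0.1:8080;
        }
    }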
Since my new provider only offers CG-NAT, I've been using a cheap rented server, but actually having the server at home would be nice.
Right now my "servers" are Dell micro i5s. I've used RPi 3s and 4s in the past. My initial foray into self-hosting was with actual servers. Too hot, too noisy and too expensive to run continuously for my needs, but I did learn a lot. I still do, even with the micros and Pis.
Block port 22, secure SSH with certificates only. Allow port 443 and configure your web server as a reverse proxy with a private backend.
You don't need an IDS, you don't need a WAF and you don't need Cloudflare.
It's only if you become the next Facebook that you need to start being concerned about that level of security.
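A rough sketch of that baseline on a Debian/Ubuntu-style box with ufw (the LAN subnet is a placeholder):

    # SSH: keys/certificates only (assumes OpenSSH includes sshd_config.d)
    cat >/etc/ssh/sshd_config.d/hardening.conf <<'EOF'
    PasswordAuthentication no
    KbdInteractiveAuthentication no
    PermitRootLogin prohibit-password
    EOF
    systemctl reload ssh

    # firewall: 443 open to the world, 22 reachable only from the LAN
    ufw default deny incoming
    ufw allow 443/tcp
    ufw allow from 192.168.1.0/24 to any port 22 proto tcp
    ufw enable

The reverse-proxy part is the usual nginx proxy_pass pattern, with the backend bound to 127.0.0.1.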
I've contented myself with TLS client certs on my family's Android phones (they do not work at all on iOS for something like Home Assistant).
So you don't self-host at home, right?
I have been considering setting up a physical DMZ at home, with two routers (each with its own firewall), such that my LAN stays unmodified and my server can run between both routers. Then it feels like it would be similar to having a VPS in terms of security, maybe?
I have four jails, each running its own bhyve VM; those VMs run another FreeBSD OS, which lets me host jails for different services: email, web and game servers.
I'm not a fan of DMZs, as they get messy: you then have to ensure your host is protected correctly. So I use bridges instead; I have two bridges, an outer and an inner.
Services requiring outbound internet access are tapped onto the outer bridge, which is throttled and, if required, can load-balance. The inner bridge is under a deny-all, allow-some policy, limited to my own set of home IPs.
The outer bridge cannot contact services on the inner bridge; the inner can contact the outer, but it only hosts internally.
This is all done with PF within each jail, as each jail provides you with its own vnet adapter, which can be attached to a bridge.
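As a very rough PF sketch of the deny-all, allow-some idea (the interface name, subnet and ports here are made up for illustration, not my actual rules):

    # pf.conf inside a jail attached to the inner bridge
    inner_if = "epair0b"
    home_net = "192.168.10.0/24"

    set skip on lo0
    block all                                  # deny all by default
    pass in  on $inner_if proto tcp from $home_net \
        to ($inner_if) port { 80 443 }         # allow some, from home IPs only
    pass out on $inner_if to $home_net         # outbound only to the home network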
If you wish to learn further, that is the kind of setup you work up to. But for a personal user who wishes to self-host and have an internet presence, a firewall is just fine.
> I'm not a fan of DMZ as they get messy as you then have to ensure your host is protected correctly.
Could you elaborate on that? Specifically in my case I would have a perimeter router to which I would connect both my server and the inner router. My LAN would stay behind the inner router, so my understanding is that it still strictly has the same security as when my inner router was connected to the ISP; I just add a layer with the perimeter router.
Then the perimeter router opens the server (probably just chosen ports) to the public Internet, so that the server is reachable.
Wouldn't that mean that my host is protected correctly?
Home routers tend to set their rules as outbound allowed and inbound denied. My DC just provides me with a network cable into the big pond of data.
How I secure that for my home network is by using my personal rig, which has multiple network ports.
One port acts as the public bridge, and the 3rd and 4th network ports are assigned to the private bridges.
The 2nd port sits on a middle bridge, where it communicates with both the public and private bridges.
You want the VPS provider's firewall. Docker is going to punch holes through your software firewall.
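Docker inserts its own iptables rules ahead of ufw/firewalld, so a published port is reachable from the internet even if the host firewall "blocks" it. If you do want to keep a host firewall in the picture, the usual mitigations look like this (the interface and subnet are placeholders):

    # 1. publish ports on loopback only, so only the local reverse proxy sees them
    docker run -d -p 127.0.0.1:8080:80 nginx

    # 2. and/or filter in the DOCKER-USER chain, which Docker does respect
    iptables -I DOCKER-USER -i eth0 ! -s 203.0.113.0/24 -j DROP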
If using Podman, should I use rootless containers (which IMO suck because you can't do macvlan, so the container won't easily get its own IP on my home network)? Is it OK to just use rootful Podman with an idmapped user running without root privileges inside the container and drop all unnecessary capabilities? Should I set up another POSIX user first, such that breaking out of the container would in the end just yield access to an otherwise unused UID on the host?
If using systemd-nspawn, do all the above concerns about rootful / rootless hold? Is it a problem that one needs to run systemd-nspawn itself with root? The manpage itself mentions "Like all other systemd-nspawn features, this is not a security feature and provides protection against accidental destructive operations only.", so should I trust nspawn in general?
Or am I just being paranoid and everything should just be running YOLO-style with UID 1000 without any separation?
All of this makes me quite wary about running my paperless-ngx instance with all my important data next to my Vaultwarden with all of my passwords next to any Torrent clients or anything else with unencrypted open ports on the internet. Also keeping everything updated seems to be a full time job by itself.
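For the rootful-Podman-with-dropped-capabilities option above, the kind of sketch I have in mind is (standard Podman flags; the image, name and port are placeholders):

    # --userns=auto maps the container to an otherwise unused UID range on the host
    # (it needs a range for the "containers" user in /etc/subuid and /etc/subgid)
    podman run -d --name app \
      --userns=auto \
      --cap-drop=ALL \
      --read-only \
      --tmpfs /tmp \
      -p 127.0.0.1:8080:8080 \
      docker.io/library/myapp:latest

That seems like a middle ground: rootful Podman, but the workload runs in a throwaway UID range with no capabilities, so a breakout lands in an unprivileged corner of the host.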
WAFs etc. just hide the fact that the code in your service is full of holes.
Ensuring your infra is built in a secure way is as important as ensuring your service is built in a secure way.
I'm a security consultant, so this is not a problem I have. To me it seems very straightforward, and like most things are secure by default (with the exceptions being notorious enough that I'd know of them), so I'm interested in the other perspective.
I consider hosting a system or service trivial ("just run the service and open its port to the public Internet"). Then the first question is: what if the service gets compromised (that seems like the most likely attack vector, right?)? Probably it should be sandboxed. Maybe in a container (not running as root inside the container, because I understand it makes it a lot easier to escape), better if it is in a VM (using Xen maybe?). What about jails?
Now say the services are running in VMs, and the "VM manager" (I don't know what to call it; I mean e.g. dom0 for Xen) is only accessible from my own IP (ideally over a VPN if it's running in a VPS, or just through the LAN if running at home?). The next question is: what happens if one of the services gets compromised? I assume the attacker can then compromise the VM, so now what are the risks for me? I probably should never ssh in as a user and then log in as root from there, because if the VM is compromised the attacker can probably read my password? Say I only ever log in through ssh, either as root directly or as the user (but never promoting myself to root from the user), what could be vectors that would allow an attacker to compromise my host machine?
I listened to a lot of "Darknet Diaries" episodes, and the pentesters always say "I got in, and then moved laterally". So I'm super scared about that: if I run a service exposed to the Internet, I assume it may get compromised someday (though I'll do my best to protect it and keep it up-to-date). But then when it gets compromised, how can I prevent those "lateral moves"? I have no idea, as in "I don't know what I don't know".
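One partial answer I've seen suggested for limiting lateral movement is per-service sandboxing, e.g. systemd's hardening directives. A sketch (all standard options; the subnet is a placeholder):

    # drop-in for a service unit, e.g. /etc/systemd/system/myapp.service.d/hardening.conf
    [Service]
    DynamicUser=yes
    NoNewPrivileges=yes
    ProtectSystem=strict
    ProtectHome=yes
    PrivateTmp=yes
    RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
    IPAddressDeny=any
    IPAddressAllow=192.168.1.0/24   # only what this service actually needs to reach

`systemd-analyze security <unit>` gives a rough exposure score and is a decent way to find what else can be locked down.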
All that to say, I would love to find a book or blog posts that explain those things. Tutorials I see usually teach how to run a service in docker and don't really talk about security.
My thinking is this: if I'm willing to fork over dollars to a VPS hosting service for peace of mind, then paying for a service that helps me understand what I'm doing when it comes to self-hosting should also be on the table as an alternative.
That said, I have no idea how viable a business model that would be, or whether it could even be developed and kept up to date with reliable info. Or maybe it already exists, but at an enterprise level that I cannot afford for some dumb little blogs.
From first hearing about Sandstorm at the first open beta 10 years ago (https://news.ycombinator.com/item?id=10147774) and reading about it on and off since then, this is the first time I've heard anyone pitching it for "medical and other highly regulated industries". Where exactly does this come from?
> There's still nothing else quite like it
There are plenty of other similar self-hosted platforms; YunoHost is probably the closest, most mature and most feature-packed alternative to Sandstorm, at least as far as I know.
I might have overstated the medical field, but they did pitch it as a product for enterprises with security requirements: "Sandstorm’s users included (and may still include – there’s no way for us to tell) companies, newspapers, educational institutions, research laboratories, and even government agencies." (https://sandstorm.io/news/2024-01-14-move-to-sandstorm-org)
Things I do:
* Make sure domain WHOIS does not point to me in any way, even if that means using some silly product like "WHOIS GUARD"
* Lock down any and all SSH access. Preferably only allow key-based authentication.
* Secure the communication substrate. For me this means running a Zerotier network which all dependent services listen on. I also try to use Unix sockets for any services colocated on the same operating system and restrict the service to only listen on sockets in a directory specifically accessible by the service.
* Try to control the permission surface of any service as much as possible. Containers can be a bit heavyweight for self-hosting but make this easy. There are alternatives like bubblewrap and firejail as well.
* Make use of services like fail2ban which can automate some of the hunting of bad actors for you.
* Consider hosting a listener for external traffic outside of your infra. For redundancy, load-shedding, and for security I have an external VPS that runs haproxy before routing over Zerotier to my home infrastructure. I enforce rate limits and fail2ban at the VPS so that bad actors get stopped upstream and use none of my home compute or bandwidth. (I also am setting up some redundant caches that live on the VPS so if my home network is down, one of my services can failover.)
* Segregate data into separate databases and make sure services only have access to the databases they need. With Postgres this is really simple: separate databases tied to different login roles. I have some services that prune databases, running in a cron-like way (but using snooze instead), and they have no outbound net access.
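A sketch of that separation (service names and passwords are placeholders):

    -- run via psql as the postgres superuser
    CREATE ROLE paperless LOGIN PASSWORD 'change-me';
    CREATE DATABASE paperless OWNER paperless;
    REVOKE CONNECT ON DATABASE paperless FROM PUBLIC;

    CREATE ROLE vaultwarden LOGIN PASSWORD 'change-me';
    CREATE DATABASE vaultwarden OWNER vaultwarden;
    REVOKE CONNECT ON DATABASE vaultwarden FROM PUBLIC;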
If your network layer is secure and your services follow least-privilege, then you should be fairly in the clear.
Of course there can be security issues on your webserver as well, but for a simple site this setup is learnable in an hour or two and you are ready to go.
You can hook that up on a Pi attached to your router or pay a bit to have it hosted somewhere. A domain is perhaps $2-5, and a TLS cert you can get from Let's Encrypt.
No idea how to put everything into a container in a way that makes sense. I just run this quite often on small hosted machines elsewhere. I just install everything manually because it takes 5 minutes if you have done it before.
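The 5-minute manual version is roughly this (Debian/Ubuntu package names; the domain and paths are placeholders):

    apt install nginx certbot python3-certbot-nginx rsync
    # point nginx's server block at /var/www/example.com, then:
    certbot --nginx -d example.com                             # cert + auto-renewal
    rsync -avz --delete public/ vps:/var/www/example.com/      # push the generated site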
https://docs.opnsense.org/manual/how-tos/wireguard-client.ht...
Then on my phone I just flick on the switch and can access all my home services. It's a smidge less convenient, but feels nice and secure.
I can see running something in a Docker container, and while I'd advise against containers that ship with EVERYTHING, I'd also advise against using Docker Compose to spin up an ungodly number of containers for every service.
You shouldn't be running multiple instances of PostgreSQL, or of anything for that matter, at home. Find a service that can be installed using your operating system's package manager and set everything to auto-update.
Whatever you're self-hosting, be it for yourself, family or a few friends, there's absolutely nothing wrong with SQLite, files on disk or using the passwd file as your authentication backend.
If you are self hosting Kubernetes to learn Kubernetes, then by all means go ahead and do so. For actual use, stay away from anything more complex than a Unix around the year 2000.
After a few years you work out that holy shit we now have 15 people looking after everything instead of the previous 4 people and pods are getting a few hits an hour. Every HTTP request ends up costing $100 and then you wonder why the fuck your company is totally screwed financially.
But all the people who designed it have left for consultancy jobs with Kubernetes on their resume and now you've got an army of people left to juggle the YAML while the CEO hammers his fist on the table saying CUT COSTS. Well you hired those feckin plums!
etc etc.
Lots of them are on here. People have no idea how to solve problems any more, just create new ones out of the old ones.
It's not uncommon with self-hosting services using docker. It makes it easier to try out a new stack and you can mix and match versions of postgresql according to the needs of the software. It's also easier to remove if you decide you don't like the bit of software that you're trying out.
Edit: anyone actually interested in such a post?
Yup, it's basically like a "Docker Compose Manager" that lets you group containers more easily, since the manifest file format is basically Docker Compose's with just 1-2 tiny differences.
If there's one thing I would like Docker Swarm to have, it's to not have to worry about which node creates a volume; I just want the service to always be deployed with the same volume without having to think about it.
That's the one weakness I see for multi-node stacks, the thing that prevents it from being "Docker Compose but distributed". So that's probably the point where I'd recommend maybe taking a look at Kubernetes.
No issues whatsoever and it is so easy to manage. It just works!
It's a shame I agree because it was nicely integrated with dockers own tooling. Plus I wouldn't have had to learn about k8s :)
Lord knows why people overcomplicate things with docker/kubernetes/etc.
Docker lets my OS be be boring (and lets me basically never touch it) while having up to date user-facing software. No “well, this new version fixes a bug that’s annoying me, but it’s not in Debian stable… do I risk a 3rd party back port repo screwing up my system or other services, or upgrade the whole OS just to get one newer package, which comes with similar risks?”
I just use shell scripts to launch the services, one script per service. Run the script once, forget about it until I want to upgrade it. Modify the version in the script, take the container down and destroy it (easily automated as part of the scripts, but I haven’t bothered), run the script. Done, forget about it again until next time.
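One of those scripts looks something like this (the image, version, ports and paths here are placeholders rather than my real setup):

    #!/bin/sh
    # jellyfin.sh - destroy and recreate the container at a pinned version
    VERSION=10.9.7
    docker rm -f jellyfin 2>/dev/null || true
    docker run -d \
      --name jellyfin \
      --restart unless-stopped \
      -p 8096:8096 \
      -v /tank/appdata/jellyfin:/config \
      -v /tank/media:/media:ro \
      jellyfin/jellyfin:$VERSION

Upgrading really is just bumping VERSION and re-running the script.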
Almost all the commands I run on my server are basic file management, docker stuff, or zfs commands. I could switch distros entirely and hardly even notice. Truly a boring OS.
In these rare cases I usually just compile a newer deb package myself and let the package manager deal with it as usual. If there are too many dependencies to update or it's unusually complex, then it's container time indeed - but I didn't have to go there on my server so far.
Not diverging from the distro packages lets me not worry about security updates; Debian handles that for me.
Then they decided to port everything to K8 because of overblown internet drama and I lost all interest. Total shame that a great resource for Nix became yet another K8 fest.
But I just wanted to comment something similar. It's probably heavily dependent on how many services you self-host, but I have 6 services on my VPS and they are just simple podman containers that I run, some of them automatically, some of them manually. On top of that, a very simple nginx configuration (mostly just subdomains with a reverse proxy) and that's it. I don't need an extra container for my nginx, I think (or is there a very good security reason? I have "nothing to hide" and/or lose, but still). My lazy brain thinks that as long as I keep nginx up to date with my package manager and keep my certbot running, I'll be fine.
"Premature clustering is the source of all evil" - or something like that.
I think it's a great way to learn Kubernetes if you're interested in that.
Writing your first YAML or two seems scary and intimidating at first.
But after that, everything is cut from the same cloth. It's an escape from the long dark age of every sysadmin forever cooking up whatever whimsy sort of served them at the time, an escape from each service having very different management practices around it.
And there's no other community anywhere quite like Kubernetes. There are unbelievably many good-quality, very smart Helm charts out there, such as https://github.com/bitnami/charts/tree/main/bitnami, just ready to go. There are also really sweet home-ops setups like https://github.com/onedr0p/home-ops which show that once you have a platform under foot, adding more services is really easy, covering an amazing range of home-ops things you might be interested in.
> Last thing I need is Kubernetes at home
The last thing we need is that incredibly shitty attitude. Fuck around and find out is the hacker spirit. It's actually not hard if you try, and having a base platform where things follow common patterns & practices, and where you can reuse existing skills & services, is kind of great. Everything is amazing, but the snivelling shitty whining without even making the tiniest little case for your unkind low-effort hating will surely continue. Low-signal people will remain low signal; best avoided.
1. RHEL 9 with Developer Subscription. Installed dnf-automatic, set `reboot = when-changed`, so it's zero effort to reliably apply all updates with daily reboots. One or two minutes of downtime, not a big deal.
2. For services: podman with quadlets. It's the RH-flavoured replacement for docker-compose. Not sure if I like it, but I guess that's the "future", so I'm embracing it. Every service is a custom-built image with a common parent to reduce space waste (by reusing the base OS layer).
So far I want to run static http (nginx), vaultwarden, postfix and some webmail. Maybe more in the future.
This setup wastes a lot of disk space for image data, so expect to order a few more gigabytes of disk to pay for modern tech.
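For anyone who hasn't seen quadlets, a unit looks roughly like this (image, port and volume path are placeholders, since mine are custom-built); the file goes in /etc/containers/systemd/ and podman generates a vaultwarden.service from it:

    # /etc/containers/systemd/vaultwarden.container
    [Unit]
    Description=Vaultwarden

    [Container]
    Image=docker.io/vaultwarden/server:latest
    PublishPort=127.0.0.1:8081:80
    Volume=/srv/vaultwarden:/data:Z

    [Service]
    Restart=always

    [Install]
    WantedBy=multi-user.target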
On an unrelated note, an article of how to rent a VPS in China would be interesting :)
New laws come to mind. If a government decides to try to outlaw encryption again, cloud/hosting companies located there wouldn't have a choice but to comply, or give up on the business. The laws could also be written in such a way that individuals are responsible for avoiding it, even self-hosters, and people who use it anyway could be held legally responsible for its potential harms.
You are right though, it gives significantly more control to users. It's just realising 100% of the benefits that might be trickier.
Don't worry about the servers. Worry about mandated software on the client
Given that apparently it's quite difficult to even get a WeChat account without a national ID, I suspect that step 1 is "learn mandarin" and step 2 is "get a Chinese national ID".
Self hosting to me is, at the very least having physical access to the machines.
Things that haven't worked for me:
- Standalone Docker: Doesn't work great on its own. Containers often need to be recreated to modify immutable properties, like the specific image the container is running. To recreate the container, you need to store some state about how it _should_ work elsewhere.
- Quadlet: Too hard to manage clusters of services. Podman has subtle differences to Docker that occasionally cause problems and really tempting features (e.g. rootless) that cause more problems if you try to use them.
- Kubernetes: Waaaay too heavy. Even the "lightweight" distributions like k3s, k0s etc. embed large components of the official distribution, which are still heavy. Part of the embedded metric server for example periodically enumerates every single open file handle in every container. This leads to huge CPU spikes for a feature I don't care about.
With my setup now, I can more or less copy-paste a template into a new file, tweak some strings and have a HTTPS-enabled service available at https://thing.mydomain.mine. This works pretty painlessly even for services that need several volumes to maintain state or need several containers that work together.
Otherwise good article. If you want to go rootless (which you should!), Podman is the way to go; but Docker works rootless too, with some modifications [1]. I have found Docker rootless to be reliable and robust on both Debian and Ubuntu. It also solves permissions problems because your rootless user owns files inside and outside the container, whereas with rootful setups all files outside the container are owned by root, which can be a pain.
Also, you don't need Watchtower. Automatic `docker compose pull` can be set up using standard crontab, see [2].
[1]: https://du.nkel.dev/blog/2023-12-12_mastodon-docker-rootless...
[2]: https://du.nkel.dev/blog/2023-12-12_mastodon-docker-rootless...
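A minimal crontab sketch of that automatic pull (the path and schedule are placeholders; [2] has the full setup):

    # /etc/cron.d/compose-update
    30 4 * * * root cd /srv/myapp && docker compose pull && docker compose up -d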
I run it alongside Portainer precisely because of the compose.yaml file, which I want to keep control over.
Just use a static site generator like Zola or Hugo and rsync to a small VPS running Caddy or nginx. If you need something dynamic, there are many frameworks with few dependencies you can just rsync too. Or use PHP, it's not that bad. Just restrict all locations except the public ones to your IP in the nginx config if you use something like WordPress, and you should be fine.
If you have any critical stuff, create a zfs dataset and use that to back up to another VPS using zfs send; there are tools to make it easy, much easier than DB replication.
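The zfs part can be as simple as this (dataset, snapshot names and target host are placeholders):

    # take today's snapshot, then send only the delta since yesterday's
    zfs snapshot tank/critical@2025-01-02
    zfs send -i tank/critical@2025-01-01 tank/critical@2025-01-02 | \
        ssh backup-vps zfs receive -F backup/critical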
But what about other services, like if you want a database server as well, a mail server, etc.?
I started using containers when I last upgraded hardware, and while it's not as beneficial as I had hoped, it's still an improvement to be able to clone one, do a test upgrade, and only then upgrade the original, as well as being able to upgrade services one by one rather than committing to a huge project where you upgrade the host OS and everything has to come along to the new major version.
Now most servers are app servers, and they all run archlinux. We prepare images and we run them with PXE.
Both of those are out of scope for self-hosting.
But we also have about a dozen staging, dev and playground servers. And those are just regular installs of Arch. We run postgres, redis, apps in many languages... For all that we use system packages and the AUR. DB upgrade? ZFS snapshot, then I follow the Arch wiki's postgres upgrade; it takes a few minutes, there is downtime, but it is fine. You messed something up? ZFS rollback. You're missing a single file? cd .zfs/snapshot and grab it. I get about 30 minutes of cumulative downtime per year on those machines. That's more than enough for any self-hosting.
We use arch because we try the latest "toys" on those. If you self host take an LTS distribution and you'll be fine.
My vision of self-hosting is basically the opposite. I only self-host existing apps and services for my and my family's use. I have a TrueNAS box with a few disks, run Jellyfin for music and shows, run a Nextcloud instance, a restic REST server for backing up our devices, etc. I feel like the OP is more targeted at this type of "self-hosting".
I'm just a boomer (technically a millennial) who sticks to Arch Linux even when it comes to servers, I have zero friction, really. I have no issues self-hosting whatever me or a client requires, keeping it minimal and functional.
I self-host like it is 2000 (apart from a couple of some more modern stuff, if you could consider systemd and certbot, etc. modern). :D
One change I made that may help with this, is to not install crap on the host that I don't plan to use for a long time. Trying out a new database server or want to set up an Android IDE for a temporary project? Use a VM, don't clutter up random files all over the host. Is this what is happening on your Windows perhaps?
On Arch Linux, all system files are listed, along with their content hashes and expected permissions/ownership, in the installed package database. So it's possible to just list changed files in /etc or unexpected files in the system, or files with unexpected permissions, and do a manual cleanup/checkup if needed. No idea how I'd even approach that on Windows.
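Concretely, that check is something like this (a sketch; the grep just hides packages with nothing altered):

    # verify every installed file's hash, permissions and ownership
    # against the package database, showing only packages with changes
    pacman -Qkk 2>/dev/null | grep -v ' 0 altered files'

    # find out which package owns a suspicious file (or that nothing does)
    pacman -Qo /usr/bin/ssh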
I guess the only time I'd need to re-install would be if I messed the system so bad that manual fixup would be too laborious over fresh setup and reconfiguration. (And I'd have to lose system backups too)
The end result is right clicking has noticeable delay in some directories. Boot takes a few seconds longer than it did the first time it was installed. Some applications, even Task Manager inexplicably hang with a white window for a few seconds.
The shell (part of Explorer) no longer displaying anything when searching in the start menu (because it tries to connect to the internet to search bing and that sometimes stops working rendering half the start menu useless).
The “modern” settings app hanging with its blue window and icon for anywhere from seconds to indefinitely.
It’s just a lot of this bullshit that adds up. Reinstalling Windows makes it like it was day one.
I've been running a DigitalOcean VPS for years hosting my personal projects. These include a static website, n8n workflows, and Umami analytics. I used manual Docker container management, Nginx, and manual Let's Encrypt certificate renewals. I was too lazy even to set up certbot.
I've migrated to a Portainer + Caddy setup. Now I have a UI for container management and automatic SSL certificate handling. It took about two hours.
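The Caddy side of it is pleasantly small; a Caddyfile like this (domains and upstream ports are placeholders) gets automatic HTTPS with Let's Encrypt:

    analytics.example.com {
        reverse_proxy 127.0.0.1:3000
    }

    n8n.example.com {
        reverse_proxy 127.0.0.1:5678
    }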
Thanks for bringing me to 2025!