
Posted by websku 16 hours ago

CLI agents make self-hosting on a home server easier and fun(fulghum.io)
631 points | 423 comments
simonw 16 hours ago|
This post lists inexpensive home servers, Tailscale, and Claude Code as the big unlocks.

I actually think Tailscale may be an even bigger deal here than sysadmin help from Claude Code et al.

The biggest reason I had for not running a home server was security: I was worried that I might fall behind on updates and end up compromised.

Tailscale dramatically reduces this risk, because I can easily configure it so that my own devices can talk to my home server from anywhere in the world without exposing any ports on it directly to the internet.

Being able to hit my home server directly from my iPhone via a tailnet no matter where in the world my iPhone might be is really cool.

drnick1 15 hours ago||
I'd rather expose a Wireguard port and control my keys than introduce a third party like Tailscale.

I am not sure why people are so afraid of exposing ports. I have dozens of ports open on my server including SMTP, IMAP(S), HTTP(S), various game servers and don't see a problem with that. I can't rule out a vulnerability somewhere but services are containerized and/or run as separate UNIX users. It's the way the Internet is meant to work.

buran77 14 hours ago|||
> I'd rather expose a Wireguard port and control my keys than introduce a third party like Tailscale.

Ideal if you have the resources (time, money, expertise). There are different levels of qualifications, convenience, and trust that shape what people can and will deploy. This defines where you draw the line - at owning every binary of every service you use, at compiling the binaries yourself, at checking the code that you compile.

> I am not sure why people are so afraid of exposing ports

It's simple, you increase your attack surface, and the effort and expertise needed to mitigate that.

> It's the way the Internet is meant to work.

Along with no passwords or security. There's no prescribed way to use the internet. If you're serving one person or household rather than the whole internet, why expose more than you need out of some misguided principle about the internet? Principle of least privilege - it's how security is meant to work.

lmm 13 hours ago|||
> It's simple, you increase your attack surface, and the effort and expertise needed to mitigate that.

Sure, but opening up one port is a much smaller surface than exposing yourself to a whole cloud hosting company.

appplication 13 hours ago|||
Ah… I really could not disagree more with that statement. I know we don’t want to trust BigCorp and whatnot, but a single exposed port and an incomplete understanding of what you’re doing is really all it takes to be compromised.
johnisgood 7 hours ago|||
Same applies to Tailscale. A Tailscale client, coordination plane vulnerability, or incomplete understanding of their trust model is also all it takes. You are adding attack surface, not removing it.

If your threat model includes "OpenSSH might have an RCE" then "Tailscale might have an RCE" belongs there too.

If you are exposing a handful of hardened services on infrastructure you control, Tailscale adds complexity for no gain. If you are connecting machines across networks you do not control, or want zero-config access to internal services, then I can see its appeal.

b112 2 hours ago||
There was a time when people were allowed to drive cars unlicensed.

These days, that seems insane.

As the traffic grew, as speeds increased, licensing became necessary.

I think, these days, we're almost into that category. I don't say this happily. But the era of unrestricted access seems to be coming to an end.

I realise this seems unworkable. But so was the idea of a driver's license. Sometimes society and safety comes first.

I'm willing to bet that in under a decade, something akin to this will happen.

yreg 15 minutes ago|||
Can you be more concrete about what you predict?
justinparus 9 hours ago||||
Using a BigCorp service also has risks. You are exposed to many of their vulnerabilities, that’s why our information ends up in data leaks.
heavyset_go 7 hours ago||||
Someone would need your 256-bit key to do anything to an exposed Wireguard port.
eqvinox 5 hours ago||
In theory.

In the same theory, someone would need your EC SSH key to do anything with an exposed SSH port.

Practice is a separate question.

bjt12345 1 hour ago|||
SSH is TCP, though, and the outside world can initiate a handshake. The point is that WireGuard silently discards unauthenticated traffic, so there's no way to even know the port is listening.
JasonADrury 1 hour ago|||
Not even remotely comparable.

Wireguard is explicitly designed to not allow unauthenticated users to do anything, whereas SSH is explicitly designed to allow unauthenticated users to do a whole lot of things.

SchemaLoad 12 hours ago||||
Even if you understand what you are doing, you are still exposed to every single security bug in all of the services you host. Most of these self hosted tools have not been through 1% of the security testing big tech services have.
johnisgood 7 hours ago|||
Now you are exposed to every security bug in Tailscale's client, DERP relays, and coordination plane, plus you have added a trust dependency on infrastructure you do not control. The attack surface did not shrink, it shifted.
bjt12345 1 hour ago|||
How would another service be impacted by an open UDP port on a server that the service is not using?
refulgentis 9 hours ago|||
This felt like it didn’t do your aim justice, “$X and an incomplete understanding of what you’re doing is all it takes to be compromised” applies to many $X, including Tailscale.
xnickb 5 hours ago|||
Headscale is a thing
prmoustache 4 hours ago||
Headscale is only really useful if you need to manage multiple users and/or networks. If you only have one network you want access to and a small number of users/devices, it only increases the attack surface over having a single WireGuard instance listening, because it has more moving parts.
ErneX 3 hours ago|||
I set it up to open the port for a few seconds via port knocking. There's also another script running on the server that allows connections from my home IP address by doing a lookup on a domain my router updates via DynDNS, so devices at my home don't need to port knock to connect.
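For reference, that kind of time-limited opening can be done with e.g. knockd; the knock sequence, ports, and timeouts below are all made up:

```ini
# /etc/knockd.conf sketch (assumed knockd setup; values are placeholders)
[options]
        logfile = /var/log/knockd.log

[openWireguard]
        sequence      = 7000,8000,9000
        seq_timeout   = 5
        tcpflags      = syn
        # open the wireguard port for the knocking IP, then close it again
        start_command = /usr/sbin/iptables -I INPUT -s %IP% -p udp --dport 51820 -j ACCEPT
        cmd_timeout   = 10
        stop_command  = /usr/sbin/iptables -D INPUT -s %IP% -p udp --dport 51820 -j ACCEPT
```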
mfru 4 hours ago|||
I think the most important thing about Tailscale is how accessible it is. Is there a GUI for Wireguard that lets me configure my whole private network as easily as Tailscale does?
prmoustache 4 hours ago|||
> Ideal if you have the resources (time, money, expertise). There are different levels of qualifications, convenience, and trust that shape what people can and will deploy. This defines where you draw the line - at owning every binary of every service you use, at compiling the binaries yourself, at checking the code that you compile.

Wireguard is distributed by distros in official packages. You don't need time, money, and expertise to set up unattended upgrades with auto-reboot on a Debian- or Red Hat-based distro. At least it's no more complicated than setting up an AI agent.
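On Debian/Ubuntu that's roughly this (the reboot time is just an example; the option names are from the unattended-upgrades package):

```shell
# enable unattended security updates
sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades   # turns on the periodic apt timer

# then, in /etc/apt/apt.conf.d/50unattended-upgrades:
#   Unattended-Upgrade::Automatic-Reboot "true";
#   Unattended-Upgrade::Automatic-Reboot-Time "04:00";
```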

madeofpalk 3 hours ago||
What about the SMTP, IMAP(S), HTTP(S), and various game server ports the parent mentioned having open?

Having a single port open for VPN access seems okay to me. That's what I did, but I don't want an "etc" involved in what has direct access to hardware/services in my house from outside.

bjt12345 1 hour ago||
How does wireguard interfere with email?
zamadatix 14 hours ago||||
It's the way the internet was meant to work, but that doesn't make it any easier. Even when everything is in containers/VMs/users, if you don't put a decent amount of additional effort into automatic updates and keeping that context hardened as you tinker with it, it's quite annoying when it gets pwned.

There was a popular post about exactly this less than a month ago: https://news.ycombinator.com/item?id=46305585

I agree maintaining wireguard is a good compromise. It may not be "the way the internet was intended to work" but it lets you keep something which feels very close without relying on a 3rd party or exposing everything directly. On top of that, it's really not any more work than Tailscale to maintain.

drnick1 13 hours ago|||
> There was a popular post about exactly this less than a month ago: https://news.ycombinator.com/item?id=46305585

This incident precisely shows that containerization worked as intended and protected the host.

zamadatix 12 hours ago||
It protected the host itself, but it did not prevent the server from being compromised and running malware that mined cryptocurrency.

Containerizing your publicly exposed service will also not protect your HTTP server from hosting malware or your SMTP server from sending SPAM, it only means you've protected your SMTP server from your compromised HTTP server (assuming you've even locked it down accurately, which is exactly the kind of thing people don't want to be worried about).

Tailscale hands the protection of the public-facing portion of the story to a company dedicated to keeping that portion secure. Wireguard (or similar) limits the exposure to a single service with low churn and a minimal attack surface. It's a very different discussion than preventing lateral movement alone. And that all goes without mentioning that not everyone wants to deal with containers in the first place (though many do in either scenario).

SoftTalker 14 hours ago|||
I just run an SSH server and forward local ports through that as needed. Simple (at least to me).
zamadatix 12 hours ago|||
I do that as well, along with using sshd as a SOCKS proxy for web-based stuff via Firefox, but it can be a bit of a pain to forward each service to each host individually if you have more than a few things going on, especially if you have things trying to use the same port and need to keep track of how you mapped them locally. It can also be a lot harder to manage on mobile devices; e.g. say you have some media or home automation services: they won't be as easy to access via port forwarding through a single public SSH host (if at all) as they would be over a VPN, and wireguard is about as easy as a personal VPN gets.

That's where wg/Tailscale come in - it's just a traditional IP network at that point. Also less to do to shut up bad login attempts from spam bots and such. I once forgot to configure the log settings on sshd and ended up with GBs of logs in a week.

The other big upside (besides not relying on a 3rd party) of putting in the slightly greater effort to run wg/ssh/another personal VPN is that the latency and bandwidth to your home services will be better.
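For anyone curious, the two sshd modes I mean look like this (hostname and ports are placeholders):

```shell
# Forward one service (say, a web UI on port 8096) to localhost:8096:
ssh -N -L 8096:localhost:8096 user@home.example.net

# Or run a SOCKS proxy on localhost:1080 and point Firefox at it,
# so anything on the home network is reachable without per-service mappings:
ssh -N -D 1080 user@home.example.net
```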

jmb99 8 hours ago||
> and wireguard is about as easy a personal VPN as there is.

I would argue OpenVPN is easier. I currently run both (there are some networks I can’t use UDP on, and I haven’t bothered figuring out how to get wireguard to work with TCP), and the OpenVPN initial configuration was easier, as is adding clients (DHCP, pre-shared cert+username/password).

This isn’t to say wireguard is hard. But imo OpenVPN is still easier - and it works everywhere out of the box. (The exception is networks that only let you talk on 80 and 443, but you can solve that by hosting OpenVPN on 443, in my experience.)

This is all based on my experience with opnsense as the vpn host (+router/firewall/DNS/DHCP). Maybe it would be a different story if I was trying to run the VPN server on a machine behind my router, but I have no reason to do so - I get at least 500Mbps symmetrical through OpenVPN, and that’s just the fastest network I’ve tested a client on. And even if that is the limit, that’s good enough for me, I don’t need faster throughput on my VPN since I’m almost always going to be latency limited.

Rebelgecko 10 hours ago||||
How many random people do you have hitting port 22 on a given day?
SoftTalker 9 hours ago|||
Dozens. Maybe hundreds. But they can't get in as they don't have the key.
gsich 6 hours ago|||
change port.
lee_ars 2 hours ago|||
After years of cargo-culting this advice—"run ssh on a nonstandard port"—I gave up and reverted to 22 because ssh being on nonstandard ports didn't change the volume of access attempts in the slightest. It was thousands per day on port 22, and thousands per day on port anything-else-i-changed-it-to.

It's worth an assessment of what you _think_ running ssh on a nonstandard port protects you against, and what it's actually doing. It won't stop anything other than the lightest and most casual script-based shotgun attacks, and it won't help you if someone is attempting to exploit an actual-for-real vuln in the ssh authentication or login process. And although I'm aware the plural of "anecdote" isn't "data," it sure as hell didn't reduce the volume of login attempts.

Public key-only auth + strict allowlists will do a lot more for your security posture. If you feel like ssh is using enough CPU rejecting bad login attempts to actually make you notice, stick it behind wireguard or set up port-knocking.

And sure, put it on a nonstandard port, if it makes you feel better. But it doesn't really do much, and anyone hitting your host up with censys.io or any other assessment tool will see your nonstandard ssh port instantly.
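Concretely, the sshd_config lines I mean (the username is a placeholder):

```
# /etc/ssh/sshd_config
PasswordAuthentication no
KbdInteractiveAuthentication no
PermitRootLogin no
AllowUsers alice    # allowlist: only named users may even attempt auth
```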

wasmitnetzen 1 hour ago||
Conversely, what do you gain by using a standard port?

Now, I do agree a non-standard port is not a security tool, but it doesn't hurt running a random high-number port.

lee_ars 1 hour ago||
> Conversely, what do you gain by using a standard port?

One less setup step in the runbook, one less thing to remember. But I agree, it doesn't hurt! It just doesn't really help, either.

sally_glance 5 hours ago||||
Underrated reply - I randomize the default ports everywhere I can, really cuts down on brute force/credential stuffing attempts.
Maledictus 2 hours ago|||
or keep the port and move to IPv6 only.
Imustaskforhelp 12 hours ago|||
Also to Simon: I am not sure how the iPhone works, but on Android you could probably use mosh and termux to connect to the server and get the same end result without relying on a third party (in this case Tailscale).

I am sure there must be an iPhone app that allows something like this too. I highly recommend more people take a look into such a workflow; I might look into it more myself.

tmate is a wonderful service if you have home networks behind NATs.

I personally like using the hosted instance of tmate (tmate.io) itself, but it can be self-hosted and is open source.

Once again it has the third-party issue, but luckily it can be self-hosted, so you can even get a mini VPS on hetzner/upcloud/ovh and route traffic through it by hosting tmate there, so YMMV.

heavyset_go 15 hours ago||||
> I'd rather expose a Wireguard port and control my keys than introduce a third party like Tailscale.

This is what I do. You can get Tailscale-like access using things like Pangolin[0].

You can also use a bastion host, or block all ports and set up Tor or i2p, and then anyone that even wants to talk to your server will need to know cryptographic keys to route traffic to it at all, on top of your SSH/WG/etc keys.

> I am not sure why people are so afraid of exposing ports. I have dozens of ports open on my server including SMTP, IMAP(S), HTTP(S), various game servers and don't see a problem with that.

This is what I don't do. Anything that needs real internet access like mail, raw web access, etc gets its own VPS where an attack will stay isolated, which is important as more self-hosted services are implemented using things like React and Next[1].

[0] https://github.com/fosrl/pangolin

[1] https://news.ycombinator.com/item?id=46136026

edoceo 14 hours ago||
Is a container not enough isolation? I do SSH to the host (alt-port) and then services in containers (mail, http)
heavyset_go 14 hours ago|||
Depends on your risk tolerance.

I personally wouldn't trust a machine if a container was exploited on it; you don't know if there were any successful container escapes, kernel exploits, etc. Even if the attacker escaped with only user permissions, they can fill your box with boobytraps if they have container-granted capabilities.

I'd just prefer to nuke the VPS entirely and start over than worry if the server and the rest of my services are okay.

Imustaskforhelp 12 hours ago|||
Yeah, I feel that too.

There are some well-respected compute providers you can use, and for a very low amount you can offload this worry to someone else.

That being said, VMs themselves are a good enough security boundary too. I consider running VMs, even on a home server with public-facing services, usually allowable.

heavyset_go 9 hours ago||
Yeah, I only run very little on VPS, so this is practically free to me. Everything else I host at home behind Wireguard w/ Pangolin.
Imustaskforhelp 12 hours ago|||
I understand where you are coming from, but no, containers aren't enough isolation.

If you are running a public service, it might have bugs (we regularly see RCE issues) or some misconfiguration, and containers by default don't provide enough security if a hacker tries to break in. Containers aren't secure in that sense.

Virtual machines are the intended tool for that, but they can be full of friction at times.

If you want something of a middle ground, I can't recommend incus enough. https://linuxcontainers.org/incus/

It allows you to manage VMs like containers, provides a web UI, and gives an amount of isolation that you can (usually) trust.

I'd say don't take chances with your home server, because that server can sit inside your firewall and, in a worst-case scenario, infect other devices. Virtualization with things like incus or proxmox (another well-respected tool) is the safest option and provides isolation you can trust. I highly recommend taking a look if you deploy public-facing services.

Etheryte 14 hours ago||||
Every time I put anything anywhere on the open net, it gets bombarded 24/7 by every script kiddie, botnet group, and, these days, AI company out there. No matter what I'm hosting, it's a lot more convenient to not have to worry about that even for a second.
NewJazz 11 hours ago|||
This is a good reason not to expose random services, but a wireguard endpoint simply won't respond at all if someone hits it with the wrong key. It is better even than key based ssh.
drnick1 13 hours ago|||
> Every time I put anything anywhere on the open net, it gets bombarded 24/7 by every script kiddie, botnet group , and these days, AI company out there

Are you sure it isn't just port scanners? I get perhaps hundreds of connections to my SMTP server every day, but they are just innocuous connections (hello, then disconnect). I wouldn't worry about that unless you see repeated login attempts, in which case you may want to deploy Fail2Ban.

TheCraiggers 12 hours ago||
Port scanners don't try to ssh into my server with various username/password combinations.

I prefer to hide my port instead of using F2B for a few reasons.

1. Log spam. Looking in my audit logs for anything suspicious is horrendous when there's just megs of login attempts for days.

2. F2B has banned me in the past due to various oopsies on my part. Which is not good when I'm out of town and really need to get into my server.

3. Zero days may be incredibly rare in ssh, but maybe not so much in Immich or any other relatively new software stack being exposed. I'd prefer not to risk it when simple alternatives exist.

Besides the above, using Tailscale gives me other options, such as locking down cloud servers (or other devices I may not have hardware control over) so that they can only be connected to, but not out of.

pferde 1 hour ago||
You can tweak rate thresholds for F2B, so that it blocks the 100-attempts-per-second attackers, but doesn't block your three-attempts-per-minute manual fumbling.
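Something like this in jail.local (the numbers and the ignoreip address are just illustrative):

```ini
# /etc/fail2ban/jail.local
[sshd]
enabled  = true
maxretry = 5      # tolerate a few manual fumbles...
findtime = 10m    # ...within this window
bantime  = 1h
# never ban your own addresses (203.0.113.7 is a placeholder)
ignoreip = 127.0.0.1/8 203.0.113.7
```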
TheCraiggers 1 hour ago||
I know this. But I don't like that they still get to try at least once, and there's still the rest of my list.
byb 8 hours ago||||
My biggest source of paranoia is my open Home Assistant port. While it requires a strong password and is TLS-encrypted, I'm sure that one day someone will find an exploit letting them in, and then the attacker will rapidly turn my smart home devices on and off until they break or overheat the power components, start a fire, and burn down my house.
seszett 8 hours ago|||
That seems like a very irrational fear. Attackers don't go around trying to break into Home Assistant to turn the lights on at some stranger's house.

There's also no particular reason to think Home Assistant's authentication has a weak point.

And your devices are also unlikely to start a fire just by being turned on and off. If that's your fear, you should replace them at once, because if they can catch fire it doesn't matter whether it's an attacker or you doing the switching.

timc3 5 hours ago|||
People are putting their whole infrastructure onto HA - cars, Apple/Google/other accounts, integrations with grid companies, managing ESP software, etc.

I think that has more potential for problems than turning lights on and off and warrants strong security.

wao0uuno 5 hours ago|||
Why expose HA to the internet? I’m genuinely curious.
alpn 13 hours ago||||
> I'd rather expose a Wireguard port and control my keys than introduce a third party like Tailscale.

I’m working on a (free) service that lets you have it both ways. It’s a thin layer on top of vanilla WireGuard that handles NAT traversal and endpoint updates so you don’t need to expose any ports, while leaving you in full control of your own keys and network topology.

https://wireplug.org

copperx 13 hours ago|||
Apparently I'm ignorant about Tailscale, because your service description is exactly what I thought Tailscale was.
SchemaLoad 12 hours ago||
The main issue people have with Tailscale is that it's a centralised service that isn't self-hostable. The Tailscale server manages authentication and keeps track of your devices' IPs.

Your eventual connection is direct to your device, but all the management before that runs on Tailscale's servers.

TOMDM 12 hours ago||
Isn't this what headscale is for?
hamandcheese 12 hours ago|||
This is very cool!

But I also think it's worth mentioning that for basic "I want to access my home LAN" use cases you don't need P2P; you just need a single public IP to your LAN and perhaps dynamic DNS.

digiown 10 hours ago|||
Where will you host the wg endpoint to open up?

- Each device? This means setting up many peers on each of your devices

- Router/central server? That's a single point of failure, and often a performance bottleneck if you're on LAN. If that's a router, the router may be compromised and eavesdrop on your connections, which you probably didn't secure as hard because it's on a VPN.

Not to mention DDNS can create significant downtime.

Tailscale fails over basically instantly, and is E2EE, unlike the hub setup.

hamandcheese 9 hours ago||
To establish a wg connection, only one node needs a public IP/port.

> Router/central server? That's a single point of failure

Your router is a SPOF regardless. If your router goes down you can't reach any nodes on your LAN, Tailscale or otherwise. So what is your point?

> If that's a router, the router may be compromised and eavesdrop on your connections, which you probably didn't secure as hard because it's on a VPN.

Secure your router. This is HN, not advice for your mom.

> Not to mention DDNS can create significant downtime.

Set your DNS TTL correctly and you should experience no more than a minute of downtime whenever your public IP changes.

digiown 9 hours ago||
> one node needs a public IP/port

A lot of people are behind CGNAT or behind a non-configurable router, which is an abomination.

> Secure your router

A typical router cannot be secured against physical access, unlike your servers which can have disk encryption.

> Your router is a SPOF regardless

Tailscale will keep your connection over a downstream switch, for example. It will not go through the router if it doesn't have to. If you use it for other usecases like kdeconnect synchronizing clipboard between phone and laptop, that will also stay up independent of your home router.

kevin_thibedeau 10 hours ago|||
A public IP and DDNS can be impossible behind CGNAT. A VPN link to a VPS eliminates that problem.
hamandcheese 9 hours ago|||
When I said "you just need a single public IP" I figured it was clear that I wasn't claiming this works for people who don't have a public IP.
digiown 10 hours ago|||
The VPS (using wg-easy or similar solutions) will be able to decrypt traffic as it has all the keys. I think most people self-hosting are not fine with big cloud eavesdropping on their data.

Tailscale really is superior here if you use tailnet lock. Everything always stays encrypted, and fails over to their encrypted relays if direct connection is not possible for various reasons.

epistasis 13 hours ago||||
I've managed wireguard in the past, and would never do it again. Generating keys, distributing them, configuring it all... bleh!

Never again, it takes too much time and is too painful.

Certs from Tailscale are reason enough to switch, in my opinion!

The key with successful self hosting is to make it easy and fast, IMHO.
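(For context: the per-peer key generation itself is a one-liner; it's the distribution and config bookkeeping across N devices that adds up.)

```shell
# Per peer: generate a keypair; the public key then has to be copied
# into the [Peer] section on every other device that should reach it
wg genkey | tee privatekey | wg pubkey > publickey
```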

abc123abc123 1 hour ago||||
This is the truth. I've been exposing 22 and 80 for decades, and nothing has happened. The ones I know who had something bad happen to them exposed proprietary services or security nightmares like wordpress.
inapis 1 hour ago||||
Skill issue. Not to mention the ongoing effort required to maintain and secure the service. But even before that, a lot of people are behind CGNAT. Tailscale makes punching a hole through that very easy. Otherwise you have to run your own relay server somewhere in the cloud.
pacija 3 hours ago||||
Of course. A port is a door. If the service listening on a port is secure and properly configured (e.g. ssh), the whole Internet can bang on it all day every day; it won't let anyone through without the proper key. Same for imap, xmpp, or any other service.

But what can you expect from people who provide services but won't even try to understand how they work and how they are configured, as it's 'not fun enough', expecting Claude Code to do it right for them.

Asking AI to do a thing you've done 100 times before is OK, I guess. Asking AI to do a thing you've never done and have no idea how to do properly - not so much, I'd say. But this guy obviously does not signal his sysadmin skills but his AI skills. I hope it brings him the result he aimed for.

eqvinox 4 hours ago||||
> I am not sure why people are so afraid of exposing ports.

Similar here, I only build & run services that I trust myself enough to run in a secure manner by themselves. I still have a VPN for some things, but everything is built to be secure on its own.

There are quite a few services on my list at this point, and I really don't want a break in one thing to lead to a break in everything. It's always possible to leave a hole in one or two things by accident.

On the other side this also means I have a Postgres instance with TCP/5432 open to the internet - with no ill effects so far, and quite a bit of trust it'll remain that way, because I understand its security properties and config now.

Topgamer7 15 hours ago||||
I don't have a static IP, so Tailscale is convenient. And less likely to fail when I really need it, as opposed to trying to deal with dynamic DNS.
Frotag 13 hours ago||||
Speaking of Wireguard, my current topology has all peers talking to a single peer that forwards traffic between peers (for hole punching / peers with dynamic ips).

But some peers are sometimes on the same LAN (eg phone is sometimes on same LAN as pc). Is there a way to avoid forwarding traffic through the server peer in this case?

Frotag 13 hours ago|||
I guess I'm looking for wireguard's version of STUN. And now that I know what to google for, finally found some promising leads.

https://github.com/jwhited/wgsd

https://www.jordanwhited.com/posts/wireguard-endpoint-discov...

https://github.com/tjjh89017/stunmesh-go

darkwater 2 hours ago||||
I don't fully understand your topology use case. You have different peers that are "road-warriors" and that sometimes happen to be both on the same LAN which is not your home LAN, and need to speak the one to the other? And I guess you are connecting to the other peer via DNS, so your DNS record always points to the Wireguard-provided IP?
torcete 4 hours ago||||
The way I do it is to have two different first level domains. Let's say:

- w for the wireguard network.
- h for the home network.

Nothing fancy, just populate the /etc/hosts on every machine with these names.

Now, it's up to me to connect to my server1.h or server1.w depending whether I am at home or somewhere else.
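e.g. in /etc/hosts (the addresses here are made up):

```
10.8.0.10      server1.w   # reachable over the wireguard tunnel
192.168.1.10   server1.h   # reachable on the home LAN
```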

wooptoo 13 hours ago||||
Two separate WG profiles on the phone; one acting as a Proxy (which forwards everything), and one acting just as a regular VPN without forwarding.
megous 12 hours ago|||
Have your network-management software set up a default route with a lower metric than the wireguard default route, based on the wifi SSID. Can be done easily with systemd-networkd, because you can match .network file configurations on SSID. You're probably out of luck with this approach on network-setup-challenged devices like so-called smartphones.
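A rough sketch of the systemd-networkd side (interface name, SSID, and metric are placeholders):

```ini
# /etc/systemd/network/25-home-wifi.network
[Match]
Name=wlan0
SSID=MyHomeWifi

[Network]
DHCP=yes

[DHCP]
# Lower metric than the route in the wireguard .network file, so the
# direct LAN route wins whenever this SSID is connected
RouteMetric=100
```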
rubatuga 3 hours ago||||
Yggdrasil network is probably the future. At Hoppy Network we're about to release private yggdrasil relays as a service so you don't get spammed with "WAN" traffic. With Yggdrasil, IP addresses aren't allocated by an authority - they are owned and proven by public key cryptography.
digiown 10 hours ago||||
A mesh-type wireguard network is rather annoying to set up if you have more than a few devices, and a hub-type network (on a low powered router) tends to be so slow that it necessitates falling back to alternate interfaces when you're at home. Tailscale does away with all this and always uses direct connections. In principle it is more secure than hosting it on some router without disk encryption (as the keys can be extracted via a physical attack, and a pwned router can also eavesdrop on traffic).
twelvedogs 4 hours ago||||
I tried wireguard and ended up giving up on it. Too many ISPs here just block it, or use some kind of tech that fucks with it, and I have no idea why; I couldn't connect to my home network because it was blocked on whatever random wifi I was on.

The new problem is that my ISP now uses CGNAT and there's no easy way around it.

Tailscale avoids all that. If I wanted more control, I'd probably use headscale rather than bother with raw wireguard.

pferde 1 hour ago||
And there's nothing wrong with it. That is what wireguard is meant to be - a rock-solid secure tunneling implementation that's easy to build higher-level solutions on.
PeterStuer 6 hours ago||||
Defence in depth. You have a layer of security even before a packet reaches your port. An attacker might have a zero-day for your service, but now they also need to breach your reverse proxy to get to it.
BatteryMountain 7 hours ago||||
Which router OS are you using? I have openwrt with daily auto-updates configured, with a couple of packages blacklisted that I manually update now and then.
nialv7 5 hours ago||||
> introduce a third party like Tailscale.

Well just use headscale and you'll have control over everything.

vladvasiliu 4 hours ago||
That just moves the problem, since headscale will require a server you manage with an open port.

Sure, tailscale is nice, but from an open-port-on-the-net perspective it's probably a step down from just opening wireguard.

arjie 5 hours ago||||
I used to do that, but Tailscale with your own headscale server is pretty snazzy. The other thing is that with cloudflared running, your server doesn't have to be Internet-routable. Everything is tunneled.
sauercrowd 15 hours ago||||
People are not full time maintainers of their infra though, that's very different to companies.

In many cases they want something that works, not something that requires a complex setup that needs to be well researched and understood.

buildfocus 14 hours ago||
Wireguard is _really_ simple in that sense though. If you're not doing anything complicated it's very easy to set up & maintain, and basically just works.

You can also buy quite a few routers now that have it built in, so you literally just tick a checkbox, then scan a QR code/copy a file to each client device, done.

vladvasiliu 4 hours ago||
This may come with its own limitations, though.

My ISP-provided router (Free, in France) has WG built-in. But other than performance being abysmal, its main pain point is not supporting subnet routing.

So if all you want is to connect your phone / laptop while away to the local home network, it's fine. If you want to run a tunnel between two locations with multiple IPs on the remote side, you're SoL.

SchemaLoad 14 hours ago||||
If you expose ports, literally everything you are hosting and every plugin is an attack surface. Most of this stuff is built by single hobbyist devs on the weekend. You are also exposed to any security mistakes in your own configuration. On my first attempt at self hosting, I had Redis compromised because I didn't realise I had exposed it to the internet with no password.

Behind a VPN your only attack surface is the VPN which is generally very well secured.

sva_ 14 hours ago|||
You exposed your redis publicly? Why?

Edit: This is the kind of service that you should only expose to your intranet, i.e. a network that is protected through WireGuard. NEVER expose this publicly, even if you don't have admin:admin credentials.

SchemaLoad 14 hours ago|||
I actually didn't know I had. At the time I didn't properly know how docker networking worked and I exposed redis to the host so my other containers could access it. And then since this was on a VPS with a dedicated IP, this made it exposed to the whole internet.

I now know better, but there are still a million other pitfalls to fall in to if you are not a full time system admin. So I prefer to just put it all behind a VPN and know that it's safe.

drnick1 13 hours ago||
> but there are still a million other pitfalls to fall in to if you are not a full time system admin.

Pro tip: After you configure a new service, review the output of ss -tulpn. This will tell you what ports are open. You should know exactly what each line represents, especially those that bind on 0.0.0.0 or [::] or other public addresses.

The pitfall that you mentioned (Docker automatically punching a hole in the firewall for the services that it manages when an interface isn't specified) is discoverable this way.
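As a sketch of the fix for that specific Docker pitfall (service names and the app image are made up for illustration): either skip the `ports:` entry entirely so only containers on the compose network can reach Redis, or, if you must publish, pin the binding to loopback so `ss -tulpn` shows 127.0.0.1 instead of 0.0.0.0.

```yaml
# docker-compose.yml (hypothetical sketch)
services:
  redis:
    image: redis:7
    # no `ports:` entry - reachable only from containers on this
    # compose network, via the hostname `redis`
  app:
    image: myapp:latest            # hypothetical app image
    environment:
      REDIS_URL: redis://redis:6379
    ports:
      - "127.0.0.1:8080:8080"      # loopback only, not 0.0.0.0
```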

jsrcout 13 hours ago||
Thanks, didn't know about this one.
vladvasiliu 4 hours ago|||
Isn't GP's point inadvertently exposing stuff? Just mention docker networking on HN and you'll get threadfuls of comments on how it helpfully messes with your networking without telling you. Maybe redis does the same?

I mitigate this by having a dedicated machine on the border that only does routing and firewalling, with no random services installed. So anything that helpfully opens ports on internal vms won't automatically be reachable from the outside.

Jach 12 hours ago|||
I have a VPS with OVH, I put Tailscale on it and it's pretty cool to be able to install and access local (to the server) services like Prometheus and Grafana without having to expose them through the public net firewall or mess with more apache/nginx reverse proxies. (Same for individual services' /metrics endpoints that are served with a different port.)
CSSer 15 hours ago||||
The answer is people who don't truly understand the way it works being in charge of others who also don't in different ways. In the best case, there's an under resourced and over leveraged security team issuing overzealous edicts with the desperate hope of avoiding some disaster. When the sample size is one, it's easy to look at it and come to your conclusion.

In every case where a third party is involved, someone is either providing a service, plugging a knowledge gap, or both.

gambiting 2 hours ago||||
"Back in the day"(just few years ago) I used to expose a port for RDP on my router, on a non-standard port. Typically it would be fine and quiet for a few weeks, then I assume some automatic scanner would find it and from that point onwards I could see windows event log reporting a log in attempt every second, with random login/password combinations, clearly just looking for something that would work. I would change the port and the whole dance would repeat all over again. Tens of thousands of login attempts every day, all year round. I used to just ignore it, since clearly they weren't going to log in with those random attempts, but eventually just switched to OpenVPN.

So yeah, the lesson there is that if you have a port open to the internet, someone will scan it and try to attack it. Maybe not if it's a random game server, but any popular service will get under attack.

catlifeonmars 8 hours ago||||
Honestly the managed PKI is the main value-add from Tailscale over plain wireguard.

I’ve been meaning to give this a try this winter: https://github.com/juanfont/headscale

esseph 14 hours ago|||
With ports you have dozens or hundreds of applications and systems to attack.

With tailscale / zerotier / etc the connection is initiated from inside to facilitate NAT hole punching and work over CGNAT.

Plain WireGuard removes a lot of attack surface too, but it wouldn't work behind CGNAT without a relay box.

johnisgood 7 hours ago|||
Tailscale does not solve the "falling behind on updates" problem, it just moves the perimeter. Your services are still vulnerable if unpatched: the attacker now needs tailnet access first (compromised device, account, or Tailscale itself).

You have also added attack surface: Tailscale client, coordination plane, DERP relays. If your threat model includes "OpenSSH might have an RCE" then "Tailscale might have an RCE" belongs there too.

WireGuard gives you the same "no exposed ports except VPN" model without the third-party dependency.

The tradeoff is convenience, not security.

BTW, why are people acting like accessing a server from a phone is a 2025 innovation?

SSH clients on Android/iOS have existed for 15 years. Termux, Prompt, Blink, JuiceSSH, pick one. Port N, key auth, done. You can run Mosh if you want session persistence across network changes. The "unlock" here is NAT traversal with a nice UI, not a new capability.

twelvedogs 4 hours ago|||
> Tailscale does not solve the "falling behind on updates" problem, it just moves the perimeter.

Nothing 100% fixes zero days either; you are just adding layers that all have to fail at the same time.

> You have also added attack surface: Tailscale client, coordination plane, DERP relays. If your threat model includes "OpenSSH might have an RCE" then "Tailscale might have an RCE" belongs there too.

You still have to have a vulnerable service after that. In your scenario you'd need an exploitable attack on WireGuard or one of Tailscale's modifications to it, plus an exploitable service on your network.

That's extra difficulty, not less.

johnisgood 4 hours ago||
The "layers" argument applies equally to WireGuard without Tailscale. Attacker still needs VPN exploit + vulnerable service.

The difference: Tailscale adds attack vectors that do not exist with self-hosted WireGuard: account compromise, coordination plane, client supply chain, other devices on your tailnet. Those are not layers to bypass, they are additional entry points.

Regardless, it is still for convenience, not security.

Galanwe 5 hours ago|||
> BTW, why are people acting like accessing a server from a phone is a 2025 innovation?

> SSH clients on Android/iOS have existed for 15 years

That is not the point, Tailscale is not just about having a network connection, it's everything that goes with. I used to have OpenVPN, and there's a world of difference.

- The Tailscale client is much nicer and more convenient to use on Android than anything else I have seen.

- The auth plane is simpler, especially for non-tech users (parents, wife) whom I want to give access to my photo album. They are basically independent with Tailscale.

- The simplicity also allows me to recommend it to friends and we can link between our tailnet, e.g. to cross backup our NAS.

- Tailscale can terminate SSH publicly, so I can selectively expose services on the internet (e.g. VaultWarden) without exposing my server and hosting a reverse proxy.

- ACLs are simple and user friendly.

johnisgood 5 hours ago||
You are listing conveniences, which is fair. I said the tradeoff is convenience, not security.

> "Tailscale can terminate SSH publicly"

You are now exposing services via Tailscale's infrastructure instead of your own reverse proxy. The attack surface moved, it did not shrink.

hexfish 5 hours ago|||
Is Tailscale still recording metadata about all your connections? https://github.com/tailscale/tailscale/issues/16165
philips 15 hours ago|||
I agree! Before Tailscale I was completely skeptical of self hosting.

Now I have Tailscale on an old Kindle downloading epubs from a server running Copyparty. It's great!

ryandrake 15 hours ago||
Maybe I'm dumb, but I still don't quite understand the value-add of Tailscale over what Wireguard or some other VPN already provides. HN has tried to explain it to me but it just seems like sugar on top of a plain old VPN. Kind of like how "pi-hole" is just sugar on top of dnsmasq, and Plex is just sugar on top of file sharing.
Jtsummers 15 hours ago|||
I think you answered the question. Sugar. It's easier than managing your own Wireguard connections. Adding a device just means logging into the Tailscale client, no need to distribute information to or from other devices. Get a new phone while traveling because yours was stolen? You can set up Tailscale and be back on your private network in a couple minutes.

Why did people use Dropbox instead of setting up their own FTP servers? Because it was easier.

johnisgood 7 hours ago||
Yeah, but "people" here are alleged software engineers. It is quite disheartening.
wiether 5 hours ago|||
First and foremost they are humans, with a limited time on Earth.

Being a software engineer doesn't mean you want to spend your free time tinkering with your self-hosting setup and doing support for your users.

With Tailscale, not only do you not have to care about most things since _it just works_, but on-boarding of casual users is straightforward.

Same goes for Plex. I want to watch movies/shows, I don't want to spend time tinkering with my setup. And Plex provides exactly that. Ditto for my family/friends that can access my library with the same simple experience as Netflix or whatever.

Meanwhile, I have a coworker who wants to own/manage everything. So they don't want to use Tailscale, and they dropped Plex when it forced them to use the third-party login system. Now they watch less than a third of what they used to, and they share their setup with nobody since it's too complicated.

To each their own, but my goal is to enjoy my setup and share it with others. Tailscale and Plex give me that.

johnisgood 5 hours ago||
There is a difference between "I choose not to" and "I cannot". The thread is full of people saying Tailscale "unlocked" self-hosting, implying capability, not time savings or time preference.

Choosing convenience is fine. But if basic port forwarding or WireGuard is beyond someone's skill set, "software engineer" is doing a lot of heavy lifting.

I am not saying they are, but if it really is the case, then yeah.

As for file sharing... I remember when non-SWEs knew how to torrent movies, used DC++ and so on. These days even SWEs have no idea how to do it. It is mind-boggling.

wiether 3 hours ago||
To me the "unlocked" is just another hyperbole used by some people, partly because they lack initial knowledge, partly because it's click-bait.

The way I understand it is more like "without the ease of use provided by X, even though I could have done it, I wouldn't have done it because it would require time and energy that I'm not willing to put in".

Since we're talking about self-hosting, to me the main focus is not skill set but time and energy.

There's the same debate around NAS products like Synology that are sold with a high markup, meanwhile "every SWE should be able to make their own NAS using recycled hardware".

Sure. And I did all of this: a homemade NAS setup, a homemade network setup, a homemade media-player setup.

It was fun and I learned a lot.

But I moved to some more convenient tools so that I can just use them as reliable services, and focus on other experimentations/tinkering.

To be honest, the fact that you insist that Plex is just "file sharing" that can be replaced by torrents makes me think you either don't know what Plex actually is, or you are acting in bad faith.

johnisgood 2 hours ago||
I did not say Plex is "just file sharing that can be replaced by torrents". Those were two separate points:

1. The "unlocked" framing implies capability, not time preference

2. General technical literacy has declined: non-SWEs used to torrent, use DC++ extensively, etc.

I was not comparing Plex to torrenting. I was observing that basic file-sharing knowledge used to be common and now is not (see Netflix et al).

> time and energy being the focus

Sure, that is fair. But that is a different claim than "Tailscale unlocked self-hosting for me" which is how it is often framed.

duckmysick 5 hours ago|||
Software engineering is a broad spectrum where we can move up and down its abstraction ladder. Using off-the-shelf tools and even third-party providers is fine. I don't have to do everything from scratch - after all, I didn't write my own text editor. I'm also happy to download prepacked and preconfigured software on my Linux distro instead of compiling and adding them to PATH manually.

I could, I just choose not to and direct my interests elsewhere. Those interests can change over time too. One day someone with Tailscale can decide to explore Wireguard. Similarly, someone who runs their own mail server might decide to move to a hosted solution and do something else. That's perfectly fine.

To me, this freedom of choice in software engineering is not disheartening. It's liberating and exciting.

johnisgood 5 hours ago||
That is a strawman though, and I am not sure why all replies assume extremes all the time.

Nobody said do everything from scratch. The point is: basic networking (port forwarding, WireGuard) should not be beyond someone's capability as a software engineer.

"I use apt instead of compiling" is a time tradeoff. "I can't configure a VPN" is a skill gap. These are not equivalent.

If you choose convenience for whatever reasons, that is completely fine.

mschild 4 hours ago|||
"I can't configure a VPN" and "I don't want to configure a VPN" are 2 entirely different things. Mind you I have no idea how complex tailscale setup is in comparison.

I'm in the middle of setting up my own homeserver. Still deciding on what/if I want to expose to the internet and not just local network and while setting everything up and tinkering is part of the fun for me. I get some people just want results that they can rely on. Tailscale, while not a perfect option, is still an option and if they're fine with the risk profile I can understand sacrificing some security for it.

johnisgood 3 hours ago||
It seems like we do agree. :)

For a homeserver:

- SSH with key-only auth, exposed directly. This has worked for decades. Consider a non-standard port to reduce log noise (not security, just quieter logs), and fail2ban if you want

- Access internal services via SSH tunnels or just work on the box directly

- If exposing HTTP(S): reverse proxy (nginx/caddy) with TLS, rate limiting

- Databases, admin panels, monitoring - access via SSH, not public (ideally)

You do not need a VPN layer if you are comfortable with SSH. It has been battle-tested longer than most alternatives.

The fun part of tinkering is also learning what is actually necessary vs. cargo-culted advice. You will find most "security hardening" guides are overkill for a homeserver with sensible defaults.
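The sshd side of the list above can be sketched as a config drop-in (a hypothetical file; the port and user name are placeholders you would change):

```
# /etc/ssh/sshd_config.d/hardening.conf (sketch)
Port 2222                      # non-standard port: quieter logs, not security
PasswordAuthentication no      # key-only auth
PermitRootLogin no
AllowUsers deploy              # hypothetical admin user
# Internal services (Grafana, DB admin, etc.) then go through a tunnel
# instead of being exposed, e.g.:
#   ssh -p 2222 -L 3000:localhost:3000 deploy@home.example.com
```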

duckmysick 4 hours ago|||
I'd argue that no, managing your own VPN is not a basic skill - certainly not in the realms of software engineering (more like network engineering).
johnisgood 3 hours ago||
WireGuard is ~10 lines of config and wg genkey. Calling that "network engineering" is a stretch.

The siloing of basic infrastructure knowledge into "not my discipline" is part of the problem. Software gets deployed somewhere: understanding ports, keys, and routing at a basic level is not specialized knowledge.

Honestly, if 10 lines of config is "network engineering", then the bar for software engineering has dropped considerably.
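For concreteness, a sketch of roughly what those ten lines look like on the server side (keys and addresses are placeholders, generated with `wg genkey | tee privkey | wg pubkey`; one `[Peer]` block per device):

```ini
# /etc/wireguard/wg0.conf (sketch; placeholder keys and addresses)
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]                          # e.g. a phone
PublicKey = <phone-public-key>
AllowedIPs = 10.0.0.2/32
```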

InfinityByTen 3 hours ago||
I am probably in the camp of people who find themselves overwhelmed by the amount of information about networks, and I'm an alleged software engineer (albeit without formal training in CS).

The 10 LOC is not a valid measure.

`sudo rm -rf /` is one line of code. It's not the lines that are hard to wrap your brain around; it's the implications of the lines that we're really talking about.

johnisgood 2 hours ago||
The rm -rf comparison is a bit dramatic. WireGuard's config is conceptually simple: your key, peer's key, endpoint, what IPs route through the tunnel. The "implications" are minimal. It is a point-to-point encrypted tunnel.

Being overwhelmed by networking basics is worth addressing regardless. It comes up constantly: debugging connectivity, deployments, understanding why your app cannot reach a database. 30 minutes with the WireGuard docs would demystify it. The concepts are genuinely simple and worth 30 minutes to understand as it applies far beyond VPNs.

I have become pragmatic too. I do not tinker for the sake of it anymore. But there is a difference between choosing convenience and lacking foundational knowledge. One is a time tradeoff, the other is a gap that will bite you eventually.

And with LLMs, learning the basics is easier than ever. You can ask questions, get explanations, work through examples interactively. There is less excuse now to outsource or postpone foundational knowledge, not more[1].

At some point it is just wanting the benefits without the investment. That is not pragmatism, it is hoping the gaps never matter. They usually do.

[1] You can ask an LLM to do all of that for you and make it help you understand under less than 10 minutes!

InfinityByTen 1 hour ago||
I do agree that using LLMs to demystify, learn, and explore is a better alternative than handing things off to let them go rogue. That's how I used it last weekend, and I think that's the usage I would advocate, instead of just letting YourFavouriteAI be the sysadmin.

My problem is not just networking knowledge. I genuinely faced issues with open source tools. Troubleshooting in the days of terrible search is also a major annoyance. Sometimes the tools have simply evolved, and the commands that worked for someone in 2020 on some obscure forum no longer do. I remember those days of tinkering with Linux and open source where you'd rely on a Samaritan (bless their soul) who said they'd go home, check, and update you.

Claude suggested Tailscale to me too, but I'm glad we're having this conversation (thanks for the tips btw), so that we don't follow hallucinations or bad advice from similarly trained agents. I'm cautiously positive, and I think there's still a case for going self-hosted with AI assistance. I found myself looking at possibilities rather than fearing dead ends and time black holes.

johnisgood 39 minutes ago||
Thank you for your reply!

I am glad that it is useful to you! The "terrible search + outdated forum posts" problem is real for sure. LLMs genuinely help there by synthesizing across versions and explaining what changed.

I would say that self-hosting with AI assistance is the right approach. Use it to understand, not to blindly execute. Trust me, it is not much of a deal and you will be happy to have gone with this route afterwards!

Good luck with the setup. If you have any questions, let me know, I am always happy to help.

(I have very briefly mentioned some stuff here: https://news.ycombinator.com/item?id=46586406 but I can expand and be a bit more detailed as needed.)

simonw 15 hours ago||||
If you're confident that you know how to securely configure and use Wireguard across multiple devices then great, you probably don't need Tailscale for a home lab.

Tailscale gives me an app I can install on my iPhone and my Mac and a service I can install on pretty much any Linux device imaginable. I sign into each of those apps once and I'm done.

The first time I set it up that took less than five minutes from idea to now-my-devices-are-securely-networked.

Cyph0n 15 hours ago||||
It’s a bit more than sugar.

1. 1-command (or step) to have a new device join your network. Wireguard configs and interfaces managed on your behalf.

2. ACLs that allow you to have fine grained control over connectivity. For example, server A should never be able to talk to server B.

3. NAT is handled completely transparently.

4. SSO and other niceties.

For me, (1) and (2) in particular make it a huge value add over managing Wireguard setup, configs, and firewall rules manually.
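For (2), a sketch of what such a rule looks like in Tailscale's ACL policy file (users and tags here are made up; because rules are allow-only, anything not listed, like server A talking to server B, is simply denied):

```
// Tailscale ACL policy (sketch; users/tags are hypothetical)
{
  "acls": [
    {
      "action": "accept",
      "src": ["alice@example.com"],
      "dst": ["tag:server-a:22", "tag:server-b:22"]
    }
  ]
}
```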

zeroxfe 14 hours ago||||
> Plex is just sugar on top of file sharing.

right, like browsers are just sugar on top of curl

InfinityByTen 1 hour ago|||
At least postman is :P
edoceo 14 hours ago|||
curl is just sugar on sockets ;)
epistasis 13 hours ago||
SSH is just sugar on top of telnet and running your own encryption algorithms by hand on paper and typing in the results.
drnick1 15 hours ago||||
> Kind of like how "pi-hole" is just sugar on top of dnsmasq, and Plex is just sugar on top of file sharing.

Speaking of that, I have always preferred a plain Unbound instance and a Samba server over fancier alternatives. I guess I like my setups extremely barebone.

ryandrake 14 hours ago||
Yea, my philosophy for self-hosting is "use the smallest amount of software you can in order to do what you really need." So for me, sugar X on top of fundamental functionality Y is always rejected in favor of just configuring Y.
SchemaLoad 14 hours ago||||
Tailscale is Wireguard but it automatically sets everything up for you, handles DDNS, can punch through NAT and CGNAT, etc. It's also running a Wireguard server on every device so rather than having a hub server in the LAN, it directly connects to every device. Particularly helpful if it's not just one LAN you are trying to connect to, but you have lots of devices in different areas.
Frotag 15 hours ago||||
I always assumed it was because a lot of ISPs use CGNAT and using tailscale servers for hole punching is (slightly) easier than renting and configuring a VPS.
BatteryMountain 7 hours ago||||
Setting up wireguard manually can be a pain in the butt sometimes. Tailscale makes it super easy but then your info flows through their nodes.
navigate8310 11 hours ago||||
Tailscale is able to punch holes through CGNAT, which vanilla WireGuard cannot.
mfcl 15 hours ago||||
It's plug and play.
Forgeties79 14 hours ago||
And some people may not value that but a lot of people do. It’s part of why Plex has become so popular and fewer people know about Jellyfin. One is turnkey, the other isn’t.

I could send a one page bullet point list of instructions to people with very modest computer literacy and they would be up and running in under an hour on all of their devices with Plex in and outside of their network. From that point forward it’s basically like having your own Netflix.

atmosx 15 hours ago||||
You don’t have to run the control plane, and you don’t have to manage DNS and SSL keys for the DNS entries. Additionally, the RBAC is pretty easy.

All of these are manageable through other tools, but it’s a more complicated stack to keep up with.

Skunkleton 15 hours ago||||
Yes, that is really all it is.
lelandbatey 11 hours ago|||
If Plex is "just file sharing" then I guarantee you'd find Tailscale "just WireGuard".

I enjoy that relative "normies" can depend on it/integrate it without me having to go through annoying bits. I like that it "just works" without requiring loads of annoying networking.

For example, my aging mother just got a replacement computer and I am able to make it easy to access and remotely administer by just putting Tailscale on it, and have that work seamlessly with my other devices and connections. If one day I want to fully self-host, then I can run Headscale.

miki123211 4 hours ago|||
Now I wish there was some kind of global, single-network version of Tailscale...

TS is cool if you have a well-defined security boundary. This is you / your company / your family, they should have access. That is the rest of the world, they should not.

My use case is different. I do occasionally want to share access to otherwise personal machines around. Tailscale machine sharing sort of does what I want, but it's really inconvenient to use. I wish there was something like a Google Docs flow, where any Tailscale user could attempt to dial into my machine, but they were only allowed to do so after my approval.

PLG88 4 hours ago|||
You have more or less described OpenZiti. Just mint a new identity/JWT for the user, create a service, and voilà: only that user has access to your machine. Fully open source and self-hostable.
fartfeatures 4 hours ago|||
Take a look at Zrok it might be what you want: https://zrok.io
MattSayar 10 hours ago|||
Just be sure to run it with --accept-dns=false otherwise you won't have any outbound Internet on your server if you ever get logged out. That was annoying to find out (but easy to debug with Claude!)
throwup238 3 hours ago|||
There’s also Cloudflare tunnels for stuff that you want to be available to the internet but dont want to open ports and deal with that. You can add an auth policy that only works with your email and Github/whatever SSO.
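The routing side of a tunnel is a short config file (a sketch; the tunnel UUID and hostnames are placeholders, and the SSO/email auth policy is configured separately in the Cloudflare dashboard):

```yaml
# ~/.cloudflared/config.yml (sketch; IDs/hostnames are placeholders)
tunnel: <tunnel-uuid>
credentials-file: /root/.cloudflared/<tunnel-uuid>.json
ingress:
  - hostname: photos.example.com
    service: http://localhost:8080   # LAN service; no inbound ports opened
  - service: http_status:404         # required catch-all rule
```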
PeterStuer 6 hours ago|||
These are two very separate issues. Tailscale or other reverse proxies will give you access from the WAN.

Claude Code or other assistants will give you conversational management.

I already do the former (using Pangolin). I'm building towards the latter, but first I need to be 100% sure I can have perfect rollback and containment across the full stack CC could influence.

lee_ars 1 hour ago||
I've started experimenting with Claude Code, and I've decided that it never touches anything that isn't under version control.

The way I've put this into practice is that instead of letting Claude loose on production files and services, I keep a local repo containing copies of all my service config files, with a CLAUDE.md file explaining what each is for, the actual host each file/service lives on, and other important details. If I want to experiment with something ("Let's finally get around to planning out and setting up kea-dhcp6!"), Claude makes its suggestions and changes in my local repo, and then I manually copy the config files to the right places, restart services, and watch to see if anything explodes.

Not sure I'd ever be at the point of trusting agentic AI to directly modify in-place config files on prod systems (even for homelab values of "prod").
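A sketch of what such a repo might look like (paths, hosts, and file names invented for illustration):

```
configs/
├── CLAUDE.md              # "nginx/nginx.conf deploys to web1:/etc/nginx/,
│                          #  restart with `systemctl restart nginx`;
│                          #  kea/kea-dhcp6.conf lives on gw0; never edit prod"
├── nginx/nginx.conf
└── kea/kea-dhcp6.conf
```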

PaulKeeble 15 hours ago|||
It's especially important in the CGNAT world that has been created, and given the enormous slog that the IPv6 rollout has ultimately become.
SchemaLoad 14 hours ago|||
Yeah same story for me. I did not trust my sensitive data on random self hosting apps with no real security team. But now I can put the entire server on the local network only and split tunnel VPN from my devices and it just works.

LLMs are also a huge upgrade here since they are actually quite competent at helping you set up servers.

BatteryMountain 7 hours ago|||
Tailscale is a good first step, but it's best to configure WireGuard directly on your router. You can try Headscale, but it seems to be more of a hobby project, so native WireGuard is the only viable path. Most router OSes support WireGuard these days too. You can ask Claude to sanity-check your configuration.
comrade1234 15 hours ago|||
I just have a vpn server on my fiber modem/router (edgerouter-4) and use vpn clients on my devices. I actually have two vpn networks - one that can see the rest of my home network (and server) and the other that is completely isolated and can't see anything else and only does routing. No need to use a third-party and I have more flexibility
JamesSwift 12 hours ago|||
Just use subpath routing and fail2ban and I'm very comfortable exposing my home setup to the world.

The only thing served on / is a hello world nginx page. Everything else you need to know the randomly generated subpath route.
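That layout can be sketched in nginx roughly like this (the subpath, backend port, and cert details are made up; the real path would be randomly generated):

```nginx
# nginx sketch (hypothetical random subpath; cert directives omitted)
server {
    listen 443 ssl;

    location = / {
        return 200 "hello world\n";      # decoy root page
    }

    location /x9q3k7f2/grafana/ {        # unguessable prefix
        proxy_pass http://127.0.0.1:3000/;
    }
}
```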

Melatonic 13 hours ago|||
Why not cloudflare tunnels ?
mobilio 6 hours ago||
CF tunnels are game changers for me!
dangoodmanUT 15 hours ago|||
Definitely, but to be fair, beyond that it's just Linux. Most people would need Claude Code to get whatever they want to run on Linux running reliably (systemd services, etc.).
dangoodmanUT 15 hours ago||
I'm still waiting for ECC mini PCs; then I'll go all in on local DBs too.
aaronax 10 hours ago||
Supermicro has some low power options such as https://www.supermicro.com/en/products/system/Mini-ITX/SYS-E...
znpy 5 hours ago|||
> The biggest reason I had not to run a home server was security: I'm worried that I might fall behind on updates and end up compromised.

In my experience this is much less of an issue depending on your configuration and what you actually expose to the public internet.

OS-side, as long as you pick a good server OS (for me that's Rocky Linux), you can safely update once every six months.

Applications-wise, i try and expose as little as possible to the public internet and everything exposed is running in an unprivileged podman container. Random test stuff is only exposed within the vpn.

Also, Tailscale is not even a hard requirement: I run OpenVPN and that works as well, on my iPhone too.

The truly differentiating factor is methodological, not technological.

shadowgovt 14 hours ago|||
Besides the company that operates it, what is the big difference between Tailscale and Cloudflare tunnels? I've seen Tailscale mentioned frequently but I'm not quite sure what it gets for me. If it's more like a VPN, is it possible to use on an arbitrary device like a library kiosk?
ssl-3 14 hours ago|||
I don't use Cloudflare tunnels for anything.

But Tailscale is just a VPN (and by VPN I mean something more like "connect to the office network" than "NordVPN"). It provides a private network on top of the public network, so that member devices of that VPN can interact together privately.

Which is pretty great: It's a simple and free/cheap way for me to use my pocket supercomputer to access my stuff at home from anywhere, with reasonable security.

But because it happens at the network level, you (generally) need to own the machines that it is configured on. That tends to exclude using it in meaningful ways with things like library kiosks.

vachina 12 hours ago|||
You can self host a tailscale network entirely on your own, without making a single call to Tailscale Inc.

Your cloudflare tunnel availability depends on Cloudflare’s mood of the day.

mtoner23 5 hours ago||
People are way too worried about security, imo. Statistically, no one is targeting you to be hacked. By the time you are important and valuable enough for your home equipment to be a target, you would have hired someone else to manage this for you.
subscribed 4 hours ago|||
Oh, sure, no one is targeting me specifically.

It's only swarms of bots and scripts going through the entire internet, including me.

iptables and fail2ban should be installed pretty early, and then - just watch the logs.
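For the fail2ban half, a minimal SSH jail is only a few lines. A sketch — exact paths and sensible defaults vary by distro:

```shell
# Ban an IP for an hour after 5 failed SSH logins within 10 minutes.
sudo tee /etc/fail2ban/jail.local <<'EOF' >/dev/null
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
EOF
sudo systemctl restart fail2ban
sudo fail2ban-client status sshd   # shows currently banned IPs
```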

fetzu 5 hours ago|||
I think this is a very dangerous perspective. A lot of attacks on infra are automated; just try exposing a Windows XP machine to the internet for a day and see how much malware you end up with. If you leave your security unchecked, you will end up attacked: not by someone targeting you specifically, but having all your data encrypted for ransom would still be a problem for you (even if the attacker doesn't care about YOUR data specifically).
thrownawaysz 13 hours ago||
I went down the self host route some years ago but once critical problems hit I realized that beyond a simple NAS it can be a very demanding hobby.

I was in another country when there was a power outage at home. My internet went down, and the server restarted but couldn't reconnect anymore because the optical network router also had problems after the power outage. I could ask my folks to restart things and turn them on and off, but nothing more than that. So I couldn't reach my Nextcloud instance and other stuff. Maybe an uninterruptible power supply could have helped, but the more I thought about it afterwards, it just didn't seem worth the hassle anymore. Add a UPS, okay. But why not add a dual-WAN failover router in case the internet goes down again? etc. It's a bottomless pit (like most hobbies tbh)

Also (and that's a me problem maybe) I was using Tailscale but I'm more "paranoid" about it nowadays. Single point of failure service, US-only SSO login (MS, Github, Apple, Google), what if my Apple account gets locked if I redeem a gift card and I can't use Tailscale anymore? I still believe in self hosting but probably I want something even more "self" to the extremes.

zrail 12 hours ago||
My spouse and I work at home and after the first couple multi-day power outages we invested in good UPSs and a whole house standby generator. Now when the power goes out it's down for at most 30 seconds.

This also makes self-hosting more viable, since our availability is constrained by internet provider rather than power.

rootusrootus 12 hours ago|||
Yeah we did a similar thing. Same situation, spouse and I both work from home, and we got hit by a multiple day power outage due to a rare severe ice storm. So now I have an EV and a transfer switch so I can go for a week without power, and I have a Starlink upstream connection in standby mode that can be activated in minutes.

Of course that means we’ll not have another ice storm in my lifetime. My neighbors should thank me.

VTimofeenko 11 hours ago|||
We had a 5 day outage last year, got a generator at the tail end of the windy season and made exact same jokes.

A year later another atmospheric river hit and we had a 4 hour outage. No more jokes.

Make sure to run that generator once every few months with some load to keep it happy.

rootusrootus 10 hours ago||
Well, it's an EV with a big inverter, not a generator, but I get your point. And I do periodically fire it up and run the house on it for a little while, just to exercise the connection and maintain my familiarity with it in case I need to use it late at night in the dark with an ice storm breaking all the trees around us.
kiddico 11 hours ago|||
Thanks for taking one for the team.
gorgoiler 6 hours ago||||
2025 was the year of LiFePO4 power packs for me and my family. Absolute game changers: 1000Wh of capacity with a multi-socket inverter and UPS-like failover. You get less capacity than with a gas genny, but the simplicity and lack of fumes add back a lot of value. If it's sunny you can also make your own fuel.

https://www.ankersolix.com/ca/products/f2600-400w-portable-s...

_the_inflator 1 hour ago|||
I had the same revelation.

Self-hosting sounds so simple, but if you consider all the critical factors involved, it becomes a full-time job. You own your server. In every regard.

And security is only one crucial aspect. How spam filters react to your IP is another story.

In the end I cherish the dream but rely on third-party server providers.

advael 12 hours ago|||
Yea, I think my own preference for self-hosting boils down to a distrust of continuous dependencies on services controlled by a company, and a desire to minimize such dependencies. While there are FOSS and self-hostable alternatives to Tailscale or indeed Claude Code, using those services themselves simply replaces old dependencies on externally-controlled cloud services with new ones
Aurornis 9 hours ago|||
I thought I was smart because I invested in UPS backup from the start.

Then 5 years later there was a power outage and the UPS lasted for about 10 seconds before the batteries failed. That's how I learned about UPS battery maintenance schedules and the importance of testing.

I have a calendar alert to test the UPS. I groan whenever it comes up because I know there's a chance I'm going to discover the batteries won't hold up under load any more, which means I not only have to deal with the server losing power but I have to do the next round of guessing which replacement batteries are coming from a good brand this time. Using the same vendor doesn't even guarantee you're going to get the same quality when you only buy every several years.

Backup generators have their own maintenance schedule.

I think the future situation should be better with lithium chemistry UPS, but every time I look the available options are either exorbitantly expensive or they're cobbled together from parts in a way that kind of works but has a lot of limitations and up-front work.

kalaksi 4 hours ago||
My APC UPS self-tested and monitored battery status automatically. Then started to endlessly beep when it noticed the battery needed replacing (could be muted though). Eventually, I stopped using UPS since I rarely needed it and it was just another thing to keep and maintain.
4k93n2 5 hours ago|||
Syncthing might be worth looking into. I've been using it more and more the last few years for anything I use daily: things like KeePass, plain-text notes, calendars/contacts, and RSS feeds. Everything else I'm "self hosting" is stuff I might only use a few times a week, so it's no big deal if I lose access.

It's so much simpler when you have the files stored locally; syncing between devices is just something that can happen whenever. Anything running on a server needs user permissions, wifi, a router, etc. It's a lot of complexity for very little gain.

Although keep in mind I'm the only one using all of this stuff. If I needed to share things with other people, Syncthing gets a bit trickier and a central server starts to make more sense.

digiown 10 hours ago|||
Tailscale has passkey-only account support but requires you to sign up in a roundabout way (first use an SSO, then invite another user, throw away the original). The tailnet lock feature also protects you to some extent, arguably more so than solutions involving self-hosting a coordination server on a public cloud.
timwis 7 hours ago|||
You can self-host Pocket ID (or another OIDC auth service) on a tiny $1/mo box and use that as your identity provider for Tailscale. Here's a video explaining how: https://www.youtube.com/watch?v=sPUkAm7yDlU
CGamesPlay 12 hours ago|||
I really enjoy self-hosting on rented compute. It's theoretically easy to migrate to an on-prem setup, but I don't have to deal with the physical responsibilities while it's in the cloud.
Gigachad 9 hours ago||
Depends what you are trying to host. For many people it’s either to keep their private data local, or stuff that has to be on the home network (pi hole / home assistant)

If you just want to put a service on the internet, a VPS is the way to go.

baq 7 hours ago|||
I went with home assistant and zigbee smart plugs to restart the router and the optical terminator.
gessha 10 hours ago|||
Tailscale recently added passkey log in. Would that alleviate the SSO login?

Tailscale also has a self-hosted version I believe.

altmanaltman 11 hours ago|||
I mean you're right in terms of it being a demanding hobby. The question is, is it worth the switch from other services.

I have 7 computers on my self-hosted network and not all of them are on-prem. With a bit of careful planning, you can essentially create a system that will stay up regardless of local fluctuations etc. But it is a demanding hobby, and if you don't enjoy the IT stuff, you'll probably have a pretty bad time doing it. For most normal consumers, self-hosting is not really an option and isn't worth the cost of switching over. I justify it because it helps me understand how things work and tangentially improves my professional skills as well.

JamesSwift 12 hours ago|||
Well, it's not a bottomless pit really. Yes, you need a UPS. That's basically it though.
Aurornis 9 hours ago|||
UPS batteries don't last forever.

So now you need to test them regularly. And order new ones when they're not holding a charge any more. Then power down the server, unplug it, pull the UPS out, swap batteries, etc.

Then even when I think I've got the UPS automatic shutdown scripts and drivers finally working just right under linux, a routine version upgrade breaks it all for some reason and I'm spending another 30 minutes reading through obscure docs and running tests until it works again.
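For reference, the usual moving parts on Linux are NUT (Network UPS Tools). A sketch, assuming a UPS already configured under the name `myups` (that name is an example, not a default):

```shell
upsc myups@localhost battery.charge   # poll the charge level via the NUT daemon
upsc myups@localhost ups.status       # "OL" = on line power, "OB" = on battery
# upsmon runs SHUTDOWNCMD from upsmon.conf when the battery goes critical;
# "-c fsd" forces that code path so you can test it before a real outage:
sudo upsmon -c fsd
```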

joshvm 7 hours ago||||
My home server doesn't need to be high availability, and the BIOS is set to restore whatever power state it was in prior to power loss. I don't have a UPS. However, we were recently hit with a telco outage while visiting family out of town. As far as I can tell there wasn't a power outage, but it took a hard reboot of the modem to get connectivity back. Frustrating, because it meant no checking home automation/security and of course no access to the servers. I'm not at a point where my homelab is important enough that I would invest in a redundant WAN, though.

I've also worked in environments where the most pragmatic solution was to issue a reboot periodically and accept the minute or two of (external) downtime. Our problem is probably down to T-Mobile's lousy consumer hardware.

bisby 11 hours ago|||
Power outages here tend to last an hour or more. A UPS doesn't last forever, and depending on how much home compute you have, might not last long enough for anything more than a brief outage. A UPS doesn't magically solve things. Maybe you need a home generator to handle extended outages...

How bottomless of a pit it becomes depends on a lot of things. It CAN become a bottomless pit if you need perfect uptime.

I host a lot of stuff, but nextcloud to me is photo sync, not business. I can wait til I'm home to turn the server back on. It's not a bottomless pit for me, but I don't really care if it has downtime.

jmb99 9 hours ago|||
Fairly frequently, 6kVA UPSs come up for sale locally to me, for dirt cheap (<$400). Yes, they're used, and yes, they'll need ~$500 worth of batteries immediately, but they will run a "normal" homelab for multiple hours. Mine will keep my 2.5kW rack running for at least 15 minutes - if your load is more like 250W (much more "normal" imo) that'll translate to around 2 hours of runtime.
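Those runtime figures are internally consistent, ignoring inverter losses (which shorten real runtimes a bit):

```shell
# Watt-hours drawn in 15 minutes at a 2.5 kW load:
echo $((2500 * 15 / 60))    # 625
# Minutes that same energy lasts at a 250 W load:
echo $((625 * 60 / 250))    # 150, i.e. about 2.5 hours
```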

Is it perfect? No, but it's more than enough to cover most brief outages, and also more than enough to allow you to shut down everything you're running gracefully, after you used it for a couple hours.

Major caveat, you'll need a 240V supply, and these guys are 6U, so not exactly tiny. If you're willing to spend a bit more money though, a smaller UPS with external battery packs is the easy plug-and-play option.

> How bottomless of a pit it becomes depends on a lot of things. It CAN become a bottomless pit if you need perfect uptime.

At the end of the day, it's very hard to argue you need perfect uptime in an extended outage (and I say this as someone with a 10kW generator and said 6kVA UPS). I need power to run my sump pumps, but that's about it - if power's been out for 12-18 hours, you better believe I'm shutting down the rack, because it's costing me a crap ton of money to keep running on fossil fuels. And in the two instances of extended power outages I've dealt with, I haven't missed it - believe it or not, there's usually more important things to worry about than your Nextcloud uptime when your power's been out for 48 hours. Like "huh, that ice-covered tree limb is really starting to get close to my roof."

Aurornis 9 hours ago||
This is a great example of how the homelab bottomless pit becomes normalized.

Rewiring the house for 240V supply and spending $400+500 to refurbish a second-hand UPS to keep the 2500W rack running for 15 minutes?

And then there's the electricity costs of running a 2.5kW load, and then cooling costs associated with getting that much heat out of the house constantly. That's like a space heater and a half running constantly.

bongodongobob 11 hours ago|||
[dead]
cyberax 13 hours ago|||
A long time ago, it was popular for ISPs to offer a small amount of space for personal websites. We might see a resurgence of this, but with cheap VPSes. Eventually.
SchemaLoad 13 hours ago||
Free static site hosting and cheap VPSs already exist. Self hosting is less about putting sites on the internet now and more about replicating cloud services locally.
Imustaskforhelp 12 hours ago||
VPSes are so dirt cheap that some providers only stay viable because most people don't use 100% of the resources they're allocated; between that oversubscription and economies of scale, VPSes are effectively subsidized.

A cheap VPS with 1 GB of RAM and everything can cost around $10-11 per year, and something like Hetzner is cheap as well at around $30 a year or $3 per month, with solid reliability.

If anything, people self-host because they own the servers, so upgrading becomes easier (but there are VPSes targeting niches that people should look at, like storage VPSes, high-perf VPSes, high-mem VPSes, etc., which can sometimes be dirt cheap for your specific use case)

The other reason, I feel, is the ownership aspect. I own this server, I can upgrade it without breaking the bank, and with complete ownership you don't have to enforce T&Cs so much. Want to provide VPSes to your friends, family, or people on the internet? Set up a Proxmox or Incus server and do it.

Most VPS providers either outright ban reselling or, if they allow it, might ban your whole account for something someone else did, so some things are in jeopardy if you do this, simply because they have to find automated ways of dealing with abuse at scale. Some cloud providers are more lenient than others about bans (OVH is relaxed in this area, whereas Hetzner, for better or worse, is strict in its enforcement).

SchemaLoad 11 hours ago||
Self hosting for me is important because I want to secure the data. I've got my files and photos on there, I want to have the drive encrypted with my key. Not just sitting on a drive I don't have any control over. Also because it plugs in to my smart home devices which requires being on the local network.

For something like a website I want on the public internet with perfect reliability, a VPS is a much better option.

neoromantique 3 hours ago|||
For this reason I have a hybrid homelab, with most stuff hosted at home, but critical things I need to have running are on a VM in the cloud. Best of both worlds.
ekianjo 11 hours ago|||
> I was in another country when there was a power outage at home.

If you are going to be away from home a lot, then yes, it's a bottomless pit. Because you have to build a system that does not rely on the possibility of you being there, anytime.

newsclues 11 hours ago|||
I have a desktop I use, but if I had to start again, I'd build a low-power R Pi or N100-type system that can be powered by a mobile battery backup with solar (flow type with sub-10ms switching and good battery chemistry for long life) and can handle the basic homelab tasks. Plan for power outages from the get-go rather than assuming unlimited and cheap power.
Imustaskforhelp 12 hours ago|||
Hey, if Tailscale is something you are worried about, there are open source alternatives to it as well. But I think if your purpose is just to forward a single server port, wouldn't SSH by itself be enough for you?

You can even self-host Tailscale via Headscale, though I don't know how that experience goes, but there is genuinely good open source software like NetBird and ZeroTier as well.

You could also, if interested, just go the plain WireGuard route. It really depends on your use case, but for you, the SSH route seems reasonable.
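If you do go the plain WireGuard route, the server side is just a small config file. A sketch with placeholder keys and addresses, not a drop-in file:

```shell
# Placeholder keys: generate real ones with `wg genkey | tee key | wg pubkey`.
sudo tee /etc/wireguard/wg0.conf <<'EOF' >/dev/null
[Interface]
Address    = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
PublicKey  = <client-public-key>
AllowedIPs = 10.8.0.2/32
EOF
sudo wg-quick up wg0
```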

You could even use this with Termux on Android plus SSH access via Dropbear, I think, if you want. Tailscale is mainly for convenience, though, and for not having to deal with NATs and everything.

But I feel like your home server might be behind a NAT, and in that case, what I recommend is either A) run it over Tor or https://gitlab.com/CGamesPlay/qtm (which uses iroh's infrastructure, though you can self-host that too), or B (recommended): get an unlimited-traffic cheap VPS (I recommend UpCloud, OVH, or Hetzner), which would cost around $3-4 per month, and then install something like remotemoe https://github.com/fasmide/remotemoe or anything similar, effectively acting as a proxy.
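The VPS-as-front-door idea in option B can also be done with nothing but OpenSSH. A sketch — hostname and ports are placeholders:

```shell
# From the NATed home server: hold a reverse tunnel open to the VPS so that
# my-vps.example.com:8080 forwards to local port 8080.
ssh -N -R 0.0.0.0:8080:localhost:8080 user@my-vps.example.com
# For the port to be reachable from outside the VPS itself, sshd_config there
# needs "GatewayPorts yes" (or "clientspecified").
# autossh keeps the tunnel alive across connection drops:
autossh -M 0 -N -R 0.0.0.0:8080:localhost:8080 user@my-vps.example.com
```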

Sorry if I went a little overkill, lol. I have played with these things too much, so I may be overarchitecting, but if you genuinely want self-hosting taken to the extreme, Tor .onion services or I2P might benefit you, though even buying a VPS can be a good step up.

> I was in another country when there was a power outage at home. My internet went down, the server restart but couldn't reconnect anymore because the optical network router also had some problems after the power outage. I could ask my folks to restart, and turn on off things but nothing more than that. So I couldn't reach my Nextcloud instance and other stuff. Maybe an uninterruptible power supply could have helped but the more I was thinking about it after just didn't really worth the hassle anymore. Add a UPS okay. But why not add a dual WAN failover router for extra security if the internet goes down again? etc. It's a bottomless pit (like most hobbies tbh)

Laptops have a built-in UPS and are cheap; laptops and refurbished servers are good entry points imo. Sure, it's a bottomless pit, but the benefits are well worth it, and at some point you have to look at the trade-offs; personally, laptops and refurbished or resale servers are that trade-off for me. In fact, I used to run a git server on an Android tab for some time, but I've been too lazy to figure out whether I want it charging permanently or what.

CGamesPlay 12 hours ago||
Thanks for the shout-out! If you have any experiential reports using QTM, I'd love to hear them!
Imustaskforhelp 11 hours ago||
Oh yeah, this is a really funny story considering what thread we're on, but I remember asking ChatGPT or Claude or Gemini or whatever xD to make QTM work, and none of them could figure it out.

But I think in the end what worked was my frustration taking over: I just copy-pasted the commands from the readme, and if I remember correctly, they just worked.

This is really ironic considering what thread we're on, but in the end, good readmes make self-hosting on a home server easier and fun xD

(I don't exactly remember the ChatGPT conversations; perhaps they helped a bit or not, but I am 99% sure it was your readme that ended up helping, and ChatGPT etc. in fact took an hour or more and genuinely frustrated me, from what I vaguely remember)

I hope QTM reaches more traction. It's built on solid primitives.

One thing I genuinely want you to take a look at, if possible, is creating an additional piece of software or adding functionality to replace the careful dance we currently have to do to make it work (we have to send two large pieces of data between two computers; I had to use a hacky solution like piping-server or wormhole itself for it).

So what I am asking is: for the initial node pairing (the ticket? sorry, I forgot the name of the primitive) between A and B, could you use wormhole itself, so that instead of the two sides having to send large chunks of data to each other, they can just send 6 words or similar?

Wormhole: https://github.com/magic-wormhole/magic-wormhole

I even remember building some CLI of my own for something like this using ChatGPT xD, but in the end I gave up because I wasn't familiar with the codebase or with how to make the two work together. I hope you can add it; I sincerely hope so.

Another minor suggestion I feel like giving: please add an asciinema demo. I will contribute an asciinema recording between two computers if you want, but a working demo from zero to running really, really would have saved me a few hours.

QTM has lots of potential. Iroh is so sane: it can run directly on top of IPv4 and talk directly when possible, but it can also break through NATs, and you can even self-host the middle part. I had thought about building such a project myself when I first discovered QTM, so you can just imagine my joy when I found QTM in one of your comments a long time ago, for what it's worth.

Wishing the best of luck to your project! The idea is very fascinating. I would appreciate a visual demo a lot, though, and I hope we can discuss more!

Edit: I remember that the QTM docs felt really complex to me personally when all I wanted was one computer's port mapped to another computer's port. I think what helped in the end was the 4th comment, if I remember correctly; I might have used LLM assistance, I genuinely don't remember, but it definitely took me an hour or two to figure things out. That's okay, since I still feel the software is definitely a positive, and this might have been a skill issue on my side. But I just want to ask if you can add asciinema docs; I can't stress enough how much they can genuinely help an average person figure out the product.

(Then slowly move towards the more complex setups, with asciinema demos for each of them, if you wish)

Once again, good luck! I can't praise QTM enough, and I still strongly urge everyone to try it once (https://gitlab.com/CGamesPlay/qtm), since it's highly relevant to the discussion.

CGamesPlay 8 hours ago||
You aren't actually supposed to ever need to deal with tickets manually, unless you are trying to get a tunnel between two machines and neither can SSH into the other. It could be streamlined with something like Magic Wormhole, though. I'll add that to the backlog and see if there's interest. The normal way is to use SSH / docker exec / any remote shell to let QTM swap the tickets over it.

I've added an asciinema to the README now <https://asciinema.org/a/z2cdsoVDVJu0gIGn>, showing the manual connection steps. Thanks for the kind words. Hope you find it useful!

Imustaskforhelp 4 hours ago||
Well, my use case is connecting two servers that are both behind NAT. If I could get SSH access, say, I could have simply port forwarded in the first place.

Wow, the asciinema is really good and very professional. Thanks for creating it; I found it very helpful (in the sense that if I were ever to repeat my experiment, I now have your asciinema to follow), and I hope more people use it.

> It could be streamlined with something like Magic Wormhole, though. I'll add that to the backlog and see if there's interest

To be really honest, it's not that big of a deal, considering one can do that on their own; I just had this idea for my own convenience when I was using QTM.

I really like QTM a lot! Thanks once again for building it. I'll try to use it more often and give you more feedback when possible from now on.

tehlike 13 hours ago||
Starlink backup sounds fun now!
thrownawaysz 13 hours ago||
Way too expensive for that imo (but then again, might as well just go all in). A 5G connection is probably more than enough.
Imustaskforhelp 12 hours ago||
Honestly, I think there must be adapters that can use unlimited 5G SIM data plans as a fallback network, or perhaps even as the primary one.

They would be cheaper than Starlink, fwiw, and such connections are usually quite robust.

That being said, one can use Tailscale or Cloudflare tunnels to expose the server even if it's behind NAT. You mentioned in your original comment that you might be against that, for paranoia reasons, and that's completely fine, but there are ways to do it if you want, which I have talked about in depth in my other comment here.

numpad0 12 hours ago||
Some SOHO branch office routers like Cisco ISR models can take cellular dongles and/or SIM. Drivers for supported models are baked into ROM and everything works through CLI.
Imustaskforhelp 12 hours ago|||
Man, I have this vague memory of being at a neighbour's house when we were all kids and the internet wasn't that widespread (I was really young), and I remember they had this dongle they inserted a SIM card into for network access. That's why this idea has persisted in my head in the first place.

I don't know the name of that kind of dongle, though; it was similar to those SD-card-to-USB things, ykwim. I'd appreciate it if someone could help me find it, if possible.

But also, yeah, your point is fascinating as well. Y'know, another benefit of doing this is that, at least in my area, 5G (500-700 Mbps) is really cheap ($10-15 per month) with unlimited bandwidth, while on the ethernet side of things I get 10x less bandwidth (40-80 Mbps), so much so that my brother and I genuinely considered this idea,

except that we thought that instead of buying a router like this, we'd use an old phone, insert the SIM in it, and access the network through that.

tehlike 9 hours ago|||
I would do this on OPNSense - with a separate WLAN. Fairly easy to do.
InfinityByTen 4 hours ago||
I was just thinking I should write something about this, because the word needs spreading.

I cannot say how happy I am configuring my own Immich server on a decade-old machine. I just feel empowered. Despite my 9 years of software development, I never got into the nitty-gritty of networking and VPNs, and I always hit something non-standard while installing an open source package; without all of this custom guidance, I would always give up after a couple of hours of pulling my hair out.

I really want to go deeper, and it finally feels like this could be a hobby.

PS: The rush was so great I was excitedly talking to my wife how I could port our emails away from google, considering all of the automatic opt in for AI processing and what not. The foolhardy me thought of even sabbatical breaks to work on long pending to-do's in my head.

lee_ars 1 hour ago||
> PS: The rush was so great I was excitedly talking to my wife how I could port our emails away from google, considering all of the automatic opt in for AI processing and what not. The foolhardy me thought of even sabbatical breaks to work on long pending to-do's in my head.

I've been email self-hosting for a decade, and unfortunately, self-hosting your email will not help with this point nearly as much as it seems on first glance.

The reason is that as soon as you exchange emails with anyone using one of the major email services like gmail or o365, you're once again participating in the data collection/AI training machine. They'll get you coming or they'll get you going, but you will be got.

InfinityByTen 1 hour ago||
Words of wisdom. Hear hear!
Maledictus 2 hours ago||
Email is endgame, I suggest you get more experience self hosting in other areas first.
InfinityByTen 1 hour ago||
I concur. I did mention there was a rush and foolhardiness. That's my mid 30s excitement. Let me revel a bit :P

I do want to be able to take control; with photos, Google not giving me a folder view to manage them was the last straw that pushed me deep into the self-hosted world. I just want to de-Google as much as is reasonable.

tbyehl 56 minutes ago||
My favorite genre of post in r/homelab and r/selfhosted this past year has been "I used AI to set all this stuff up and something broke so I asked AI to fix it and now all my data is gone."

There are so many NAS + Curated App Catalog distros out there that make self-hosting trivial without needing to Vibe SysAdmin.

valcron1000 10 hours ago||
> When something breaks, I SSH in, ask the agent what is wrong, and fix it.

> I am spending time using software, learning

What are you actually learning?

PSA: OP is a CEO of an AI company

enos_feedler 10 hours ago|
You are learning what it takes to keep a machine up and running. You still witness the breakage. You can still watch the fix. You can review what happened. What your question implies is that, compared to doing things without AI, you are learning less (or perhaps you believe nothing). You definitely are learning less about mucking around in Linux. But if the alternative was never running a Linux machine at all because you didn't want to deal with it, you are learning infinitely more.
croes 8 hours ago||
How can you review it if you don't know the subject in the first place?

You can watch your doctor, your plumber, or your car mechanic, and you still wouldn't know if they did something wrong if you don't know the subject as such.

doctoboggan 6 hours ago||
You can learn a lot from watching your doctor, plumber or mechanic work, and you could learn even more if you could ask them questions for hours without making them mad.
defrost 6 hours ago||
You learn less from watching a faux-doctor, faux-plumber, faux-mechanic and learn even less by engaging in their hallucinations without a level horizon for reference.

Bob the Builder doesn't convey much about drainage needs for foundations and few children think to ask. Who knows how AI-Bob might respond.

fhennig 13 hours ago||
I think it's great that people are getting into self-hosting, but I don't think it's _the_ solution to get us off of big tech.

Having others run a service for you is a good thing! I'd love to pay a subscription for a service, but one run as a cooperative, where I'm not just paying a subscription fee; instead I'm a member and I get to decide what gets done as well.

This model works so well for housing, where the renters are also the owners of the building. Incentives are aligned perfectly, rents are kept low, the building is kept intact, no unnecessary expensive stuff added. And most importantly, no worries of the building ever getting sold and things going south. That's what I would like for my cloud storage, e-mail etc.

sroerick 8 hours ago|
Hey, I was thinking about this same idea lately. What exactly would you want hosted by somebody?

I was thinking about what if your "cloud" was more like a tilde.club, with self hosted web services plus a Linux login. What services would you want?

Email and cloud make sense. I think a VPN and Ad Blocker would too. Maybe Immich and music hosting? Calendar? I don't know what people use for self hosting

Humorist2290 15 hours ago||
Fun. I don't agree that Claude Code is the real unlock, but mostly because I'm comfortable with doing this myself. That said, the spirit of the article is spot on. The accessibility to run _good_ web services has never been better. If you have a modest budget and an interest, that's enough -- the skill gap is closing. That's good news I think.

But Tailscale is the real unlock in my opinion. Having a slot machine cosplaying as sysadmin is cool, but being able to access services securely from anywhere makes them legitimately usable for daily life. It means your services can be used by friends/family if they can get past an app install and login.

I also take minor issue with running Vaultwarden in this setup. Password managers are maximally sensitive and hosting that data is not as banal as hosting Plex. Personally, I would want Vaultwarden on something properly isolated and locked down.

heavyset_go 15 hours ago|
I believe Vaultwarden keeps data encrypted at rest with your master key, so some of the problems inherent to hosting such data can be mitigated.
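To illustrate why encryption at rest mitigates the hosting risk: in the Bitwarden-style scheme (which Vaultwarden implements the server side of), the encryption key is derived client-side from the master password, and the server only ever receives a further-hashed value for authentication. A simplified sketch, assuming PBKDF2-SHA256 with the email as salt (iteration counts and salt handling are simplified; function names here are illustrative, not from any real client):

```python
import hashlib

def derive_master_key(master_password: str, email: str,
                      iterations: int = 600_000) -> bytes:
    # Derived on the client; used to encrypt the vault. Never sent to the server.
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(),
                               email.encode(), iterations)

def server_auth_hash(master_key: bytes, master_password: str) -> bytes:
    # One extra PBKDF2 round produces the value sent for login, so a
    # compromised server learns neither the password nor the vault key.
    return hashlib.pbkdf2_hmac("sha256", master_key,
                               master_password.encode(), 1)

key = derive_master_key("correct horse battery staple", "me@example.com")
auth = server_auth_hash(key, "correct horse battery staple")
assert key != auth  # the stored auth hash cannot decrypt the vault
```

So even if the Vaultwarden host is compromised, the attacker gets ciphertext plus an auth hash, not the vault key (assuming a strong master password).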

Humorist2290 15 hours ago||
I can believe this, and it's a good point. I believe Bitwarden does the same. I'm not against Vaultwarden in particular but against colocation of highly sensitive (especially orthogonally sensitive) data in general. It's part of a self-hoster's journey I think: backups, isolation, security, redundancy, energy optimization, etc. are all topics which can easily occupy your free time. When your partner asks whether your photos are more secure in Immich than Google, it can lead to an interesting discussion of nuances.

That said, I'm not sure if Bitwarden is the answer either. There is certainly some value in obscurity, but I think they have a better infosec budget than I do.

catlifeonmars 8 hours ago||
> I have flirted with self-hosting at home for years. I always bounced off it - too much time spent configuring instead of using. It just wasn't fun.

No judgement, but wanting to tinker/spend time on configuration is a major reason why many people do self-host.

reactordev 1 hour ago||
I just recently wrote my own agent that can run gdb, objdump, nasm, cc, make, and more.

Agents are powerful. Even more so with skills and command-line tools they can call to do things. You can even write custom tools (like I did) that allow for things like live debugging.
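A custom tool like this usually boils down to a thin wrapper the agent can invoke by name. A minimal sketch, assuming a hypothetical tool registry (the `TOOLS` dict and `disassemble` name are made up for illustration, not from any real agent framework):

```python
import shutil
import subprocess

TOOLS = {}

def tool(fn):
    # Register a function under its name so the agent loop can dispatch to it.
    TOOLS[fn.__name__] = fn
    return fn

@tool
def disassemble(path: str, max_lines: int = 40) -> str:
    # Wrap objdump so the agent gets a bounded chunk of disassembly as text.
    if shutil.which("objdump") is None:
        return "objdump not installed"
    out = subprocess.run(["objdump", "-d", path],
                         capture_output=True, text=True)
    return "\n".join(out.stdout.splitlines()[:max_lines])

# An agent loop would map the model's tool-call request onto the registry:
# result = TOOLS["disassemble"]("/bin/sh")
```

The agent loop then just feeds the returned string back to the model as the tool result.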

The Tailscale piece of this setup is key.

tietjens 1 hour ago|
This is very cool, and I'm doing something similar but without the Claude interface as the contact point for manipulating the server. What happens if one day Claude is down, or it becomes too expensive, or it is purchased by another company, etc.?

In that case you will be completely unable to navigate the infrastructure of a home server your life has become dependent on.

But a home server is always about your tolerance for risk and single points of failure. I'm personally willing to accept Tailscale, but I'm not willing to hand the manipulation of all my services directly over to Claude.

More comments...