Posted by websku 1 day ago

CLI agents make self-hosting on a home server easier and fun(fulghum.io)
726 points | 502 comments
piqufoh 1 day ago|
I'm working on something very similar, but I've found that if I'm not doing the work myself, I forget what has been set up and how it's running a lot faster.

For example - I have ZFS running with a 5-bay HDD enclosure, and I honestly can't remember any of the rules about importing/exporting to stop/start/add/remove pools, etc.

I have to write lots of clear notes and store them somewhere future me will find them - otherwise the system gets very flaky through my inability to remember what's active and what isn't. Running the service and having total control is fun, but it's a responsibility too.
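Those notes can double as a small self-documenting wrapper script, so the commands and the explanation live in the same place. A minimal sketch, assuming the pool is named `tank` (the pool name and the exact commands you need will differ):

```shell
#!/bin/sh
# pool.sh -- documented wrapper around the zpool commands I always forget.
# Hedged sketch: assumes a single pool named "tank".

pool() {
  case "${1:-help}" in
    # Flush writes and detach the pool cleanly before powering off the enclosure.
    stop)   zpool export tank ;;
    # Re-import the pool by name; plain `zpool import` lists importable pools.
    start)  zpool import tank ;;
    # Health overview: device states, scrub/resilver progress, errors.
    status) zpool status tank ;;
    *)      echo "usage: pool {start|stop|status}" ;;
  esac
}

pool "${1:-help}"
```

Future me then only has to remember `./pool.sh stop` before unplugging the enclosure.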

mvanbaak 22 hours ago||
This is the reason one should always ask the LLM to create scripts to complete the task. Asking it to do things directly is fine, but as you stated, you will forget. If instead you always ask 'Create a well-documented shell script to <your question here>', you get the documentation automatically. One could go a step further and ask it to produce a documented terraform/ansible/whatever setup in the tooling you prefer.
Draiken 23 hours ago|||
Write scripts for everything.

If you need to run the command once, you can now run it again in the future.

It's very tempting to just paste some commands (or ask AI to do it) but writing simple scripts like this is an amazing solution to these kinds of problems.

Even if the scripts get outdated and no longer work (maybe it's a new version of X) it'll give you a snapshot of what was done before.

ibizaman 18 hours ago|||
This is the reason I adore NixOS. The documentation is the code. Seriously.
Maledictus 1 day ago||
Which enclosure do you use, and can you recommend it?
mr-karan 1 day ago||
I've landed on a similar philosophy but with a slightly different approach to orchestration. Instead of managing everything interactively, I built a lightweight bash-based deployment system that uses rsync + docker compose across multiple machines.

The structure is dead simple: `machines/<hostname>/stacks/<service>/` with a `config.sh` per machine defining SSH settings and optional pre/post deploy hooks. One command syncs files and runs `docker compose up -d`.

I could see Claude Code being useful for debugging compose files or generating new stack configs, but having the deployment itself be a single `./deploy.sh homeserver media` keeps the feedback loop tight and auditable.
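The deploy wrapper itself can stay tiny. A hedged sketch of what such a script might look like - the `config.sh` contract (`SSH_HOST`, `pre_deploy`/`post_deploy` hook names) is my assumption for illustration, not the actual implementation:

```shell
#!/bin/sh
# deploy.sh -- sketch of an rsync + docker compose deploy across machines.
# Layout assumed: machines/<hostname>/stacks/<service>/ plus a per-machine config.sh.

stack_dir() { printf 'machines/%s/stacks/%s' "$1" "$2"; }

deploy() {
  machine="$1"; stack="$2"
  dir=$(stack_dir "$machine" "$stack")
  . "machines/$machine/config.sh"     # expected to set SSH_HOST and optional hooks
  if command -v pre_deploy >/dev/null; then pre_deploy; fi
  rsync -az --delete "$dir/" "$SSH_HOST:stacks/$stack/"
  ssh "$SSH_HOST" "cd stacks/$stack && docker compose up -d"
  if command -v post_deploy >/dev/null; then post_deploy; fi
}

if [ "$#" -eq 2 ]; then
  deploy "$1" "$2"                    # e.g. ./deploy.sh homeserver media
else
  echo "usage: ./deploy.sh <machine> <stack>" >&2
fi
```

The whole feedback loop stays in one readable file: sync the stack directory, restart the containers, done.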

Draiken 23 hours ago||
I use Ansible.

It's simple enough and I had some prior experience with it, so I merely have some variables, roles that render a docker-compose.yml.j2 template and boom. It all works, I have easy access to secrets, shared variables among stacks and run it with a simple `ansible-playbook` call.

If I forget/don't know the Ansible modules, Claude or their docs are really easy to use.

Every time I went down the bash script route, I felt like I was reinventing something like Ansible.
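For readers who haven't seen this pattern, the role boils down to two tasks: render the template, then bring the stack up. A hedged sketch - the variable names, paths, and module choice are assumptions for illustration, not the actual setup:

```yaml
# roles/stack/tasks/main.yml -- sketch; stack_dir is an assumed variable
- name: Render the compose file for this stack
  ansible.builtin.template:
    src: docker-compose.yml.j2
    dest: "{{ stack_dir }}/docker-compose.yml"

- name: Bring the stack up
  community.docker.docker_compose_v2:
    project_src: "{{ stack_dir }}"
    state: present
```

Shared variables and vaulted secrets then flow into every stack through the template for free.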

neoromantique 1 day ago||
I have very similar setup, but I use komo.do with netbird.

Which basically accomplishes the same thing, but gives a bit more UI for debugging when needed.

JodieBenitez 1 day ago||
So it's self-hosting, but with a paid and closed SaaS dependency? I'll pass.
HarHarVeryFunny 1 day ago|
Doesn't have to be that way though. As discussed here recently, a basic local agent like Claude Code is only a couple hundred lines of code, and could easily be written by something like Claude Code if you didn't want to do it yourself.

If you have your own agent, then it can talk to whatever you want - could be OpenRouter configured to some free model, or could be to a local model too. If the local model wasn't knowledgeable enough for sysadmin you could perhaps use installable skills (scripts/programs) for sysadmin tasks, with those having been written by a more powerful model/agent.
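The API side of such an agent is just an OpenAI-compatible HTTP call. A minimal sketch against OpenRouter - the model name, env var, and the naive JSON quoting are assumptions for illustration (a real script should JSON-escape the prompt, e.g. with jq):

```shell
#!/bin/sh
# One-shot LLM call via an OpenAI-compatible endpoint (OpenRouter here).
# Needs curl and jq for the live call; runs offline otherwise.

# Build the request body. NOTE: printf does no JSON escaping -- fine for a
# sketch, but quotes or newlines in the prompt would break it.
payload() {
  printf '{"model":"openrouter/auto","messages":[{"role":"user","content":"%s"}]}' "$1"
}

ask() {
  curl -s https://openrouter.ai/api/v1/chat/completions \
    -H "Authorization: Bearer $OPENROUTER_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$(payload "$1")" | jq -r '.choices[0].message.content'
}

if [ -n "${OPENROUTER_API_KEY:-}" ] && [ "$#" -ge 1 ]; then
  ask "$1"
else
  payload "why is my zpool degraded?"   # offline: just show the request body
fi
```

An agent loop on top of this is the same call repeated, feeding tool output back in as new messages.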

nojs 1 day ago||
This post is spot on; the combo of Tailscale + Claude Code is a game changer. This is true for companies as well.

CC lets you hack together internal tools quickly, and Tailscale means you can safely deploy them without worrying about hardening the app and server against the outside world. And Tailscale ACLs let you fully control who can access which services.

It also means you can literally host the tools on a server in your office, if you really want to.

Putting CC on the server makes this setup even better. It's extremely good at system administration.
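For reference, those ACLs live in the tailnet policy file (HuJSON, so comments and trailing commas are allowed). A hedged sketch - the group, tag, and user names below are made up:

```
{
  // Who may apply the tag to machines.
  "tagOwners": {
    "tag:internal": ["autogroup:admin"],
  },
  "groups": {
    "group:family": ["alice@example.com", "bob@example.com"],
  },
  "acls": [
    // Only family members may reach the internal tools on 80/443.
    {"action": "accept", "src": ["group:family"], "dst": ["tag:internal:80,443"]},
  ],
}
```

Everything not explicitly accepted is denied, which is what makes "skip the external hardening" a defensible trade-off.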

sambuccid 1 day ago||
And if you prefer to learn how to do it well without AI, you can always do it manually the old way, then use AI at the end to review your config and spot any security issues.
shamiln 1 day ago||
Tailscale was never the unlock for me, but I guess I was never the typical use case here.

I have a 1U (or more), sitting in a rack in a local datacenter. I have an IP block to myself.

Those servers are publicly reachable, but only a few ports are open: mail, HTTP traffic, and SSH (for Git).

I guess my use case also differs in that I don't host things just for me to consume; select others can consume the services I host.

My definition of self-hosting here isn't that I and I only can access my services; that'd be me having a server at home with some non-critical things on it.

zrail 1 day ago|
Curious how long you've been sitting on the IP block. I've been nosing around getting an ASN to mess around with the lower level internet bones but a /24 is just way too expensive these days. Even justifying an ASN is hard, since the minimum cost is $275/year through ARIN.
bakies 1 day ago||
Is that the minimum for an ASN? A /24 is a lot of public IP space! I'd expect to just get a static IP from an ISP if I were to colo like this.
zrail 1 day ago||
The minimum publicly routable IPv4 subnet is /24 and IPv6 is /48. IPv6 is effectively free, there are places that will lease a /48 for $8/year, whereas as far as I can tell it's multiple thousands of USD per year to acquire or lease a /24 of IPv4.
wantlotsofcurry 1 day ago||
Was this article written mostly by Claude? It definitely reads like it was.
jordanf 22 hours ago|
No
chromehearts 1 day ago||
Me personally, I have a similar mini PC with Kubuntu installed, Coolify to deploy my projects, and Cloudflare Tunnels to expose them to the internet. The mini PC is still usable for daily use, so that's great too.
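For anyone setting this up, the Cloudflare Tunnel side is mostly one config file after `cloudflared tunnel create <name>`. A hedged sketch - the hostname, port, and paths are placeholders:

```yaml
# ~/.cloudflared/config.yml -- sketch; <tunnel-id> comes from `tunnel create`
tunnel: <tunnel-id>
credentials-file: /home/me/.cloudflared/<tunnel-id>.json
ingress:
  # Route a public hostname to a local service on the mini PC.
  - hostname: app.example.com
    service: http://localhost:3000
  # Required catch-all rule.
  - service: http_status:404
```

Then `cloudflared tunnel route dns <name> app.example.com` points DNS at the tunnel and `cloudflared tunnel run <name>` serves it, with no inbound ports opened on the box.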
visageunknown 1 day ago|
I find LLMs remove all the fun for me. When I build my homelab, I want the satisfaction of knowing that I did it. And the learning gains that only come from doing it manually. I don't mind using an LLM to shortcut areas that are just pure pain with no reward, but I abstain from using it as much as possible. It gives you the illusion that you've accomplished something.
lurking_swe 1 day ago||
> It gives you the illusion that you've accomplished something.

What’s the goal? If the act of _building_ a homelab is the fun then i agree 100%. If _having_ a reliable homelab that the family can enjoy is the goal, then this doesn’t matter.

For me personally, my focus is on "shipping" something reliable with little fuss. Most of my homelab skills don't translate to my day job anyway. My homelab has a few docker compose stacks, whereas at work we have an internal platform team that lets me easily deploy a service on K8s. The only overlap here is docker lol. Manually tinkering with ports and firewall rules, using SQLite, backups with rsync, etc. - all irrelevant if you're working with AWS from 9-5.

I guess I’m just pointing out that some people want to build it and move on.

visageunknown 17 hours ago||
If your sole goal is to have a homelab that self-hosts services, I completely agree. I'm speaking for those who are interested in developing their skills and knowledge, and believe that building something with AI somehow does that.

I'll agree to disagree on it not being applicable. Fundamental knowledge of topics like networking, gained through homelabbing, has helped me develop my understanding from the ground up. It helps in ways that are not always obvious. But if your goal is purely to be better at your day job, it is not the most efficient path.

lee_ars 23 hours ago|||
>I don't mind using an LLM to shortcut areas that are just pure pain with no reward...

Enlightenment here comes when you realize others are doing the exact same thing with the exact same justification, and everyone's pain/reward threshold is different. The argument you are making justifies their usage as well as yours.

visageunknown 17 hours ago||
That may be true. Ultimately, what I'd advise is for people to be cognizant of their goals and whether AI does or does not help to achieve them.
torginus 1 day ago|||
The thing about anything that actually gets used is that what removes the fun quickest is when it breaks and the people who actually want to use it start complaining.

In that case, it's not about the 'joy of creation', but actually getting everything up and running again, in which case LLMs are indispensable.

visageunknown 17 hours ago||
I don't disagree. All depends on what you're looking to get out of it.
cyberrock 1 day ago|||
Getting it up and running is fun, but I find maintaining some services a pain. For example, Authelia has breaking configuration changes every minor release, and fixing those easily takes 1-X hours every time. I gave up for 4.38 and just tossed the patch notes into NotebookLM.
visageunknown 17 hours ago||
Definitely. That's a great use case. How do you use NotebookLM? It's the first I'm hearing of it.
Gigachad 1 day ago|||
I don’t give them direct access to my computer. I just use them as an alternative to scrolling reddit for answers. Then I take the actions myself.
jordanf 22 hours ago||
Yeah. I wrote a little about that here: https://fulghum.io/fun2