Posted by websku 1/11/2026
I’ll bite. You can save a lot of money by buying used hardware. I recommend looking for old Dell OptiPlex towers on Facebook Marketplace or at local used-computer stores. Lenovo ThinkCentres (e.g., the M700 Tiny) are also a great option if you prefer a smaller form factor.
I’d disregard advice from non-technical folks pushing brand-new, expensive hardware; it’s usually overkill.
And then you can only use distros that have a Raspberry Pi-specific build. Generic ARM ones won't work.
I build out my server in Docker, and I’ve been pleasantly surprised that every image I’ve ever wanted to pull has had an ARM build.
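If you want to check before pulling, you can inspect the manifest (nginx here is just an example image):

```sh
# list the platforms an image is published for
docker manifest inspect nginx | grep architecture

# or pull the ARM variant explicitly
docker pull --platform linux/arm64 nginx
```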
I'm not familiar with Dell product names specifically, but 'tower' sounds like it'll sit there burning 200W idle. Old laptops (with the battery slid out) are what I've been opting for; they use barely more power than the router they sit next to. Especially if you just want to serve static files, as GP seems to be looking for, even an old smartphone would be enough, though there you can't remove the battery (since it won't run off just the charger).
My first "server" was a 65€ second-hand laptop including shipping iirc, in ~2010 euros so say maybe 100€ now when taking inflation into account. I used that for a number of years and had a good idea of what I wanted from my next setup (which wasn't much heavier, but a little newer cpu wasn't amiss after 3 years). Don't think one needs to even go so far as 200$ for a "local Bandcamp archive" (static file storage) and serving that via some streaming webserver
The Jellyfin docs do mention "Not having a GPU is NOT recommended for Jellyfin, as video transcoding on the CPU is very performance demanding", but that's for on-the-fly video transcoding. If you transcode your videos to the desired format(s) on import, or don't have any videos at all yet as in GP's case, it doesn't matter if the hardware is 20x slower. Worst case, you watch that movie at source quality: on a LAN you won't hit network speed bottlenecks anyway, and transcoding on a GPU costs far more (purchase plus ongoing power) than the gigabit Ethernet you already get by default on every laptop and router.
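To be concrete, transcoding on import is a one-off ffmpeg run per file rather than a per-playback job; something like this (codec settings are just an example, tune them to your players):

```sh
# one-off transcode to a widely direct-playable format
ffmpeg -i input.mkv -c:v libx264 -crf 20 -preset slow -c:a aac -b:a 192k output.mp4
```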
(In)famous last words?
For now I'm just using Cloudflare Tunnels, but ideally I'd want to do that part myself too (without getting DDoSed).
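The tunnel setup itself is only a few commands, roughly (the tunnel name and hostname are placeholders, you need `cloudflared tunnel login` first, and an ingress rule in ~/.cloudflared/config.yml to point at your local service):

```sh
cloudflared tunnel create homeserver
cloudflared tunnel route dns homeserver files.example.com
cloudflared tunnel run homeserver
```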
The structure is dead simple: `machines/<hostname>/stacks/<service>/` with a `config.sh` per machine defining SSH settings and optional pre/post deploy hooks. One command syncs files and runs `docker compose up -d`.
I could see Claude Code being useful for debugging compose files or generating new stack configs, but having the deployment itself be a single `./deploy.sh homeserver media` keeps the feedback loop tight and auditable.
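For flavor, the whole flow fits in a few lines of shell. A rough sketch of the idea (not the actual tool; the variable and hook names are my guesses):

```sh
#!/usr/bin/env sh
# deploy.sh <machine> <stack>
set -eu
MACHINE="$1"; STACK="$2"
. "machines/$MACHINE/config.sh"    # assumed to define SSH_HOST plus optional hook functions
if command -v pre_deploy >/dev/null; then pre_deploy "$STACK"; fi
rsync -az --delete "machines/$MACHINE/stacks/$STACK/" "$SSH_HOST:stacks/$STACK/"
ssh "$SSH_HOST" "cd stacks/$STACK && docker compose up -d"
if command -v post_deploy >/dev/null; then post_deploy "$STACK"; fi
```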
It's simple enough and I had some prior experience with it, so I merely have some variables and roles that render a docker-compose.yml.j2 template, and boom, it all works. I get easy access to secrets and variables shared among stacks, and I run it with a simple `ansible-playbook` call.
If I forget or don't know the Ansible modules, Claude or the official docs make them easy to look up.
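If it helps picture it, the whole run is a single call (the inventory and playbook names here are made up):

```sh
# render docker-compose.yml from the .j2 template per host, then bring the stacks up
ansible-playbook -i inventory.yml site.yml --limit homeserver --ask-vault-pass
```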
Every time I went down a bash script route I felt like I was re-inventing something like Ansible.
Which basically accomplishes the same thing, but gives you a bit more of a UI for debugging when needed.
For example - I have ZFS running with a 5-bay HDD enclosure, and I honestly can't remember any of the rules about importing/exporting to stop/start/add/remove pools, etc.
I have to write plenty of clear notes and store them in a place where future me will find them; otherwise the system gets very flaky because I can't remember what's active and what isn't. Running the service and having total control is fun, but it's a responsibility too.
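For what it's worth, the note I keep boils down to a handful of commands ("tank" is a placeholder pool name):

```sh
zpool status                           # what's imported/active right now
zpool export tank                      # detach cleanly BEFORE unplugging the enclosure
zpool import                           # scan for importable pools after plugging back in
zpool import -d /dev/disk/by-id tank   # import via stable ids rather than /dev/sdX
```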
If you needed to run a command once, odds are you'll need it again, and now you can just re-run it.
It's very tempting to just paste some commands (or ask AI to do it) but writing simple scripts like this is an amazing solution to these kinds of problems.
Even if the scripts get outdated and no longer work (maybe there's a new version of X), they'll still give you a snapshot of what was done before.
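As a made-up example of the pattern, something even this small already beats digging through shell history:

```sh
#!/usr/bin/env sh
# update-media-stack.sh: the commands I'd otherwise paste by hand (the path is hypothetical)
set -eu
cd /srv/stacks/media
docker compose pull      # fetch newer images
docker compose up -d     # recreate only the changed containers
docker image prune -f    # clean up superseded layers
```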
In that case you'll be completely unable to navigate the infrastructure of a homeserver your life has come to depend on.
But a homeserver is always about your own tolerance for risk and single points of failure. I'm personally willing to accept Tailscale, but I'm not willing to hand direct control of all my services over to Claude.
I wonder if a local model might be enough for sysadmin tasks, especially if it were trained specifically for this?
I wonder if iOS has enough hooks available that one could build a very small/simple agentic Siri replacement like this, one that could manage the iPhone at least better than Siri does (start and stop apps, control them, install them, configure the phone, etc.)?
What I've found: Claude Code is great at the "figure out this docker/nginx/systemd incantation" part but the orchestration layer (health checks, rollbacks, zero-downtime deploys) still benefits from purpose-built tooling. The AI handles the tedious config generation while you focus on the actual workflow.
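Even the naive version of that orchestration layer is fiddly to hand-roll, which is kind of the point. A sketch with a made-up health endpoint:

```sh
#!/usr/bin/env sh
# health-gated deploy with a crude rollback
set -eu
docker compose up -d
sleep 10
if ! curl -fsS http://localhost:8080/health >/dev/null; then
  echo "unhealthy, rolling back" >&2
  git checkout HEAD~1 -- docker-compose.yml   # restore the previous compose file
  docker compose up -d
  exit 1
fi
```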
github.com/elitan/frost if curious