Top
Best
New

Posted by websku 19 hours ago

CLI agents make self-hosting on a home server easier and fun(fulghum.io)
631 points | 423 commentspage 2
stuaxo 3 hours ago|
Is everyone just running claude code not even in a container, letting it go wild and change stuff?
raxxorraxor 3 hours ago|
I use Cursor and quickly let it run pretty wild. Claude doesn't seem to mind to extract auth info from everywhere. Cursor usually blacklists some files for AI access depending on language and environment, but Claude just queries environment variables without even simulating a bad conscience. Probably info that gets extracted by the next programmer using it. Well, whoops...
dwd 17 hours ago||
Been self-hosting for last 20 years and I would have to say LLMs were good for generating suggestions when debugging an issue I hadn't seen before, or for one I had seen before but was looking for a quicker fix. I've used it to generate bash scripts, firewall regex.

On self-hosting: be aware that it is a warzone out there. Your IP address will be probed constantly for vulnerabilities, and even those will need to dealt with as most automated probes don't throttle and can impact your server. That's probably my biggest issue along with email deliverability.

MrDarcy 17 hours ago||
The best solution I’ve found for probes is to put all eggs into the basket listening on 443.

Haproxy with SNI routing was simple and worked well for many years for me.

Istio installed on a single node Talos VM currently works very well for me.

Both have sophisticated circuit breaking and ddos protection.

For users I put admin interfaces behind wireguard and block TCP by source ip at the 443 listener.

I expose one or two things to the public behind an oauth2-proxy for authnz.

Edit: This has been set and forget since the start of the pandemic on a fiber IPv4 address.

aaronax 17 hours ago||
And use a wildcard cert so that all your services don't get proved due to cert transparency logs.
FaradayRotation 12 hours ago|||
~10 years ago I remember how shocked I was the first time I saw how many people were trying to probe my IP on my home router, from random places all over the globe.

Years later I still had the same router. Somewhere a long the line, I fired the right neurons and asked myself, "When was the last time $MANUFACTURER published an update for this? It's been awhile..."

In the context of just starting to learn about the fundamentals of security principles and owning your own data (ty hackernews friends!), that was a major catalyst for me. It kicked me into a self-hosting trajectory. LLMs have saved me a lot of extra bumps and bruises and barked shins in this area. They helped me go in the right direction fast enough.

Point is, parent comment is right. Be safe out there. Don't let your server be absorbed into the zombie army.

SchemaLoad 15 hours ago|||
These days I just wouldn't put my homeserver exposed to the internet only. LAN only with a VPN. Does mean you can't share links and such with other people, but your server is now very secure and most of the stuff you do on it doesn't need public access anyway.
donatj 4 hours ago||
I have been self hosting since the late 90s, but I've always just installed everything on Bare metal. I hear more and more about these elaborate Docker setups. What does a setup like this actually look like?

Is it just a single docker-compose.yml with everything you want to run and 'docker compose up'?

abc123abc123 3 hours ago||
And why would I bother with a home setup? Sure, for industrial IT go for it, VM:s and/or containers, but for my own personal stuff, baremetal, packages, and good old fashioned way is more than enough.
jordanf 2 hours ago||
yeah basically.
WiSaGaN 3 hours ago||
I have a similar experience when I found out that claude code can use ssh to conect to remote server and diagnose any sysadmin issue there. It just feels really empowered.
chaz6 18 hours ago||
I would really like some kind of agnostic backup protocol, so I can simply configure my backup endpoint using an environment variable (e.g. `-e BACKUP_ENDPOINT=https://backup.example.com/backup -e BACKUP_IDENTIFIER=xxxxx`), then the application can push a backup on a regular schedule. If I need to restore a backup, I log onto the backup app, select a backup file and generate a one time code which I can enter into the application to retrieve the data. To set up a new application for backups, you would enter a friendly name into the backup application and it would generate a key for use in the application.
PaulKeeble 17 hours ago|||
At the moment I am docker compose down everything, run the backup of their files and then docker compose up -d again afterwards. This sort of downtime in the middle of the night isn't an issue for home services but its also not an ideal system given most wont be mid writing a file at the time of backup anyway because its the middle of the night! But if I don't do it the one time I need those files I can guarantee it will be corrupted so at the moment don't feel like there are a lot of other options.
Waterluvian 18 hours ago|||
Maybe apps could offer backup to stdout and then you pipe it. That way each app doesn’t have to reason about how to interact with your target, doesn’t need to be trusted with credentials, and we don’t need a new standard.
dangus 18 hours ago||
I use Pika Backup which runs on the BorgBackup protocol for backing up my system’s home directory. I’m not really sure if this is exactly what you’re talking about, though. It just sends backups to network shares.
cryostasis 15 hours ago||
I'm actively in the process of setting this up for my devices. What have you done for off-site backups? I know there are Borg specific cloud providers (rsync.net, borgbase, etc.). Or have you done something like rclone to an S3 provider?
dangus 14 hours ago||
No off-site backup for me, these items aren’t important enough, it’s more for “oops I broke my computer” or “set my new computer up faster” convenience.

Anything I really don’t want to lose is in a paid cloud service with a local backup sync over SMB to my TrueNAS box for some of the most important ones.

An exception is GitHub, I’m not paying for GitHub, but git kinda sorta backs itself up well enough for my purposes just by pulling/pushing code. If I get banned from GitHub or something I have all the local repos.

cryostasis 13 hours ago||
Good to know! I have shifted more to self hosting, e.g., Gitea rather than Github, and need to establish proper redundancy. Hopefully Borg Backup, with it's deduplication will be good, at least for on-site backups.
dangus 10 hours ago||
I am much more in-between. I don’t mind cloud stuff and even consider it safer than my local stuff due to other smart people doing the work. And I’m not looking for a second job self hosting, except for my game servers.

I mostly just don’t want to be stuck with cloud services from big tech that have slimy practices. I’d rather pay for honest products that let me own my data better. With the exception given to GitHub which I guess is out of my own laziness and maybe I should do something about that.

If you’re using gitea you might be interested in Forgejo, it’s a fork and I think it’s well regarded since gitea went more commercial-ish IIRC?

river_otter 6 hours ago||
Next level up is self hosting your LLM! I put LM Studio on a mac mini at home and have been extremely happy with it. Then you can use a tool like opencode to connect to that LLM and boom, Claude Code dependency is removed and you just got even more self-hosted. For what you're using Claude Code for, a smaller open-weight model would probably work fine
NicoJuicy 6 hours ago|
Well, to a limit. I have an RTX 3090 24gb that enables a lot of use-cases.

But for what i'm using Agents right now, claude code is the tool to go.

river_otter 3 hours ago||
makes sense. You could look at something like https://github.com/musistudio/claude-code-router if at some point you're interested in going down that path. I've been using gpt-oss-20b which would fit on your GPU and I've found useful for basic tasks like recipe creation and agentic tool usage (I use it with Notion MCP tools)
duttish 8 hours ago||
I've been building a home library system mainly for personal use, I want to run it cheaply so a $4 black Friday sale OVH vps is perfect.

But I wanted decent deployments. Hosting a image repository cost 3-4x of the server. Sending over the container image took over an hour due to large image processing python dependencies.

Solution? Had a think and a chat with Claude code, now I have blue-green deployments where I just upload the code which takes 5 seconds, everything is then run by systemd. I looked at the various PaaSes but they ran up to $40/month with compute+database etc.

I would probably never have built this myself. I'd have gotten bored 1/3 through. Now it's working like a charm.

Is it enterprise grade? Gods no. Is it good enough? Yes.

Draiken 3 hours ago|
This summarizes what LLMs are best at: hobby projects that you care mostly about the outcome and won't have to actively maintain forever.

When using them with production code they are a liability more than a resource.

piqufoh 5 hours ago||
I'm working on something very similar, but I've found that if I'm not doing the work - I forget what has been set up and how its running a lot faster.

For example - I have ZFS running with a 5-bay HDD enclosure, and I honestly can't remember any of the rules about import-ing / export-ing to stop / start / add / remove pools etc.

I have to write many clear notes, and store them in a place where future me will find them - otherwise the system gets very flaky through my inability to remember what's active and what isn't. Running the service and having total control is fun, but it's a responsibility too

mvanbaak 2 hours ago||
This is the reason one should always ask the LLM to create scripts to complete the task. Asking it to do things is fine, but as you stated you will forget. If you ask the LLM to do something, but always using a script first, and if you ask: 'Create a well documented shell script to <your question here>', you will have auto documentation. One could go one step further and ask it to create a documented terraform/ansible/whatever tooling setup you prefer.
Draiken 3 hours ago|||
Write scripts for everything.

If you need to run the command once, you can now run it again in the future.

It's very tempting to just paste some commands (or ask AI to do it) but writing simple scripts like this is an amazing solution to these kinds of problems.

Even if the scripts get outdated and no longer work (maybe it's a new version of X) it'll give you a snapshot of what was done before.

Maledictus 5 hours ago||
Which enclosure do you use, and can you recommend it?
legojoey17 13 hours ago||
I just got around to a fresh NixOS install and I couldn't be happier as I've been able to do practically everything via Codex while keeping things concise and documented (given it's nix, not a bunch of commands of the past).

I recently had a bunch of breakages and needed to port a setup - I had a complicated k3s container in proxmox setup but needed it in a VM to fix various disk mounts (I hacked on ZFS mounts, and was swapping it all for longhorn)

As is expected, life happens and I stopped having time for anything so the homelab was out of commission. I probably would still be sitting on my broken lab given a lack of time.

sambuccid 5 hours ago|
And if you prefer to learn well how to do it without AI, you can always try to do it manually the old way but then use AI at the end to review your config and spot any security issues
More comments...