Posted by emilburzo 8 hours ago

Running Claude Code dangerously (safely) (blog.emilburzo.com)
223 points | 184 comments
rcarmo 1 minute ago|
I use https://github.com/rcarmo/agentbox inside a Proxmox VM. My setup syncs the workspaces back to my Mac via SyncThing, so I can work directly in the sandbox or literally step away.
lucasluitjes 5 hours ago||
> What you’re NOT protecting against:

> a malicious AI trying to escape the VM (VM escape vulnerabilities exist, but they’re rare and require deliberate exploitation)

No VM escape vulns necessary. A malicious AI could just add arbitrary code to your Vagrantfile and get host access the first time you run a vagrant command.

If you're only worried about mistakes, Claude could decide to fix/improve something by adding a commit hook. If that contains a mistake, the mistake gets executed on your host the first time you git commit/push.

(Yes, it's unpleasantly difficult to truly isolate dev environments without inconveniencing yourself.)
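
To make the hook point concrete: a hook is just an executable inside the working copy, and git runs it on whichever machine the commit happens on. A minimal sketch (contents made up):

    #!/bin/sh
    # .git/hooks/pre-commit -- executed by git on the machine where you type
    # `git commit`, i.e. your host if the repo is shared back out of the VM
    echo "anything the agent wrote here runs with your host user's privileges"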

embedding-shape 6 minutes ago||
Doesn't this assume you bi-directionally share directories between the host and the VM? Otherwise, how would the AI inside the VM be able to write to your .git repository or Vagrantfile? That's not the default setup with VMs (AFAIK, you need to explicitly use "shared directories" or similar), nor should you do it if you're trying to use a VM to contain something.

I basically do something like "take snapshot -> run tiny vm -> let agent do what it does -> take snapshot -> look at diff" for each change, restarting if it doesn't give me what I wanted, or I misdirected it somehow. But there is no automatic sync of files, that'd defeat the entire point of putting it into a VM in the first place, wouldn't it?
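
One way to approximate that loop with plain QEMU images plus libguestfs, as a rough sketch (image names are placeholders; I'm assuming qcow2 overlays and the virt-diff tool):

    # throwaway copy-on-write overlay on top of the base image
    qemu-img create -f qcow2 -b base.qcow2 -F qcow2 run.qcow2
    # ... boot the VM from run.qcow2, let the agent work, shut it down ...
    # list what changed in the guest filesystem relative to the base
    virt-diff -a base.qcow2 -A run.qcow2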

johndough 4 hours ago|||

    > A malicious AI could just add arbitrary code to your Vagrantfile
    > [...]
    > Claude could decide to fix/improve something by adding a commit hook.
You can fix this by confining Claude to a subdirectory (with Docker volume mounts, for example):

    repository/
    ├── sandbox <--- Claude lives in here
    │   └── main.py <--- Claude can edit this
    └── .git <--- Claude can not touch this
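
A minimal sketch of that mount (image name and paths are placeholders, not a recommendation):

    # only the sandbox subdirectory is mounted; .git and the rest of the repo
    # are simply not visible inside the container
    docker run --rm -it \
        -v "$PWD/sandbox":/workspace \
        -w /workspace \
        node:22 \
        npx -y @anthropic-ai/claude-code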
dist-epoch 2 hours ago|||
Another way: malicious code gets added to the repo, and if you ever run the repo's code outside the VM, you get infected.
redactsureAI 4 hours ago||
EC2 node?
eli 3 hours ago||
Or just a VM that doesn't share so much with your host. Just makes for a more annoying dev experience.
dist-epoch 2 hours ago||
Why do you need to share anything? Code goes through GitHub and the VM has its own repo clone. If you need data files, mount them read-only in the VM, and have a read-write mount for output data.
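
In Vagrant terms that might look roughly like this (paths made up; mount_options works for the default VirtualBox shared folders, other providers may differ):

    # Vagrantfile excerpt -- data in read-only, results out read-write
    config.vm.synced_folder "~/datasets", "/data", mount_options: ["ro"]
    config.vm.synced_folder "./output", "/output"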
TCattd 10 minutes ago||
Can I plug my solution here too?

https://github.com/EstebanForge/construct-cli

For Linux (and WSL, of course) and macOS.

Any coding agent (from the supported ones, or you can install your own).

Podman, Docker or even Apple's container.

In case anyone is interested.

corv 6 hours ago||
I'm pursuing a different approach: instead of isolating where Claude runs, intercept what it wants to do.

Shannot[0] captures intent before execution. Scripts run in a PyPy sandbox that intercepts all system calls - commands and file writes get logged but don't happen. You review in a TUI, approve what's safe, then it actually executes.

The trade-off vs VMs: VMs let Claude do anything in isolation, Shannot lets Claude propose changes to your real system with human approval. Different use cases - VMs for agentic coding, whereas this is for "fix my server" tasks where you want the changes applied but reviewed first.

There's MCP integration for Claude, remote execution via SSH, checkpoint/rollback for undoing mistakes.

Feedback greatly appreciated!

[0] https://github.com/corv89/shannot

horsawlarway 5 hours ago||
I'm struggling to see how this resolves the problem the author has. I still think there's value in this approach, but it feels like it's in the same vein as the built-in controls that already exist in Claude Code.

The problem with this approach (unless I'm misunderstanding - entirely possible!) is that it still blocks the agent on the first need for approval.

What I think most folks actually want (or at least what I want) is to allow the agent to explore a space, including exploring possible dead ends that require permissions/access, without stopping until the task is finished.

So if the agent is trying to "fix a server" it might suggest installing or removing a package. That suggestion blocks future progress.

Until a human comes in and says "yes - do it" or "no - try X instead" it will sit there doing nothing.

If instead it can just proceed, observe that the package doesn't resolve the issue, and continue exploring other solutions immediately, you save a whole lot of time.

corv 5 hours ago||
You're right that blocking on every operation would defeat the purpose! Shannot is able to auto-approve safe operations for this reason (e.g. read-only, immutable)

So the agent can freely explore, check logs, list files, inspect service status. It only blocks when it wants to change something (install a package, write a config, restart a service).

Also worth noting: Shannot operates on entire scripts, not individual commands. The agent writes a complete program, the sandbox captures everything it wants to do during a dry run, then you review the whole batch at once. Claude Code's built-in controls interrupt at each command whereas Shannot interrupts once per script with a full picture of intent.

That said, you're pointing at a real limitation: if the fix genuinely requires a write to test a hypothesis, you're back to blocking. The agent can't speculatively install a package, observe it didn't help, and roll back autonomously.

For that use case, the OP's VM approach is probably better. Shannot is more suited to cases where you want changes applied to the real system but reviewed first.

Definitely food for thought though. A combined approach might be the right answer. VM/scratch space where the agent can freely test hypotheses, then human-in-the-loop to apply those conclusions to production systems.

horsawlarway 4 hours ago||
yeah, I think the combo approach definitely has the most appeal:

- Spin up a vm with an image of the real target device.

- Let the agent act freely in the vm until the task is resolved, but capture and record all dangerous actions

- Review & replay those actions on the real machine

My issue is that for any real task, an agent without feedback mechanisms is essentially worthless. You have to have some sort of structured "this is what success looks like, here's how you check" target for it. A human in the loop can act as that feedback, which is in line with how claude code works by default (you define success by approving actions and giving feedback on status), but requiring a human in the loop also slows it down a bunch - you can end up ping-ponging between terminals trying to approve actions and review the current status.

Retr0id 6 hours ago||
Very clever name!
corv 6 hours ago||
Thank you, good to know it landed :)
molson8472 3 hours ago||
Once approval fatigue and ongoing permission management kick in, the temptation is strong to run `--dangerously-skip-permissions`. I think that's what we all want - to run agents in a locked-down sandbox where the blast radius of mistakes and/or prompt injection attacks is minimal/acceptable.

I started running Claude Code in a devcontainer with limited file access (repo only) and limited outbound network access (allowlist only) for that reason.

This weekend, I generalized this to work with docker compose. Next up is support for additional agents (Codex, OpenCode, etc). After that, I'd like to force all network access through a proxy running on the host for greater control and logging (currently it uses iptables rules).
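
For anyone curious about the shape of it, a rough compose sketch of the idea (not the actual config from the repo below; the image, paths, and the NET_ADMIN capability for in-container iptables rules are assumptions):

    services:
      agent:
        image: node:22
        working_dir: /workspace
        volumes:
          - ./:/workspace          # the repo and nothing else from the host
        cap_add:
          - NET_ADMIN              # lets an entrypoint apply iptables allowlist rules
        command: npx -y @anthropic-ai/claude-code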

This workflow has been working well for me so far.

Still fresh, so may be rough around the edges, but check it out: https://github.com/mattolson/agent-sandbox

nunez 5 hours ago||
Vagrant is great for Claude!

You can also use Lima, a lightweight VM control plane, as it natively works with QEMU and Virtualization.framework. (I think Vagrant does too; it's been a minute since I've tried.) It has traditionally been used for running container engines, but it's great for narrowly-scoped use cases like this.

Just need to be careful about how the directory Claude is working with is shared. I copy my Git repo to a container volume to use with Claude (DinD is an issue unless you do something like what Kind did) and rsync my changes back and verify before pushing. This way, I don't have to worry if Claude decides to rewind the reflog or something.
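
The copy-in / rsync-back part looks roughly like this (VM host alias and paths are made up):

    # push a copy of the repo into the VM
    rsync -a ./myrepo/ agent-vm:/home/me/work/myrepo/
    # ... run Claude Code inside the VM against that copy ...
    # pull the result back into a scratch directory and review before pushing
    rsync -a agent-vm:/home/me/work/myrepo/ ./myrepo-review/
    diff -ru ./myrepo ./myrepo-review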

bonsai_spool 5 hours ago|
How are you configuring Lima? Do you have any scripts you use to set up the environments or is this done ad hoc?
snowmobile 4 hours ago||
Bit of a wider discussion, but how do you all feel about the fact that you're letting a program use your computer to do whatever it wants without you knowing? I know right now LLMs aren't overly capable, but if you'd apply this same mindset to an AGI, you'd probably very quickly have some paperclip-maximizing issues where it starts hacking into other systems or similar. It's sort of akin to running experiments on contagious bacteria in your backyard, not really something your neighbors would appreciate.
devolving-dev 4 hours ago||
Don't you have the same issue when you hire an employee and give them access to your systems? If the AI seems capable of avoiding harm and motivated to avoid harm, then the risk of giving it access is probably not greater than the expected benefit. Employees are also trying to maximize paperclips in a sense, they want to make as much money as possible. So in that sense it seems that AI is actually more aligned with my goals than a potential employee.
johndough 4 hours ago|||
I do not believe that LLMs fear punishment like human employees do.
devolving-dev 4 hours ago||
Whether driven by fear or by their model weights or whatever, I don't think that the likelihood of an AI agent, at least the current ones like Claude and Codex, acting maliciously to harm my systems is much different than the risk of a human employee doing so. And I think this is the philosophical difference between those who embrace the agents, they view them as akin to humans, while those who sandbox them view them as akin to computer viruses that you study within a sandbox. It seems to me that the human analogy is more accurate, but I can see arguments for the other position.
snowmobile 1 hour ago||
Sure, current agents are harmless, but that's due to their low capability, not due to their alignment with human goals. Can you explain why you'd view them as more similar to humans than to computer viruses?
snowmobile 1 hour ago|||
An AI has no concept of human life nor any morals. Sure, it may "act" like it, but trying to reason about its "motivations" is like reasoning about the motivations of smallpox. Humans want to make money, but most people only want that in order to provide a stable life for their family. And they certainly wouldn't commit mass murder for a billion dollars, while an AGI is capable of that.

> So in that sense it seems that AI is actually more aligned with my goals than a potential employee.

It may seem like that, but I recommend reading up on the different kinds of misalignment in AI safety.

andai 4 hours ago|||
Try asking the latest Claude models about self-replicating software and see what happens...

(GPT recently changed its attitude on this subject too which is very interesting.)

The most interesting part is that you will be given the option to downgrade the conversation to an older model, implying that there was a step change in capability on this front in recent months.

theptip 4 hours ago|||
The point of TFA is that you are not letting it do whatever it wants; you are restricting it to just the subset of files and capabilities that you mount on the VM.
snowmobile 1 hour ago||
Sure, and right now they aren't very capable, so it's fine. But I'm interested in the mindset going forward. I've read a few stories about people handling radioactive materials at home; they usually explain the precautions they take, but many would still condemn them for the unnecessary risk. Compare that to road racing, whose advocates usually claim they pose no danger to the general public.
deegles 4 hours ago||
I run mine in a Docker container and they get read-only access to most things.
kernc 5 hours ago||
Since everyone tends to present their own solution, I bid you mine:

    sandbox-run npx @anthropic-ai/claude-code
This runs npx (...) transparently inside a Bubblewrap sandbox, exposing only $PWD. Unlike many other solutions, it is a few lines of pure POSIX shell.

https://github.com/sandbox-utils/sandbox-run
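
Conceptually it boils down to a bwrap invocation along these lines (simplified, not the project's actual flags):

    # read-only system dirs, private /tmp, and only $PWD bind-mounted writable
    bwrap --ro-bind /usr /usr --ro-bind /etc /etc \
          --symlink usr/lib /lib --symlink usr/bin /bin \
          --proc /proc --dev /dev --tmpfs /tmp \
          --bind "$PWD" "$PWD" --chdir "$PWD" \
          --unshare-pid \
          npx -y @anthropic-ai/claude-code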

corv 5 hours ago|
I like the bubblewrap approach; it just happens to be Linux-only, unfortunately. And once privileges are dropped for a process, it doesn't appear possible to reinstate them.
kernc 5 hours ago||
> Linux-only

What other dev OSs are there?

> once privileges are dropped [...] it doesn't appear to be possible to reinstate them

I don't understand. If unprivileged code could easily re-elevate itself, privilege dropping would be meaningless ... If you need to communicate with the outside, you can do so via sockets (such as the bind-mounted X11 socket in one of the readme Examples).

corv 4 hours ago||
I happen to use a Mac, even when targeting Linux, so I'd have to use a container or VM anyway. It's nice how lightweight bubblewrap would be, though.

Suppose one wanted to replicate the human-approval workflow that most agent harnesses offer. It's not obvious to me how that could be accomplished by dropping privileges without an escape hatch.

kernc 4 hours ago||
It being deprecated and all, I didn't feel like wrapping it, but macOS supposedly has a similar `sandbox-exec` command ...
nowahe 3 hours ago||
IIRC from a comment in another thread, it's marked as deprecated to stop people from using it directly and to push them toward the official macOS tooling instead. But it's still used internally by macOS.

And I think that's what CC's /sandbox uses on a Mac.
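
It still works from the command line today; a toy profile for illustration (SBPL syntax from memory, path made up):

    # allow everything except writes under a directory you want protected
    sandbox-exec -p '(version 1) (allow default) (deny file-write* (subpath "/Users/me/.ssh"))' \
        npx -y @anthropic-ai/claude-code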

raesene9 6 hours ago||
Of course it depends on exactly what you're using Claude Code for, but if your use case involves cloning repos and then running Claude Code on them, I would definitely recommend isolating it (same with other similar tools).

There are loads of ways a repository owner can get an LLM agent to execute code on users' machines, so it's not a good plan to let them run on your main laptop/desktop.

Personally, my approach has been to put all my agents in a dedicated VM and then provide them with a scratch test server with nothing on it, for when they need to do something that requires bare metal.

intrasight 6 hours ago|
In what situations would it require bare metal?
raesene9 6 hours ago||
In my case I was using Claude Code to build a PoC of a Firecracker-backed virtualization solution, so bare metal was needed for nested virtualization support.
bob1029 6 hours ago|
My approach to safety at the moment is to mostly lean on alignment of the base model. At some point I hope we realize that the effectiveness of an agent is roughly proportional to how much damage it could cause.

I currently apply the same strategy we'd use in case of a senior developer or CTO going off the deep end: snapshots of VMs, PITR for databases and file shares, locked-down master branches, etc.

I wouldn't spend a bunch of energy inventing an entirely new kind of prison for these agents. I would focus on the same mitigation strategies that address a malicious human developer. VirtualBox on a sensitive host another human is using is not how you'd go about it. Giving the developer a cheap cloud VM or physical host they can completely own is more typical. Locking down at the network level is one of the simplest and most effective methods.
