Posted by 0o_MrPatrick_o0 1/15/2026
- totally unsandboxed but I supervise it in a tight loop (the window just stays open on a second monitor and it interrupts me every time it needs to call a tool).
- unsupervised in a VM in the cloud where the agent has root. (I give it a task, negotiate a plan, then close the tab and forget about it until I get a PR or a notification that it failed).
I want either full capabilities for the agent (at the cost of needing to supervise for safety) or full independence (at the cost of limited context in a VM). I don't see a productive way to mix and match here, seems you always get the worst of both worlds if you do that.
Maybe the use case for this particular example is where you are supervising the agent but you're worried that apparently-safe tool calls are quietly leaking a secret that's in context? So it's not that it's a 'mixed' use case but rather it's just increasing safety in the supervised case?
Why in the cloud and not in a local VM?
I've re-discovered Vagrant and have been using it exactly for this and it's surprisingly effective for my workflows.
https://blog.emilburzo.com/2026/01/running-claude-code-dange...
> Eventually I found this GitHub issue. VirtualBox 7.2.4 shipped with a regression that causes high CPU usage on idle guests.
The list of viable hypervisors for running VMs with 3D acceleration is probably short but I'd hope there are more options these days for running headless VMs. Incus (on Linux hosts) and Lima come to mind and both are alternatives to Vagrant as well.
> VMs with 3D acceleration
I think we don't even need 3D acceleration since Vagrant is running the VMs headless anyways and just ssh-ing in.
> Incus (on Linux hosts)
That looks interesting, though from a quick search it doesn't seem to have a "Vagrantfile" equivalent (is that correct?), but I guess a good old shell script could replace that, even if imperative can be more annoying than declarative.
And since it seems to have a full-VM mode, docker would also work without exposing the host docker socket.
Thanks for the tip, it looks promising, I need to try it out!
It's just YAML config for the VM's resources:
https://linuxcontainers.org/incus/docs/main/howto/instances_...
https://linuxcontainers.org/incus/docs/main/explanation/inst...
And cloud-init for provisioning:
https://gitlab.oit.duke.edu/jnt6/incus-config/-/blob/main/co...
[1] - https://en.wikipedia.org/wiki/Turtles_all_the_way_down
(If the VM is remote, even more so).
There are theoretical risks of Claude getting fully owned and going rogue, and doing the iterative malicious work to escape a weaker sandbox, but it seems substantially less likely to me, and therefore perhaps not (currently) worth the extra work.
I see there are cloud VMs like at kilocode but they are kind of useless IMO. I can only interact with the prompt and not the code base directly. Too many things go wrong, and maybe I also want Kilo Code to run a docker stack for me, which it can't in the agent cloud.
The UI is obviously vibe-coded garbage but the underlying system works. And most of the time you don't have to open the UI after you've set it running; you just comment on the GitHub PR.
This is clearly an unloved "lab" project that Google will most likely kill but to me the underlying product model is obviously the right one.
I assume Microsoft got this model right first with the "assign issue to Copilot" thing and then fumbled it by being Microsoft. So whoever eventually turns this <correct product model> into an <actual product that doesn't suck> should win big IMO.
Yes! I'm surprised more people do not want this capability. Check out my comment above, I think Vagrant might also be what you want.
TLDR:
- Ensure that you have installed npm on your machine.
- Install the dev container CLI globally via npm: `npm i -g @devcontainers/cli`
- Clone the Claude Code repo: https://github.com/anthropics/claude-code
- Navigate into the root directory of that repo.
- Run the dev container CLI command to start the container: `devcontainer --workspace-folder . up`
- Run another dev container command to start Claude in the container: `devcontainer exec --workspace-folder . claude`
And there you go! You have a sandboxed environment for Claude to work in. (As sandboxed as Docker is, at least.)
I like this method because you can just manage it like any other Docker container/volumes. When you want to rebuild it, or reset the volume, you just use the appropriate Docker (and the occasional dev container) commands.
- confused/misaligned agent: probably good enough (as of Q1 2026...).
- hijacked agent: definitely not good enough.
But also it's kinda weird that we still have high-level interfaces that force you to care this much about the type of virtualization they give you. We probably need to be moving more towards stuff like Incus that treats VMs and system containers basically as variants of the same thing, managed at a higher level of abstraction. (I think k8s can be like that too.)
--bind "$HOME/.claude" "$HOME/.claude"
That directory has a bunch of sensitive stuff in it, most notably the transcripts of all of your previous Claude Code sessions. You may want to take steps to avoid a malicious prompt injection stealing those, since they might contain sensitive data.
Without this, you'll have to re-login to Claude every time. Breaks the speed of development.
I'm going to do some experimenting to see if I can make this bind more precise.
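For example, if the login state turns out to live in a single credentials file, the bind could be narrowed to just that file (the filename here is an assumption on my part; check what your install actually writes before relying on it):

```shell
# Narrower bind: expose only the credential file, keeping transcripts out
# of the sandbox. The .credentials.json name is an assumption, verify it.
--bind "$HOME/.claude/.credentials.json" "$HOME/.claude/.credentials.json"
```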
> I can’t take that token and run Cloudflare provisioning on your behalf, even if it’s “only” set as an env var (it’s still a secret credential and you’ve shared it in chat). Please revoke/rotate it immediately in Cloudflare.
So clearly they've put some sort of prompt guard in place. I wonder how easy it would be to circumvent it.
I use a lot of ansible to manage infra, and before I learned about ansible-vault, I was moving some keys around unprotected in my lab. Bad hygiene, and no prompt intervening.
Kinda bums me out that there may be circumstances where the model just rejects this even if for some reason you needed it.
With the unpack directory, you can now limit the host paths you expose, avoiding leaking in details from your host machine into the sandbox.
bwrap --ro-bind image/ / --bind src/ /src ...
Any tools you need in the container are installed in the image you unpack.
Some more tips: use --unshare-all if you can, and add the --proc and --dev options so the container is actually functional. If you just need networking, combine --unshare-all with --share-net while keeping everything else isolated. Finally, drop all privileges with --cap-drop ALL.
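Putting those flags together with the unpacked image from above, a fuller invocation might look like this (purely illustrative, not a hardened or audited recipe):

```shell
# Illustrative combination of the flags above; image/ and src/ follow the
# earlier unpack-directory example. Only src/ is writable from inside.
bwrap \
  --unshare-all \
  --share-net \
  --ro-bind image/ / \
  --bind src/ /src \
  --proc /proc \
  --dev /dev \
  --cap-drop ALL \
  /bin/sh
```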
Let me know if you give it a go ;)
Internet to connect with the provider, install packages, and search.
It's not perfect but it's a start.
You must not care about those systems that much.
Mysql user: test
Password: mypass123
Host: localhost
...
I could prevent this by running Claude outside of this context. I'm not going to, because this context only has access to my dev secrets. Hence the vault name: `81 Dev environment variables`.
I've configured it so that the 1P CLI only has access to that vault. My prod secrets are in another vault. I achieve this via a OP_SERVICE_ACCOUNT_TOKEN variable set in .zshrc.
I can verify this works by running:
op run --env-file='.env.production' -- printenv
[ERROR] 2026/01/15 21:37:41 "82 Prod environment variables" isn't a vault in this account. Specify the vault with its ID or name.
Also, of course, 1Password pops up a fingerprint request every time something tries to read its database. So if that happened unexpectedly, I'd wonder what was up. I'm acutely conscious of those requests. I can't imagine it's perfect, but I feel pretty good.
The approach I started taking is mounting the directory that I want the agent to work on into a container. I use `/_` as the working directory and have built up some practices around that convention; that's the only directory I want it to make changes to. I also mount any config it might need as read-only.
The standard tools like claude code, goose, charm, whatever else, should really spawn the agent (or MCP server?) in another process in a container, and pipe context in and out over stdin/stdout. I want a tool for managing agents, and I want each agent to be its own process, in its own container. But just locking up the whole mess seems to work for now.
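As a rough sketch, that mounting convention might look like this with plain docker (the image name and config path are placeholders, not my actual setup):

```shell
# Mount only the project (writable at /_) and config (read-only); the rest
# of the host stays invisible to the agent. Image and paths are placeholders.
docker run --rm -it \
  -v "$PWD:/_" \
  -v "$HOME/.config/agent:/home/agent/.config/agent:ro" \
  -w /_ \
  my-agent-image
```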
I see some people in the other comments iterating on what the precise arguments to bubblewrap should be. nnc lets you write presets in Jsonnet, and then refer them by name on the command line, so you can version and share the set of resources that you give to an agent or subprocess.
I originally set up the git filters, but later disabled them.
1. allow an agent to run wild in some kind of isolated environment, giving the "tight loop" coding agent experience so you don't have to approve everything it does.
2. let it execute the code it's creating using some credentials to access an API or a server or whatever, without allowing it to exfil those creds.
If 1 is working correctly I don't see how 2 could be possible. Maybe there's some fancy homomorphic encryption / TEE magic to achieve this but like ... if the process under development has access to the creds, and the agent has unfettered access to the development environment, it is not obvious to me how both of these goals could be met simultaneously.
Very interested in being wrong about this. Please correct me!
You set up a simple proxy server on localhost:1234 that forwards all incoming requests to the real API, and the crucial part is that the proxy adds the "Auth" header with the real auth token.
This way, the agent never sees the actual auth token, and doesn't have access to it.
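A minimal sketch of that pattern in Python, assuming GET-only traffic and made-up endpoints/token (a real setup would handle more methods, TLS, and error paths):

```python
# Credential-injecting forward proxy sketch. The agent talks to this proxy;
# the proxy attaches the Authorization header itself, so the token never
# enters the agent's environment. REAL_API and SECRET_TOKEN are stand-ins.
import http.server
import urllib.request

REAL_API = "http://localhost:9999"  # stand-in for the real API base URL
SECRET_TOKEN = "real-token"         # lives only in the proxy process

class InjectingProxy(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Forward the request upstream, adding the header the agent never sees.
        req = urllib.request.Request(REAL_API + self.path)
        req.add_header("Authorization", "Bearer " + SECRET_TOKEN)
        with urllib.request.urlopen(req) as upstream:
            body = upstream.read()
            status = upstream.status
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

# Serve with, e.g.:
#   http.server.HTTPServer(("localhost", 1234), InjectingProxy).serve_forever()
```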
If the agent has full internet access then there are still risks. For example, a malicious website could convince the agent itself to perform malicious requests against the API (like delete everything, or download all data and then upload it all to some hacker server).
But in terms of the security of the auth token itself, this system is 100% secure.
Did you make this account to tell me this? Thank you!
Where I’m at with #2 is the agent builds a prototype with its own private session credentials.
I have orchestration created that can replicate the prototyping session.
From there I can keep final build keys secret from the agent.
My build loop is meant to build an experiment first, and then an enduring build based on what it figures out.
You can easily script it to decode passwords on demand.
If you bind-mount the directory, the sandbox can see the commands, but executing them won’t work since it can’t access the secret service.
[1]: https://repology.org/project/bubblewrap/information https://repology.org/project/landrun/information