Posted by austinwang115 3 hours ago
When an agent writes code, it usually needs to start a dev server, run tests, and open a browser to verify its work. Today that all happens on your local machine. This works fine for a single task, but the agent is sharing your computer: your ports, RAM, and screen. If you run multiple agents in parallel, it gets a bit chaotic. Docker helps with isolation, but it still uses your machine's resources, and it doesn't give the agent a browser, a desktop, or a GPU to close the loop properly. The agent could handle all of this on its own if it had a primitive for starting VMs.
CloudRouter is that primitive — a skill that gives the agent its own machines. The agent can start a VM from your local project directory, upload the project files, run commands on the VM, and tear it down when it's done. If it needs a GPU, it can request one.
cloudrouter start ./my-project
cloudrouter start --gpu B200 ./my-project
cloudrouter ssh cr_abc123 "npm install && npm run dev"
Every VM comes with a VNC desktop, VS Code, and Jupyter Lab, all behind auth-protected URLs. When the agent is doing browser automation on the VM, you can open the VNC URL and watch it in real time. CloudRouter wraps agent-browser [1] for browser automation:
cloudrouter browser open cr_abc123 "http://localhost:3000"
cloudrouter browser snapshot -i cr_abc123
# → @e1 [link] Home @e2 [link] Settings @e3 [button] Sign Out
cloudrouter browser click cr_abc123 @e2
cloudrouter browser screenshot cr_abc123 result.png
Here's a short demo: https://youtu.be/SCkkzxKBcPE

What surprised me is how this inverted my workflow. Most cloud dev tooling starts in the cloud (background agents, remote SSH, etc.) and pulls the work back to local for testing. CloudRouter does the opposite: it keeps your agents local and pushes their work to the cloud. The agent does the same things it would do locally, running dev servers and operating browsers, but now on a VM. Once I stopped watching agents work and worrying about local constraints, I started running more tasks in parallel.
The GPU side is the part I'm most curious to see develop. Today if you want a coding agent to help with anything involving training or inference, there's a manual step where you go provision a machine. With CloudRouter the agent can just spin up a GPU sandbox, run the workload, and clean it up when it's done. Some of my friends have been using it to have agents run small experiments in parallel, but my ears are open to other use cases.
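Roughly, that loop looks like the following, using the same commands as above (the VM id, project directory, and training script here are just placeholders):
cloudrouter start --gpu B200 ./train-job
cloudrouter ssh cr_gpu123 "pip install -r requirements.txt && python train.py"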
Would love your feedback and ideas. CloudRouter lives under packages/cloudrouter of our monorepo https://github.com/manaflow-ai/manaflow.
I much prefer independent, loosely coupled, highly cohesive, composable, extensible tools. It's not a very "programmery" solution, but it makes it easier as a user to fix things, extend things, combine things, etc.
The Docker template you have bundles a ton of apps into one container. This is problematic: it creates a big support burden, build burden, and compatibility burden. Docker works better when you make individual containers, one app each, run them separately, and connect them over TCP, sockets, or volumes. Then the user can swap them out, add new ones, remove unneeded ones, etc., and they can use an official upstream project. Docker-in-Docker with a custom Docker network works pretty well, and the host is still accessible if needed.
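For example, something roughly like this (image names, ports, and commands are purely illustrative):
docker network create devnet
docker run -d --name app --network devnet -v "$PWD:/src" -w /src node:22 sh -c "npm install && npm run dev"
docker run -d --name browser --network devnet selenium/standalone-chrome
# the browser container can reach the dev server at http://app:3000 over devnet
Each piece is an official upstream image, and any of them can be swapped out or removed independently.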
As a nit-pick: your auth code has browser-handling logic. This is low cohesion, a sign of problems to come. And in your rsync code:
sshCmd := fmt.Sprintf("ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ProxyCommand=%q", proxyCmd)
I was just commenting the other day on here about how nobody checks SSH host keys and how SSH is basically wide open because of this. Just leaving this here to show people what I mean. (It's not an easy problem to solve, but ignoring security isn't great either.)

Re: Docker template. I understand the Docker critique. The primary use case is an agent uploading its working directory and spinning it up as a dev environment. The agent needs the project files, the dev server, and the browser all in one place. If those are separate containers, the agent has to reason about volume mounts, Docker networking, and so on, which means more confusion and a higher likelihood that the agent gets something wrong. A single environment where cloudrouter start ./my-project just works is what I envisioned.
Re: SSH host keys. SSH never connects to a real host; it's tunneled over a TLS WebSocket via ProxyCommand. The hostname is fake, there's a per-session auth token on the WebSocket layer, and VMs are ephemeral with fresh keys every boot, so SSH isn't wide open here. We don't expose the SSH port (port 10000) at all; everything goes through our authenticated proxy.
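The shape of it is just ProxyCommand pointed at a WebSocket client. As an illustration only (this is not our actual proxy binary, URL, or token handling), it's roughly equivalent to something like:
ssh -o ProxyCommand="websocat --binary wss://proxy.example.com/ssh?token=SESSION_TOKEN" cr_abc123
The host key you'd be verifying belongs to a VM that didn't exist a few minutes ago and won't exist after the task, which is why we lean on the authenticated proxy layer instead.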
We recently also added support for agents: https://skills.sh/dstackai/dstack/dstack
Our approach, though, is more use-case agnostic and goes in the direction of bringing full-fledged container orchestration that covers everything from development to training and inference.