Posted by souvik1997 7 days ago

Show HN: Amla Sandbox – WASM bash shell sandbox for AI agents (github.com)
WASM sandbox for running LLM-generated code safely.

Agents get a bash-like shell and can only call tools you provide, with constraints you define. No Docker, no subprocess, no SaaS: just pip install amla-sandbox.
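The "only call tools you provide, with constraints you define" model can be sketched in plain Python. This is an illustrative toy, not amla-sandbox's actual API: the `ToolRegistry` class, `register`, and `call` names here are made up for the example.

```python
# Toy sketch of a constrained tool dispatcher (illustrative only,
# not amla-sandbox's real API). The idea: sandboxed, agent-generated
# code can only reach tools you register, and every call is checked
# against a constraint you define at registration time.

class ToolRegistry:
    def __init__(self):
        self._tools = {}  # name -> (function, validator)

    def register(self, name, fn, validator=lambda **kw: True):
        """Expose a tool to the sandbox, guarded by a validator."""
        self._tools[name] = (fn, validator)

    def call(self, name, **kwargs):
        """Dispatch a tool call from sandboxed code, enforcing constraints."""
        if name not in self._tools:
            raise PermissionError(f"tool {name!r} is not exposed to the sandbox")
        fn, validator = self._tools[name]
        if not validator(**kwargs):
            raise PermissionError(f"call to {name!r} violates its constraints")
        return fn(**kwargs)


registry = ToolRegistry()
# Constraint: only paths under /data are readable; nothing else on the host.
registry.register(
    "read_file",
    fn=lambda path: f"<contents of {path}>",
    validator=lambda path: path.startswith("/data/"),
)

print(registry.call("read_file", path="/data/report.txt"))  # allowed
# registry.call("read_file", path="/etc/passwd") would raise PermissionError
```

The point of the pattern is that the policy lives outside the sandbox: the agent's code never sees the validator, only the tool names it is allowed to invoke.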

146 points | 73 comments
thepoet 7 days ago|
This looks cool, congratulations. We investigated WASM for our use case but then turned to Apple containers, which run 1:1 mapped to a microVM for local use. It's being used by a bunch of folks: https://github.com/instavm/coderunner

We are currently also building InstaVM, a solution that is ideologically the same but for the cloud: https://instavm.io

jameslk 7 days ago||
Nice! This looks like it would pair really well with something like RLM[0], which requires a "symbolic" representation of the prompt and output during recursion[1]

0. https://mack.work/blog/recursive-language-models

1. https://x.com/lateinteraction/status/2011250721681773013

souvik1997 7 days ago|
This is a really interesting direction that we have been exploring too! Our approach is basically to create a file containing the prompt for each turn within the virtual filesystem. The results seem promising so far.
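The prompt-file-per-turn idea can be sketched with a small in-memory virtual filesystem. This is a toy illustration under assumed names (`VirtualFS`, `record_turn` are invented for the example), not the project's actual implementation:

```python
# Toy in-memory virtual filesystem: one prompt file per conversation turn.
# Illustrative sketch only, not amla-sandbox's actual implementation.

class VirtualFS:
    def __init__(self):
        self.files = {}  # path -> content

    def write(self, path, content):
        self.files[path] = content

    def read(self, path):
        return self.files[path]


def record_turn(vfs, turn, prompt):
    """Materialize each turn's prompt as a file the sandboxed code can read."""
    path = f"/turns/{turn:03d}/prompt.txt"
    vfs.write(path, prompt)
    return path


vfs = VirtualFS()
p0 = record_turn(vfs, 0, "Summarize the log file.")
p1 = record_turn(vfs, 1, "Now extract the error lines.")

# Sandboxed code can re-read earlier prompts from the filesystem
# instead of carrying them in the model's context window:
print(vfs.read(p0))
```

The appeal for recursive setups is that the prompt history becomes plain files the agent's code can grep, slice, or re-summarize, rather than opaque context the model has to hold in full.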
sibellavia 7 days ago||
I had the same idea, forcing the agent to execute code inside a WASM instance, and I've developed a few proof of concepts over the past few weeks. The latest solution I adopted was to provide a WASM instance as a sandbox and use MCP to supply the tool calls to the agent. However, it hasn't seemed flexible enough for all use cases to me. On top of that, there's also the issue of supporting the various possible runtimes.
souvik1997 7 days ago|
Interesting! What use cases felt too constrained? We've been mostly focused on "agent calls tools with parameters". Curious where you hit flexibility limits.

Would love to see your MCP approach if you've published it anywhere.

skybrian 7 days ago||
This is cool, but I had imagined something like a pure Typescript library that can run in a browser.
simonw 7 days ago|
Sounds like just-bash: https://github.com/vercel-labs/just-bash
touwer 7 days ago||
Cool! If it is indeed fully OSS
behnamoh 7 days ago||
> What you don't get: ...GPU access...

So no local models are supported.

souvik1997 7 days ago|
The sandbox doesn't run models. It runs agent-generated code and constrains tool calls. The model runs wherever you want (OpenAI, Anthropic, local Ollama, whatever).
evanjrowley 6 days ago||
Is there any affiliation with AlmaLinux project?
poly2it 7 days ago|
Why not just put the agent in a VM?