
Posted by rbanffy 1 day ago

OpenCode – Open source AI coding agent(opencode.ai)
1207 points | 596 comments
nbevans 17 hours ago|
Codex is 15MB of memory per process. Just sayin'
gigatexal 1 day ago||
If you have to post something like this you've already lost the plot

I only boot my Windows 11 gaming machine for DRM games that don’t work with Proton. Otherwise it’s hot garbage

voidfunc 1 day ago||
I fucking love OpenCode.
solarkraft 20 hours ago||
This is another of OpenCode’s current weak points on the security front: they consider permissions a “UX feature” rather than actual guardrails. The reasoning is that since you’re giving the agent shell access, it can sidestep everything anyway.

This is of course a cop-out: They’re not considering the case in which you’re not blindly doing that.

Fun fact: in the default setup, the agent can freely edit all of the harness's files, including permissions and session history. So it’s pretty trivial for it to a) escalate its privileges and then even b) delete the evidence of anything nefarious happening.

It’s pretty reckless, and fairly easy to solve with chroot and user permissions. There has just been (from what I can see currently) relatively little interest from the project in solving this issue.
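As a minimal sketch of the user-permissions idea (the config path and user name below are assumptions, not OpenCode defaults — adjust to wherever your harness actually keeps its state):

```shell
# Sketch: keep the harness's own state (permissions, session history) out of
# the agent's reach by locking down the directory and running the agent as a
# separate low-privilege user. Paths/names here are hypothetical; a temp dir
# stands in for the real config location.
CONF="$(mktemp -d)/opencode"   # stand-in for e.g. ~/.config/opencode
mkdir -p "$CONF"
chmod 700 "$CONF"              # only the owning user can enter the directory
stat -c '%a' "$CONF"           # → 700

# then run the agent under a different uid that can't touch that state, e.g.:
#   sudo useradd --create-home agentuser
#   sudo -u agentuser opencode
```

The point is simply that the process editing code and the files governing what that process may do should not share an owner.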

embedding-shape 22 hours ago|||
Granted, I just started playing around with OpenCode (but been using Codex and Claude Code since they were initially available, so not first time with agents), but anyways:

> they need broad file system access to be useful, but that access surface is also the attack surface

Do they? You give them access to one directory typically (my way is to create a temporary docker container that literally only has that directory available, copied into the container on boot, copied back to the host once the agent completed), and I don't think I've needed them to have "broad file system access" at any point, to be useful or otherwise.

So that leads me to think I'm misunderstanding either what you're saying, or what you're doing?
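For reference, the copy-in/run/copy-out flow I described can be sketched as a small wrapper. The image name and the agent invocation in the usage comment are placeholders, not a real OpenCode CLI contract:

```shell
# Sketch of the copy-in/run/copy-out workflow: the container never gets a
# bind mount, only a snapshot of the project directory, and changes only
# reach the host when explicitly copied back.
run_agent_sandboxed() {
  local img="$1" cmd="$2" name="agent-$$"
  docker create --name "$name" "$img" sleep infinity
  docker cp "$PWD" "$name:/work"              # copy the project in
  docker start "$name"
  docker exec -w /work "$name" sh -c "$cmd"   # run the agent inside
  docker stop "$name" >/dev/null
  docker cp "$name:/work/." "$PWD"            # copy the results back out
  docker rm "$name" >/dev/null
}
# usage (hypothetical): run_agent_sandboxed node:20 'opencode run "fix the tests"'
```

You'd still want to review the copied-back diff before committing, since the copy-out step is just as trusting as a writable mount at that moment.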

thevinchi 22 hours ago|||
This is the way. If you’re not running your agent harness/framework in a container with explicit bind mounts or copy-on-build, you’re doing it wrong. Whenever I see someone complain about filesystem access and security risk, it’s a clear signal of incompetence imo.
embedding-shape 21 hours ago||
> container with explicit bind mounts

Someone correct me if I'm wrong, but if you're doing bind mounts, make sure they're read-only: with a bi-directional bind mount, the agent could (and most likely knows how to) create a symlink that lets it browse outside the bind mount.

That's why I explicitly made my tooling do "Create container, copy over $PWD, once agent completes, copy back to $PWD" rather than the bind-mount stuff.

yogurt0640 19 hours ago||
> create a symlink that allows them to browse outside the bind mount

Could you reproduce that? iiuc the symlink the agent creates should resolve to a path that's still inside the container.
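One way to see what's actually stored (a pure host-side sketch, no container required): a symlink holds only a target string, and whichever process resolves it — inside or outside the container — decides which namespace's file it hits.

```shell
# A symlink stores its target as a plain string; it is resolved by whoever
# opens it. Inside a container, /etc/passwd resolves to the container's file;
# if host-side tooling later follows the same link (cp -L, tar -h, an editor),
# it reads the HOST's /etc/passwd instead.
WORK=$(mktemp -d)
ln -s /etc/passwd "$WORK/pw"
readlink "$WORK/pw"    # → /etc/passwd  (no namespace attached to the string)
# `docker cp` and `cp -a` preserve the link itself rather than dereferencing
# it, which is why the copy-in/copy-out approach sidesteps the issue.
```

So both of you are right in a sense: the link can't escape while resolved inside the container, but it becomes a host path the moment host-side tools dereference it.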
jy-tan 17 hours ago|||
I created a tool for this: https://github.com/Use-Tusk/fence

Same thoughts - I wanted a "permission manager" that defines a set of policies agnostic to the coding agent. It also comes with a "monitor mode" that shows blocked operations, though it's not quite an audit log yet.

aniviacat 23 hours ago|||
Codex has some OS-level sandboxing by default that confines its actions to the current workspace [1].

OpenCode has no sandboxing, as far as I know.

That makes Codex a much better choice for security.

[1] https://developers.openai.com/codex/concepts/sandboxing

nezhar 10 hours ago|||
I created VibePod, which allows you to sandbox the agents in containers and monitor their activities. It also supports OpenCode.

https://github.com/VibePod/vibepod-cli

luk4 22 hours ago|||
Greywall/Greyproxy aims to address this. I haven't tried it yet though.

https://greywall.io/

knocte 22 hours ago|||
Or just run it in your VPS?
weitendorf 20 hours ago|||
I built a product solving this problem about a year ago, basically a serverless, container-based, NATed VScode where you can eg "run Claude Code" (or this) in your browser on a remote container.

There's a reason I basically stopped marketing it: Cursor took off around then, and now people are running Claude/Codex locally. First, this is something people only actually start to care about once they've been bitten by it hard enough to remember how much it hurt, and most people haven't got there yet (but it will happen more as the models get better).

Also, the people who simultaneously care a lot about security and systems work AND are AI enthusiasts AND are generally highly capable are potentially building in the space, but aren't really customers. The people who care a lot about security and systems work generally aren't decision makers or enthusiastic adopters of AI products (only just now are they starting to be), and the people who are super enthusiastic about AI generally aren't interested in spending a lot of time on security. To the extent they do care about security, they want it to Just Work and let them keep building super fast. The people who are decision makers but less on the security/AI trains need this to happen more, and to hear about the problem from other executives, before they're willing to spend on it.

To the extent most people actually care about this, they still want things to Just Work like they do now, and to either keep building super fast or not think about AI at all. It's actually extremely difficult to give granular access to agents, because the entire point is them acting autonomously or keeping you in a flow state. You either need a threat model that's really compatible with doing so (e.g. open source work, developer credentials used only for development and kept separate from production/corp/customer data), spend a lot of time setting things up so that agents can work within your constraints (which also requires a willingness to commit serious time or resources to security, and an understanding of it), or spend a lot of time approving things and nannying it.

So right now everybody is just saying, fuck it, I trust Anthropic or Microsoft or OpenAI or Cursor enough to just take my chances with them. And people who care about security are of course appalled at the idea of just giving another company full filesystem access and developer credentials in enterprises where the lack of development velocity and high process/overhead culture was actually of load-bearing importance. But really it's just that secure agentic development requires significant upfront investment in changing the way developers work, which nobody is willing to pay for yet, and has no perfect solutions yet. Dev containers were always a good idea and not that much adopted either, btw.

It takes a lot more investment in actually providing good permissions/security for agent development environments still too, which even the big companies are still working on. And I am still working on it as well. There's just not that much demand for it, but I think it's close.
