Posted by rbanffy 1 day ago

OpenCode – Open source AI coding agent (opencode.ai)
1146 points | 562 comments
Frannky 23 hours ago|
I don't use it for coding but as an agent backend. Maybe opencode was designed mainly for coding, but for me it's incredibly good as an agent, especially when paired with skills and a FastAPI server, and opencode go (minimax) is just so much intelligence at an incredibly cheap price. Plus, you can talk to it via channels if you use a claw.
solarkraft 8 hours ago||
I see great potential in this use case, but haven’t found that many documented cases of people doing this.

Do you have resources you can point to / mind sharing your setup? What were the biggest problems / delights doing this?

krzyk 14 hours ago||
By "agent" you mean what?

Coding is mostly "agentic" so I'm a bit puzzled.

epolanski 12 hours ago||
It's defined in the opencode docs, but it's a general cross-industry term for a custom system prompt with its own permissions:

https://opencode.ai/docs/agents/

phainopepla2 37 minutes ago||
I'm still kind of confused, but opencode itself comes with several agents built-in, and you can also build your own. So what does it mean to use opencode itself as an agent?
65a 20 hours ago||
I'd really like more clarification on offline mode and privacy. Despite my initial excitement, the GitHub issues related to privacy did not leave me with a good feeling. Is offline mode a thing yet? I want to use this, but I don't want my code to leave my device.
solaire_oa 20 hours ago|
Related https://github.com/anomalyco/opencode/issues/10416
taosx 14 hours ago||
The only thing I'm wondering is whether they have eval frameworks (for lack of a better word). Their prompts don't seem to have changed for a while, and I find greater success after testing and writing my own system prompts, plus modifying the harness to have the smallest, most concise system prompt and dynamic prompt snippets per project.

I feel that if you want to build a coding agent / harness, the first thing you should do is build an evaluation framework to track coding performance against your own internal metrics and tasks. Instead, I see most coding agents just fiddling with features that don't improve the agent's core ability.
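As a minimal sketch of what such an eval framework could look like: `Task` and the `run_agent` callable here are hypothetical placeholders; in practice `run_agent` would invoke the harness under test and `check` would run the project's test suite on the result.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Task:
    name: str
    prompt: str
    check: Callable[[str], bool]  # validates the agent's output


def evaluate(run_agent: Callable[[str], str],
             tasks: list[Task]) -> tuple[float, dict[str, bool]]:
    """Run every task through the agent and report the overall pass rate."""
    results = {t.name: t.check(run_agent(t.prompt)) for t in tasks}
    return sum(results.values()) / len(tasks), results
```

Tracking that pass rate across system-prompt variants is exactly the kind of comparison described above.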

epolanski 12 hours ago|
You can't write your own system prompt in opencode; as far as I'm aware, there's no API to override the default anthropic.txt.

I considered creating a PR for that, but found that creating new agents instead worked fine for me.

embedding-shape 10 hours ago|||
> You can't write your system prompt in opencode

I only started looking into OpenCode yesterday, but it seems you can override the system prompts by overloading the templates, e.g. `~/.opencode/agents/build.md`, which would then be used instead of the default "Build" system prompt.

At least that's what I gathered from skimming the docs earlier; it might not work in practice, or might not override all of it, but it seems to be how it works.

taosx 12 hours ago|||
I've forked it locally; to be honest, I haven't merged upstream in a while, as I haven't seen any commits that I found relevant or that would improve my usage. They seem to be working on the web and desktop versions, which I don't use.

The changes I've made locally are:

- Added a discuss mode with almost no tools except read file, ask tool, and web search, plus the ability to switch from discuss to plan mode.

Experiments:

- hashline: it doesn't bring that much benefit over the default with gpt-5.4.

- tried scribe [0]: it saves context space, but in worst-case scenarios it fails by reading the whole file. Probably worth it, but I would need to experiment more with it and probably rewrite some parts.

The nice thing about opencode is that it uses SQLite, so you can run experiments and then go back through past conversations programmatically, replay them, and compare.

[0] https://github.com/sibyllinesoft/scribe

khimaros 23 hours ago||
i've been using this as my primary harness for llama.cpp models, Claude, and Gemini for a few months now. the LSP integration is great. i also built a plugin for OpenCode that enables a very minimal OpenClaw alternative as a self-modifying hook system over IPC: https://github.com/khimaros/opencode-evolve -- and here's a deployment-ready example making use of it, which runs in an Incus container/VM: https://github.com/khimaros/persona
riedel 23 hours ago|
Very cool! I have been using opencode, as almost everybody else in the lab is using codex. I found the tools thing inside your own repo amazing, but somehow I could not reliably get opencode to write its own tools. It also seems a bit scary, as there is pretty much no security by default. I am using it in a NixOS WSL2 VM.
yogurt0640 9 hours ago||
You could try something like this https://github.com/andersonjoseph/jailed-agents

I'm actually moving to containerised isolation. I realised the agents waste too much time trying to correctly install dependencies, not unlike a normal nixos user.

hrpnk 4 hours ago||
I wish the team were more responsive to popular issues, like the inability to provide a dynamic API key helper like Claude has. This one even has a PR open: https://github.com/anomalyco/opencode/issues/1302
arthurjean 4 hours ago||
I've used both. I stuck with Claude Code: the ergonomics are better, and the internals are clearly optimized for Opus, which I use daily; you can feel it. That said, OpenCode is still a very good alternative, well above Codex, Gemini CLI, or Mistral Vibe in my experience.
dalton_zk 8 hours ago||
Dax's post on X:

"we see occasional complaints about memory issues in opencode

if you have this can you press ctrl+p and then "Write heap snapshot"

Upload here: https://romulus.warg-snake.ts.net/upload"

Original post: https://x.com/i/status/2035333823173447885

shaneofalltrad 22 hours ago||
What would be the advantage of using this over, say, VSCode with Copilot or Roo Code? I need to make some time to compare, but I'm curious whether others have good insight on this.
javier123454321 21 hours ago||
In terms of output, it's comparable. In terms of workflow, it suits my needs a lot more as a VIM terminal user.
ray_v 21 hours ago|||
I started out using VSCode with their Claude plugin; it seemed like a totally unnecessary integration. A better workflow seems to be just running Claude Code directly on my machine, where there are fewer restrictions; it opens up a lot more possibilities for what it can do.
zingar 21 hours ago||
Aren’t those in-editor tools? Opencode is a CLI
shaneofalltrad 16 hours ago||
OK, I get it now; same with the vim comment above. It seems VSCode has the fuller IDE setup while OpenCode gives off the vim/NERDTree vibe? I'll have to take a look; it makes sense to have both for different use cases, I guess.
zingar 6 hours ago||
No that’s not it. Opencode is a pure terminal app, your interaction is by typing prompts and slash commands. You can also script prompts to it.

There are probably IDE plugins that feed prompts or context in based on your interaction with the editor.

01100011 19 hours ago||
Stupid question, but are there models worth using that specialize in a particular programming language? For instance, I'd love to be able to run a local model on my GPU that is specific to C/C++ or Python. If such a thing exists, is it worth it vs one of the cloud-based frontier models?

I'm guessing that a model which only covers a single language might be more compact and efficient vs a model trained across many languages and non-programming data.

Fulgidus 4 hours ago||
Months ago I tested a concept revolving around this issue and made a weird MCP-LSP-LocalLLM hybrid thing that attempts to enhance unlucky, fast-changing, or unpopular languages (mine targets Zig).

Give it a look, maybe it could inspire you: https://github.com/fulgidus/zignet

Bottom line: fine-tuning looks like the best option at the moment.

girvo 18 hours ago|||
I'm currently experimenting with (trying to) fine-tune Qwen3.5 to make it better at a given language (Nim in this case), but I am quite bad at this, and honestly am unsure whether it's even fully feasible at the scale I have access to. It's certainly been fun so far, though, and I have a little Asus GX10 box on the way to experiment some more!
embedding-shape 10 hours ago||
Been playing around with fine-tuning models for specific languages as well (Clojure and Rust, mostly), but the persistent problem is high-quality data sets. Mostly I've been generating my own based on my own repositories and chat sessions. What approach are you taking for gathering the data?
numberwan9 5 hours ago|||
Have you looked on HF? Here's one that is fine tuned on Rust https://huggingface.co/Fortytwo-Network/Strand-Rust-Coder-14...
girvo 7 hours ago|||
Yeah, I have a 15-year body of work, plus I've been building labelled sets from open source (the source code isn't quite enough on its own).

Now I'm using that to generate synthetic sets and clean them up, but man, I'm struggling, hah. Fun though.

epolanski 12 hours ago|||
My own experience trying many different models is that general intelligence of the model is more important.

If you want it to stick to better practices you have to write skills, provide references (example code it can read), and provide it with harnessing tools (linters, debuggers, etc) so the agent can iterate on its own output.

cpburns2009 19 hours ago||
I'd be interested in this too. I think that's what post-training can achieve but I've never looked into it.
zkmon 11 hours ago|
OpenCode works awesome for me. The BigPickle model is all I want. I do not throw large work at the agent that requires a lot of reasoning, thinking, or decision making. It's my role to chop the work down to bite-size pieces and ask the fantastic BigPickle to just do the damn coding or a bit of explaining. It works very well in interactive sessions with small tasks; I don't give it something to work on overnight.

I used Claude with a paid subscription, and codex as well, and settled on OpenCode with free models.
