Posted by rbanffy 1 day ago

OpenCode – Open source AI coding agent (opencode.ai)
1169 points | 574 comments | page 3
hrpnk 5 hours ago|
I wish the team would be more responsive to popular issues, such as the inability to provide a dynamic API key helper like Claude has. This one even has a PR open: https://github.com/anomalyco/opencode/issues/1302
aduermael 21 hours ago||
I recently started my own fully containerized coding agent, written 100% in Go. Looking for testers: https://github.com/aduermael/herm
heywinit 15 hours ago|
I like the containerization idea. I wish you had used the opencode CLI as the actual underlying agent.
aduermael 15 hours ago||
What do you like particularly about the opencode cli?
zkmon 12 hours ago||
OpenCode works great for me. The BigPickle model is all I want. I don't throw large tasks at the agent that require a lot of reasoning, thinking, or decision making. It's my role to chop the work into bite-size pieces and ask the fantastic BigPickle to just do the damn coding, or a bit of explaining. It works very well in interactive sessions with small tasks. I'm not giving it something to work on overnight.

I used Claude with a paid subscription, and Codex as well, and settled on OpenCode with free models.

lordforever7 2 hours ago||
Does anyone else find it weird that the TUI doesn't feel like a terminal, but more like a UI with a text box appearing in the center, etc.?
arthurjean 5 hours ago||
I've used both. I stuck with Claude Code: the ergonomics are better, and the internals are clearly optimized for Opus, which I use daily; you can feel it. That said, OpenCode is still a very good alternative, well above Codex, Gemini CLI, or Mistral Vibe in my experience.
ndom91 13 hours ago||
Since this is blowing up, gonna plug my opencode/claude-code plugin that lets you annotate LLM plans like a Google Doc, with strikethroughs, comments, etc., and loop with your agent until you're happy with the plan.

https://github.com/ndom91/open-plan-annotator

__mharrison__ 1 day ago||
This replaced Aider for me a couple months back.

I use it with Qwen 3.5 running locally when my daily limits run out on my other subscriptions.

The harness is great. Local models are just slow enough that the subscription models are easier to use. For most of my tasks these days, the model's capability is sufficient; it is just not as snappy.

plipt 7 hours ago||
Could you say more about the differences between Aider and OpenCode?

I briefly dabbled with Aider some months back but never got any real work done with it. Without installing each one of these new tools I'm having trouble grokking what is changing about them that moves the LLM-assisted software dev experience forward.

psibi 19 hours ago|||
One thing I like about Aider is that I can control the context by explicitly using /add on a subset of files. Can you achieve the same with OpenCode?
__mharrison__ 17 hours ago|||
I feel like I haven't really needed to manage context with newer models. Rarely, I will restart the session to clear it out.
esafak 5 hours ago|||
Yes, using the @ sign; CC and Codex use this too.
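For illustration (the file path here is made up), an @-mention in the prompt looks something like:

```text
> Refactor the retry logic in @src/http/client.ts to use exponential backoff
```

The referenced file is pulled into the agent's context instead of the agent having to search for it.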
cyanydeez 1 day ago||
I'm curious: I've never touched cloud models beyond a few seconds. I run an AMD 395+ with the new Qwen coder. Is there any intelligence difference, or is it just speed and context? At 128GB, it takes quite a while before hitting the context wall.
__mharrison__ 17 hours ago||
There's a difference in intelligence. However, for 90% of what I'm doing, I don't really need it. The online models are just faster.

I just did a one-hour vibe session today, ripping out a library dependency, replacing it with another, and pushing the library to PyPI. I should take my task list, let the local model replicate the work, and see how it turns out.

sebastianconcpt 8 hours ago||
OpenCode is almost the IDE I need.

What it does well: it helps with context switching by using one window to control many repos, each with many worktrees.

What could it do better? It puts the AI too much in control. What if I want to edit a function myself in the workspace I'm working on, or select a snippet and refer to it in the prompt? Without that, I feel it's missing a non-negotiable feature.

xpe 7 hours ago|
Do you think the design direction of "chat first" is compatible with "editor first"? I don't know of any tools that do both well. Seems like a fork in the road, design-wise.
sebastianconcpt 2 hours ago||
I think we already need to flow back and forth between both modes, because you steer more ambitious changes (zoom out) from the chat, but then you still need the power to go full high-res and zoom in on whatever you need.

From architecture to systems programming, smoothly. We need to nail that.

cgeier 1 day ago||
I'm a big fan of OpenCode. I'm mostly using it via https://github.com/prokube/pk-opencode-webui which I built with my colleague (using OpenCode).
systima 23 hours ago|
OpenCode has been the backbone of our entire operation (we used Claude Code before it, and Cursor before that).

Hugely grateful for what they do.

james2doyle 23 hours ago|
What caused the switch? Also, are you still trying to use Claude models in OpenCode?
systima 13 hours ago|||
Sorry, I missed part of your question:

What caused the switch was that we're building AI solutions for sometimes price-conscious customers, so I was already familiar with the pattern of "use a superior model to set a standard, then fine-tune a cheaper one to do that same work".

So I brought that into my own workflows (kind of) by using Opus 4.6 to do detailed planning and one 'exemplar' execution (with 'over-documentation' of the choices); after that, I use Opus 4.6 only for planning, then "throw a load of MiniMax M2.5s at the problem".

They tend to do 90% of the job well, and then I sometimes do a final pass with Opus 4.6 again to mop up any issues. This saves me a lot of tokens/money.

This pattern wasn't possible with Claude Code, hence my move to OpenCode.

zingar 23 hours ago||||
You can access Anthropic models with subscription pricing via a Copilot license.
xvector 22 hours ago||
Pretty sure that's against TOS.

Edit: it's not. https://github.blog/changelog/2026-01-16-github-copilot-now-...

They must be eating insane amounts of $$$ for this. I wouldn't expect it to last.

hawtads 20 hours ago||
No, Claude on GitHub Copilot is billed at 3x the usage rate of the other models, e.g. GPT-5.4, and you get an extremely truncated context window.

See https://models.dev for a comparison against the normal "vanilla" API.

systima 17 hours ago|||
Yes, I regularly plan in Opus 4.6 and execute in "lesser" models, i.e. MiniMax.