Posted by alattaran 13 hours ago

DeepClaude – Claude Code agent loop with DeepSeek V4 Pro (github.com)
497 points | 200 comments
aftbit 12 hours ago|

    #!/bin/sh
    export ANTHROPIC_BASE_URL=https://api.deepseek.com/anthropic
    export ANTHROPIC_AUTH_TOKEN=sk-secret
    export ANTHROPIC_MODEL=deepseek-v4-flash
    export CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1
    exec claude "$@"
rapind 8 hours ago||
ANTHROPIC_MODEL=deepseek-v4-pro[1m] ANTHROPIC_SUBAGENT_MODEL=deepseek-v4-flash

This is what I’ve been using for non-confidential projects for about a week now (soon after v4 came out). I honestly can’t tell the difference, but I’m not doing anything crazy with it either.

Worth noting that I don't think DeepSeek's API lets you opt out of training. Once this is up on other providers though… (OpenRouter is just proxying to DeepSeek atm)

lhl 1 hour ago|||
For those that don't want their data trained on, OpenRouter allows you to have account-wide or per-request routing with either provider.data_collection: "deny" or zdr: true (zero data retention).

Also, you can use HuggingFace Inference for DeepSeek V4 or Kimi K2.6, both of which work quite well and route through providers that you can enable/disable (like Together AI, DeepInfra, etc) - you'll have to check their policies but I think most of those commercial inference providers claim to not train on your data either.
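For reference, the per-request version of that opt-out is just a field in the request body. Here's a minimal sketch (constructed locally, not sent anywhere) of what such a payload could look like; the `provider.data_collection` field is from OpenRouter's provider-routing docs, while the model slug and the exact placement of the `zdr` flag are assumptions on my part:

```python
import json

# Sketch of an OpenRouter chat request that opts out of data collection
# per-request. "provider" with "data_collection": "deny" is documented by
# OpenRouter; the model slug and the "zdr" flag placement are assumptions.
payload = {
    "model": "deepseek/deepseek-v4-pro",  # assumed slug
    "messages": [{"role": "user", "content": "hello"}],
    "provider": {
        "data_collection": "deny",  # only route to providers that don't retain/train
        "zdr": True,                # zero-data-retention endpoints only (assumed field)
    },
}

body = json.dumps(payload)
print(body)
```

Account-wide settings in the OpenRouter dashboard should achieve the same thing without touching each request.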

miroljub 1 hour ago||
I wonder why the question of data security and training often comes up with DeepSeek, Kimi, and GLM, but never with Anthropic, OpenAI, and Google models.

Why is that?

IIRC, US data protection law protects the data of US citizens only; foreigners' data is not protected, and companies are not even allowed to disclose when they collect that data.

Matl 34 seconds ago||
> USA data protection protects data of US citizens only, foreigners data is not protected

HN is an American site. If you look at the US government, it is going to fearmonger about anything China related, because they haven't had a genuine competitor for decades and they're scared and lashing out. Most US news just parrot the government line, sometimes more so than state TV would, and so it reflects here.

maxgashkov 1 hour ago||||
As of now, OpenRouter offers multiple providers for DeepSeek with ZDR (not sure if they respect it but still).
vidarh 1 hour ago||
At several times the price of DeepSeek, though, so it's a tradeoff... Even then Pro is still cheaper than Haiku.
tariky 5 hours ago|||
I wanted to try this. To bring back Opus and Sonnet, do I just reset those envs?
ianmurrays 4 hours ago||
Correct. Unset those env vars (or just open a fresh shell without the wrapper) and Claude Code goes back to Anthropic's defaults.
aaurelions 11 hours ago|||
It seems like any project that makes fun of Claude is bound to reach the top spot on Hacker News. Even if it’s just a project consisting of four lines of code.
oblio 2 hours ago|||
You're just mean. I count 6 lines of code!
ihsw 11 hours ago|||
[dead]
varenc 7 hours ago|||
The more interesting part of deepclaude is the local proxy it runs to switch models mid-session and do combined cost tracking. Though these features seem quite buried in the LLM-generated readme. Looking at the history, it appears they were added later, and the readme wasn't restructured to highlight this.

Also, the author checked in their apparently effective social media advertising plan: https://github.com/aattaran/deepclaude/commit/a90a399682defc... (which seems to be working)

yard2010 6 hours ago||
How come such slop is allowed here? What value do these vibe coded zero-shot "projects" add? Why not just post the prompt?
woctordho 3 hours ago|||
For the same reason that GitHub has a releases page for uploading binaries.
fragmede 6 hours ago||||
Convenience? Am I supposed to take the prompt and use my own tokens on it? Why should I have to do that?
otabdeveloper4 6 hours ago|||
Recruiters used to use the candidate's Github "sources" page for evaluating candidates as a kind of proof-of-work.
groestl 6 hours ago||
And recruiter agents still do.
spirit23 8 hours ago|||
So I created https://getaivo.dev, which lets you use any model in the coding agent directly. Just `aivo claude -m deepseek-v4-pro`
Tanxsinxlnx 3 hours ago||
Does it support the AWS Bedrock provider? Can I use any model with this?
spirit23 3 hours ago||
Currently no, but it can be added
btbuildem 10 hours ago|||
This in essence is what allows one to use any model with CC -- including local.
nadermx 12 hours ago|||
The AI wars have begun
heisenbit 5 hours ago|||
And they are enticing human agents to further their agendas using techniques learned from the white mice.
stingraycharles 9 hours ago||||
This has been possible since the beginning.
niobe 4 hours ago|||
Thanks, that was super easy.

I have been wanting to try CC with different models since Opus went downhill last month.

What limitations or issues have you noticed when using DeepSeek with Claude Code if any?

faangguyindia 8 hours ago||
Those who use DeepSeek V4: what level of output do you get? Codex 5.3 level, or GPT 5.4?

Is the Flash version on the level of GPT 5.4 mini?

adonese 6 hours ago|||
I tried it on a non-trivial, but also well documented and self-contained, task. It did amazingly well. I used DeepSeek V4 Pro via the DeepSeek platform. The model is very fast and also super cheap: I burned only 0.06 USD (I can only imagine what the same task would have cost me had I used, e.g., amp).

PS. Mentioning amp because I used to use it and I pay directly for tokens. I topped up 5 USD, so I'm going to keep using it and see how far it can take me. But my impression so far is even when model subsidization is done, those open source models are quite viable alternatives.

zozbot234 5 hours ago|||
> But my impression so far is even when model subsidization is done, those open source models are quite viable alternatives.

My understanding is that DeepSeek V4 Pro is going to be uniquely good at working on consumer platforms with SSD offload, due to its extremely lean KV cache. Even if you only have a slow consumer platform, you should be able to just let it grind on a huge batch of tasks in parallel entirely unattended, and wake up later to a finished job.

AIUI, people are even experimenting with offloading the KV cache itself to storage, which may unlock this batching capability even beyond physical RAM limits as contexts grow. (This used to be considered a bad idea with bulky KV caches, due to concerns about wearout and performance, but the much leaner KV cache of DeepSeek V4 changes the picture quite radically.)

torginus 2 hours ago|||
Good. It's hard to overstate how nervous most executives are about relying on cloud-based providers.

AI currently works basically by sending your entire codebase, workflow, and internal communication over the internet to some third-party provider, and your only protection is some legal document saying they pinky promise they won't train on your data.

And said promise is made by people whose entire business model relies on being able to slurp up all the licensed content on the internet and ignore said licensing, with "too big to fail" as their defense.

zozbot234 2 hours ago||
Yes, this is the most straightforward argument for local AI inference. "Why buy cloud-based SOTA AI? We have SOTA AI at home." It's great that DeepSeek may now be about to make this possible, once the support in local inference frameworks is up to the task.
adonese 5 hours ago|||
Is there any place I can read about KV caches? Excuse my ignorance, as I'm not familiar with this topic. I've read scattered notes that DeepSeek's costs are well optimized due to how their KV cache works, but I want to read more about how the KV cache relates to the inference stack and where it actually sits.

> AIUI, people are even experimenting with offloading the KV cache itself to storage, which may unlock this batching capability even beyond physical RAM limits as contexts grow.

Especially this point. Any reason this idea was considered bad? Is it due to the speed difference between GPU VRAM and system RAM?

zozbot234 4 hours ago||
KV cache generally grows linearly with your current context; it gets filled in with your prompts during prompt processing, and newly created context gets tacked on during token generation. LLM inference uses it to semantically relate the currently-processed token to its pre-existing context.

> Any reason that this idea was considered bad?

Because the KV cache was too big, even for a small context. This is still an issue with open models other than DeepSeek V4, though to a somewhat smaller extent than used to be the case. But the tiny KV of DeepSeek V4 is genuinely new.
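To make the "tiny KV" point concrete, a hedged back-of-envelope comparison. The GQA configuration below is a generic stand-in for a large dense model, and the ~576-wide latent is the figure DeepSeek published for earlier MLA models (V2/V3 era); DeepSeek V4's real dimensions aren't public in this thread, so treat all the numbers as order-of-magnitude assumptions:

```python
# Back-of-envelope KV cache sizes, to illustrate why a lean KV cache
# matters for RAM/SSD offload. All configs are illustrative assumptions,
# not published DeepSeek V4 specs.

def kv_bytes_gqa(layers, kv_heads, head_dim, ctx, bytes_per=2):
    # Standard attention: a K and a V vector per kv-head, per layer, per token.
    return 2 * layers * kv_heads * head_dim * ctx * bytes_per

def kv_bytes_mla(layers, latent_dim, ctx, bytes_per=2):
    # MLA: one compressed latent vector per layer per token; K and V are
    # reconstructed from it, so no factor of 2 and no per-head blowup.
    return layers * latent_dim * ctx * bytes_per

ctx = 128_000  # tokens of context
gqa = kv_bytes_gqa(layers=80, kv_heads=8, head_dim=128, ctx=ctx)  # 70B-class guess
mla = kv_bytes_mla(layers=61, latent_dim=576, ctx=ctx)            # V3-era MLA guess

print(f"GQA-style cache: {gqa / 2**30:.1f} GiB")  # ~39 GiB
print(f"MLA-style cache: {mla / 2**30:.1f} GiB")  # ~8 GiB
```

Even with made-up but plausible shapes, the compressed-latent cache comes out several times smaller per token, which is what makes spilling it to system RAM or SSD start to look reasonable.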

spaceman_2020 3 hours ago||||
have you used it for non coding tasks via MCP, like Figma/Paper for design or Ableton MVP for sound design?

The token cost makes it tempting to use for token-heavy tasks like this

miroljub 1 hour ago|||
> even when model subsidization is done, those open source models are quite viable alternatives.

Model inference was never subsidized. Inference is highly profitable at today's prices; that's why there are so many inference providers. My guess is that inference prices will go down as more competition starts cutting into the margin.

It's model training, development and R&D that cost a lot, and companies creating closed models don't have any business model except astroturfing and trying to recover training costs through overpriced inference.

63stack 21 minutes ago|||
It's close to Opus 4.5 for me
vitaflo 12 hours ago||
I'm not exactly sure what the point of this is. DeepSeek already has instructions for using its API with many CLIs, including Claude Code directly:

https://api-docs.deepseek.com/quick_start/agent_integrations...

varenc 7 hours ago||
The readme absolutely buries the features that are actually non-trivial: It runs a proxy to switch models mid-session, and does combined cost tracking between Anthropic and other models you might be using. The LLM that wrote the readme never updated the general project description to highlight these features.

Also the author checked in their advertising plan: https://github.com/aattaran/deepclaude/commit/a90a399682defc...

2ndorderthought 12 hours ago|||
There probably isn't a point. Someone didn't understand something, didn't research it, so they 1 shotted their first thought and sent it to the front page of HN and all of their socials. It's the future bruh
georgeburdell 7 hours ago|||
I embrace it at this point. It ends all the shilling of vibe coded tools at work that I have endured over the past year. Everyone can now make their own tools with zero obligation to coordinate beyond shared hardware resources
altmanaltman 7 hours ago||||
To be fair, HN sent it to the front page, not the user. The rest I agree.
dev_hugepages 5 hours ago|||
And now, because we all upvoted and commented on it, the new user's vibe coded slop is on the front page.
2ndorderthought 1 hour ago||
Same place same time tomorrow?
croes 11 hours ago|||
From vibe coders for vibe coders
2ndorderthought 10 hours ago|||
I don't always copy paste vibe coded project readme mds into Claude code and ask them to rewrite it but when I do... actually that's all I do now because my goal in life is to make wealthy overvalued companies wealthier.
incrudible 5 hours ago||
Anthropic is the opposite of wealthy, the more you use their service, the more money they lose. Unless you think your precious MDs being used for training data is gonna make them rich eventually.
adastra22 4 hours ago|||
Their marginal inference cost is less than what they charge for it. Normally that is considered profitable...
yard2010 3 hours ago|||
It's not the md files it's how you interact with their agents.
kordlessagain 9 hours ago|||
Problem?
TacticalCoder 45 minutes ago|||
It's really getting a lot of upvotes, so it's almost as if people feel locked in and want a way out, but...

Why would you keep using CC CLI if you want to use the much cheaper DeepSeek v4 models (Flash and Pro): isn't it the opportunity to kiss CC CLI goodbye and use something not controlled by Anthropic?

Anyone here successfully moved from CC CLI to a fully open-source project? I'm asking this as a Claude Code CLI (Sonnet/Opus) user. My "stack" is all open-source: from Linux to Emacs to what-have-you. I'd rather also have open-weight models and a fully open-source (not controlled by a single company) AI CLI.

Any suggestion for something that works well? (by "well" I mean "as well as Claude Code CLI", which is not a panacea so my bar ain't the end of the world either).

crooked-v 11 hours ago|||
I'm curious how well it actually works. I tried Deepseek with Hermes and Opencode and it seemed extremely bad about using some of the basic tools given, like the Hermes holographic memory tools, even with system prompt instructions strongly pointing them out.
ttoinou 12 hours ago||
I thought the tool format wasn't exactly the same? So plugging any AI into Claude Code requires a format conversion?
selcuka 10 hours ago|||
DeepSeek has a dedicated Anthropic-compatible endpoint [1].

[1] https://api-docs.deepseek.com/guides/anthropic_api

miroljub 34 minutes ago||
This one still lacks some features; they still recommend using their OpenAI-compatible endpoint.

But I guess Anthropic is just not capable of implementing the OpenAI API compatible client in Claude Code.

ricardobeat 12 hours ago||||
Many of them expose “anthropic-compatible” APIs for this very purpose.
faangguyindia 8 hours ago|||
qwen also offers openai compatible endpoint.
syntex 2 hours ago||
Not sure you can replace Claude with DeepSeek V4 that easily and get the same results.

From what I see while building my own agentic system in Elixir, the problem is in training for your specific harness/contracts. Claude/GPT-style models seem to be trained around very specific contracts used by the harness like tool call formats, planning structure, patching, reading files, recovering from errors, and knowing when to stop.

In practice, you either need a very strong general model that can infer and follow those contracts (expensive), or a weaker model that has been fine-tuned/trained specifically on your own agent contracts. Otherwise, the whole thing becomes flaky very quickly. And I suspect that with DeepSeek V4 you end up with the latter option.

vidarh 57 minutes ago||
There are certainly quirks, but identifying and conforming to those quirks is not that complex. E.g. I had Kimi "fix" my harness to work better with Kimi by pointing it at the (open source) kimi-cli plus web search and telling it to figure out which differences might matter. It made compaction more aggressive, and worked around some known looping issues by triggering compaction when it spotted looping tool calls. Largely, addressing the quirks tends to harden the harness for other models too. But, yeah, it is more work to make the smaller models work with, instead of against, the harness.
dandaka 1 hour ago|||
I hope they collaborate with open source harness providers (Pi, Opencode) and train models on those, so the next generation will have better integration and better overall quality.
o10449366 1 hour ago|||
Idk, my recent experience with Claude is that 4.7 barely knows how to use basic bash tools: how to properly check when programs have finished running, even basic stuff like how to run pytest suites and read the failed tests from the output without re-running the suite specifically to look for them. It's shockingly dumb given all the tooling they've built into Claude Code (the useless Monitoring tool that blocks the bash polling/sleeping that actually works, etc.).

I finally got fed up and started using GPT 5.5 over the past 4 days, and it's a breath of fresh air despite feeling much more minimal. With Claude I had to write so many hooks to enforce behaviors it wouldn't remember and lacked common sense about. GPT 5.5 does a much better job with things like knowing that the AWS CDK CLI can hang on long CloudFormation deployments and that it should actively check deployment status via the CloudFormation API rather than hanging for 30+ minutes, and it does all this without asking.

Maybe there's better tooling built into Codex too, but at least on the surface level it seems like how smart the model is makes a significant difference because Claude has more tools than I can count and still struggles to use "grep".

Edit: Like just now. I can't tell you how many times a day I see this sequence:

"Sorry, I'll run in parallel"

"Error editing file"

"File must be read first"

Repeat 10x for the 10 subagents Claude spawned and then it gets stuck until you press escape and it says "You rejected the parallel agents. Running directly now"

cpursley 2 hours ago|||
I'd love to learn more about the system you're building in Elixir, and your learnings, if any of it is public.
dalekkskaro 2 hours ago||
[flagged]
rsanek 2 hours ago||
>DeepSeek V4 Pro scores 96.4% on LiveCodeBench and costs $0.87/M output tokens

This is a heavily subsidized price and will only last until the end of the month: "The deepseek-v4-pro model is currently offered at a 75% discount, extended until 2026/05/31 15:59 UTC." [0]

The "supported backends" table is also misleading -- while OpenRouter's servers may be in the US, the only way to get the $0.44/$0.87 pricing is to pass through to the DeepSeek API, which of course is China-based. [1]

I do think the model is quite good, I myself use it through Ollama Cloud for simple tasks. But I think some folks have bought in a little too much to the marketing hype around it.

[0] https://api-docs.deepseek.com/quick_start/pricing
[1] https://openrouter.ai/deepseek/deepseek-v4-pro/providers

FooBarWidget 2 hours ago|
They expect inference prices to structurally drop once they receive their big batch of Huawei Ascend chips by the second half of the year.
justech 11 hours ago||
If you're looking for Claude Code alternatives, I would first suggest looking into pi.dev or opencode for your harness. And then for models, you can choose from OpenCode Go (IMO the most cost-effective at the moment), OpenRouter, or direct from DeepSeek. Better yet, IMO, go the Kimi route and just buy a subscription from kimi.com
mgoetzke 44 minutes ago||
I liked pi.dev, but why isn't registering endpoints and models as simple as possible? Or am I missing something? I always have to fiddle with the config file.
miroljub 31 minutes ago||
Editing config files is not necessary. Just do /login from your session, choose your provider, and there you go.
wolttam 11 hours ago|||
I’m going to throw my harness in the ring: https://codeberg.org/mlow/lmcli
taocoyote 6 hours ago||
Looks interesting. Does it offer anything special that pi.dev or opencode do not?
wolttam 6 hours ago||
Probably not, `lmcli` is very lean. I would consider it a slightly lower-level tool than either pi.dev or opencode. E.g. there is no built-in coding agent, but it's easy to build one up in the config with your own prompt (or use the example).

It's proven useful for me, and I figure others might appreciate how light of a shim it is between you and the models.

Aeroi 11 hours ago|||
Agreed. OpenCode is a strong base, and with a couple of modifications it can become a very effective harness. For my side project mouse.dev, I've been combining parts from OpenCode, Claude Code, and Hermes to build a cloud agent architecture that works well from mobile.
CharlesW 11 hours ago|||
> OpenCode is a strong base, and with a couple modifications it can become a very effective harness.

I personally didn't find it to be competitive with Claude Code as a harness. Can I ask how you modified it to perform better?

Aeroi 10 hours ago||
I haven't run formal evals, but I improved the experience for my own needs, and it feels noticeably better with these modifications:

- Claude-style subagents
- an MCP layer for higher-level tools
- Cursor-style control plane modes like Ask, Plan, Debug, and Build

The MCP layer lets the harness use things like GitHub file/code read, PR creation, web search/fetch, structured user questions, plan-mode switching, user skills, and subagents.

So the improvement is mostly from better UI/UX orchestration and tool access. There are some things from Hermes that are interesting as well.

Most of my focus has been on applying this stack to sandboxed cloud agents so you can properly code and work from mobile devices.

I can't definitively say that the stack is better or worse than Claude code, more just tuned for my use case I guess.

adobrawy 6 hours ago|||
I'm a Claude Code Web fan and a rather heavy user, so I was interested in your product. However, I couldn't find an answer on the website: what parts did you find so good that you ported them?
Aeroi 36 minutes ago||
Nothing groundbreaking, but I'll do a blog writeup on the architecture if it would be helpful for people. My focus has been on mobile.

The main pieces I've integrated for mouse.dev, inspired by Claude/Cursor, were plan mode, agent questions, subagents, pre/post hooks, context compaction, repo-local skills, and permission modes. So mostly tools like enter_plan_mode, ask_user_question, and spawn_subagent, plus .mouse/skills and .mouse/plans.

One nice feature is continuity. If you’re working on desktop and save a plan to .mouse/plans, you can pick it up later on mobile with cloud agents, or do the reverse. You can plan something from your phone, then when you’re back at your desk, review it/build it. That was my initial goal with this project because I've found the plan act loop so helpful.

Mouse Cloud Agents is mostly an OpenCode-based harness, but everything routes through our MCP/event system so it’s mobile-first and provider-agnostic.

I intentionally skipped a lot of IDE and Claude Code style desktop features. The bet is that this new style of coding is becoming less “edit files in an IDE” and more steer a capable coding chatbot.

Would love to hear from anyone reading that's iterating on harness architecture, it's been really fun to work on.

aaurelions 11 hours ago|||
Another very cost-effective option is Ollama Cloud. In a month of use, I only hit the 5-hour limit once, when I ran 8 agents simultaneously for 2 hours.
kopirgan 8 hours ago||||
On which tier?
postatic 11 hours ago|||
Definitely worth it. I have Ollama Cloud, OpenCode, and Hermes all running to test them out; working great so far.
cpursley 2 hours ago|||
How does the Kimi subscription compare to Codex and Claude Code in terms of how much mileage you get for the price? I mean, I see the prices, but I'm not sure how much usage that buys.
bakugo 11 hours ago|||
> I would first suggest looking into pi.dev

Looked into this one. Thought it was suspicious that it only had 7 open issues on github. Turns out they have a bot that auto-closes every single issue just because.

I honestly have no words.

mikeocool 8 hours ago|||
Their process is outlined here: https://github.com/badlogic/pi-mono/blob/main/CONTRIBUTING.m...

> Maintainers review auto-closed issues daily and reopen worthwhile ones. Issues that do not meet the quality bar below will not be reopened or receive a reply.

Seems like not an unreasonable way to deal with the problem of large numbers of low quality issues being submitted.

oefrha 5 hours ago|||
If that process actually happens, then there's absolutely no reason not to have the reviewing maintainer close issues after review instead. The only reasonable conclusion is that the documented process is aspirational at best, and vibed itself at worst.
cromka 6 hours ago||||
Sounds like a perfect way to agitate the community going against the established culture like that.
63stack 17 minutes ago||
The established culture on a lot of projects is that you open an issue, and then you have to keep pinging it every week otherwise the stale bot closes it with "this issue is stale, closing, but your contribution is very important to us".

It's crap either way.

altmanaltman 7 hours ago|||
But how is it any different from keeping them open?

Like, if they are going to sort through all the issues eventually (as they claim), why not just close the unworthy ones when they get to them, instead of closing all of them by default?

Is it just so that the project doesn't show open issues on its GitHub page? Because in reality they are open issues, if the maintainer will eventually go through them.

Nothing is "unreasonable" in the sense that an open source project has the right to do what it wants with its rules, but it's definitely a weird stance.

mellosouls 4 hours ago|||
They address the decision at the end of those contribution guidelines linked above, specifically:

It is a guardrail against burnout and tracker spam

It's based on their implied perspective that the majority of submissions don't follow those guidelines, which helps determine their quality threshold.

https://github.com/badlogic/pi-mono/blob/main/CONTRIBUTING.m...

oarsinsync 3 hours ago|||
> But how is it any different from keeping them open?

If all open issues are actionable items, that makes expected workload a lot easier to handle.

If most open issues are actually in "needs triage / needs review" state, you lose the signal from the noise.

The issue tracker for a project exists primarily as a tool for maintainers, not for outsiders. Yes, the maintainers could change their workflow to create a new view that only shows triaged tickets.

Or, they could ensure the default 'open' view serves their needs.

vanchor3 3 hours ago||
Somehow going through closed issues just to reopen them sounds like more effort than just using the built in label system which is made for this purpose, but maybe that's just me.
oarsinsync 1 hour ago||
I can either change my daily workflow to accommodate the noisy herd, or I can change the noisy herd to accommodate my daily workflow.
__cayenne__ 8 hours ago||||
The maintainer, Mario, sometimes declares the repo is on an “issue holiday” where issues are auto closed. This particular holiday is because there is a big refactor coming up. In non holiday periods issues can be reported as normal.
skeledrew 8 hours ago||||
They have a pretty decent explanation.

https://github.com/badlogic/pi-mono/blob/main/CONTRIBUTING.m...

DetroitThrow 6 hours ago||
"Decent" is doing some work. This is going beyond any norms I've encountered in OSS to close issues by default via a LLM or an "issue holiday".
Munksgaard 4 hours ago||
It was pretty well received when mitchellh copied the idea and formalized it into vouch.

- https://news.ycombinator.com/item?id=46930961 - https://github.com/mitchellh/vouch

LPisGood 10 hours ago||||
The idea is for it to be extremely minimal, which strikes me as a very opinionated stance, and not one whose opinions I agree with.
justinhj 8 hours ago|||
It's a very interesting project. Many popular open source projects are inundated with poor quality issues and PRs, hence the defences they are starting to erect.
DeathArrow 5 hours ago||
>If you're looking for Claude Code alternatives, I would first suggest looking into pi.dev or opencode for your harness.

While those are nice, Claude Code has the largest amount of plugins and skills I want to use.

wizhi 4 hours ago||
Aren't skills just literal plaintext files? Why not just copy them?
DeathArrow 1 hour ago||
Yes, they are .md files but they can rely on builtin behaviors in the harness or on plugins.
isege 4 hours ago||
> Claude Code is the best autonomous coding agent.

If you look at the terminal-bench@2.0 leaderboard, you'll quickly see it's actually one of the weakest agentic harnesses. Anthropic's own models score lower with Claude Code than with virtually any other harness.

So it's quite the opposite. Claude Code is arguably the worst harness to run models with.

DaanDL 3 hours ago||
Okay, but not all results on there are valid; ForgeCode, for instance, has been caught cheating in the past:

https://debugml.github.io/cheating-agents/#sneaking-the-answ...

cpursley 2 hours ago||
Those benches are completely and totally meaningless when it comes down to real world work tasks, and everyone knows it.
l5870uoo9y 4 hours ago||
> DeepSeek V4 Pro scores 96.4% on LiveCodeBench and costs $0.87/M output tokens.

Yes, and this is a temporary discount; the price increases to 3.48 USD/M on 2026/05/31 15:59 UTC.

Source: https://api-docs.deepseek.com/quick_start/pricing
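For what it's worth, the two figures quoted in this thread are self-consistent with the stated 75% discount:

```python
# $0.87/M output is quoted elsewhere in the thread as the 75%-off rate;
# the undiscounted price then works out to the $3.48/M figure above.
discounted = 0.87
full = discounted / (1 - 0.75)
print(full)  # 3.48
```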

TheServitor 6 hours ago||
It's surprisingly easy to burn through $200 worth of tokens even at ~$1/M tokens, though. No matter how many times I do the math, the coding plans are the better value.
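A rough sketch of that math; the daily token figure is purely an assumed number for illustration, not something from the thread:

```python
# At ~$1/M output tokens, $200 buys 200M tokens. Heavy agentic use can
# chew through millions of tokens a day (8M/day here is an assumption),
# so a pay-per-token budget matching a $200 plan can run out mid-month.
price_per_m = 1.00       # USD per million tokens
budget = 200.00          # USD, the plan price being compared against
tokens_per_day_m = 8     # assumed millions of tokens per day

budget_tokens_m = budget / price_per_m
days_to_burn = budget_tokens_m / tokens_per_day_m
print(days_to_burn)  # 25.0
```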
izietto 3 hours ago||
Just want to say that I ran into this very problem last week. I discovered the OpenCode agent and it works great with DeepSeek and other models. Try it out, guys.
column 3 hours ago|
Pi will blow your mind :)
aucisson_masque 2 hours ago||
No MCP.

No sub-agents. There are many ways to do this: spawn Pi instances via tmux, build your own with extensions, or install a package that does it your way.

No permission popups. Run in a container, or build your own confirmation flow with extensions inline with your environment and security requirements.

No plan mode. Write plans to files, or build it with extensions, or install a package.

No built-in to-dos. Use a TODO.md file, or build your own with extensions.

No background bash. Use tmux. Full observability, direct interaction.

diamondosas 27 minutes ago|
I have a question: does anyone have a problem with switching context between the AI and your terminal?