
Posted by danenania 4/16/2025

Show HN: Plandex v2 – open source AI coding agent for large projects and tasks (github.com)
Hey HN! I’m Dane, the creator of Plandex (https://github.com/plandex-ai/plandex), an open source AI coding agent focused especially on tackling large tasks in real world software projects.

You can watch a 2 minute demo of Plandex in action here: https://www.youtube.com/watch?v=SFSu2vNmlLk

And here’s more of a tutorial-style demo showing how Plandex can automatically debug a browser application: https://www.youtube.com/watch?v=g-_76U_nK0Y

I launched Plandex v1 here on HN a little less than a year ago (https://news.ycombinator.com/item?id=39918500).

Now I’m launching a major update, Plandex v2, which is the result of 8 months of heads-down work and is, in effect, a whole new project/product.

In short, Plandex is now a top-tier coding agent with fully autonomous capabilities. It combines models from Anthropic, OpenAI, and Google to achieve better results, more reliable agent behavior, better cost efficiency, and better performance than is possible by using only a single provider’s models.

I believe it is now one of the best tools available for working on large tasks in real world codebases with AI. It has an effective context window of 2M tokens, and can index projects of 20M tokens and beyond using tree-sitter project maps (30+ languages are supported). It can effectively find relevant context in massive million-line projects like SQLite, Redis, and Git.
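
To give a rough idea of what a project map is: for each file, Plandex keeps a lightweight outline of the top-level definitions rather than the full source. Here's a heavily simplified sketch of the idea, using the smacker/go-tree-sitter bindings purely for illustration (this is not Plandex's actual implementation):

    package main

    import (
        "context"
        "fmt"
        "os"
        "strings"

        sitter "github.com/smacker/go-tree-sitter"
        "github.com/smacker/go-tree-sitter/golang"
    )

    // mapFile prints the first line of each top-level declaration in a Go
    // file, which is roughly the per-file summary a project map stores.
    func mapFile(path string) error {
        src, err := os.ReadFile(path)
        if err != nil {
            return err
        }

        parser := sitter.NewParser()
        parser.SetLanguage(golang.GetLanguage())
        tree, err := parser.ParseCtx(context.Background(), nil, src)
        if err != nil {
            return err
        }

        root := tree.RootNode()
        for i := 0; i < int(root.ChildCount()); i++ {
            child := root.Child(i)
            switch child.Type() {
            case "function_declaration", "method_declaration", "type_declaration":
                sig := child.Content(src)
                if idx := strings.IndexByte(sig, '\n'); idx >= 0 {
                    sig = sig[:idx]
                }
                fmt.Printf("%s: %s\n", path, sig)
            }
        }
        return nil
    }

Repeat that across every file (with a grammar per language) and you get a compact index the model can scan to decide which files it actually needs to read in full.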

A bit more on some of Plandex’s key features:

- Plandex has a built-in diff review sandbox that helps you get the benefits of AI without leaving behind a mess in your project. By default, all changes accumulate in the sandbox until you approve them. The sandbox is version-controlled. You can rewind it to any previous point, and you can also create branches to try out alternative approaches.

- It offers a ‘full auto mode’ that can complete large tasks autonomously end-to-end, including high level planning, context loading, detailed planning, implementation, command execution (for dependencies, builds, tests, etc.), and debugging.

- The autonomy level is highly configurable. You can move up and down the ladder of autonomy depending on the task, your comfort level, and how you weigh cost optimization vs. effort and results.

- Models and model settings are also very configurable. There are built-in models and model packs for different use cases. You can also add custom models and model packs, and customize model settings like temperature or top-p. All model changes are version controlled, so you can use branches to try out the same task with different models. The newly released OpenAI models and the paid Gemini 2.5 Pro model will be integrated in the default model pack soon.

- It can be easily self-hosted, including a ‘local mode’ for a very fast local single-user setup with Docker.

- Cloud hosting is also available for added convenience with a couple of subscription tiers: an ‘Integrated Models’ mode that requires no other accounts or API keys and allows you to manage billing/budgeting/spending alerts and track usage centrally, and a ‘BYO API Key’ mode that allows you to use your own OpenAI/OpenRouter accounts.

I’d love to get more HNers in the Plandex Discord (https://discord.gg/plandex-ai). Please join and say hi!

And of course I’d love to hear your feedback, whether positive or negative. Thanks so much!

257 points | 81 comments
zamalek 4/17/2025|
Have you considered adding LSP support? I anticipate go-to-definition/implementation and go-to-usages being pretty useful via MCP or function calling. I started doing this for an internal tool a while back (to help with understanding some really poorly written Ruby) but I don't find any joy in coding this kind of stuff and have been hoping for someone else to do it instead.
danenania 4/17/2025|
Yeah, I've definitely thought about this. I would likely try to do it through tree-sitter to keep it as light and language-agnostic as possible vs. language-specific LSP integrations, but I agree it could be very helpful.
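
To sketch the direction I mean (very simplified, and glossing over scoping, imports, and supporting more than one language), a plain tree-sitter query can already get you a rough go-to-definition without a language server. A hypothetical helper, again using the go-tree-sitter bindings for illustration:

    package lsplike

    import (
        sitter "github.com/smacker/go-tree-sitter"
        "github.com/smacker/go-tree-sitter/golang"
    )

    // findDefinition returns the (0-based) line of the first function whose
    // name matches symbol in already-parsed source. A real integration would
    // also cover types, methods, variables, and cross-file references.
    func findDefinition(src []byte, root *sitter.Node, symbol string) (uint32, bool) {
        q, err := sitter.NewQuery(
            []byte(`(function_declaration name: (identifier) @name)`),
            golang.GetLanguage(),
        )
        if err != nil {
            return 0, false
        }

        qc := sitter.NewQueryCursor()
        qc.Exec(q, root)
        for {
            m, ok := qc.NextMatch()
            if !ok {
                break
            }
            for _, c := range m.Captures {
                if c.Node.Content(src) == symbol {
                    return c.Node.StartPoint().Row, true
                }
            }
        }
        return 0, false
    }

Exposing something like that as a tool call (or via MCP) would cover a lot of the go-to-definition/go-to-usages use case without per-language servers.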
gcanyon 4/17/2025||
> It has an effective context window of 2M tokens, and can index projects of 20M tokens and beyond using tree-sitter project maps (30+ languages are supported). It can effectively find relevant context in massive million-line projects like SQLite, Redis, and Git.

Does this possibly have non-coding-related utility for general reasoning about large volumes of text?

danenania 4/17/2025|
The project map supports markdown files (and html), so you could definitely use it to explore docs, notes, etc. if they're all in markdown/html. Plaintext files aren't currently mapped though, so just the file name would be used to determine whether to load those.
iambateman 4/16/2025||
Really cool! Looking forward to checking this out.

I really like my IDE (PHPStorm) but I want Cursor-like functionality, where it’s aware of my codebase and able to make changes iteratively. It sounds like this is what I need?

Excited to give this a go, thanks for sharing.

Btw one of the videos is private.

danenania 4/16/2025|
Thanks! I'd love to hear your feedback.

> I want Cursor-like functionality, where it’s aware of my codebase and able to make changes iteratively. It sounds like this is what I need?

Yes, Plandex uses a tree-sitter project map to identify relevant context, then makes a detailed plan, then implements each step in the plan.

> Btw one of the videos is private.

Oops, which video did you mean? Just checked them all on the README and website in incognito mode and they all seem to be working for me.

iambateman 4/16/2025||
The tutorial video : https://www.youtube.com/watch?v=VCegxOCAPq0
danenania 4/16/2025||
Oh I see, in the HN post above. Sorry about that! Seems it's too late for me to edit, but here's the correct URL - https://youtu.be/g-_76U_nK0Y

I'll ping the mods to see if they can edit it.

ako 4/17/2025||
Interesting to see that even with these types of tools, coding still takes 8 months. That is not the general impression people have of AI-assisted coding. Any thoughts on how you could improve Plandex to bring 8 months down to 1 month or less?
danenania 4/17/2025|
Another way to think about it is the 8 months of work I did would have taken years without help from AI tools (including Plandex itself).
elliot07 4/16/2025||
Congrats on the V2 launch. Does Plandex support MCP? Will take it for a test drive tonight.
danenania 4/16/2025|
Thanks! It doesn't support MCP yet, but it has some MCP-like features built-in. For example, it can launch a browser, pull in console logs or errors, and send them to the model for debugging (either step-by-step or fully automated).
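
To give a sense of what the browser piece involves (illustrative only, using chromedp; not the exact code Plandex runs):

    package main

    import (
        "context"
        "fmt"

        "github.com/chromedp/cdproto/runtime"
        "github.com/chromedp/chromedp"
    )

    // collectConsoleLogs loads a page in a headless browser and returns
    // whatever it printed to the console, ready to hand to the model as
    // debugging context. A real implementation would also synchronize
    // access to logs and capture uncaught page errors.
    func collectConsoleLogs(url string) ([]string, error) {
        ctx, cancel := chromedp.NewContext(context.Background())
        defer cancel()

        var logs []string
        chromedp.ListenTarget(ctx, func(ev interface{}) {
            if e, ok := ev.(*runtime.EventConsoleAPICalled); ok {
                for _, arg := range e.Args {
                    logs = append(logs, fmt.Sprintf("console.%s: %s", e.Type, arg.Value))
                }
            }
        })

        err := chromedp.Run(ctx,
            chromedp.Navigate(url),
            chromedp.WaitReady("body"),
        )
        if err != nil {
            return nil, err
        }
        return logs, nil
    }

From there it's a loop: run, capture, feed the output back to the model, apply its fix, and repeat.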
ErikBjare 4/16/2025||
Awesome to see you're still at it. v2 looks great, I will take it for a spin.
danenania 4/16/2025|
Thanks Erik! I'd love to hear your thoughts.
greggh 4/20/2025||
Have you tested any local models through Ollama? Did any work well enough to recommend?
mertleee 4/17/2025||
CLI is the worst possible interface for coding LLMs. Especially for "larger" projects.
danenania 4/17/2025||
There are pros and cons to different interfaces for sure. Personally, I'd want to have a CLI-based codegen tool in my toolkit even if I hadn't created Plandex, as there are benefits (environment configuration, execution control, file management, piping data, to name a few) that you can't easily get outside of a CLI.

I also personally find the IDE unwieldy for reviewing large diffs (e.g. dozens of files). I much prefer a vertically-scrolling side-by-side comparison view like GitHub's PR review UI (which Plandex offers).

shotgun 4/17/2025||
Are you saying GUI IDEs are best? Or is there an ideal kind of interface we haven't yet seen?
esafak 4/16/2025||
I think you should have put the "terminal-based" qualifier in the title and lede.
danenania 4/16/2025|
Yeah that's fair enough. The way I look at it though, a lot of the value of Plandex is in the underlying infrastructure. While I've stuck with the terminal so far in order to try to keep a narrower focus, I plan to add other clients in the future.
lsllc 4/17/2025||
You should talk to the folks at Warp, this plus Warp would be pretty interesting.
lsllc 4/17/2025|
The link in the README.md to "local-mode quickstart" seems broken.
danenania 4/17/2025|
Do you mean at the top of the README or somewhere else? The link at the top seems to be working for me.
lsllc 4/17/2025||
In the table showing the hosting options lower down in the README, the 3rd row is titled "Self-hosted/Local Mode", but the link in "Follow the *local-mode quickstart* to get started." goes to a GH 404 page.
danenania 4/17/2025||
I see—fixed it, thanks!