Posted by JamesSwift 12/22/2025
I thought about moving after 10+ years when they abandoned the commit modal and jacked up the plan prices, but I barely understand how to commit things in VS Code anyway. Let's see in 2026.
[0] https://blog.jetbrains.com/datagrip/2025/12/18/query-console...
It's not part of my daily driver toolbox, but they do get used a lot by me.
And I like the (recent?) update in which the changes from the current commit (not only the stage) are coloured.
Someone voiced that they liked a certain tool for a certain feature and suddenly we are judging them for it? I like that people share their thoughts and opinions.
Instead of passing judgement on why someone values something, why not ask?
For example, if you were to ask me why I chose to keep using an IDE that I had spent years of my life building muscle memory using perhaps you would get a better understanding of the specific part of the lifecycle I was at when paying for software.
It's not that the git gui was the reason why I signed up for the software in the first place. The git gui was the last reason for me not to jump ship when switching to something like neovim or helix. This was during a time when LSP was becoming popular, so refactoring tools and intellisense were finally getting better adoption outside of the JetBrains tooling. Most of this was achievable with editor du jour + lsp plugins, but the git ui was the one piece I hadn't personally solved outside of the JetBrains ecosystem.
I love using IJ + git because there are no seams in between edit and commit. For instance, with IJ, I could easily split every other line of a change into separate commits.
Maybe there's a way in git to stage only certain parts of a diff, but I'd have to go and learn another flag or command that I'm going to forget by the next time I need to do it again.
Also with IJ, I just glance at my main branch tab and the highlighting tells me what commits aren't in my checked out feature branch.
Two small examples but there are many more and it adds up.
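For anyone who does want the CLI equivalents of the two IJ features above, here's a rough sketch (the scratch-repo path and setup are purely illustrative scaffolding):

```shell
# Scratch repo in a fixed temp path so the demo is reproducible.
set -e
repo=${TMPDIR:-/tmp}/ij_git_demo
rm -rf "$repo" && mkdir -p "$repo" && cd "$repo"
git init -q
git checkout -q -b main
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "base"
git checkout -q -b feature
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "feature work"
git checkout -q main

# 1) Stage only parts of a diff (interactive, so left as a comment):
#      git add -p     # answer y/n per hunk, or 's' to split a hunk further

# 2) Commits on feature that are not on the checked-out branch:
git log --oneline main..feature
```

`git add -p` covers the hunk-splitting case, and `main..feature` range syntax covers the "what commits aren't here yet" glance, though neither is as frictionless as the IDE highlighting.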
I do that at the CLI most of the time and I'd say I'm quite experienced with it, but I still prefer IntelliJ when it gets complicated.
But then I tried it and...WHAT?! Git is an endless rabbit hole of complexity.
It allows you to do stuff so much faster than having to type everything manually into the terminal. Also really enjoy the "Undo Last Commit" feature and how I can easily see all modified files at once and shuffle around stuff between the staging area.
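For the curious: "Undo Last Commit" roughly corresponds to a soft reset, which rewinds HEAD one commit but keeps the changes staged. A sketch in a throwaway repo (paths and names are just illustrative; this is the approximate CLI analogue, not necessarily what the IDE does internally):

```shell
set -e
repo=${TMPDIR:-/tmp}/undo_commit_demo
rm -rf "$repo" && mkdir -p "$repo" && cd "$repo"
git init -q
git checkout -q -b main
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "first"
echo hello > file.txt
git add file.txt
git -c user.email=demo@example.com -c user.name=demo commit -q -m "second"

# Rewind HEAD by one commit but keep its changes in the staging area:
git reset --soft HEAD~1
git status --short    # file.txt is still staged, ready to amend or re-commit
```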
(By “works anywhere”, I meant you can use it with any IDE or editor, or just run it from terminal, though it is cross-platform and should work on Windows, just not sure how well it would play with WSL.)
Maybe you’re saying that you wish VSCode itself was a TUI?
There’s a lot of decent TUI editors nowadays though, and with LSP they’re pretty on-par with vscode. Maybe try Micro? https://micro-editor.github.io/
I would never try running anything graphical in WSL anymore; it's not worth it. Lately I've been preferring VMware with a full graphical install of a Linux system.
Also like Sublime Merge, if you want a GUI (paid though)
Yeah that’s on you not even trying. Source control panel, add files or chunks, write message, commit.
What's incredible is just how badly it works. I nearly always work with projects that mount multiple folders, and the IDE's MCP doesn't support that. So it doesn't understand what folders are open and can't interact with them. Junie has the same issue, and the AI Assistant appears to have inherited it. The issue has been open for ages and ignored by JetBrains.
I also tried out their full line completion, and it's incomprehensibly bad, at least for Go, even with "cloud" completion enabled. I'm back to using Augment, which is Claude-based autocompletion.
But Augment is not the most stable. I've had lots of serious problems with it. The newest problem that's pushing me over the edge is that it's recently been causing the IDE to shoot up to use all cores (it's rare to see an app use 1,000% CPU in the macOS Activity Monitor, but it did it!) when it needs to recompute indexes, which is the only thing that has ever made my M2 Mac run its fan. It's not very reliable generally (e.g. autocompletions don't always appear), so I'd be interested in trying alternatives.
I'm not sure this is true, do you have a source? Maybe conflating this with the recent Agentic AI Foundation & MCP news?
VSCode? Select AI view via shortcut or CMD + P and you’re done. That’s how you do it.
https://blog.jetbrains.com/fleet/2025/12/the-future-of-fleet...
And not the dozens of others you have? Do you not consider them also separate families?
Yeah, they completely didn’t see any of this coming.
Fleet is a completely different codebase.
So they’re correct, there’s only two families of IDEs.
Fleet was very stable to use; it just never successfully turned into a product, and their link addresses why that happened as well.
Uncharitable but yeah, reality isn't always charitable.
Atlassian is next…
An explainer for others:
Not only can analyzers act as basic linters, but transformations are built right into them. Every time Claude does search-and-replace to add a parameter I want to cry a little; this has been a solved science.
Agents + Roslyn would be productive like little else. Imagine an agent as an orchestrator, with manipulation done through commands to an API that maintains guardrails and compilability.
Claude is already capable of writing roslyn analyzers, and roslyn has an API for implementing code transformations ( so called "quick fixes" ), so they already are out there in library form.
It's hard to describe them to anyone who hasn't used a similarly powerful system, but essentially it enables transforms that go way beyond simple find/replace. You get accurate transformations that can be quite complex and deep reworks to the code itself.
A simple example would be transforming a foreach loop into a for loop, or transforming and optimizing linq statements.
And yet we find these tools unused with agentic find/replace doing the heavy lifting instead.
Whichever AI company solves LSP and compiler based deep refactoring will see their utility shoot through the roof for working with large codebases.
It was one of the things that brought me to DataGrip in the first place
Like, the AI can't jump to definition! What are we fucking doing!?
This is why LSP support should be huge, and I'm surprised it's just a line-item in a changelog.
Days fucking around with clangd for jump to definition to sometimes work. Sigh
It was code-named to disambiguate it from the old compiler. But Roslyn is almost 15 years old now, so I can't call it new, but it's newer than the really legacy stuff.
It essentially lets you operate on the abstract syntax tree itself, so there is background compilation that powers inspection and transformation.
Instant renaming is an obvious benefit, but you can do more powerful transformations, such as removing redundant code or transforming one syntax style into another, e.g. transforming from a Fluent API into a procedural one or vice-versa.
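Roslyn is C#-specific, but the same AST-level idea can be sketched in any language with a syntax-tree API. A toy illustration using Python's `ast` module (my example, not Roslyn): rewrite `len(x) == 0` into `not x` while leaving the identical text inside a string literal untouched, which is exactly what text search/replace cannot guarantee.

```python
import ast

class LenZeroToNot(ast.NodeTransformer):
    """Rewrite `len(x) == 0` into `not x` at the AST level.

    Because we match real Compare/Call nodes, occurrences inside
    strings or comments are untouched, unlike text search/replace.
    """
    def visit_Compare(self, node):
        self.generic_visit(node)
        if (isinstance(node.left, ast.Call)
                and isinstance(node.left.func, ast.Name)
                and node.left.func.id == "len"
                and len(node.left.args) == 1
                and len(node.ops) == 1
                and isinstance(node.ops[0], ast.Eq)
                and isinstance(node.comparators[0], ast.Constant)
                and node.comparators[0].value == 0):
            return ast.UnaryOp(op=ast.Not(), operand=node.left.args[0])
        return node

src = "if len(items) == 0:\n    print('len(x) == 0')\n"
tree = LenZeroToNot().visit(ast.parse(src))
print(ast.unparse(ast.fix_missing_locations(tree)))
# The condition becomes `not items`; the string literal is untouched.
```

A real engine like Roslyn adds what this toy lacks: type information, scoping, and guaranteed recompilability after the edit.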
Tools like Claude Code (and Cursor) are treating the editor/CLI as a fluid canvas for the AI, whereas JetBrains treats AI as just a sidebar plugin. If they don't expose their internal refactoring tools to agents soon, the friction of switching to VS Code/CLI becomes negligible compared to the productivity gains of these agents.
In addition to Rust code analysis, RustRover provides many features, including code linting, code formatting, dependency management (Cargo.toml editing), UI debugging, support for web technologies and databases, and AI support (including an agentic approach with Junie).
Comparing code analysis capabilities themselves is quite difficult, because Rust is a very complex language, especially when it comes to implementing IDE-level support. Features such as macros make this even more challenging. RustRover and rust-analyzer use different approaches to solving these problems, and we also share some components. Of course, neither approach is perfect. Depending on the specific project, the developer experience may vary.
I get it, horse, you're confused, but we got places to go.
https://github.com/oraios/serena#the-serena-jetbrains-plugin
After years of JetBrains PyCharm Pro I'm seriously considering switching to Cursor. Before Supermaven was acquired, PyCharm + Supermaven felt like having superpowers... I really wish they'd manage to somehow catch up, otherwise the path is written: crisis, acquisition by some big corp, enshittification.
They have an MCP server, but it doesn't provide easy access to their code metadata model. Things like "jump to definition" are not yet available.
This is really annoying, they just need to add a bit more polish and features, and they'll have a perfect counter to Cursor.
I much prefer their ides to say vscode, but their development has been a mess for a while with half-assed implementations and long standing bugs
One thing that I'm really missing is the automatic cursor move.
To this day people still 'refactor' by doing string replacement and hoping for the best. Any serious IDE should just say no to that.
Ever thought that two vocal minorities might not overlap, or even represent opinion of a bigger group?
VSCode just scooped up that market with their remote development plugin. And it did not matter that it is an electron app. Still faster than Jetbrains.
Because their refactoring tools are not a "slap on a couple of commands and delegate actual work to external code" like LSP? Because their tools are a huge collection of tools deeply integrated into the IDE? Including custom parsers, interpreters and analysers?
They’ve dropped the ball over the past five years. Part of me thinks it was the war in Ukraine that did them in. The decline in tooling quality and the investment in Fleet and AI slop was the death knell for me. I was slated to renew at the grandfathered price on the 17th and decided to let my subscription lapse this year because the value prop just isn’t strong enough anymore.
I'm also a subscriber of over a decade, and came here to say the same thing. I don't know how their teams were distributed across eastern Europe and Russia, but the war is where I'd pinpoint the quality decline.
I've kept my subscription for now as for PHP and Symfony nothing comes close, but I'm actively looking to move away.
This is 5% of what refactoring is; the rest is large-scale re-architecting of code, where these tools are useless.
The agents can do this large-scale re-architecting if you describe exactly what you want.
IntelliJ has no moat here, because they only do well at that 5% of what refactoring is.
$x$.foo($args$)
Where you add filters like x's type is a subclass of some class, and args stands for 0-n arguments. You can also access the full IntelliJ API via Groovy scripts in the filters or when computing replacement variables, if you really want.
Though most of the time built in refactors like 'extract to _' or 'move to' or 'inline' or 'change type signature' or 'find duplicates' are enough.
- rename variable/function
- extract variable/function
- find duplicate code
- add/remove/extract function parameter
- inline a function
- moving code between classes
- auto imports
Others are used more rarely and can probably be left out, but I do think it would save a lot of tokens, errors and time.
https://gitlab.com/rhobimd-oss/shebe/-/blob/main/docs/guides...
https://gitlab.com/rhobimd-oss/shebe/-/tree/main?ref_type=he...
Then in skills or CLAUDE.md I instruct Claude to use this MCP tool to enumerate all files that need changing/updating.
• Use `/plugin` to open Claude Code's plug-in manager
• In the Discover tab, enter `lsp` in the search box
• Use `spacebar` to enable the ones you want, then `i` to install
Hope that helps!
I am disabling it for now since my flow is fine at the moment, I'll let others validate the usefulness first.
LSP Plugin Recommendation
LSP provides code intelligence like go-to-definition and error checking
Plugin: swift-lsp
Swift language server (SourceKit-LSP) for code intelligence
Triggered by: .swift files
Would you like to install this LSP plugin?
› 1. Yes, install swift-lsp
  2. No, not now
  3. Never for swift-lsp
  4. Disable all LSP recommendations
I'd be disappointed if this were a feature only for the vscode version.
https://github.com/anthropics/claude-code/issues/14803#issue...
https://github.com/anthropics/claude-code/issues/13952#issue...
https://github.com/anthropics/claude-code/issues/13952#issue...
I was playing around with codex this weekend and honestly having a great time (my opinion of it has 180'd since gpt-5.2(-codex) came out) but I was getting annoyed at it because it kept missing references when I asked it to rename or move symbols. So I built a skill that teaches it to use rope for mechanical python codebase refactors: https://github.com/brian-yu/python-rope-refactor
Been pretty happy with it so far!
No LSP support is wild.
We have 50 years' worth of progress on top of grep, and grep is one of the worst ways to refactor a system.
Nice to see LLM companies are ignoring these teachings and speed running into disaster.
I'll have to check again, because 6 months ago this stuff was pure trash and more frustrating than useful (beyond a boilerplate generator that also boils the ocean).
Opus 4.5 in Claude Code is a massive jump over 4.0 which is a massive jump over 3.7.
Each generation is being fine-tuned on a huge corpus of freshly-generated trajectories from the previous generation so things like tool use improve really quickly.
The answer is use tools that have semantic info to rename things.
Even though it has no semantic significance to the compiler, it does for all the human beings who will read it and get confused.
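As a minimal illustration of why syntax-aware renaming beats string replacement, here's a toy sketch using Python's `ast` module (my example; real tools like rope or LSP rename are additionally scope-aware, which this deliberately is not):

```python
import ast

class RenameVar(ast.NodeTransformer):
    """Rename a variable by walking Name nodes in the syntax tree.

    A naive src.replace("count", "total") would also mangle the
    unrelated identifier `counter` and the string literal "count =".
    Walking Name nodes touches only real uses of the identifier.
    (Toy sketch: unlike rope or an LSP rename, it ignores scoping.)
    """
    def __init__(self, old, new):
        self.old, self.new = old, new

    def visit_Name(self, node):
        if node.id == self.old:
            node.id = self.new
        return node

src = 'count = 1\ncounter = count + 1\nprint("count =", count)'
tree = RenameVar("count", "total").visit(ast.parse(src))
print(ast.unparse(tree))
# `counter` and the "count =" string survive; only the variable is renamed.
```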
It seems to be very efficient context-wise, but at the same time made precise context-management much harder.
Opus 4.5 is quite a magnificent improvement over Sonnet 4.5, in CC, though.
Re tfa - I accidentally discovered the new lsp support 2 days ago on a side project in rust, and it’s working very well.
- Codex required around 30 passes on that loop, Claude did it in ~5-7.
- I thought Codex's was "prettier", but both were functional.
- I dug into Claude's result in more depth, and had to fix ~5-10 things.
- Codex I didn't dig into testing quite as deeply, but it seemed to need less fixing. Still not sure if that is because of a more superficial view.
- Still a work in progress, have not completed a full document signing workflow in either.
And yes, +1 for Opus. Anthropic delivered a winner after fucking up the previous Opus 4.1 release.
Codex is an outsourcing company, you give specs, they give you results. No communication in between. It's very good at larger analysis tasks (code coverage, health etc). Whatever it does, it does it sloooowwwllyyy.
Claude is like a pair programmer, you can follow what it's doing, interrupt and redirect it if it starts going off track. It's very much geared towards "get it done" rather than maximum code quality.
They feel like different coworker archetypes. Codex often does better end-to-end (plan + code in one pass). Claude Code can be less consistent on the planning step, but once you give it a solid plan it’s stellar at implementation.
I probably do better with Codex mostly due to familiarity; I’ve learned how it “thinks” and how to prompt it effectively. Opus 4.5 felt awkward for me for the same reason: I’m used to the GPT-5.x / Codex interaction style. Co-workers are the inverse, they adore Opus 4.5 and feel Codex is weird.
Surprised that you don't have internal tools or skills that could do this already!
Shows how much more work there is still to be done in this space.
It's hard to quantify what sort of value those examples generate (YouTube and Amazon were already massively popular). Personally I find it very useful, but it's still hard to quantify. It's not exactly automating a whole class of jobs, although there are several YouTube transcription services that this may make obsolete.
This is why I roll my eyes every time I read doomer content that mentions an AI bubble followed by an AI winter. Even if (and objectively there's 0 chance of this happening anytime soon) everyone stops developing models tomorrow, we'll still have 5+ years of finding out how to extract every bit of value from the current models.
Of course there is a bubble. We can see it whenever these companies tell us this tech is going to cure diseases, end world hunger, and bring global prosperity; whenever they tell us it's "thinking", can "learn skills", or is "intelligent", for that matter. Companies will absolutely devalue and the market will crash when the public stops buying the snake oil they're being sold.
But at the same time, a probabilistic pattern recognition and generation model can indeed be very useful in many industries. Many of our problems can be approached by framing them in terms of statistics, and throwing data and compute at them.
So now that we've established that, and we're reaching diminishing returns of scaling up, the only logical path forward is to do some classical engineering work, which has been neglected for the past 5+ years. This is why we're seeing the bulk of gains from things like MCP and, now, "agents".
This is objectively not true. The models have improved a ton (with data from "tools" and "agentic loops", but it's still the models that become more capable).
Check out [1] a 100 LoC "LLM in a loop with just terminal access", it is now above last year's heavily harnessed SotA.
> Gemini 3 Pro reaches 74% on SWE-bench verified with mini-swe-agent!
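For readers who haven't seen it, the "LLM in a loop with just terminal access" pattern really is this small. A sketch with a stubbed model standing in for the real API call (`fake_model` and the message shapes are my invention, not mini-swe-agent's actual code):

```python
import subprocess

def fake_model(history):
    """Stand-in for a real LLM call; swap in an actual API client here.

    Returns either a shell command to run or a final answer, which is
    the entire decision space this pattern gives the model.
    """
    if not any(m["role"] == "tool" for m in history):
        return {"action": "shell", "command": "echo 4"}
    return {"action": "done", "answer": history[-1]["content"].strip()}

def agent_loop(task, model, max_steps=10):
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        step = model(history)
        if step["action"] == "done":
            return step["answer"]
        # The only "tool" is the terminal: run the command, feed back output.
        result = subprocess.run(step["command"], shell=True,
                                capture_output=True, text=True)
        history.append({"role": "tool", "content": result.stdout + result.stderr})
    return None  # gave up after max_steps

print(agent_loop("What is 2 + 2?", fake_model))  # -> 4
```

All of the recent capability gains land in the `model` call; the harness around it has barely needed to change.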
Sure, the models themselves have improved, but not by the same margins from a couple of years ago. E.g. the jump from GPT-3 to GPT-4 was far greater than the jump from GPT-4 to GPT-5. Currently we're seeing moderate improvements between each release, with "agents" taking up center stage. Only corporations like Google are still able to squeeze value out of hyperscale, while everyone else is more focused on engineering.
This doesn't refute the fact that this simple idea can be very useful. Especially since the utility doesn't come from invoking the model in a loop, but from integrating it with external tools and APIs, all of which requires much more code.
We've known for a long time that feeding the model with high quality contextual data can improve its performance. This is essentially what "reasoning" is. So it's no surprise that doing that repeatedly from external and accurate sources would do the same thing.
In order to back up GP's claim, they should compare models from a few years ago with modern non-reasoning models in a non-agentic workflow. Which, again, I'm not saying they haven't improved, but that the improvements have been much more marginal than before. It's surprising how many discussions derail because the person chose to argue against a point that wasn't being made.
Those are improvements to the model, albeit in service of agentic workflows. I consider that distinct from improvements to agents themselves which are things like MCP, context management, etc.
[1]: https://opencode.ai/
I think so.
The opencode TUI is very good, but whenever I try it again the results are subjectively worse than Claude Code. Supporting many more models puts them at a disadvantage when it comes to refining prompts / tool usage.
The Claude Code secret sauce seems to be running evals on real world performance and then tweaking prompts and the models themselves to make it work better.
- accidental approvals when trying to queue a prompt because of the unexpected popovers
- severe performance issues when pending approval (using 100% of all cores)
- tool call failures
having used Crush, OpenCode, aider, mistral-vibe, Gemini CLI (and the Qwen fork), and Claude Code, the clear winner is CC. Gemini/Qwen come in second but they do lose input when you decline a requested permission on a tool call.
that said, CC also has its issues, like the flickering problem that happens in some terminals while scrolling executed command output.
I also use OpenCode extensively, but bounce around to test out the other ones.
I keep my setup modular/composable so I can swap pieces and keep it usable by anyone (agent, human, time traveler) depending on what the task needs. In the aughts I standardized on "keep worklogs and notes on tools and refine them into runbooks" so that has translated pretty well to agentic skills/tools. (a man page is a perfectly cromulent skill, btw.)
But their configuration setup is the easiest and best out of all the other CLI tools
But in all seriousness, LLMs have their strengths but we’re all wasting tokens and burning the planet unnecessarily getting LLMs to work so inefficiently. Use the best tool for the job; make the tools easier to use by LLMs. This mantra is applicable generally. Not just for coding.
those wanting lsp support in the loop have been using things such as: https://github.com/oraios/serena
Pretty sure Cursor has had this for a while.
CLIs can also have a small piece of code that connects to an LSP server. I don’t see why IDEs should be the sole beneficiary of LSP just because they were the first clients imagined by the LSP creators.
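To make this concrete: LSP is just JSON-RPC over stdio with `Content-Length` framing, so the "small piece of code" is genuinely small. A sketch of the client side in Python (the server names in the comment are examples; nothing here is tied to a particular tool):

```python
import json

def frame(msg: dict) -> bytes:
    """Encode one LSP message with the Content-Length header the spec requires."""
    body = json.dumps(msg).encode("utf-8")
    return b"Content-Length: %d\r\n\r\n" % len(body) + body

def hover_request(req_id: int, path: str, line: int, char: int) -> dict:
    """Build a textDocument/hover request (LSP positions are zero-based)."""
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "textDocument/hover",
        "params": {
            "textDocument": {"uri": "file://" + path},
            "position": {"line": line, "character": char},
        },
    }

# A CLI would write frame(hover_request(...)) to the stdin of a spawned
# server process (e.g. `clangd` or `pyright-langserver --stdio`, after the
# usual `initialize` handshake) and read framed responses from its stdout.
print(frame(hover_request(1, "/tmp/example.py", 9, 4))[:40])
```

Everything past this framing layer is ordinary request/response plumbing, which is why there's no technical reason IDEs should be the only LSP clients.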
So until it manages to do that, I’ll keep being bullish on what works.
Thank you, whoever added the setting to revert back to the terminal experience.
Yet they preferred the CLI because it felt "more natural"
With agents, and Claude Code, we are *orchestrating* ... this is an unresolved UI/UX in industry. The same reasons `kubectl` didn't evolve to GUI probably apply here.
It's less about the codebase, more about the ability to conduct anything on the computer - you are closest to that in the terminal. https://backnotprop.com/blog/its-on-your-computer/
The comments are usually insightful. Even in that video the terminal ui is brought up and she mentions her preference.
I use Zed and unless there is some MCP server that provides the same thing as the LSP server, the Zed agent won't have access, even though it's in an IDE that supposedly has this information
https://forum.cursor.com/t/support-of-lsp-language-server-pr...
> Feature request for product/service
>
> Cursor IDE
>
> Describe the request
>
> It would be a huge step up if agent could interact with LSP (Language Server Protocol).
>
> It would offer :
>
> renaming all instances of a symbol over all files in one action
> quick navigation through code : fast find of all references to a property or method
> organize imports, format code, etc…
And last Friday a Cursor engineer replied "Thanks for the idea!"
So how does the AI agent in Cursor currently have access to LSP?
(I am most interested in having the agent use LSP for type checking, documentation of a method call, etc. rather than running slower commands)
(note, there is an open PR for Zed to pull LSP diagnostics into an AI agent thread https://github.com/zed-industries/zed/pull/42270 but it would be better if agents could make arbitrary LSP queries or something like that)
It would be so cool if LLMs could get the type of a variable when it's unclear (specially in languages with overloading and whatnot). Or could get autocomplete if they get stuck with a code. Really I think that agents and LSP should be hybrid, and maybe the agent could inform the LSP server of some things like things to warn (IDE diagnostics could be driven by a combination of LSP and AI agents)
I’ve not noticed the agent deciding to use it all that much.
Opus 4.5 is fairly consistent in running QA at the proper times. Lint checks and all are already incorporated into standard & native processes outside the IDE. I think lookup can be useful when definitions are hidden deep in hard-to-reach places on my disk... it hasn't been a problem though, the agent usually finds what it needs.
Anyway, here is what it stated it could do:
> Do you have access to an lsp tool?
Yes, I have an LSP tool with these operations:
- goToDefinition - Find where a symbol is defined
- findReferences - Find all references to a symbol
- hover - Get documentation/type info for a symbol
- documentSymbol - Get all symbols in a file
- workspaceSymbol - Search for symbols across the workspace
- goToImplementation - Find implementations of an interface/abstract method
- prepareCallHierarchy - Get call hierarchy item at a position
- incomingCalls - Find what calls a function
- outgoingCalls - Find what a function calls
Dug through their obfuscated JS and it looks like they forgot to re-add a function call in the LSP manager initialize function that actually adds/indexes the LSPs registered from plugins.