I thought about moving after 10+ years when they abandoned the commit modal and jacked up the plan prices, but I barely understand how to commit things in VS Code anyway. Let's see in 2026.
[0] https://blog.jetbrains.com/datagrip/2025/12/18/query-console...
(By “works anywhere”, I meant you can use it with any IDE or editor, or just run it from the terminal. It is cross-platform and should work on Windows too, though I'm not sure how well it would play with WSL.)
What's incredible is just how badly it works. I nearly always work with projects that mount multiple folders, and the IDE's MCP doesn't support that, so it doesn't understand which folders are open and can't interact with them. Junie has the same issue, and the AI Assistant appears to have inherited it. The issue has been open for ages and ignored by JetBrains.
I also tried out their full-line completion, and it's incomprehensibly bad, at least for Go, even with "cloud" completion enabled. I'm back to using Augment, which is Claude-based autocompletion.
I'm not sure this is true; do you have a source? Maybe you're conflating it with the recent Agentic AI Foundation & MCP news?
An explainer for others:
Not only can analyzers act as basic linters, but transformations are built right into them. Every time Claude does search-and-replace to add a parameter, I want to cry a little; this has been a solved science.
Agents + Roslyn would be productive like little else. Imagine an agent as the orchestrator, with manipulation happening through commands to an API that maintains guardrails and compilability.
Claude is already capable of writing Roslyn analyzers, and Roslyn has an API for implementing code transformations (so-called "quick fixes"), so they are already out there in library form.
It's hard to describe them to anyone who hasn't used a similarly powerful system, but essentially they enable transforms that go way beyond simple find/replace: accurate transformations that can be quite complex, up to deep reworks of the code itself.
A simple example would be transforming a foreach loop into a for loop, or transforming and optimizing LINQ statements.
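Roslyn's API is C#-only, but the idea translates across languages. Here's a rough sketch of the same foreach-to-for rewrite using Python's stdlib ast module (the transformer name and the `_i` index variable are mine, not from any tool discussed here):

```python
import ast

class ForeachToIndexed(ast.NodeTransformer):
    """Rewrite `for x in xs: ...` into an index-based loop --
    a toy analogue of Roslyn's foreach -> for refactoring."""
    def visit_For(self, node):
        self.generic_visit(node)
        # only handle the simple `for <name> in <name>:` shape
        if not (isinstance(node.target, ast.Name) and isinstance(node.iter, ast.Name)):
            return node
        seq = node.iter.id
        rng = ast.parse(f"range(len({seq}))", mode="eval").body
        assign = ast.parse(f"{node.target.id} = {seq}[_i]").body[0]
        return ast.For(target=ast.Name(id="_i", ctx=ast.Store()),
                       iter=rng, body=[assign] + node.body, orelse=node.orelse)

src = "for item in items:\n    print(item)\n"
tree = ForeachToIndexed().visit(ast.parse(src))
print(ast.unparse(ast.fix_missing_locations(tree)))
```

Because the rewrite happens on the tree, it can't produce syntactically invalid output the way textual search-and-replace can.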
And yet we find these tools unused with agentic find/replace doing the heavy lifting instead.
Whichever AI company solves LSP and compiler based deep refactoring will see their utility shoot through the roof for working with large codebases.
Like, the AI can't jump to definition! What are we fucking doing!?
It was code-named Roslyn to disambiguate it from the old compiler. It's almost 15 years old now, so I can't call it new, but it's newer than the really legacy stuff.
It essentially lets you operate on the abstract syntax tree itself, so there is background compilation that powers inspection and transformation.
Instant renaming is an obvious benefit, but you can do more powerful transformations, such as removing redundant code or transforming one syntax style into another, e.g. transforming a Fluent API into a procedural one or vice versa.
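To make "operate on the syntax tree" concrete, here's a minimal rename pass using Python's stdlib ast as a toy stand-in for Roslyn's C# API (unlike Roslyn's semantic model, this one isn't scope-aware):

```python
import ast

class RenameSymbol(ast.NodeTransformer):
    """Rename a function definition and every reference to it."""
    def __init__(self, old, new):
        self.old, self.new = old, new

    def visit_FunctionDef(self, node):
        if node.name == self.old:
            node.name = self.new
        self.generic_visit(node)
        return node

    def visit_Name(self, node):
        # matches only real identifier nodes, never strings or comments
        if node.id == self.old:
            node.id = self.new
        return node

src = "def greet(who):\n    return 'hi ' + who\n\nprint(greet('hn'))\n"
print(ast.unparse(RenameSymbol("greet", "say_hello").visit(ast.parse(src))))
```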
https://blog.jetbrains.com/fleet/2025/12/the-future-of-fleet...
And not the dozens of others you have? Do you not consider them also separate families?
Yeah, they completely didn’t see any of this coming.
Fleet is a completely different codebase.
So they’re correct; there are only two families of IDEs.
Tools like Claude Code (and Cursor) are treating the editor/CLI as a fluid canvas for the AI, whereas JetBrains treats AI as just a sidebar plugin. If they don't expose their internal refactoring tools to agents soon, the friction of switching to VS Code/CLI becomes negligible compared to the productivity gains of these agents.
After years of JetBrains PyCharm Pro, I'm seriously considering switching to Cursor. Before Supermaven was acquired, PyCharm + Supermaven felt like having superpowers ... I really wish they'd manage to somehow catch up; otherwise the path is written: crisis, acquisition by some big corp, enshittification.
One thing that I'm really missing is the automatic cursor move (jumping to the next predicted edit location).
They have an MCP server, but it doesn't provide easy access to their code metadata model. Things like "jump to definition" are not yet available.
This is really annoying, they just need to add a bit more polish and features, and they'll have a perfect counter to Cursor.
I much prefer their IDEs to, say, VS Code, but their development has been a mess for a while, with half-assed implementations and long-standing bugs.
They’ve dropped the ball over the past five years. Part of me thinks it was the war in Ukraine that did them in. The quality of the tooling and the investment in Fleet and AI slop were the death knell for me. I was slated to renew at the grandfathered price on the 17th and decided to let my subscription lapse this year because the value prop just isn’t strong enough anymore.
This is 5% of what refactoring is; the rest is large-scale re-architecting of code, where these tools are useless.
The agents can do this large-scale re-architecting if you describe exactly what you want.
IntelliJ has no moat here, because what it does well is the 5% of refactoring that is:
- rename variable/function
- extract variable/function
- find duplicate code
- add/remove/extract function parameter
- inline a function
- moving code between classes
- auto imports
Others are used more rarely and can probably be left out, but I do think it would save a lot of tokens, errors, and time.
$x$.foo($args$)
Where you add filters like "x's type is a subclass of some class", and $args$ stands for 0-n arguments. You can also access the full IntelliJ API via Groovy scripts in the filters or when computing replacement variables, if you really want.
Though most of the time, built-in refactors like 'extract to _' or 'move to' or 'inline' or 'change type signature' or 'find duplicates' are enough.
I was playing around with Codex this weekend and honestly having a great time (my opinion of it has 180'd since gpt-5.2(-codex) came out), but I was getting annoyed at it because it kept missing references when I asked it to rename or move symbols. So I built a skill that teaches it to use rope for mechanical Python codebase refactors: https://github.com/brian-yu/python-rope-refactor
Been pretty happy with it so far!
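For anyone curious what the skill drives underneath, rope's refactoring API does a project-wide, reference-aware rename in a few lines. A minimal sketch (the paths, symbol names, and offset lookup here are illustrative):

```python
from rope.base.project import Project
from rope.refactor.rename import Rename

project = Project(".")                            # project root
resource = project.get_resource("mypkg/mod.py")   # file containing the symbol
offset = resource.read().index("old_name")        # char offset of the symbol
changes = Rename(project, resource, offset).get_changes("new_name")
print(changes.get_description())                  # preview the diff first
project.do(changes)                               # apply across all files
project.close()
```

Since rope resolves references through static analysis rather than text matching, it hits every real usage and skips lookalikes in strings and comments.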
No LSP support is wild.
We have 50 years' worth of progress on top of grep, and grep is one of the worst ways to refactor a system.
Nice to see LLM companies are ignoring these teachings and speed running into disaster.
It seems to be very efficient context-wise, but at the same time it makes precise context management much harder.
Opus 4.5 is quite a magnificent improvement over Sonnet 4.5, in CC, though.
Re tfa - I accidentally discovered the new lsp support 2 days ago on a side project in rust, and it’s working very well.
And yes, +1 for Opus. Anthropic delivered a winner after fucking up the previous Opus 4.1 release.
Codex is an outsourcing company, you give specs, they give you results. No communication in between. It's very good at larger analysis tasks (code coverage, health etc). Whatever it does, it does it sloooowwwllyyy.
Claude is like a pair programmer, you can follow what it's doing, interrupt and redirect it if it starts going off track. It's very much geared towards "get it done" rather than maximum code quality.
They feel like different coworker archetypes. Codex often does better end-to-end (plan + code in one pass). Claude Code can be less consistent on the planning step, but once you give it a solid plan it’s stellar at implementation.
I probably do better with Codex mostly due to familiarity; I’ve learned how it “thinks” and how to prompt it effectively. Opus 4.5 felt awkward for me for the same reason: I’m used to the GPT-5.x / Codex interaction style. Co-workers are the inverse, they adore Opus 4.5 and feel Codex is weird.
Surprised that you don't have internal tools or skills that could do this already!
Shows how much more work there is still to be done in this space.
It's hard to quantify what sort of value those examples generate (YouTube and Amazon were already massively popular). Personally I find it very useful, but it's still hard to quantify. It's not exactly automating a whole class of jobs, although there are several YouTube transcription services that this may make obsolete.
This is why I roll my eyes every time I read doomer content that mentions an AI bubble followed by an AI winter. Even if (and objectively there's 0 chance of this happening anytime soon) everyone stops developing models tomorrow, we'll still have 5+ years of finding out how to extract every bit of value from the current models.
Of course there is a bubble. We can see it whenever these companies tell us this tech is going to cure diseases, end world hunger, and bring global prosperity; whenever they tell us it's "thinking", can "learn skills", or is "intelligent", for that matter. Companies will absolutely devalue and the market will crash when the public stops buying the snake oil they're being sold.
But at the same time, a probabilistic pattern recognition and generation model can indeed be very useful in many industries. Many of our problems can be approached by framing them in terms of statistics, and throwing data and compute at them.
So now that we've established that, and we're reaching diminishing returns of scaling up, the only logical path forward is to do some classical engineering work, which has been neglected for the past 5+ years. This is why we're seeing the bulk of gains from things like MCP and, now, "agents".
This is objectively not true. The models have improved a ton (with data from "tools" and "agentic loops", but it's still the models that become more capable).
Check out [1], a 100-LoC "LLM in a loop with just terminal access"; it is now above last year's heavily harnessed SotA.
> Gemini 3 Pro reaches 74% on SWE-bench verified with mini-swe-agent!
Sure, the models themselves have improved, but not by the same margins from a couple of years ago. E.g. the jump from GPT-3 to GPT-4 was far greater than the jump from GPT-4 to GPT-5. Currently we're seeing moderate improvements between each release, with "agents" taking up center stage. Only corporations like Google are still able to squeeze value out of hyperscale, while everyone else is more focused on engineering.
• Use `/plugin` to open Claude Code's plug-in manager
• In the Discover tab, enter `lsp` in the search box
• Use `spacebar` to enable the ones you want, then `i` to install
Hope that helps!
I am disabling it for now since my flow is fine at the moment, I'll let others validate the usefulness first.
LSP Plugin Recommendation
LSP provides code intelligence like go-to-definition and error checking.
Plugin: swift-lsp
Swift language server (SourceKit-LSP) for code intelligence
Triggered by: .swift files
Would you like to install this LSP plugin?
› 1. Yes, install swift-lsp
  2. No, not now
  3. Never for swift-lsp
  4. Disable all LSP recommendations
https://github.com/anthropics/claude-code/issues/14803#issue...
https://github.com/anthropics/claude-code/issues/13952#issue...
[1]: https://opencode.ai/
- accidental approvals when trying to queue a prompt because of the unexpected popovers
- severe performance issues when pending approval (using 100% of all cores)
- tool call failures
Having used Crush, OpenCode, aider, mistral-vibe, Gemini CLI (and the Qwen fork), and Claude Code, the clear winner is CC. Gemini/Qwen come in second, but they do lose input when you decline a requested permission on a tool call.
That said, CC also has its issues, like the flickering problem that happens in some terminals while scrolling executed command output.
But their configuration setup is the easiest and best out of all the CLI tools.
I also use OpenCode extensively, but bounce around to test out the other ones.
I keep my setup modular/composable so I can swap pieces and keep it usable by anyone (agent, human, time traveler) depending on what the task needs. In the aughts I standardized on "keep worklogs and notes on tools and refine them into runbooks" so that has translated pretty well to agentic skills/tools. (a man page is a perfectly cromulent skill, btw.)
But in all seriousness, LLMs have their strengths but we’re all wasting tokens and burning the planet unnecessarily getting LLMs to work so inefficiently. Use the best tool for the job; make the tools easier to use by LLMs. This mantra is applicable generally. Not just for coding.
those wanting lsp support in the loop have been using things such as: https://github.com/oraios/serena
Pretty sure Cursor has had this for a while.
CLIs can also have a small piece of code that connects to an LSP server. I don’t see why IDEs should be the sole beneficiary of LSP just because they were the first clients imagined by the LSP creators.
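And that piece of code really is small. Here's a minimal sketch of a CLI doing the LSP handshake over stdio; it assumes some stdio language server on PATH (pylsp is used as a stand-in, but the framing and `initialize` request are the same for any server):

```python
import json
import subprocess

def send(proc, msg):
    # LSP frames every message with a MIME-style Content-Length header
    body = json.dumps(msg).encode()
    proc.stdin.write(f"Content-Length: {len(body)}\r\n\r\n".encode() + body)
    proc.stdin.flush()

def recv(proc):
    length = 0
    while True:
        line = proc.stdout.readline().decode().strip()
        if not line:                      # blank line ends the headers
            break
        name, value = line.split(":", 1)
        if name.lower() == "content-length":
            length = int(value)
    return json.loads(proc.stdout.read(length))

proc = subprocess.Popen(["pylsp"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
send(proc, {"jsonrpc": "2.0", "id": 1, "method": "initialize",
            "params": {"processId": None, "rootUri": None, "capabilities": {}}})
msg = recv(proc)
while msg.get("id") != 1:                 # skip server-initiated notifications
    msg = recv(proc)
print(sorted(msg["result"]["capabilities"]))  # definitionProvider, hoverProvider, ...
```

From there, a go-to-definition or hover lookup is just another request carrying a file URI and a cursor position.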
So until it manages to do that, I’ll keep being bullish on what works.
Yet they preferred the CLI because it felt "more natural"
With agents, and Claude Code, we are *orchestrating* ... this is an unresolved UI/UX problem in the industry. The same reasons `kubectl` never evolved into a GUI probably apply here.
It's less about the codebase, more about the ability to conduct anything on the computer - you are closest to that in the terminal. https://backnotprop.com/blog/its-on-your-computer/
I use Zed, and unless there is some MCP server that provides the same thing as the LSP server, the Zed agent won't have access, even though it's in an IDE that supposedly has this information.
https://forum.cursor.com/t/support-of-lsp-language-server-pr...
> Feature request for product/service
>
> Cursor IDE
>
> Describe the request
>
> It would be a huge step up if agent could interact with LSP (Language Server Protocol).
>
> It would offer :
>
> - renaming all instances of a symbol over all files in one action
> - quick navigation through code: fast find of all references to a property or method
> - organize imports, format code, etc…
And last Friday a Cursor engineer replied "Thanks for the idea!"
So how does the AI agent in Cursor currently have access to LSP?
(I am most interested in having the agent use LSP for type checking, documentation of a method call, etc. rather than running slower commands)
(note, there is an open PR for Zed to pull LSP diagnostics into an AI agent thread https://github.com/zed-industries/zed/pull/42270 but it would be better if agents could make arbitrary LSP queries or something like that)
Dug through their obfuscated JS, and it looks like they forgot to re-add a function call in the LSP manager's initialize function that actually adds/indexes the LSP servers registered from plugins.
I’ve not noticed the agent deciding to use it all that much.