I find that when submitting a complex PR, I tend to do a self-review, adding another layer of comments above those already in the code. Seems like a nice place to stuff prompts.
> I also believe that observability is up for grabs again. We now have both the need and opportunity to take advantage of it on a whole new level. Most people were not in a position where they could build their own eBPF programs, but LLMs can
One of my big predictions for ‘26 is the industry following through with this line of reasoning. It’s now possible to quickly code up OSS projects of much higher utility and depth.
LLMs are already great at Unix-style tools: a small API and codebase that does something interesting.
I think we’ll see an explosion of small tools (and Skills wrapping their use) for more sophisticated roles like DevOps, and meta-Skills for how to build your own skill bundles for your internal systems and architecture.
And perhaps more ambitiously, I think services like Datadog will need to change their APIs or risk being disrupted; in the short term, nobody inside a walled garden is going to be able to move fast enough to keep up with the velocity that Claude + Unix tools will provide.
UI tooling is nice, but it’s not optimized for agents.
https://mikelovesrobots.substack.com/p/wheres-the-shovelware...
Where is the resulting software?
Usually the best rule of thumb is to be against anything these people are for.
Everywhere.
Remember Satya Nadella estimating that 30% of code at Microsoft was written by AI? That was in March. At this point it's ubiquitous, and invisible.
Show the PRs.
Wait until those people hit a snafu and have to debug something in prod after they mindlessly handed their brains and critical thinking to a water-wasting behemoth and atrophied their minds.
EDIT: typo, and yes I see the irony :D
(I don’t relish this future at all, myself, but I’m starting to think it really will happen soon.)
You've just described a typical run-of-the-mill company that has software. LLMs will make it easier to shoot yourself in the foot, but let's not rewrite history as if Stack Overflow coders were never a thing.
Individual results may vary, but it seems credible that thoroughly learning and using an editor like Vim or Emacs could yield a 2x productivity boost. For the most part, this has never really been pushed. If a programmer wanted to use Nano (or Notepad!), some may have found that odd, but nobody really cared. Use whatever editor you like. Even if it means leaving a 2x productivity boost on the table!
Why is it being pushed so hard that AI coding tools in particular must be used?
With LLMs you can literally ask it to generate entire libraries without activating a single neuron in your noggin. Those two do NOT compare in the slightest.
The point of adding the "prompt", or the discussion with the LLM, is learning. You can go back and see what the exact conversation was.
It's like having someone watch a livestream screen recording of you writing the code.
It's nice to have there IF you need to go back and learn something, but hardly a review requirement.
There’s even a research team that has been using this approach to generate compilable C++ from binaries and run static analysis on it, finding more vulnerabilities than source-level analysis does, without involving dynamic tracing.
Yes! Who is building this?
Either way, git will make it trivial to see which prompt belongs with which commit: it'll be in the same diff! You can write a pre-commit hook to always include the prompts in every commit (a sketch below), though I have a feeling most vibe coders always commit with -a anyway.
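For what it's worth, the hook itself is tiny. A minimal sketch, assuming your agent drops session transcripts under a .prompts/ directory (both the path and the convention are hypothetical):

    #!/usr/bin/env python3
    # .git/hooks/pre-commit -- stage any new or updated prompt
    # transcripts so they land in the same diff as the code they
    # produced. The .prompts/ location is just a made-up convention.
    import subprocess

    subprocess.run(["git", "add", "--", ".prompts/"], check=False)

(check=False so a repo without a .prompts/ directory doesn't block the commit.)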
Basically we're at a point where the agents have kinda caught up to our tooling, and we need better/different UX or paradigms for sharing sessions (including context, choices, etc.).
In many respects 2025 was a lost year for programming. People speak about tools, setups and prompts instead of algorithms, applications and architecture.
People who are not convinced are forced to speak against the new bureaucratic madness in the same way that they are forced to speak against EU ChatControl.
I think 2025 was less productive, certainly for open source, except that enthusiasts now pay the Anthropic tax (to use the term that was previously used for Windows being preinstalled on machines).
I think 2025 was more productive for me, based on measurable metrics such as code contributions to my projects and a better ability to ingest and act upon information. And I generally appreciate the Anthropic tax, because Claude has genuinely been a step-change improvement in my life.
Isn't it generally agreed upon that counting contributions, LoC, or similar metrics is a very bad way to gauge productivity?
This would've never happened without a Claude Pro (+ChatGPT) subscription.
And as I'm not American, none of them are aimed to be subscription based SaaS offerings, they're just simple CLI applications for my personal use. If someone else enjoys them, good for them. =)
I think the opposite. Natural language is the most significant new programming language in years, and this year has had a tremendous amount of progress in collectively figuring out how to use this new programming language effectively.
Hence the lost year. Instead of productively building things, we spent a lot of resources on trying to figure out how to build things.
DS tooling feels like it hit a much-needed 2.0 this year. Tools are faster, easier, more reliable, and more reproducible.
Polars+pyarrow+ibis have replaced most of my pandas usage. UDFs were the thing holding me back from these tools; this year polars hit the sweet spot there, and it's been awesome to work with.
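If anyone's curious what that sweet spot looks like, here's a minimal sketch of an element-wise UDF in a recent polars (toy data, obviously):

    import polars as pl

    df = pl.DataFrame({"x": [1.0, 4.0, 9.0]})

    # Python UDF applied per element; declaring return_dtype keeps the
    # schema known up front. Native expressions are still faster when
    # they cover your case.
    out = df.with_columns(
        pl.col("x")
        .map_elements(lambda v: v**0.5, return_dtype=pl.Float64)
        .alias("sqrt_x")
    )
    print(out)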
Marimo has made notebooks into apps. They're easier to deploy, and I can use anywidget+LLMs to build super interactive visualizations. I build a lot of internal tools on this stack now, and it actually just works.
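For those who haven't seen marimo: a notebook is just a Python file of decorated cells, and `marimo run app.py` serves it as a web app. A minimal sketch (the slider/sum example is mine, nothing special):

    import marimo

    app = marimo.App()

    @app.cell
    def _():
        import marimo as mo
        n = mo.ui.slider(1, 100, label="n")
        n  # a cell's last expression is what gets rendered
        return mo, n

    @app.cell
    def _(mo, n):
        mo.md(f"sum(1..{n.value}) = {sum(range(1, n.value + 1))}")
        return

    if __name__ == "__main__":
        app.run()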
PyMC uses JAX under the hood now, so my MCMC workflows are GPU-accelerated.
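Concretely, the JAX path is just an argument to pm.sample these days (assuming a recent PyMC with numpyro installed; toy model for illustration):

    import pymc as pm

    with pm.Model():
        mu = pm.Normal("mu", 0.0, 1.0)
        pm.Normal("obs", mu, 1.0, observed=[0.2, -0.1, 0.4])
        # nuts_sampler="numpyro" runs NUTS through JAX, which is
        # where the GPU acceleration comes from.
        idata = pm.sample(nuts_sampler="numpyro")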
All this tooling improvement means I can do more, faster, cheaper, and with higher quality.
I should probably write a blog post on this.
"There’s an AI for that" lists 44,172 AI tools for 11,349 tasks. Most of them are probably just wrappers…
Just as Cory Doctorow has "enshittification" for the internet, for AI/LLMs there should be something like "dumbification".
It reminds me of the late '90s, when everything was "World Wide Web". :)
Gold rush it is.
Every single grifter from those times is slapping AI on everything that moves or doesn't move.
But the difference is that the blockchain was (and still is) a solution looking for a problem. LLMs can solve actual problems today.
Which one is that? Endless leetcode madness? Or constant bikeshedding about today's flavor of MVC (MVI, MVVM, MVVMI) or whatever else bullshit people come up with instead of actually shipping?