Posted by bigwheels 1/26/2026
2026 is just when it picks up - it'll get exponentially worse.
I think 2026 is the year of business analysts who were unable to code. Now CC et al. are good enough that they can realize the vision, as long as one knows the requirements exactly (software design isn't that important). Programmers who didn't know the business could get by so far. Not anymore, because with these tools, the guy who knows the business can now code fairly well.
It could also be BAs being lazy and not jumping ahead of the train that is coming towards them. It feels like in this race the engineer who is willing to learn business will still have an advantage over the business person who learns tech. At least for a little while.
... until CC doesn't get it quite right and the guy who knows business doesn't know code.
Any qualified guesses?
I'm not convinced more traders on Wall Street will allocate capital more effectively, leading to economic growth.
Will more programmers grow the economy? Or should we get real jobs ;)
This makes it sound like we're back in the days of FrontPage/Dreamweaver WYSIWYG. Goodness.
If you have a ChatGPT subscription, try Codex with GPT-5.2-High or 5.2-codex High. In my experience, while much slower, it produces far better results than Opus and seems even more aggressively subsidized (more generous rate limits).
Does this not undercut everything going on here? Like, what?
No doubt that good engineers will know when and how to leverage the tool, both for coding and for improving processes (design-to-code, requirements gathering, task tracking, basic code review, etc.), improving their own productivity and that of those around them.
Motivated individuals will also leverage these tools to learn more and faster.
And yes, of course it's not the only tool one should use, and of course there's still value in talking with actual human experts to learn from. But 90% of the time you're looking for info, the LLM will dig it up for you by reading the source code of e.g. Postgres and its tests, rather than you asking on chats/Stack Overflow.
This is a transformative technology that will make great engineers even stronger, but it will weed out those who were merely valued for their very basic ability to churn something out and never cared about either engineering or coding, which is 90% of our industry.
I'm still a little iffy on the agent swarm idea. I think I will need to see it in action in an interface that works for me. To me it feels like we are anthropomorphizing agents too much, and that results in this idea that we can put agents into roles and then combine them into useful teams. I can't help seeing all agents as the same automatons, and I have trouble understanding why giving an agent different guidelines to follow, and then having it follow along with another agent, would give me better results than just fixing the context in the first place. Either that, or just working more on the code pipeline to spot issues early on - all the stuff we already test for.
Granted it's not a one-size-fits-all problem, but I'm curious if any teams have started setting up additional concrete safeguards or processes to mitigate that specific threat. It feels like a ticking time bomb.
It almost raises the question: what even is the reward? A degradation of your engineering team's fundamentals, in return for... are we actually shipping faster?
The people who wrote it were contractors long gone, or employees who have moved companies/departments/roles, or the project was long since wrapped up, or they got laid off, or they barely understood it in the first place and certainly don't remember what they were thinking back then.
Basically, "what moron wrote this insane mess... oh, me" is the default state of production code anyway. There's really no quality bar already.
What we're entering, if this comes to fruition, is a whole new era where massive amounts of code changes that engineers are vaguely familiar with are going to be deployed at a much faster pace than anything we've ever seen before. That's a whole different ballgame than the management of a few legacy services.
I wonder if there's any value in some system that preserves the chat context of a coding agent and tags the commits with a reference to it, until the feature has been sufficiently battle tested. That way you can bring them back from the dead and interrogate them for insight if something goes wrong. Probably no more useful than just having a fresh agent look at the diff in most cases, but I can certainly imagine scenarios where it's like "Oh, duh, I meant to do X but looks like I accidentally did Y instead! Here's a fix." way faster than figuring it out from scratch. Especially if that whole process can be automated and fast, worst case you just waste a few tokens.
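That's doable with plumbing git already has. A minimal sketch, assuming the agent's transcripts get archived as JSON somewhere; the Agent-Session trailer name, the session ids, and the TRANSCRIPT_DIR path are all made up for illustration:

    # Tag agent-authored commits with a pointer to the saved chat transcript,
    # so the session can be "brought back from the dead" later.
    import subprocess
    from pathlib import Path

    TRANSCRIPT_DIR = Path("~/.agent-transcripts").expanduser()  # assumed archive location

    def commit_with_session(message: str, session_id: str) -> None:
        """Commit staged changes with a trailer pointing at the agent session."""
        subprocess.run(
            ["git", "commit", "-m", message,
             "--trailer", f"Agent-Session: {session_id}"],
            check=True,
        )

    def transcript_for_commit(commit: str) -> str | None:
        """Look up the archived transcript for a commit, if it has one."""
        session_id = subprocess.run(
            ["git", "log", "-1",
             "--format=%(trailers:key=Agent-Session,valueonly)", commit],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        if not session_id:
            return None  # not an agent-authored commit (or trailer missing)
        path = TRANSCRIPT_DIR / f"{session_id}.json"
        return path.read_text() if path.exists() else None

The interrogation step is then just feeding transcript_for_commit(bad_commit) plus the failing diff into a fresh session and asking what the original intent was.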
I'm genuinely curious though if there's anything you learned from those experiences that could be applied to agent driven dev processes too.
- observe error rate uptick
- maybe dig in with apm tooling
- read actual error messages
- compare what apm and logs said to last commit/deploy
- if they look even tangentially related, deploy the previous commit (aka revert)
- if it's still not fixed, do a "debug push": basically stuff a bunch of print statements (or you can do better) around the problem to get more info
I won't say that solves every case, but definitely 90% of them (a rough sketch of that loop follows below). I think your point about preserving some amount of intent/context is good, but also, like, what are most of us doing with agents if not "loop on error message until it goes away"?
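For what it's worth, most of that loop is scriptable end to end. A rough sketch, with the monitoring query stubbed out; error_rate(), the deploy/* tag convention, and the threshold are stand-ins for whatever your stack actually exposes:

    # Watch the error rate, and if it spikes after a deploy, revert first
    # and investigate second.
    import subprocess
    import time

    THRESHOLD = 0.05  # assumed acceptable error rate; tune per service

    def error_rate(window_minutes: int = 5) -> float:
        """Stand-in for an APM query (Datadog, Prometheus, whatever you run)."""
        raise NotImplementedError("wire this up to your monitoring stack")

    def last_deploy_tag() -> str:
        # Assumes deploys are tagged deploy/*; purely an illustrative convention.
        return subprocess.run(
            ["git", "describe", "--tags", "--match", "deploy/*", "--abbrev=0"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()

    def revert_last_deploy() -> None:
        # The "deploy the previous commit (aka revert)" step.
        subprocess.run(["git", "revert", "--no-edit", last_deploy_tag()], check=True)
        # ...then trigger the normal deploy pipeline here.

    if __name__ == "__main__":
        if error_rate() > THRESHOLD:
            revert_last_deploy()
            time.sleep(300)  # give the rollback time to take effect
            if error_rate() > THRESHOLD:
                print("still broken: time for a debug push with extra logging")

If the revert fixes it, you read the diff at leisure; if not, the debug push is the next step, agent or no agent.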
As an added plus: those who already have wealth will benefit the most, instead of the masses, since the distribution and dissemination of new projects works the same as before, meaning you still need a lot of money. So no matter how clever you are with an LLM, if you don't have the means to distribute your project, you will be left in the dirt.