Posted by raphaelcosta 5 hours ago
For things that are appropriate to build with agents, I have come to hold the strong opinion that you need to go all-in. If you built it with an agent, then you fix it with an agent, you debug it with an agent, and you change it with an agent.
In that case you should not consider yourself the steward of the source code or worry about "cognitive debt" - it's literally not your job anymore. Your job is keeping the specification and the care and feeding of the agents.
If you adopt the mindset that “I’m not going to build the documentation for me, I’m going to build it for the agent”, and “I’m not going to try to use my development skills to debug something I didn’t write, I’m going to make specific interfaces for the agent to understand the state and activity of the running code”, etc.- you’ll be a lot happier and more successful.
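One way to read "specific interfaces for the agent to understand the state and activity of the running code" is a machine-readable state dump the agent can inspect instead of you debugging by hand. A minimal sketch (class and field names are illustrative, not from the comment):

```python
import json
import time

class AgentStateReporter:
    """Collects named state snapshots so an agent can inspect the
    running program without reading the source."""

    def __init__(self):
        self._state = {}

    def report(self, component, **fields):
        # Record the latest state of a component, with a timestamp.
        self._state[component] = {"ts": time.time(), **fields}

    def dump(self):
        # Emit machine-readable state for the agent to consume.
        return json.dumps(self._state, indent=2, sort_keys=True)

reporter = AgentStateReporter()
reporter.report("job_queue", depth=3, last_error=None)
reporter.report("db", pool_in_use=2, pool_size=10)
print(reporter.dump())
```

The point is that the output is structured for the agent, not prettified for a human reader.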
If you are using agents for autocomplete in your editor, or you open a separate chat window to ask a question about your code- that’s a very low level of agent usage and all your existing dev skills and responsibilities still apply.
If you’re using a planning framework like superpowers (the skill) and just laying out the spec for the program, then keep your fingers out of the source code, and don’t waste your time reading it. Have the agent explain it, showing you in the IDE, and make the agent make any changes you want.
You can inject philosophy into the agent and ensure that it sticks to it. With sufficient drilling, the LLM will begrudgingly implement it. The most important principle is SIMPLE > COMPLEX at every level, and you have to continuously monitor for it, either manually or agentically.
Otherwise, the LLM will use its tiny context window to build true spaghetti that even it can no longer fix. This is the default path, and the path that far too many have taken.
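A standing instruction of the kind described above might look like this (the file name and wording are entirely hypothetical, just one way to drill SIMPLE > COMPLEX into an agent):

```
# CLAUDE.md (illustrative)
- SIMPLE > COMPLEX at every level: prefer a function over a class,
  a class over a framework, and no dependency over a new one.
- Before adding an abstraction, state the concrete second use case
  that justifies it.
- After each change, list anything that became more complex and why.
```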
That said, it still frequently introduces subtle bugs, so I have to review every change carefully.
The real trick is learning when to use it. Some tasks are much faster to do myself, while others are faster with Claude Code.
Of course, we have had compilers and tooling, but those are the pencil and drafting board of the draftsperson. An ecosystem of packages, dependencies and APIs has evolved, but those are often just spells the software magician invokes after reading the spellbook^H^H^H^H^H^H^H^H^H stackoverflow^H^H^H^H^H^H^H^H^H^H^H^H^H API documentation.
We are going to need to build a new set of boundaries and abstractions with new handover protocols to manage this mess.
Plus, 'agile' in quite a lot of companies is really waterfall that's been broken into sprints, without the planning of proper waterfall or the discovery and learning of real Agile. The software still gets built, though. Maybe software is actually quite easy to plan.
It's the people that claim to "do agile" that invariably don't do it. But software development used to fail most of the time, and it doesn't do that anymore.
If an idea doesn't square with human nature, the idea is flawed: its supposed grounding in human nature is non-existent, and so it had no place in reality after all.
People who failed just did it wrong. /s
To be fair, the manifesto and methodology are quite good in theory. But I have just never heard of (or experienced) it working properly, and the response is always that it wasn't implemented correctly.
So much of what makes high-functioning teams work is a sense of ownership and stewardship, and what makes low-functioning teams break is a lack thereof. Someone with pride, drive, and a high standard feeling responsible for a particular area or thing.
In the past, that ownership could be individual or collective, but with AI and a lot more lane-crossing, ownership should tend toward smaller groups (or individuals).
A developer can design, but a designer needs to review it. A designer can code, but the owner of the code must review it.
This might feel like gatekeeping, but it's the only way.
Wait...
At one business I was a part of where that experiment was tried, it failed badly. In reality, people were being switched around on projects and the "owner" was changing every few months. The end result was quite messy, both in terms of technical debt and politics (about who is the final decision maker).
I've said this before, but people gloss over this fact.
>Someone with pride, drive, and a high standard feeling responsible for a particular area or thing.
I've also said this before, but AI-glazers just respond with "I think we may just have to let go of pride & kudos and their connection to our identity."
Most people who vibecode don't give a shit about their work. Any solution is a solution as long as it works.
>This might feel like gatekeeping, but it's the only way.
Gatekeeping is not inherently bad. We want gatekeeping.
If I'm getting surgery, I want an actual doctor with proven credentials to do it.
And to anyone claiming that software doesn't kill, please look up "Therac-25" or the 65 people that died due to Tesla's "Full Self-Driving".
Now we all know horrible managers who didn't keep up to date or use their own thinking. This will happen with AI usage too. What's more, we are expecting engineers to have a manager's mindset (by managing AI agents, product requirements, etc.). Many engineers are terrible at this and have no desire or ability to become a manager. That's why they went into engineering in the first place.
Bingo. If I wanted to spend my life managing incompetent sycophants, I would've studied for an MBA to try to rise through the ranks at McKinsey.
The funny part is that these are the same people who are upset that these folks up the food chain "do nothing".
So no wonder people aren't happy.
That’s the neat trick kiddo, they won’t. Across the industry, the messaging is clear: use AI and be more productive. Management is salivating at the idea of getting rid of people and keeping a higher share of profits for themselves. Most ICs I talk to are increasingly expressing the feeling of burnout, fear of losing jobs and resentment that AI is being pushed the way it is being pushed. I have more than a few conversations where people have clearly expressed that they are mostly focused on keeping their jobs. They don’t care about cognitive debt and some are looking forward to the time when the debt comes due.
It is depressing, but it is the reality.
Across the board, I still see people loving to over-design things that could be much simpler. LLMs haven't changed this much; they have just let people create the complicated implementations much faster.
As for over-engineering, I wouldn't be surprised if the human tendency toward skeuomorphism (combined with a loss of technical skill) creates even weirder code.
Enshittification in this area will be swift. And there will be grand articles here on HN saying "nobody could possibly have seen this coming." Yes, we did.
Which means the stuff that replaces it will also happen faster.
Overall, the quality of the software is likely to be similar, since AI has no purpose of its own, and software still largely reflects human will and thinking.
And that’s okay! Much like it’s okay to let other people write the code.
What is important is that the code written by agentic AI is adequately covered by automated tests, and that you verify the architectural plan is solid. But then, this is also what you do with your colleagues' and juniors' code.
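In practice, "covered by automated tests" means the human's contribution is the specification of behavior, encoded as tests the agent's code must keep green. A toy sketch (the function and its behavior are invented for illustration, not taken from the thread):

```python
# Suppose the agent produced this function:
def normalize_email(raw: str) -> str:
    """Lowercase an email address and strip surrounding whitespace."""
    return raw.strip().lower()

# The human's job shifts to pinning down the required behavior as
# tests that any future agent-made change must continue to pass:
def test_normalize_email():
    assert normalize_email("  Alice@Example.COM ") == "alice@example.com"
    assert normalize_email("bob@test.org") == "bob@test.org"

test_normalize_email()
print("ok")
```

The same review discipline applies whether the diff came from an agent or a junior colleague.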
The vendor was basically right at the end of the "fun" part of cranking out features, and just about to hit the "rubber meets the road" part where you start fixing bugs, finding new edge cases, discovering new hidden requirements, and realizing X% of your design assumptions were completely wrong. Oh yeah, and minor little mop-up tasks that don't wow the client, like integrating with a payment processor, integrating with our internal scheduling system, exporting invitee lists from our CRM into our app, etc.
It's possible we're in a similar cognitive debt situation to having to maintain a large, swiftly-AI-coded app. After about 6 months of stressful development, which started with what I call throwing dye in the water and eventually progressed to understanding one small feature or flow at a time, we have maybe 50% of the mental model we'd have if we'd built the app ourselves. Whole chunks of the app are still a black box to us.
It doesn't help that requirements have evolved so much since the original documentation that it's worse than useless because we can't trust it. So the code, which we don't understand, is the only documentation of the current requirements.
Of course, our internal clients are pissed because the final product is taking so much longer than expected, when they could see all these awesome shiny, happy-path, 80%-done features 6 months ago. We're in a constant fire drill. Everyone on the project is miserable. It's the least fun kind of development.
But these days, when I write in that formal style, people sometimes say it sounds like AI. That has been a difficult and frustrating point for me.
I still find the subtle difference hard to understand.
My primary editor is vim, and for a long time I used it in almost puritan fashion - this was before LLMs were mainstream.
However, I could not use vim to edit Java, even with a language server. I tried, but each time I went back to IntelliJ; the rest of the codebase, in Python, Ruby, and TypeScript, was typically fine.
The reason was twofold: because everyone was using all of the features IntelliJ had to offer, the code was structured the way IntelliJ encourages, and it used the Java design patterns that were popular at the time. Everything went through factories and managers and interfaces, and tracking them through a pure editor was almost impossible. The IDE handled it for you.
But everything else? Things I or others had to build from the ground up were built with this cognitive limitation in mind, which means I can fit everything in my head and edit with vim, even without a language server, with high efficiency.
That cognitive limitation is good for the software. It's easy to explain, easy to debug, easy to add to and subtract from. And I've come to disregard the IntelliJ way, and the "vibe code till it works" approach that is common everywhere now. The principle is KISS - keep it simple, stupid. If the AI will not do that, then you have to. It's a simple philosophical question that is more important than ever. And sadly most people still don't realize it - they will happily tack on the next "feature", with the scaling they don't need at the time and the design pattern they don't need at the time, and prematurely optimize themselves into cognitive and technical bankruptcy.
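A toy contrast of the two styles (in Python for brevity, though the thread's examples were Java; the names are made up). Both do the same thing, but only one is traceable in a plain editor:

```python
from abc import ABC, abstractmethod

# The "everything through factories and interfaces" style:
class Greeter(ABC):
    @abstractmethod
    def greet(self, name: str) -> str: ...

class EnglishGreeter(Greeter):
    def greet(self, name: str) -> str:
        return f"Hello, {name}"

class GreeterFactory:
    """Indirection with no second use case to justify it."""
    def create(self) -> Greeter:
        return EnglishGreeter()

# The KISS version: same behavior, one hop to read it.
def greet(name: str) -> str:
    return f"Hello, {name}"

assert GreeterFactory().create().greet("Ada") == greet("Ada")
print("behaviors match")
```

The abstraction earns its keep only when a second concrete implementation actually exists; until then, the plain function is the one you can follow without an IDE.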