Posted by raphaelcosta 6 hours ago
I think it's great for writing tests and sanity checking changes, but I wouldn't let it write core driver code (I'm a systems programmer, so YMMV). Maybe in a month I'll think differently.
When all you've got is a hammer...
...you'll eventually stop knowing how to use other tools.
In that situation, coming in cold to a library that you haven't worked on before to make a change is the normal case, not "cognitive debt."
If you have common coding standards that all your libraries abide by, then it's much easier to dive into a new one.
Also, being able to ask an AI questions about an unfamiliar library might actually help?
Smaller teams have more agency to move and usually team members with broader responsibility and understanding of the systems. Also possibly closer to stakeholders, so are already involved in specification creation and know where automation can add value. Add an AI agent and they can pick and choose where they can be most effective at a system level.
Bigger teams have clear boundaries that stop agency - blockers due to cross team dependencies, potentially no idea what stakeholders want, just piecemeal incremental change of a bigger system specified by someone else. If all they can do is automate that limited scope it's really just like faster typing.
Not every company is going to see those boundaries and stakeholders as features, and they'll be under pressure to "mitigate those blockers to execution". That's where the cognitive debt skyrockets.
But... as team size grows, LLMs can be more valuable in other ways. Larger teams typically have larger codebases to comprehend, more users, more bug reports to triage, etc. It's SO much easier to get up to speed on a big existing codebase now.
Large teams prioritize service resilience and depth of coverage.
The ability to generate code has seemingly transposed what people think of as a "high-performing team" from one that produces quality to one that produces quantity, with the short-term gains obviously increasing long-term technical debt.
Ever since LLMs started writing decent code, I started feeling like a part of that joy of code-writing has been taken away.
Using LLMs literally leaves a developer to do (what I find is) the worst part about software development: debugging someone else’s code.
Besides this, everything feels rushed. I am under the impression that I can’t “take my time” to think about a problem anymore. It almost feels wasteful now. I have to “just do it”.
It makes me nostalgic and I feel like I’ve lost something about coding that made me enjoy it.
But it is the reality we live in and I’m adapting to it. What I’m wondering is whether I should adapt or, rather, push back.
That being said: this feels a little like it was written using AI.
This, exactly, has been the problem in cultures that have produced broken, lower-quality things in general. Don't think deeply about the problem and don't think about the long-term consequences. Just grab whatever gives some immediate result the fastest. "Jugaar."
Many people are slipping into this culture now with the new pressure for immediate production pushed by the AI crowd. It's "jugaar." It's trading short term gain for long term breakage, chaos, and pain. It's also social and economic pressure to not do things properly.
Those who want to take the time to really understand things, or to build things correctly, are mocked or punished for being slow and simple-minded. "Just do it this way, look, everyone is doing it and making more money faster!" This is also part of that culture that drags everyone into jugaar.
I know older devs that reminisce for the days of programming straight to the metal in assembly (e.g. on DOS or Amiga) and “knowing exactly what the computer is doing” which feels somehow familiar!
Even more familiar are senior devs moving to management (I know this isn't an original metaphor).
The SWEs that go all-in on AI will never understand this, because they have never known the joy of code-writing. I would even go as far as saying that many of them hate it.
Of this group, I think the majority are the same people that have joined the industry not because of an innate love for engineering, but because they saw an opportunity to make big bucks in big tech.
The software is necessarily complex due to legislative requirements, and the corpus of documentation the AI has access to just doesn't seem to capture the complexities and subtleties of the system and its related platforms.
I can churn out ACs quicker, but if I just move on to the next thing as if they're 'done' then quality is going to decline sharply. I'm currently entirely re-writing the first set of ACs it generated because the base premise was off.
This is both a prompt engineering problem and an availability-of-context documentation problem, but both involve a fairly long learning curve. Not many places do knowledge management very well, so the requisite base information just may not be complete enough, and one missing 'patch' can very much change a lot of contexts.
I did a live demo in front of the CPAs, using their documentation, and Claude asked clarification questions they hadn't thought of and exposed gaps in the old manual processes.
Am I the only one that is finding quite the opposite? I feel like a kid again, back when I had no responsibilities and infinite time to play around and build things. Being able to look at my existing tooling and say "there's a rough edge here" and then whip out the equivalent of a Milwaukee Bandfile [1] and smooth it out is making it fun to go to work again.
[1] https://www.milwaukeetool.com/products/details/m12-fuel-1-2-...