Posted by briankelly 4/3/2025

Senior Developer Skills in the AI Age (manuel.kiessling.net)
421 points | 318 comments
pphysch 4/4/2025|
I'm skeptical that

1. Clearly define requirements

2. Clearly sketch architecture

3. Set up code tooling suite

4. Let AI agent write the remaining code

is better price-performance than going lighter on 1-3 and, instead of 4, spending that time writing the code yourself with heavy input from LLM autocomplete, which is what LLMs are elite at.

The agent will definitely(?) write the code faster, but quality and understanding (tech debt) can suffer.

IOW the real takeaway is that knowing the requirements, architecture, and tooling is where the value is. LLM Agent value is dubious.

istjohn 4/4/2025|
We're just in a transitional moment. It's not realistic to expect LLM capabilities to leapfrog from marginally better autocomplete to self-guided autocoder without passing through a phase where it shows tantalizing hints of being able to go solo yet lacks the ability to follow through. Over the next couple years, the reliability, versatility, and robustness of LLMs as coders will steadily increase.
yoyohello13 4/4/2025||
I’ve been pretty moderate on AI, but I’ve been using the Claude CLI lately and it’s been pretty great.

First, I can still use neovim which is a massive plus for me. Second it’s been pretty awesome to offload tasks. I can say something like “write some unit tests for this file, here are some edge cases I’m particularly concerned about” then I just let it run and continue with something else. Come back a few mins later to see what it came up with. It’s a fun way to work.
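The kind of delegated output described above might look like the following hypothetical pytest sketch. The `parse_price` helper and its edge cases are invented purely for illustration (a toy implementation is included so the tests are runnable):

```python
def parse_price(text):
    """Toy helper, invented for this example: parse a price string to a float."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    return float(cleaned)

# Tests covering the edge cases a prompt like the one above might call out.
def test_plain_number():
    assert parse_price("42") == 42.0

def test_currency_symbol_and_commas():
    assert parse_price("$1,234.50") == 1234.5

def test_surrounding_whitespace():
    assert parse_price("  7.5 ") == 7.5
```

The point of the workflow is less the code itself than the hand-off: you state the edge cases you care about up front, then review the result instead of writing it.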

ramon156 4/5/2025|
This sounds scarily like tasking interns, which only makes me empathize more and more with people studying CS. At what point does it even remain profitable to train a junior dev and keep them long enough that they can be considered senior?
Quarrelsome 4/3/2025||
This is fascinating, and finally something that feels tangible, as opposed to vibes-based ideas about how AI will "take everyone's jobs" that fail to fill in the gaps in between. This actually fills those gaps.

I find it quite interesting how we can do a very large chunk of the work up front in design, in order to automate the rest of the work. It's almost as if waterfall was the better pattern all along, but we just lacked the tools at the time to make it work out.

skizm 4/3/2025||
Waterfall has always been the best model as long as specs are frozen, which is never the case.
thisdougb 4/3/2025|||
When I first started in dev, on a Unix OS, we did 'waterfall' (though we just called it releasing software, thirty years ago). We did a major release every year, minor releases every three months, and patches as and when. All this software was sent to customers on mag tapes, by courier. Minor releases were generally new features.

Definitely times were different back then. But we did release software often, and it tended to be better quality than now (because we couldn't just fix-forward). I've been in plenty of Agile companies whose software moves slower than the old days. Too much haste, not enough speed.

Specs were never frozen with waterfall.

PaulDavisThe1st 4/4/2025||
The difference between agile and waterfall only really matters at the start of a project. Once it is deployed/released/in-use, the two approaches converge, more or less.
Quarrelsome 4/3/2025||||
sure, but if you're generating the code from the specs in a very small amount of time, then suddenly it's no longer the code that is the source, it's the specs.

That's what waterfall always wanted to be and it failed because writing the code usually took a lot longer than writing the specs, but now perhaps, that is no longer the case.

datadrivenangel 4/4/2025||
Specs aren't done until the product is retired, thus, code ain't done either.
kristiandupont 4/4/2025||||
Only if you don't learn anything while developing. Which is also never the case.
zelphirkalt 4/4/2025|||
Many companies now engage in serial waterfalling.
reneherse 4/4/2025||
Great observations.

As a frontend designer, not a developer, I'm intrigued by the techniques presented by the author, though most devs commenting here seem to be objecting to the code quality. (Way above my pay grade, but hopefully a solvable problem.)

As someone who loves to nerd out on creative processes, it's interesting indeed to contemplate whether AI assisted dev would favor waterfall vs incremental project structure.

If indeed what works is waterfall dev similar to the method described in TFA, we'll want to figure out how to use iterative process elsewhere, for the sake of the many benefits when it comes to usability and utility.

To me that suggests the main area of iteration would be A) on the human factors side: UX and UI design, and B) in the initial phases of the project.

If we're using an AI-assisted "neo waterfall" approach to implementation, we'll want to be highly confident in the specifications we're basing it all on. On regular waterfall projects it's critical to reduce the need for post-launch changes due to their impact on project cost and timeline.[1] So for now it's best to assume we need to do the same for an AI-assisted implementation.

To have confidence in our specs document we'll need a fully fledged design. A "fully humane", user approved, feature complete UX and UI. It will need to be aligned with users' mental models, goals, and preferences as much as possible. It will need to work within whatever the technical constraints are and meet the business goals of the project.

Now all that is what designers should be doing anyway, but to me the stakes seem higher on a waterfall style build, even if it's AI-assisted.

So to shoulder that greater responsibility, I think design teams are going to need a slightly different playbook and a more rigorous process than what's typical nowadays. The makeup of the design team may need to change as well.

Just thinking about it now, here's a first take on what that process might be. It's an adaptation of the design techniques I currently use on non-waterfall projects.

----------

::Hypothesis for a UX and UI Design Method for AI-assisted, "Neo-Waterfall" Projects::

Main premise: Designers will need to lead a structured, iterative, comprehensive rapid prototyping phase at the beginning of a project.

| Overview: |

• In my experience, the DESIGN->BUILD->USE/LEARN model is an excellent guide for wrangling the iterative cycles of a rapid prototyping phase. With each "DBU/L" cycle we define problems to be solved, create solutions, then test them with users, etc.

• We document every segment of the DBU/L cycle, including inputs and outputs, for future reference.

• The USE/LEARN phase of the DBU/L cycle gives us feedback and insight that informs what we explore in the next iteration.

• Through multiple such iterations we gain confidence in the tradeoffs and assumptions baked into our prototypes.

• We incrementally evolve the scope of the prototypes and further organize the UX object model with every iteration. (Object Oriented UX, aka OOUX, is the key to finding our way to both beautiful data models and user experiences).

• Eventually our prototyping yields an iteration that fulfills user needs, business goals, and heeds technical constraints. That's when we can "freeze" the UX and UI models, firm up the data model and start writing the specifications for the neo-waterfall implementation.

• An additional point of technique: Extrapolating from the techniques described in TFA, it seems designers will need to do their prototyping in a medium that can later function as a keyframe constraint for the AI. (We don't want our AI agent changing the UI in the implementation phase of the waterfall project, so UI files are a necessary reference to bound its actions.)

• Therefore, we'll need to determine which mediums of UI design the AI agents can perceive and work with. Will we need a full frontend design structured in directories containing shippable markup and CSS? Or can the AI agent work with Figma files? Or is the solution somewhere in between, say with a combination of drawings, design tokens, and a generic component library?

• Finally, we'll need a method for testing the implemented UX and UI against the USE criteria we arrived at during prototyping. We should be able to synthesize these criteria from the prototyping documentation, data modeling and specification documents. We need a reasonable set of tests for both human and technical factors.

• Post launch, we should continue gathering feedback. No matter how good our original 1.0 is, software learns, wants to evolve. (Metaphorically, that is. But maybe some day soon--actually?) Designing and making changes to brownfield software originally built with AI-assistance might be a topic worthy of consideration on its own.

----------

So as a designer, that's how I would approach the general problem. Preliminary thoughts anyway. These techniques aren't novel; I use variations of them in my consulting work. But so far I've only built alongside devs made from meat :-)

I'll probably expand/refine this topic in a blog post. If anyone is interested in reading and discussing more, I can send you the link.

Email me at: scott [AT] designerwho [DOT] codes

----------

[1] For those who are new to waterfall project structure, know that unmaking and remaking the "final sausage" can be extremely complex and costly. It's easy to find huge projects that have failed completely due to the insurmountable complexity. One question for the future will be whether AI agents can be useful in such cases (no sausage pun intended).

plandis 4/4/2025||
The main questions I have with using LLMs for this purpose in a business setting are:

1. Is the company providing the model willing to indemnify _your_ company when using code generation? I know GitHub Copilot will do this with the models they provide on their hardware, but if you’re using Claude Code or Cursor with random models do they provide equal guarantees? If not I wonder if it’s only a matter of time before that landmine explodes.

2. In the US, AFAICT, software that is mostly generated by non-humans is not copyrightable. This is not an issue if you’re creating code snippets from an LLM, but if you’re generating an entire project this way then none or only small parts of the code base you generate would then be copyrightable. Do you still own the IP if it’s not copyrightable? What if someone exfiltrates your software? Do you have no or little remedy?

scandox 4/3/2025||
One of the possibly obsolete things I enjoy about working with a human junior dev is that they learn and improve. It's nice to feel all this interaction is building something.
plandis 4/4/2025|
It’s common practice to put your preferences and tips/advice into a readme solely for the LLM to consume to learn about what you want it to do.

So you’d set things like code standards (and hopefully enforce them via feedback tools), guides for writing certain architectures, etc. Then when you have the LLM start working it will first read that readme to “learn” how you want it to generally behave.

I’ve found that I typically edit this file as time goes on as a way to add semi-permanent feedback into the system. Even if your context window gets too large when you restart the LLM will start at that readme to prime itself.

That’s the closest analogy to a learning junior that I can think of.
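As a sketch, such a guidance file (tools differ on naming, e.g. `CLAUDE.md` for Claude Code or `.cursorrules` for Cursor; the contents here are entirely illustrative) might look like:

```
# Project guidelines for the AI assistant (illustrative example)

## Code standards
- Run the linter and type checker before declaring a task done.
- Prefer small, pure functions; avoid module-level mutable state.

## Architecture notes
- New endpoints go in api/; follow the existing handler/service split.

## Semi-permanent feedback
- Do not add docstrings to private helpers (team convention).
- Never edit generated files under gen/ by hand.
```

The "semi-permanent feedback" section is where corrections accumulate over time, as described above.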

thisdougb 4/3/2025||
This is interesting, thanks for posting. I've been searching for some sort of 'real' usage of AI-coding. I'm a skeptic of the current state of things, so it's useful to see real code.

I know Python, but have been coding in Go for the last few years. So I'm thinking how I'd implement this in Go.

There's a lot of code there. Do you think it's a lot, or does it not matter? It seems reasonably clear though, and easy to understand.

I'd have expected better documentation/in-line comments. Is that something that you did/didn't specify?

ManuelKiessling 4/4/2025|
With this project, I was really only interested in the resulting application, and intentionally not in the resulting code.

I really wanted to see how far I can get with that approach.

I will ask it to clean up the code and its comments and report back.

gsibble 4/3/2025||
I completely agree, as a fellow senior coder. It allows me to move significantly faster through my tasks and makes me much more productive.

It also makes coding a lot less painful because I'm not making typos or weird errors (since so much code autocompletes), so I spend less time debugging too.

overgard 4/4/2025|
I dunno, I just had Copilot sneak in a typo today that took about ten minutes of debugging to find. I certainly could have made a similar typo myself if Copilot hadn't done it for me, but, all the same, Copilot probably saved me a minute's worth of typing today and cost me ten minutes of debugging.
cube00 4/4/2025|||
The vibe bros would have you believe your prompt is at fault and that you need add "don't make typos".
overgard 4/4/2025||
True, I didn't have five paragraphs on the proper way to handle bounding boxes and the conceptual use of bounding boxes and "please don't confuse lower for upper". All my fault!
miningape 4/4/2025|||
Even if you had made a similar typo, you'd have a better understanding of the code from having written it yourself. So it likely wouldn't have taken 10 minutes to debug.
overgard 4/4/2025||
Conceptually it was a really simple bug (I was returning a bounding box and it returned (min, min) instead of (min, max)). So, I mean, the amount that AI broke it was pretty minor and it was mostly my fault for not seeing it when I generated it. But you know, if it's messing stuff up when it's generating 4 lines of code I'm not really going to trust it with an entire file, or even an entire function.
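A minimal sketch of the kind of bug described, assuming a simple 2D bounding-box helper (all code here is hypothetical, not the commenter's actual code):

```python
def bounding_box(points):
    """Return the (lower, upper) corners of the axis-aligned box around points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    lower = (min(xs), min(ys))
    # The AI-generated version had (min(xs), min(ys)) here too, i.e.
    # (min, min) instead of (min, max) -- a one-token slip that type
    # checkers and tests won't necessarily catch.
    upper = (max(xs), max(ys))
    return lower, upper
```

With the slip in place, every box collapses to a point at its lower corner, which is exactly the kind of quiet wrongness that costs ten minutes of debugging.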
justanotherunit 4/4/2025||
Interesting post, but this perspective seems to be the main focus all the time. I find this statement to describe a completely wrong usage of AI:

“This is especially noteworthy because I don’t actually know Python. Yes, with 25+ years of software development experience, I could probably write a few lines of working Python code if pressed — but I don’t truly know the language. I lack the muscle memory and intimate knowledge of its conventions and best practices.”

You should not use AI to just “do” the hard job, since, as many have mentioned, it does it poorly and sloppily. Use AI to quickly learn the advantages and disadvantages of the language; then you do not have to navigate through documentation to learn everything, just validate what the AI outputs. It's all contextual, and since you know what you want at a high level, use AI to help you understand the language.

This costs speed, yes, but I have more control and gain knowledge of the language I chose.

ManuelKiessling 4/4/2025|
I agree 100%, but in this very specific case, I really just wanted a working one-off solution that I'm not going to spend much time on going forward, AND I wanted to use it as an excuse to see how far I can go with AI tooling in a tech stack I don't know.

That being said, using AI as a teacher can be a wonderful experience. For us seniors, but also and probably more importantly, for eager and non-lazy juniors.

I have one such junior on my team who currently speed-runs through the craft because he uses AI to explain EVERYTHING to him: What is this pattern? Why should I use it? What are the downsides? And so on.

Of course I also still tutor him, as this is a main part of my job, but the availability of an AI that knows so much and always has time for him and never gets tired etc is just fantastic.

justanotherunit 4/5/2025||
Excellent insight, and that explains a lot of your decisions. Your junior example is a prime example of why AI can be such an awesome tool, just used correctly. Just awesome!
mmazing 4/4/2025||
When working with AI for software engineering assistance, I use it mainly to do three things -

1. Do piddly algorithm type stuff that I've done 1000x times and isn't complicated. (Could take or leave this, often more work than just doing it from scratch)

2. Pasting in gigantic error messages or log files to help diagnose what's going wrong. (HIGHLY recommend.)

3. Give it high level general requirements for a problem, and discuss POTENTIAL strategies instead of actually asking it to solve the problem. This usually allows me to dig down and come up with a good plan for whatever I'm doing quickly. (This is where real value is for me, personally.)

This allows me to quickly zero in on a solution, but more importantly, it helps me zero in strategically too, with less trial and error. It lets me have the equivalent of an in-person whiteboard meeting (since I can paste images and text to discuss too) where I've got someone else to bounce ideas off of.

I love it.

miningape 4/4/2025|
Same, 3 is the only use case I've found that works well enough. But I'll still usually take a look at Google / Reddit / Stack Overflow / books first, just because the information is more reliable.

But it's usually an iterative process, I find pattern A and B on google, I'll ask the LLM and it gives A, B and C. I'll google a bit more about C. Find out C isn't real. Go back and try other people commenting on it on reddit, go back to the LLM to sniff out BS, so on and so on.

vessenes 4/4/2025|
This post is pretty much my exact experience with the coding tools.

Basically, the state of the art right now can turn me into an architect/CTO who spends a lot of time complaining about poor architectural choices. Crucially, Claude does not quite understand how to greenfield-implement good architectures. 3.7 is also JUST. SO. CHATTY. It’s better than 3.5, but more annoying.

Gemini 2.5 needs one more round of coding tuning; it’s excellent, has longer context and is much better at arch, but still occasionally misformats or forgets things.

Upshot — my hobby coding can now be ‘hobby startup making’ if I’m willing to complain a lot, or write out the scaffolding and requirements docs. It provides nearly no serotonin boost from getting into flow and delivering something awesome, but it does let me watch YouTube on the side while it codes.

Decisions..
