Posted by briankelly 4/3/2025

Senior Developer Skills in the AI Age (manuel.kiessling.net)
421 points | 318 comments
windows2020 4/3/2025|
In my travels I find writing code to be natural and relaxing--a time to reflect on what I'm doing and why. LLMs haven't helped me out too much yet.

Coding by prompt is the next lowering of the bar and vibe coding even more so. Totally great in some scenarios and adds noise in others.

denkmoon 4/4/2025||
Senior developer skills haven't changed. Wrangling paragraphs of business slop into real technical requirements, communicating these to various stakeholders, understanding how all the discrete parts of whatever system you're building fit together, being responsible for timelines, getting the rest of the team coordinated/aligned and on-track, etc.

Actually coding is a relatively small part of my job. I could use an LLM for the other parts, but my employer does not appreciate being given word salad.

8note 4/4/2025|
with the ai age, i think it's also making sure team members use ai in similar ways to get repeatable results
switch007 4/4/2025||
As much as I am still sceptical about AI tools, the past month has been a revolution as a senior dev myself.

I'm blasting through tickets, leaving more time to tutor and help junior colleagues and do refactoring. Guiding them has been a multiplier, and also a bit of an eye-opener about how little real guidance they'd been getting until now. I didn't realise how resource-constrained we'd been as a team, leaving too little time for guiding and helping them.

I don't trust the tools with writing code very often, but they are very good at architecture questions, outputting sample code, etc. A supercharged Google.

As a generalist, I feel less overwhelmed

It's probably been the most enjoyable month at this job.

atemerev 4/3/2025||
This is excellent, and matches my experience.

Those lamenting the loss of manual programming: we are free to hone our skills on personal projects, but for corporate/consulting work, you cannot ignore a 5x speed advantage. It's over. AI-assisted coding won.

skydhash 4/3/2025||
Is it really 5x? I'm more surprised by the claim that someone with 25+ years of experience would be hard pressed to learn enough Python to code the project. It's not like he's learning programming again, or was only recently exposed to OOP. Especially when you can find working code samples for the subproblems in the project.
atemerev 4/4/2025||
It is 5x if you are already a senior SE knowing your programming language really well, constantly suggesting good architecture yourself ("seed files" is a brilliant idea), and not accepting any slop / asking to rewrite things if something is not up to your standards (of course, every piece of code should be reviewed).

Otherwise, it can be 0.2x in some cases. And you should not use LLMs for anything security-related unless you are a security expert, otherwise you are screwed.

(this is SOTA as of April 2025, I expect things to become better in the near future)

skydhash 4/4/2025||
> It is 5x if you are already a senior SE knowing your programming language really well, constantly suggesting good architecture yourself ("seed files" is a brilliant idea), and not accepting any slop / asking to rewrite things if something is not up to your standards (of course, every piece of code should be reviewed).

If you know the programming language really well, that usually means you know which libraries are useful, have memorized common patterns, and have some project samples lying around. The actual speed improvement would be in typing the code, but that's usually the activity that requires the least time on any successful project. And unless you're a slow typist, I can't see 5x there.

If you're lacking in fundamentals, then it's just a skill issue, and I'd be suspicious of the result.

atemerev 4/4/2025||
"Given this code, extract all entities and create the database schema from these", "write documentation for these methods", "write test examples", "write README.md explaining how to use scripts in this directory", "refactor everything in this directory just like this example", etc etc

Everything boring can be automated and it takes five seconds compared to half an hour.

skydhash 4/4/2025||
It can only be automated if the only thing you care about is having the code/text, and not making sure they are correct.

> Given this code, extract all entities and create the database schema from these

Sometimes, the best representation for storing and loading data is not the best for manipulating it, and vice versa. Directly mapping code entities to database relations (assuming it's SQL) is a sure way to land yourself in trouble later.
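To make that concrete, here is a minimal sketch (all names hypothetical, not from the article) of how the shape that is convenient in code differs from the shape that is convenient in storage:

```python
# In memory, a nested object is easiest to work with; in SQL, a naive
# 1:1 mapping of that object (e.g. items serialized into one column)
# makes set-style queries painful. Normalizing into two tables is the
# relational shape, even though no class in the code looks like that.
import sqlite3
from dataclasses import dataclass, field

@dataclass
class Order:
    order_id: int
    items: list = field(default_factory=list)  # nested: easy to manipulate

SCHEMA = """
CREATE TABLE orders (order_id INTEGER PRIMARY KEY);
CREATE TABLE order_items (
    order_id INTEGER REFERENCES orders(order_id),
    item     TEXT
);
"""

def save(conn, order):
    # The nested object is flattened into two tables on the way in.
    conn.execute("INSERT INTO orders VALUES (?)", (order.order_id,))
    conn.executemany(
        "INSERT INTO order_items VALUES (?, ?)",
        [(order.order_id, i) for i in order.items],
    )

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
save(conn, Order(1, ["widget", "gadget"]))

# "Which orders contain a widget?" is a cheap indexed query here; with a
# blob column mirroring the in-memory list it would be a full scan.
rows = conn.execute(
    "SELECT order_id FROM order_items WHERE item = 'widget'"
).fetchall()
```

An LLM asked to "create the schema from these entities" will happily emit the 1:1 mapping, which is exactly the trouble being described.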

> write documentation for these methods

The intent of documentation is to explain how to use something and the multiple whys behind an implementation. Seeing what is there can be done with a symbol explorer. Repeating what is obvious from the name of the function is not helpful. And hallucinating something that is not there is harmful.
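A toy illustration of the difference (hypothetical function, not from the article): the first docstring restates the name, which is the kind of output being criticized; the second records a "why" that no tool can recover from the code alone:

```python
def retry_delay(attempt: int) -> float:
    """Return the retry delay for an attempt."""  # restates the name: useless
    return min(2 ** attempt * 0.1, 30.0)

def retry_delay_documented(attempt: int) -> float:
    """Exponential backoff, capped at 30 seconds.

    The cap exists because (in this hypothetical setup) the upstream
    gateway drops idle connections after ~35s; raising it past that
    would reintroduce spurious timeouts. That rationale is not visible
    anywhere in the code itself.
    """
    return min(2 ** attempt * 0.1, 30.0)
```

A model reading only the body can produce the first docstring; the second requires knowing intent it was never given.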

> write test examples

Again, the type of tests matters more than the amount. So unless you're sure that the tests are correct and the test suite really ensures that the code is viable, it's all for naught.
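For example (hypothetical function), compare a test that merely exercises the code with one that pins down the behavior that matters:

```python
def slugify(title: str) -> str:
    # Lowercase and join whitespace-separated words with hyphens.
    return "-".join(title.lower().split())

def test_runs():
    # "Coverage" without checking anything: passes even if the output
    # is wrong, which is the kind of generated test to be wary of.
    slugify("Hello World")

def test_behavior():
    # Pins down the contract, including edge cases.
    assert slugify("Hello World") == "hello-world"
    assert slugify("  spaced   out  ") == "spaced-out"  # whitespace collapsed
    assert slugify("") == ""                            # empty input
```

Both count as "test examples" in a generated suite; only the second tells you the code is viable.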

...

Your use cases assume that the output is correct. And as the hallucination risk from LLMs is non-zero, such an assumption is harmful.

atemerev 4/5/2025||
Well, of course I check the output and correct it as needed. It is still much faster than writing it myself. And less boring.

As for the documentation part: I infer that you haven't used state-of-the-art models, have you? They do not write symbol docs mechanistically. They understand what the code is _doing_. Up to their context limits, which are now 128k tokens for most models. Feed them 128k of code and more often than not they will understand what it is about. In seconds (compared to hours for humans).

skydhash 4/5/2025||
> They do not write symbol docs mechanistically. They understand what the code is _doing_.

What the code is doing is important only when you intend to modify it. Normally, what's important is how to use it. That's the whole point of design: presenting an API that hides how things happen in favor of making it easy (natural) to do something. The documentation should focus on that abstract design and its relation to the API. The concrete implementation rarely matters if you're on the other side of the API.
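A small sketch of that idea (hypothetical names): the docstring states the contract a caller relies on, while the caching behind it is an implementation detail that can change without the documentation changing:

```python
class Directory:
    """Maps usernames to numeric ids.

    Contract (what a caller needs to know): lookups are idempotent,
    and unknown names raise KeyError. How results are obtained or
    cached is deliberately not part of this documentation.
    """

    def __init__(self, source):
        self._source = source  # implementation detail: backing mapping
        self._cache = {}       # implementation detail: memoized results

    def lookup(self, name: str) -> int:
        if name not in self._cache:
            # Raises KeyError for unknown names, as the contract promises.
            self._cache[name] = self._source[name]
        return self._cache[name]

d = Directory({"ada": 1})
```

Docs generated from "what the code is doing" would describe the cache; docs written for the caller describe only the promise.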

Marsymars 4/4/2025||
This is actually a pretty compelling reason for me to suggest that my company not hire consultants/contractors to write code for us. A ton of our dev budget is already spent on untangling edge-case bugs from poorly written/understood code.
hinkley 4/4/2025||
The biggest thing I worry about with AI is that its current incarnation is anathema to the directions I think software needs to go next, and I’m at a loss to see what the judo-throw will look like that achieves that.

Rob Pike has the right idea but the wrong execution. As the amount of second and third party code we use increases, the search time goes up, and we need better facilities to reduce the amount of time you need to spend looking at the internals of one package because you need that time to look at three others. So clarity and discoverability both need to matter, and AI has no answers here, only more problems.

IMO, a lot of the success of Java comes from having provided 80% of the source code with the JDK. You could spend so much time single stepping into code that was not yours to figure out why your inputs didn’t cause the outputs you expected. But those are table stakes now.

g8oz 4/4/2025||
The bit about being able to get something workable going in an unfamiliar tech stack hits home. In a similar vein I was able to configure a VyOS router, a nushell based api client and some MSOffice automation in Powershell with AI assistance. Not a big deal in and of itself but still very useful.
noodletheworld 4/3/2025||
> Once again, the AI agent implemented this entire feature without requiring me to write any code manually.

> For controllers, I might include a small amount of essential details like the route name: [code]

Commit history: https://github.com/dx-tooling/platform-problem-monitoring-co...

Look, I honestly think this is a fair article with some good examples, but what is with this inane "I didn't write any of it myself" claim, clearly false, that every one of these articles keeps bringing up?

What’s wrong with the fact you did write some code as part of it? You clearly did.

So weird.

ManuelKiessling 4/4/2025|
True, this needs to be fixed.

What I wanted to express was that I didn’t do any of the implementation, that is, any logic.

I need to phrase this better.

ManuelKiessling 4/4/2025||
Wait, I just saw what you meant: so no, for the Python tool my message stands. I did not write any code for it myself.
aerhardt 4/4/2025||
Not OP but did you edit the code via prompts, or was the whole thing a one-shot? That particular aspect is very confusing to me, I think you should clarify it.
ManuelKiessling 4/4/2025||
Absolutely not a one-shot, many iterations.
scelerat 4/4/2025||
I find myself spending so much time correcting bad -- or perhaps, more appropriately, misguided -- code that I constantly wonder if I'm saving time. I think I am, but a much higher percentage of my time now goes to the hard work of evaluating and thinking, rather than the mentally easy things the AI is good at, the things that used to give me a little bit of a break.

Sometimes I liken the promise of AI to my experience with stereoscopic images (I have never been able to perceive them) -- I know there's something there but I frequently don't get it.

yapyap 4/4/2025||
> Context on Code Quality (via HackerNews): The HackerNews discussion included valid critiques regarding the code quality in this specific Python project example (e.g., logger configuration, custom config parsing, potential race conditions). It’s a fair point, especially given I’m not a Python expert. For this particular green-field project, my primary goal was rapid prototyping and achieving a working solution in an unfamiliar stack, prioritizing the functional outcome over idiomatic code perfection or optimizing for long-term maintainability in this specific instance. It served as an experiment to see how far AI could bridge a knowledge gap. In brown-field projects within my areas of expertise, or projects demanding higher long-term maintainability, the human review, refinement, and testing process (using the guardrails discussed later) is necessarily much more rigorous. The critiques highlight the crucial role of experienced oversight in evaluating and refining AI-generated code to meet specific quality standards.

We all know how big companies handle software: if it works, ship it. Basically, once this shit starts becoming very mainstream, companies will want to shift into their 5x modes (for their oh-so-holy investors that need to see the stock go up, obviously).

So once this sloppy prototype is seen as working, they will just ship the shit-sandwich prototype. And the developers won't know what the hell it means, so when something breaks in the future (and that is when, not if), they will need AI to fix it for them, because once again they do not understand what is going on.

What I’m seeing here is you proposing replacing one of your legs with AI and letting it do all the heavy lifting, just so you can lift heavier things for the moment.

Once this bubble crumbles, the technical debt will be big enough to sink companies. I won't feel sorry for any of the AI boosties, but I do for their families that will go into poverty.

namaria 4/5/2025|
Yup this is full on addiction mechanics. You use these generative tools, it feels great, the team and the organization feel great. But it's not warranted, the underlying thing is inherently flawed.

When the good feeling fades and you need to up the dosage, you will find that your ability to function is declining and your dependency on the generative tools is increasing. Besides, no one is thinking about the end game. If (and it's a big if) this goes to plan and these generative tools can do everything, well, at that point the only software needed is the generative tool itself, isn't it? There would be no need for anything else, so anyone building stuff on top of it, or using it to build stuff, would be SOL.

So best case scenario we all get addicted to fundamentally flawed technology because our ability to function independently has eroded too far, worst case there will be only foundational model companies operating in software.

bsdimp 4/4/2025|
Yesterday I set ChatGPT to a coding task. It utterly failed. Its error handling was extensive, but wrong. It didn't know the file formats. It couldn't write the code even when I told it the format. The structure of the code sucked. The style was worse. I've never had to work so hard for such garbage. I could have knocked it out from scratch faster and with higher quality.