Posted by nilirl 21 hours ago
The thing breaks, the salesperson says "can you check this out?" then disappears and we're back to where we started.
I don't even find this very new: many companies I've been at have tried to spin off a "fast" team to sell stuff.
But senior devs were expected to have a compounding effect even pre-AI: writing a key doc, refactoring legacy code to make it extensible, building project-specific security frameworks, and more. All of these compound the dev team's output.
I think the same will happen with agents working on an org-specific paved path set by senior devs.
I've personally noticed this a lot: multiple people can work on the same problem, but the more senior developers get far more mileage out of AI than those early in their careers.
Another difference I've noticed is how many agents one can keep running without losing awareness.
It has mostly just raised the bar on what management will expect from developers, which will shrink the workforce. The only ones who benefit are AI companies and upper management; fewer employees mean less management, so lower management will get squeezed too.
Jevons paradox is already rearing its head: I've seen data suggesting open roles in tech are at their highest since the post-pandemic slump [1]. If you're a senior leader at a company and your engineers are now capable of several times the productivity, is the logical choice to fire half of them, or to set far more ambitious goals? One assumes engineers are hired because their output is worth more than their cost. If output is higher, at least for those capable of wielding the new tools, so is the value of that employee to you.
The universal thing I'm hearing from friends at small-to-mid-size tech companies, and experiencing myself, is that senior leaders are demanding far more work than their current teams can deliver.
1: https://www.ciodive.com/news/tech-job-postings-hit-3-year-hi...
I am really skeptical of arguments based around "I can do things the model can't" because that space of things is not very large and is getting smaller every day.
The opportunity is not merely to cling to what we have for another year, but to grow: to say "together, the model and I can manage so much more complexity than before that we can do things that were not previously possible."
We haven't identified too many of those things yet, but I am certain they are coming.
>> “AI agents are the future of software development. We won’t need developers anymore to slow down the progress of a business.”
> And so, to me, a copywriter, what’s happening here is that the same message is meaning two different things to two different audiences.
I couldn't tell whether to parse this as "We will be faster without those slow developers", or more cynically as "We don't need developers to slow us down; We can now be slow with ai agents". I suspect that with creeping complexity the latter reading will hold up better for large projects.
This maps to what we believe on our team: functional vs. non-functional. AI ships functional features fast, but developers are more important than ever in making sure the non-functional aspects are taken care of.
The "speed" loop reminds me a lot of RAD (rapid application development). In fact, AI might be _the_ thing that finally delivers on RAD's promises from decades ago.
https://www.geeksforgeeks.org/software-engineering/software-...
Fine, then, I'll keep the experience to myself.