Posted by alainrk 6 hours ago
The team decides on vague requirements, then you actually have to implement something. Well, that 'implementing' means iterating until you discover the right thing, usually through lots of finicky decisions.
Sometimes you might not care about those decisions, so you one-shot one big change. But in my experience of day-to-day work on a production app, you can 100% write all the code with Claude, yet you're still the one translating high-level requirements into "low"-level decisions.
But in the end it's nice not to have to care about the code-monkey work: going all over a codebase, making a lot of trivial changes by hand, etc.
It's not like we haven't heard that one before. Things have changed, but it's been a steady march. The sudden magic shift, at a different point for everyone, is in the individual mind.
Regarding the epiphany... since people have been heavily overusing frameworks for non-technical reasons, making their projects more complex, more brittle, more disorganized, and more difficult to maintain, they aren't going to stop just because LLMs make frameworks less necessary; the overuse wasn't necessary in the first place.
Perhaps unnecessary framework usage will drop, though, as the new hype replaces the old hype. But projects won't be better designed, better organized, or better thought-through.
Oh, you accepted that? I feel sorry for you. Many of us never did.
I disagree. At least for a little while until models improve to truly superhuman reasoning*, frameworks and libraries providing abstractions are more valuable than ever. The risk/reward for custom work vs library has just changed in unforeseen ways that are orthogonal to time and effort spent.
Not only do LLMs make customizing forks and maintaining them a lot easier, but the abstractions are now the most valuable place for humans to work, because they create a solid foundation for LLMs to build on. By building abstractions that we validate as engineers, we're encoding human-in-the-loop input without the end developer having to constantly hand-hold the agent.
What we need now are better abstractions for building verification/test suites and linting, so that agents can start to automatically self-improve their harness (rough sketch below). Skills/MCP/tools in general have had the highest impact short of model improvements, and there's so much more work to be done there.
* whether this requires full AGI or not, I don’t know.
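To make the verification-harness point concrete, here's a rough sketch of the kind of check an agent could run after every edit. Everything here is an assumption: pytest and ruff are stand-ins for whatever tools a project actually uses.

    # Hypothetical self-verification harness an agent can call after each edit:
    # run the linter and the test suite, return a compact machine-readable summary.
    import json
    import subprocess

    def run(cmd: list[str]) -> dict:
        proc = subprocess.run(cmd, capture_output=True, text=True)
        return {
            "cmd": " ".join(cmd),
            "ok": proc.returncode == 0,
            # keep only the tail so the agent's context stays small
            "output": (proc.stdout + proc.stderr)[-2000:],
        }

    def verify() -> dict:
        steps = [run(["ruff", "check", "."]), run(["pytest", "-q"])]
        return {"ok": all(s["ok"] for s in steps), "steps": steps}

    if __name__ == "__main__":
        print(json.dumps(verify(), indent=2))

The interesting part isn't the script, it's the contract: a tight pass/fail signal the agent can loop on instead of a wall of terminal output.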
I did ask AI to generate a landing page. It gave me the initial headers, footers, and styles that I used for my webapp, but I threw away everything else.
The actual problem most teams have isn't writing code — it's understanding what the code they already depend on is doing. You can vibe-code a whole app in a weekend, but when one of your 200 transitive dependencies ships a breaking change in a patch release, no amount of AI is going to help you debug why your auth flow suddenly broke.
The skill that's actually becoming more valuable isn't "writing code from scratch" — it's maintaining awareness of the ecosystem you're building on. Knowing when Node ships a security fix that affects your HTTP handling, or when a React minor changes the reconciliation behavior, or when Postgres deprecates a function you use in 50 queries.
That's the boring, unsexy part of engineering that AI doesn't solve and most developers skip until something catches fire.
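As a concrete example of that boring work, here's a rough sketch that diffs two npm lockfiles and lists every dependency, transitive ones included, whose version changed. The "packages" layout assumes npm's v2/v3 lockfile format; adjust for your ecosystem.

    # Hypothetical lockfile diff: surface every (transitive) dependency whose
    # version changed between two package-lock.json snapshots.
    import json
    import sys

    def versions(path: str) -> dict[str, str]:
        with open(path) as f:
            lock = json.load(f)
        pkgs = lock.get("packages", {})
        # keys look like "node_modules/foo"; the empty key is the root package
        return {name: info.get("version", "?") for name, info in pkgs.items() if name}

    def diff(old_path: str, new_path: str) -> None:
        old, new = versions(old_path), versions(new_path)
        for name in sorted(old.keys() | new.keys()):
            before, after = old.get(name, "absent"), new.get(name, "absent")
            if before != after:
                print(f"{name}: {before} -> {after}")

    if __name__ == "__main__":
        diff(sys.argv[1], sys.argv[2])

Run it against the lockfile before and after an update and you at least know which of the 200 dependencies to go read the changelog for.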
What? Coding agents are very capable of helping fix bugs in specific domains. Your examples are, like, the exact place where AI can add value.
You do an update, things randomly break: tell Claude to figure it out, and it can go look up the breaking changes in the new versions, read your code, tell you what happened, and fix it for you.
It does not work as well for Django, as every project I've seen using it has a different shape, but it works very well for Rails, since all projects share the same structure. However, even for Django, there are some practices a newcomer to a project should expect to find in the code, because it's Django. So maybe onboarding onto an LLM-coded project is just picking the same LLM as all the other developers, making it read the code, and learning what kind of prompts the other developers use.
By the way, would anybody mind sharing first-hand experiences of projects in which every developer is using agents? How do those agents cope with the code written by the other agents?
The main burden I see is validating the output and getting reproducible results. As with many AI solutions.
I think you are missing Consistency, unless you don't count frameworks that you write yourself as frameworks? There are 100 different ways of solving the same problem, and using a framework, off the shelf or homemade, creates consistency in the way problems are solved.
This seems even more important with AI, since you lose context on each task, so you need the agent to work within guardrails and best practices or it will make spaghetti.
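As an entirely hypothetical example of such a guardrail, here's a tiny repo-specific check that fails the build when code bypasses the agreed abstraction (the rule "no raw SQL outside db/" and the project layout are made up for illustration):

    # Hypothetical guardrail: fail CI if raw SQL appears outside the db/ package.
    import pathlib
    import sys

    ALLOWED_DIR = pathlib.Path("db")

    def offending_files() -> list[str]:
        hits = []
        for path in pathlib.Path(".").rglob("*.py"):
            # skip this check script itself and anything under db/
            if path.resolve() == pathlib.Path(__file__).resolve():
                continue
            if ALLOWED_DIR in path.parents:
                continue
            if "cursor.execute(" in path.read_text(errors="ignore"):
                hits.append(str(path))
        return hits

    if __name__ == "__main__":
        bad = offending_files()
        if bad:
            print("Raw SQL outside db/:", *bad, sep="\n  ")
            sys.exit(1)

The specific rule doesn't matter; the point is that the convention is enforced mechanically, so the agent gets immediate feedback instead of relying on context it no longer has.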