On the optimistic take side: I suspect it may well turn out that software gets infused into more niches, but I'm not sure it follows that this helps on the jobs-market side. Put differently, demand for software and demand for SWEs might decouple somewhat for much of that additional software demand.
This is really just another form of automation, speeding things up. We can now make more customized software more quickly and cheaply. The market is already realizing that fact, and demand for more performant, bespoke software at lower costs/prices is increasing.
Those who are good at understanding the primary areas of concern in software design generally, and who can communicate well, will continue to be very much in demand.
The question IMO is: who will be creating the demand on the other side for all of these goods, if so many people are left without jobs? UBI, redistribution of wealth through taxes? I'm not so convinced about that ...
There is no reason why people will be left without jobs. Ultimately, a "job" is simply a superstructure for satisfying people's needs. As long as people have needs and the ability to satisfy them, there will be jobs in the market. AI changes nothing in those respects.
The people who lose their jobs prove this was always the case. No job comes with a guarantee, even ones that say or imply they do. Folks who believe their job is guaranteed to be there tomorrow are deceiving themselves.
Curious how the Specialist vs Generalist theme plays out: who is going to feel it *first* as AI gets better over time?
A humble way for devs to look at this is that in the new LLM era we are all juniors now.
A new entrant with a good attitude, curiosity and interest in learning the traditional "meta" of coding (version control, specs, testing etc) and a cutting-edge, first-rate grasp of using LLMs to assist their craft (as recommended in the article) will likely be more useful in a couple of years than a "senior" dragging their heels or dismissing LLMs as hype.
We aren't in coding Kansas anymore, junior and senior will not be so easily mapped to legacy development roles.
I think of it a bit like ebike speed limits. Previously, to go above 25mph on two-wheeled transport you needed either a lot of time training on a bicycle, which gave you the skills, or a motorcycle licence, which required you to pass a test. Now people can jump straight on a Surron and hare off at 40mph with no handling skills and no licence. Of course this leads to more accidents.
Not to say LLMs can't solve this eventually; RL approaches look very strong, and maybe some kind of AlphaZero-style self-play can be introduced. But we aren't there yet, that's for sure.
But the comparison I made was between the junior with a good attitude and an expert grasp of LLMs, and the stick-in-the-mud/uninterested "senior". That's where the demarcation between senior and junior roles will grow more ambiguous as time moves forward.
1) The AI code maintenance question: who will maintain the AI-generated code?
2) The true cost of AI: once the VC/PE money runs out and companies charge the full cost, what happens to vibe coding at that point?
1) Either you, the person owning the code, or you + LLMs, or, in the future, just the LLMs. All of these can work. And they work better with a bit of prep work.
The latest models are very good at following instructions. So instead of "write a service that does X" you can use the tools to ask for specifics (e.g. write a modular service that uses concept A and concept B to do Y; it should use the x y z tech stack; it should follow this ruleset and these conventions; before testing, run these linters and formatters; fix every env error before testing; etc.).
That's the main difference between vibe coding and LLM-assisted coding. You get to decide what you ask for, and you get to set the acceptance criteria. The key point that non-practitioners always miss is that once a capability becomes available to these models, you can layer it on top of previous capabilities and get a better end result. Higher instruction adherence -> better specs -> longer context -> better results -> better testing -> better overall loop.
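To make the "you set the acceptance criteria" point concrete, here's a minimal sketch of that loop in Python. Everything here is illustrative: call_llm is a hypothetical stand-in for whatever model API you use, and ruff/pytest are just example checks — swap in your own linters and test runners.

    import subprocess

    SPEC = """
    Write a modular service that uses concept A and concept B to do Y.
    Tech stack: x, y, z. Follow our ruleset and naming conventions.
    """

    # The acceptance criteria are decided by the human, not the model.
    CHECKS = [
        ["ruff", "check", "."],  # lint must pass
        ["pytest", "-q"],        # tests must pass
    ]

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for your model API of choice."""
        raise NotImplementedError

    def failing_checks() -> list[str]:
        failures = []
        for cmd in CHECKS:
            result = subprocess.run(cmd, capture_output=True, text=True)
            if result.returncode != 0:
                failures.append(f"$ {' '.join(cmd)}\n{result.stdout}")
        return failures

    prompt = SPEC
    for attempt in range(5):  # bounded retries, not an open-ended loop
        code = call_llm(prompt)
        # ...write `code` into the working tree here...
        failures = failing_checks()
        if not failures:
            break  # acceptance criteria met
        # Feed the concrete failures back in -- this is the layering:
        # better specs -> better results -> better testing -> better loop.
        prompt = SPEC + "\nFix these issues:\n" + "\n".join(failures)

The point isn't the specific tools; it's that the spec and the pass/fail gate live outside the model.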
2) You are confusing the fact that some labs subsidise inference costs (in exchange for data, usage metrics, etc.) with the true cost of inference for a given model size. You can already get a good indication of what the cost is today for any given model size: 3rd-party inference shops exist, and they are not subsidising costs (they have no reason to). You can do the math yourself and figure out an average cost per token for a given capability. And those open models are out; they're not gonna change, so you can get the same capability tomorrow or in 10 years (and likely at lower cost, since hardware improves, the inference stack improves, etc.).
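Back-of-the-envelope version of "do the math" (all numbers are made-up placeholders; plug in a real provider's per-token rate):

    # Hypothetical figures, just to show the shape of the calculation.
    price_per_million_tokens = 0.50   # USD, 3rd-party inference rate
    tokens_per_task = 40_000          # prompt + completion, one coding task

    cost_per_task = price_per_million_tokens * tokens_per_task / 1_000_000
    print(f"${cost_per_task:.3f} per task")  # -> $0.020 per task

Whatever the real numbers are, the calculation is the same, and for open models the capability side of it is frozen: the weights won't change, only the price per token will (likely downward).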
In a similar fashion, AI-generated code will be fed into another AI pass and regenerated or refactored. What this also means is that in most cases nobody will care about producing high-quality code. Why bother, if the AI can refactor ("recompile") it in a few minutes?