Posted by napolux 1 day ago

The next two years of software engineering (addyosmani.com)
288 points | 327 comments
streetcat1 20 hours ago|
For some reason the article misses two important points:

1) The AI code maintenance question: who will maintain the AI-generated code?

2) The true cost of AI: once the VC/PE money runs out and companies charge the full cost, what happens to vibe coding at that point?

cyberpunk 6 hours ago||
AI assists with the maintenance. A lot of posts seem to assume that once the code is committed the AIs just, what, go away? If you can write a test for a bug, it can likely be fully or partially fixed by an AI even today.
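
As a rough sketch of that loop (the helper and the test here are made up, purely for illustration): you pin the bug down with a failing test, then hand the test to the agent.

    # Hypothetical, self-contained example: a buggy helper plus the
    # failing regression test you would point an agent at.
    def parse_price(s):
        return float(s)  # bug: chokes on thousands separators like "1,299.00"

    def test_parse_price_accepts_thousands_separator():
        # Bug report: "1,299.00" currently raises ValueError instead of parsing.
        assert parse_price("1,299.00") == 1299.00

Once that test exists, "make this pass without breaking the other tests" is a well-scoped task that current agents can often handle on their own.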
NitpickLawyer 18 hours ago|||
I think this post is a great example of a different point made in this thread. People confuse vibe-coding with LLM-assisted coding all the time (no shade to you, OP). There is an implied bias that all LLM code is bad, unmaintainable, incomprehensible. That's not necessarily the case.

1) Either you (the person owning the code), or you + LLMs, or just the LLMs in the future. All of them can work, and they can work better with a bit of prep work.

The latest models are very good at following instructions. So instead of "write a service that does X" you can use the tools to ask for specifics (e.g. write a modular service that uses concept A and concept B to do Y; it should use x y z tech stack; it should follow this ruleset and these conventions; before testing, run these linters and formatters; fix every env error before testing; etc.).

That's the main difference between vibe-coding and LLM-assisted coding. You get to decide what you ask for, and you get to set the acceptance criteria. The key point that non-practitioners always miss is that once a capability becomes available in these models, you can layer it on top of previous capabilities and get a better end result. Higher instruction adherence -> better specs -> longer context -> better results -> better testing -> better overall loop.
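
To make that concrete, the difference in prompts looks roughly like this (the service, stack and file names are invented, purely to show the shape of a spec-style request):

    Vibe-coding:    "Write a service that syncs user data."

    LLM-assisted:   "Write a small, modular sync service using the
                     repository pattern already in /internal/store.
                     Use the existing Postgres and queue clients.
                     Follow CONVENTIONS.md. Run the linter and formatter
                     and fix every error before running the tests.
                     Acceptance: all tests under /internal/store pass."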

2) You are confusing the fact that some labs subsidise inference costs (in exchange for data, usage metrics, etc.) with the true cost of inference for a given model size. You can already get a good indication of what that cost is today for any given model size: third-party inference shops exist now, and they are not subsidising the costs (they have no reason to). You can do the math yourself and figure out an average cost per token for a given capability. And those open models are out, they're not going to change, and you can get the same capability tomorrow or in 10 years (and likely at lower cost, since hardware improves, the inference stack improves, etc.).
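
Back-of-the-envelope, with purely made-up prices (not any provider's actual rates), that math looks something like this:

    # Illustrative only: assumed third-party prices, not real quotes.
    PRICE_IN_PER_MTOK = 0.60    # assumed $ per 1M input tokens
    PRICE_OUT_PER_MTOK = 2.40   # assumed $ per 1M output tokens

    def task_cost(input_tokens, output_tokens):
        """Dollar cost of one agentic coding task at the assumed rates."""
        return (input_tokens / 1e6) * PRICE_IN_PER_MTOK \
             + (output_tokens / 1e6) * PRICE_OUT_PER_MTOK

    # A long multi-turn task that keeps re-reading context:
    print(round(task_cost(400_000, 30_000), 2))  # ~0.31 dollars

Swap in whatever per-token price a non-subsidised host actually charges for the open model you care about and you get a floor for what "full cost" looks like.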

haspok 13 hours ago||
Perhaps it helps to think about AI-generated code the way we think about machine code generated by a compiler. Who maintains the compiled program? Nobody. If you want to make changes, you recompile the source.

In a similar fashion, AI-generated code will be fed into another AI pass and regenerated or refactored. What this also means is that in most cases nobody will care about producing high-quality code. Why bother, if the AI can refactor ("recompile") it in a few minutes?

gassi 23 hours ago||
> Addy Osmani is a Software Engineer at Google working on Google Cloud and Gemini

Ah, there it is.

mikemarsh 10 hours ago||
Yep, it never fails. Here's another prediction for "The next two years of software engineering": AI vendors will start using their senior devs' personal domains to publish their advertising pieces, to mitigate scrutiny when such things are posted to social media.
falloutx 14 hours ago|||
Ahhhh, this is like that guy who works on Claude Code and runs 100 agents at the same time to replace 100 juniors. Everyone is convinced he will be the last software engineer on earth.
tommica 18 hours ago||
One thing that fucks with juniors is the expectation of paying for subscriptions to AI models. If you need to know how the AI tools work, you have to learn them with your own money.

Not everyone can afford it, and then we're at the point of turning a field that was so proud of needing just a computer and internet access to teach yourself into a subscription service.

boulos 17 hours ago||
You can get by pretty well with the ~$20/month plans for either Claude or Gemini. You don't need to be doing the $200/month ones just to get a sense of how they work.
tommica 16 hours ago||
Again, not everyone can afford it, and it becomes a hurdle. Computers are acquirable, but an extra $20 a month might not be.

And yes, that plan can get you started, but when I tested it, I managed to get one task done before having to wait four hours.

falloutx 14 hours ago|||
This is why opencode is giving free, unlimited access to one or two models.
ares623 17 hours ago||
If the AI gets that good, then they shouldn't need to pre-learn it.
mishkovski 9 hours ago||
This article reads like it was written by an AI.
qsera 14 hours ago||
I would like to see how things look when using AI requires half of a dev's current paycheck.
PraddyChippzz 1 day ago||
The points mentioned in the article regarding what to focus on are spot on.
ahmetomer 1 day ago||
> Junior developers: Make yourself AI-proficient and versatile. Demonstrate that one junior plus AI can match a small team’s output. Use AI coding agents (Cursor/Antigravity/Claude Code/Gemini CLI) to build bigger features, but understand and explain every line if not most. Focus on skills AI can’t easily replace: communication, problem decomposition, domain knowledge. Look at adjacent roles (QA, DevRel, data analytics) as entry points. Build a portfolio, especially projects integrating AI APIs. Consider apprenticeships, internships, contracting, or open source. Don’t be “just another new grad who needs training”; be an immediately useful engineer who learns quickly.

If I were starting out today, this is basically the only advice I would listen to. There will indeed be a vacuum in the next few years because of the drastic drop in junior hiring today.

falloutx 14 hours ago||
And you think juniors aren't doing this? At this point everyone in the market does more vibe coding than those who aren't in the market. The market is saturated mostly because execs are cutting jobs, not because juniors aren't good.
ares623 17 hours ago||
What? That's written in a way that's like “men writing women”: not putting themselves in the shoes of a junior who has no context and almost no opportunities.
mawadev 16 hours ago||
I mean it's pretty simple: management will take bad quality (because they don't understand the field) over having and paying more employees any day. Software engineering positions will shrink and become unrecognizable: one person expected to do the work of multiple departments to stay employed. People may leave the field or won't bother learning it. When critical mass is reached, AI will be paywalled and rug-pulled. Then the field evens itself out again, over a long, expensive period for every company that fell for it, lowering expectations back to reality.
falloutx 14 hours ago|
This is truly the problem: you either get fired or you get to work 10x more to survive. The only question is how many of us end up in the first group and how many in the second; it's a lose-lose situation.
mawadev 13 hours ago||
Exactly. Some jobs moved from database, backend, frontend and devops to "fullstack", which means four jobs at the pay of one. People do that job, but with only 8-10 hours in a day the quality is as expected. I think overall people will try to move out of the field, no matter how much of a force multiplier AI might be. It's simply a worse trade to carry so much responsibility and burden when you can work in IT, or outside of it, in a less cognitively demanding field with set hours and expectations for the same pay (in the EU; admittedly a hyperbolic statement). Especially when the profit you bring in dwarfs your compensation, with all the frustration that comes from knowing that and being kept down on the corporate ladder.
xkcd1963 12 hours ago||
Please, dear developers, be as lazy as possible and use LLMs. The number of bugs that get shipped affords me a comfortable life in opsec.
globular-toast 15 hours ago|
This article suggests it is specialists who are "at risk", but as much more of a generalist I was thinking the opposite and starting to regret not specialising more.

My value so far in my career has been my very broad knowledge of basically the entirety of computer science, IT, engineering, science, mathematics, and beyond. Basically, I read a lot, at least 10x more than most people, it seems. I was starting to wonder how relevant that is now, given that LLMs have read everything.

But maybe I'm wrong about what my skill actually is. Everyone has had LLMs for years now and yet I still seem better at finding info, contextualising it and assimilating it than a lot of people. I'm now using LLMs too but so far I haven't seen anyone use an LLM to become like me.

So I remain slightly confused about what exactly it is about me and people like me that makes us valuable.

cowl 13 hours ago||
LLMs have read EVERYTHING, yes. That includes a lot of suboptimal solutions, mantras about past best practices that are no longer relevant, thousands of blog posts about how to draw an owl by drawing two circles and leaving the rest as an exercise for the reader, etc.

The value of a good engineer is their judgment in the current context, something LLMs still cannot do well.

Second point, something that is mentioned occasionally but not discussed seriously enough: the Dead Internet Theory is becoming a reality. The supply of good, professionally written training material is by now exhausted, and LLMs will start to feed on their own slop. See how little LLMs' core competency has increased in the last year, even with the big expansion of their parameter counts.

Babysitting LLM output will be the big thing of the next two years.

falloutx 14 hours ago||
I mean there is no strat that saves you 100% from it. The layoffs are kind of random, based on teams they dont see any vision for, or engineers who dont perform. Generalising is better imo.