Posted by straydusk 2 days ago
I can't imagine any other example where people voluntarily move to a black box approach.
Imagine taking a picture on autoshot mode and refusing to look at it. If the client doesn’t like it because it’s too bright, tweak the settings and shoot again, but never look at the output.
What is the logic here? Because if you can read code, I can't imagine that poking at the result with black-box testing is faster.
Are these people just handing off the review process to others? Are they unable to read code and hiding it? Why would you handicap yourself this way?
Early resistance to high-level (i.e. compiled) languages came from assembly programmers who couldn’t imagine that the compiler could generate code that was just as performant as their hand-crafted product. For a while they were right, but improved compiler design and the relentless performance increases in hardware made it so that even an extra 10-20% boost you might get from perfectly hand-crafted assembly was almost never worth the developer time.
There is an obvious parallel here, but it’s not quite the same. The high-level language is effectively a formal spec for the abstract machine which is faithfully translated by the (hopefully bug-free) compiler. Natural language is not a formal spec for anything, and LLM-based agents are not formally verifiable software. So the tradeoffs involved are not only about developer time vs. performance, but also correctness.
Put another way, if you don't know what correct is before you start working then no tradeoff exists.
This goes out the window the first time you get real users, though. Hyrum's Law bites people all the time.
"What sorts of things can you build if you don't have long-term sneaky contracts and dependencies" is a really interesting question and has a HUGE pool of answers that used to be not worth the effort. But it's largely a different pool of software than the ones people get paid for today.
Not really. Many users are happy for their software to change if it's a genuine improvement. Some users aren't, but you can always fire them.
Certainly there's a scale beyond which this becomes untenable, but it's far higher than "the first time you get real users".
> For many projects, maybe ~80% of the thinking about how the software should work happens after some version of the software exists and is being used to do meaningful work.
Some version of the software exists and now that's your spec. If you don't have a formal copy of that and rigorous testing against that spec, you're gonna get mutations that change unintended things, not just improvements.
Users are generally ok with - or at least understanding of - intentional changes, but now people are talking about no-code-reading workflows, where you just let the agents rewrite stuff on the fly to build new things until all the tests pass again. The in-code tests and the expectations/assumptions about the product that your users have are likely wildly different - they always have been, and there's nothing inherent about LLM-generated code or about code test coverage percentages that change this.
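(One concrete way to get that "formal copy" of the shipped behavior, sketched here with Jest-style tests and a hypothetical formatInvoiceNumber module, not anything from the comment above: pin what the current version actually does before any agent is allowed to rewrite it.)

```typescript
import { formatInvoiceNumber } from "./billing"; // hypothetical existing module

// Characterization tests: the expected values are whatever the version in
// production produces today. They are the de facto spec; if an agent's rewrite
// changes any of them, that has to be a deliberate decision, not a silent mutation.
describe("formatInvoiceNumber (pinned current behavior)", () => {
  test("pads ids to six digits", () => {
    expect(formatInvoiceNumber(42)).toBe("INV-000042");
  });

  test("keeps the legacy prefix for pre-2020 ids", () => {
    expect(formatInvoiceNumber(1999, { legacy: true })).toBe("OLD-001999");
  });
});
```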
Fixed that for you.
The "now that producing plausible code is free, verification becomes the bottleneck" people are technically right, of course, but I think they're missing the context that very few projects cared much about correctness to begin with.
The biggest headache I can see right now is just the humans keeping track of all the new code, because it arrives faster than they can digest it.
But I guess "let go of the need to even look at the code" "solves" that problem, for many projects... Strange times!
For example -- someone correct me if I'm wrong -- OpenClaw was itself almost entirely written by AI, and the developer bragged about not reading the code. If anything, in this niche, that actually helped the project's success, rather than harming it.
(In the case of Windows 11 recently.. not so much ;)
It's certainly hard to find, in consumer-tech, an example of a product that was displaced in the market by a slower moving competitor due to buggy releases. Infamously, "move fast and break things" has been the rule of the land.
In SaaS and B2B, deterministic results become much more important. There are still bugs, of course, but showstopper bugs are major business risks. And combinatorial state+logic still makes testing a huge tarpit.
The world didn't spend the last century turning customer service agents and business-process workers into script-following human-robots for no reason, and big parts of it won't want to reintroduce high levels of randomness... (That's not even necessarily good for any particular consumer - imagine an insurance company with a "claims agent" that got sweet-talked into spending hundreds of millions more on things that were legitimate benefits for their customers, but that management wanted to limit whenever possible on technicalities.)
Vibe coding is closer to compiling your code, throwing the source away and asking a friend to give you source that is pretty close to the one you wrote.
"Hey Claude, translate this piece of PHP code into Power10 assembly!"
> and wouldn't look at the code anymore than, say, a PHP developer would look at the underlying assembly
This really puts down the work that the PHP maintainers have done. Many people spend a lot of time crafting the PHP codebase so you don't have to look at the underlying assembly. There is a certain amount of trust that I as a PHP developer assume.
Is this what the agents do? No. They scrape random bits of code everywhere and put something together with no craft. How do I know they won't hide exploits somewhere? How do I know they don't leak my credentials?
I'm just describing what I'm doing now, and what I'm seeing at the leading edge of using these tools. It's a different approach - but I think it'll become the most common way of producing software.
The output of code isn't just the code itself, it's the product. The code is a means to an end.
So the proper analogy isn't the photographer not looking at the photos, it's the photographer not looking at what's going on under the hood to produce the photos. Which, of course, is perfectly common and normal.
I'll bite. Is this person manually testing everything that one would regularly unit test? Or writing black-box tests that they know are correct because they were written manually?
If not, you're not reviewing the product either. If yes, it's less time-consuming to actually read and test the damn code.
So a percentage of your code, based on your gut feeling, is left unseen by any human by the time you submit it.
Do you agree that this raises the chance of bugs slipping by? I don't see how you wouldn't.
And considering the fact that your code output is larger, the percentage of it that is buggy is larger, and (presumably) you write faster, have you considered the conclusion in terms of the compounding likelihood of incidents?
Coverage doesn't matter if the tests aren't good. If you're not verifying the tests are actually doing something useful, talking about high coverage is just wanking.
"have the agent create progressively bigger integration tests, until I hit e2e/manual validation."
Same thing. It doesn't matter how big the tests are if they're not testing the right thing. Also why is e2e slashed with manual? Those are orthogonal. E2E tests can [and should] be fully automated for many [most?] systems. And manual validation doesn't have to wait for full e2e.
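(For illustration only, with a hypothetical parseDuration and Jest-style syntax: both tests below execute the same lines and count identically toward coverage, but only the second one would ever catch a wrong answer.)

```typescript
import { parseDuration } from "./time"; // hypothetical function under test

// Runs the code, bumps the coverage number, verifies almost nothing:
test("parseDuration returns something", () => {
  expect(parseDuration("1h30m")).toBeDefined();
});

// Identical coverage, but actually pins the behavior we care about:
test("parses 1h30m as 5400 seconds", () => {
  expect(parseDuration("1h30m")).toBe(5400);
});
```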
My approach is similar. I invest in the harness layer (tests, hooks, linting, pre-commit checks). The code review happens, it's just happening through tooling rather than my eyeballs.
I've found that focusing my attention upstream (specs, constraints, test harness) yields better outcomes than poring over implementation details line by line. The code is still there if I need it. I just rarely need it.
Specious analogies don't help anything.
If the process becomes reliable enough, then there is no reason. For now, that still requires developers to pay attention for important projects, but there are also a lot of AI-written tools I rely on day to day that I don't, because the cost of accepting the risk that they do something wrong is lower than the opportunity cost of spending time reading them.
There are also a whole lot of tools I do read thoroughly, because the risk profile is different.
But that category is getting smaller day by day, not just with model improvements, but with improved harnesses.
If the app is too bright, you tweak the settings and build it again.
Photography used to involve developing film in dark rooms. Now my iPhone does... god knows what to the photo - I just tweak in post, or reshoot. I _could_ get the raw, understand the algorithm to transform that into sRGB, understand my compression settings, etc - but I don't need to.
Similarly, I think there will be people who create useful software without looking at what happens in between. And there will still be low-level software engineers for whom what happens in between is their job.
It is right often enough that your time is better spent testing the functionality than the code.
Sometimes it’s not right, and you need to re-instruct (often) or dive in (not very often).
Anyone overseeing work from multiple people has to? At some point you have to let go and trust people's judgement, or, well, let them go. Reading and understanding the whole output of 9 concurrently running agents is impossible. People who do that (I'm not one of them btw) must rely on higher level reports. Maybe drilling into this or that piece of code occasionally.
Indeed. People. With salaries, general intelligence, a stake in the matter and a negative outcome if they don’t take responsibility.
>Reading and understanding the whole output of 9 concurrently running agents is impossible.
I agree. It is also impossible for a person to drive two cars at once… so we don't. Why is the starting point of the conversation that one should be able to use 9 concurrent agents?
I get it, writing code no longer has a physical bottleneck. So the bottleneck becomes the next thing, which is our ability to review outputs. It’s already a giant advancement, why are we ignoring that second bottleneck and dropping quality assurance as well? Eventually someone has to put their signature on the thing being shippable.
That's not a black box though. Someone is still reading the code.
> At some point you have to let go and trust people's judgement
Where's the people in this case?
> People who do that (I‘m not one of them btw) must rely on higher level reports.
Does such a thing exist here? Just "done".
But you are not. That’s the point?
> Where's the people in this case?
Juniors build worse code than Codex. Their superiors also can't check everything they do. They need to have some tolerance for them doing dumb shit, or they can't hire juniors.
> Does such a thing exist here? Just "done".
Not sure what you mean. You can definitely ask the agent what it built, why it built it, and what could be improved. You will get only part of the info vs when you read the output, but it won’t be zero info.
LLM: "Because the embeddings in your prompt are close to some embeddings in my training data. Here's some seemingly explanatory text with that is just similar embeddings to other 'why?' questions."
You: "What could be improved?"
LLM: "Here's some different stuff based on other training data with embeddings close to the original embeddings, but different.
---
It's near zero useful information. Example information might be "it builds" (baseline necessity, so useless info), "it passes some tests" (fairly baseline, more useful, but actually useless if you don't know what the tests are doing), or "it's different" (duh).
If that was true we’d have what they call AGI.
So no, it doesn’t actually give you those since it can’t reason and logic in such a way.
It doesn't do that currently even if you think it does.
I can think of a few. The last 78 pages of any 80-page business analysis report. The music tracks of those "12 hours of chill jazz music" YouTube videos. Political speeches written ahead of time. Basically - anywhere that a proper review is more work than the task itself, and the quality of output doesn't matter much.
And will you enjoy paying someone else to let the AI do that?
To be fair - it is not accurate to say I absolutely never read the code. It's just rare, and it's much more the exception than the rule.
My workflow just focuses much more on the final product, and the initial input layer, not the code - it's becoming less consequential.
It's producing seemingly working code faster than you can closely review it.
I personally think we are not that far from it, but it will need something built on top of current CLI tools.
I don't know... it depends on the use case. I can't imagine even the best front-end engineer ever can read HTML faster than looking at the rendered webpage to check if the layout is correct.
The AI also writes the black box tests, what am I missing here?
If the AI misinterpreted your intentions and/or missed something in production code, tests are likely to reproduce rather than catch that behavior.
In other words, if “the ai is checking as well” no one is.
etc...etc...
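(A hedged sketch of that failure mode, with invented names: suppose the intent was "free shipping at $100 or more", the agent reads it as "over $100", and then generates a test that encodes the same misreading - everything stays green.)

```typescript
// What the agent implemented: ">= 100" silently became "> 100".
function qualifiesForFreeShipping(orderTotal: number): boolean {
  return orderTotal > 100;
}

// The test the same agent wrote mirrors the implementation's assumption
// rather than the product intent, so the boundary bug passes review-by-tests.
test("an order of exactly $100 does not get free shipping", () => {
  expect(qualifiesForFreeShipping(100)).toBe(false);
});
```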
> In other words, if “the ai is checking as well” no one is.
"I tried nothing, and nothing at all worked!"
code is not the output. functionality is the output, and you do look at that.
Are you writing black-box tests by hand, or manually checking, everything that would normally be a unit test? We have unit tests precisely because of how unworkable the "every test is black box" approach is.
Almost everyone does this. Hardly anyone taking pictures understands what f-stop or focal length are. Even those who do seldom adjust them.
There are dozens of other examples where people voluntarily move to a black box approach. How many Americans drive a car with a manual transmission?
Something I know very little about is coding. I know there are different languages with pros and cons to each. I know some work across operating systems while others don't but other than that I don't know too much.
For the first time I just started working on my own app in Codex and it feels absolutely amazing and magical. I've not seen the code, and would have basically no idea how to read it, but I'm working on a niche application for my job that is custom tailored to my needs, and if it works I'll be thrilled. Even better is that the process of building just feels so special and awesome.
This really does feel like it is on the precipice of something entirely different. I think back to computers before a GUI interface. I think back to even just computers before mobile touch interfaces. I am sure there are plenty of people who thought some of these things wouldn't work for different reasons, but I think that is the wrong idea. The focus should be on who this will work for and why, and there, I think, are a ton of possibilities.
For reference, I'm a middle school Assistant Principal working on an app to help me with student scheduling.
I really wanted to learn the coding, the design patterns, etc, but truthfully, it was never gonna happen without a Claude. I could never get past the unknown-unknowns (and I didn't even grasp how broad a domain of knowledge it actually requires). Best case, I would have started small chunks and abandoned it countless times, piling on defeatism and disappointment each time.
Now in under two weeks of spare time and evenings, I've got a working prototype that's starting to resemble my dream. Does my code smell? Yes. Is it brittle? Almost certainly. Is it a security risk? I hope not. (It's not.)
I want to be intentional about how I use AI; I'm nervous about how it alters how we think and learn. But seeing my little toy out in the real world is flippin incredible.
It very probably is, but if it's a personal project you're not planning on releasing anywhere, it doesn't matter much.
You should still be very cognizant that LLMs will currently fairly reliably implement massive security risks once a project grows beyond a certain size, though.
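(To make "security risk" concrete, one of the most common patterns to watch for, sketched with a made-up query helper rather than any specific library: user input concatenated into SQL instead of passed as a bound parameter.)

```typescript
// Hypothetical minimal DB interface, standing in for whatever client is used.
interface Db {
  query(sql: string, params?: unknown[]): Promise<unknown>;
}

// Injection-prone: a name like "x'; DROP TABLE users; --" becomes part of
// the SQL statement itself.
async function findUserUnsafe(db: Db, name: string) {
  return db.query(`SELECT * FROM users WHERE name = '${name}'`);
}

// Safer: the value travels as a bound parameter, never as SQL text.
async function findUserSafe(db: Db, name: string) {
  return db.query("SELECT * FROM users WHERE name = $1", [name]);
}
```

Even a crude lint rule that flags template literals inside query calls will catch a surprising share of these without anyone reading every diff.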
It’s really just a case of knowing how to use the tools. Said another way, the risk is being unaware of what the risks are. And awareness can help one get out of the bad habits that create real world issues.
Programmers dream of getting a green field project. They want to "start it the right way this time" instead of being stuck unwinding technical debt on legacy projects. AI creates new legacy projects instantly.
Never thought this would be something people actually take seriously. It really makes me wonder if in 2 - 3 years there will be so much technical debt that we'll have to throw away entire pieces of software.
The author of the article has a bachelor's degree in economics[1], worked as a product manager (not a dev) and only started using GitHub[2] in 2025 when they were laid off[3].
[1] https://www.linkedin.com/in/benshoemaker000/
But sure, go with the ad hominem.
You have to remember that the number of software developers saw a massive swell in the last 20 years, and many of these folks are bootcamp-educated web/app dev types, not John Carmack. Statistically, and under pre-AI circumstances, they typically started too late, and for the wrong reasons, to become very skilled in the craft by middle age (of course there are many wonderful exceptions; one of my best developers is someone who worked in a retail store for 15 years before pivoting).
AI tools are now available to everyone, not just the developers who were already proficient at writing code. When you take in the excitement you always have to consider what it does for the average developer and also those below average: A chance to redefine yourself, be among the first doing a new thing, skip over many years of skill-building and, as many of them would put it, focus on results.
It's totally obvious why many leap at this, and it's even probably what they should do, individually. But it's a selfish concern, not a care for the practice as-is. It also results in a lot of performative blog posting. But if it was you, you might well do the same to get ahead in life. There are only so many opportunities to get in on something on the ground floor.
I feel a lot of senior developers don't take the demographics of our community of practice into account when they try to understand the reception of AI tools.
I have rarely had the words pulled out of my mouth.
The percentage of devs in my career that are from the same academic background, show similar interests, and approach the field in the same way, is probably less than 10%, sadly.
> There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like "decrease the padding on the sidebar by half" because I'm too lazy to find it. I "Accept All" always, I don't read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I'd have to really read through it for a while. Sometimes the LLMs can't fix a bug so I just work around it or ask for random changes until it goes away. It's not too bad for throwaway weekend projects, but still quite amusing. I'm building a project or webapp, but it's not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.
Notice "don't read the diffs anymore".
In fact, this is practically the anniversary of that tweet: https://x.com/karpathy/status/2019137879310836075?s=20
Please tell me, "Were people excited about high-level languages just programmers who 'couldn't hack it' with assembly? Maybe you are one of those? Were GUI advocates just people who couldn't master the command line?"
Honestly, I still think there's truth to what I wrote, and I don't think your counter-examples prove it wrong per se. The prompt I responded to ("why are people taking this seriously") also led fairly naturally down the road of examining the reasons. That was of course my choice to do, but it's also just what interested me in the moment.
And I totally agree with him. Throwing some kind of fallacy in the air for show doesn't make your argument, or lack thereof, more convincing.
It's the equivalent of saying anyone excited about being able to microwave frozen meals is a hack who couldn't make it in the kitchen. I'm sorry, but if you don't see how ridiculous that assertion is then I don't know what to tell you.
>And I totally agree with him. Throwing some kind of fallacy in the air for show doesn't make your argument, or lack thereof, more convincing.
A series of condescending statements meant to demean with no objective backing whatsoever is not an argument. What do you want me to say? There's nothing worth addressing, other than pointing out how empty it is.
You think there aren't big shots, more accomplished than anyone in this conversation who are similarly enthusiastic?
You and OP have zero actual clue. At any advancement, regardless of how big or consequential, there are always people like that. It's very nice to feel smart and superior and degrade others, but people ought to be better than that.
So I'm sorry but I don't really care how superior a cook you think you are.
I think both things can be true simultaneously.
You're arguing against a straw man.
I've worked on "legacy systems" written 30 to 45 years ago (or more) and still running today (things like green-screen apps written in Pick/Basic, Cobol, etc.). Some of them were written once and subsystems replaced, but some of it is original code.
In systems written in the last.. say, 10 to 20 years, I've seen them undergo drastic rates of change, sometimes full rewrites every few years. This seemed to go hand-in-hand with the rise of agile development (not condemning nor approving of it) - where rapid rates of change were expected.. and often the tech the system was written in was changing rapidly also.
In hardware engineering, I personally also saw a huge move to more frequent design and implementation refreshes to prevent obsolescence issues (some might say this is "planned obsolescence" but it also is done for valid reasons as well).
I think not reading the code anymore TODAY may be a bit premature, but I don't think it's impossible to consider that someday in the nearer than further future, we might be at a point where generative systems have more predictability and maybe even get certified for safety/etc. of the generated code.. leading to truly not reading the code.
I'm not sure it's a good future, or that it's tomorrow, but it might not be beyond the next 20 year timeframe either, it might be sooner.
What is your opinion and did you vote this down because you think it's silly, dangerous or you don't agree?
We have been throwing away entire pieces of software forever. Where's Novell? Who runs 90s Linux kernels in prod?
Code isn't a bridge or car. Preservation isn't meaningful. If we aren't shutting the DCs off we're still burning the resources regardless if we save old code or not.
Most coders are so many layers of abstraction above the hardware at this point anyway they may as well consider themselves syntax artists as much as programmers, and think of Github as DeviantArt for syntax fetishists.
Am working on a model of /home to experiment with booting Linux to models. I can see a future where Python in my screen "runs" without an interpreter because the model is capable of correctly generating the appropriate output without one.
Code is ethno objects, only exists socially. It's not essential to computer operations. At the hardware level it's arithmetical operations against memory states.
Am working on my own "geometric primitives" models that know how to draw GUIs and 3D world primitives, text; think like "boot to blender". Rather than store data in strings, will just scaffold out vectors to a running "desktop metaphor".
It's just electromagnetic geometry, delta sync between memory and display: https://iopscience.iop.org/article/10.1088/1742-6596/2987/1/...
Sometimes it feels like pre-AI education is going to be like low-background steel for skilled employees.
That happens just as often without AI. Maybe the people that like it all have experience with trashing multiple sets of products over the course of their life?
But for architectural issues, you need to be able to articulate how you would have written the code in the first place, once you understand the existing behavior and its problems. That is my interpretation of GP’s comment.
I think 2-3 years is generous.
Don't get me wrong, I've definitely found huge productivity increases in using various LLM workflows in both development as well as operational things. But removing a human from the loop entirely at this point feels reckless bordering on negligent.
My overall stance on this is that it's better to lean into the models & the tools around them improving. Even in the last 3-4 months, the tools have come an incredible distance.
I bet some AI-generated code will need to be thrown away. But that's true of all code. The real questions to me are: are the velocity gains worth it? Will the models be so much better in a year that they can fix those problems themselves, or re-write it?
I feel like time will validate that.
The only way to harness it is to somehow package code-producing LLMs into an abstraction and then somehow validate the output. Until we achieve that, imo it doesn't matter how closely people watch the output; things will be getting worse.
Depending on what you're working on, they are already at that point. I'm not into any kind of AI maximalist "I don't read code" BS (I read a lot of code), but I've been building a fairly expensive web app to manage my business using Astro + React and I have yet to find any bug or usability issue that Claude Code can't fix much faster than I would have (+). I've been able to build out, in a month, a fully TDD app that would have conservatively taken me a year by myself.
(+) Except for making the UI beautiful. It's crap at that.
The key that made it click is exactly what the person describes here: using specs that describe the key architecture and use cases of each section. So I have docs/specs with files like layout.md (overall site shell info), ui-components.md, auth.md, database.md, data.md, and lots more for each section of functionality in the app. If I'm doing work that touches ui, I reference layout and ui-components so that the agent doesn't invent a custom button component. If I'm doing database work, reference database.md so that it knows we're using drizzle + libsql, etc.
This extends up to higher level components where the spec also briefly explains the actual goal.
Then each feature building session follows a pattern: brainstorm and create design doc + initial spec (updates or new files) -> write a technical plan clearly following TDD, designed for batches of parallel subagents to work on -> have Claude implement the technical plan -> manual testing (often, I'll identify problems and request changes here) -> automated testing (much stricter linting, knip etc. than I would use for myself) -> finally, update the spec docs again based on the actual work that was done.
My role is less about writing code and more about providing strict guardrails. The spec docs are an important part of that.
I can't imagine not reading the code I'm responsible for any more than I could imagine not looking out the windscreen in a self driving Tesla.
But if so many people are already there, and mostly highly skilled programmers at that, imagine where we'll be in 2 years' time with people who've never programmed!
> For 12 years, I led data and analytics at Indeed - creating company-wide success metrics used in board meetings, scaling SMB products 6x, managing organizations of 70+ people.
He's a manager that made graphs on Power BI.
They're not here because they want to build things, they're here to shit a product out and make money. By the time Claude has stopped being able to pipe together ffmpeg commands or glue together 3 JS libraries, they've gone on to another project and whoever bought it is a sucker.
It's not that much different from the companies of the 2000s promising a 5th generation language with a UI builder that would fix everything.
And then, as a very last warning: the author of this piece sells AI consulting services. It's in his interest to make you believe everything he has to say about AI, because by God are there going to be suckers buying his time at indecently high prices to get shit advice. This sucker is most likely your boss, by the way.
I'd have the decency to know and tell people that it's a steaming pile of shit and that I have no idea how it works, though, and would not have the shamelessness to sell a course on how to put out LLM vomit in public.
Engineering implies respect for your profession. Act like it.
What have you tried to use them for?
And for anything really serious? Opus 4.5 struggles to maintain a large-scale, clean architecture. And the resulting software is often really buggy.
Conclusion: if you want quality in anything big in February 2026, you still need to read the code.
That goes a bit against the article, but it's not reading code in the traditional sense, where you are looking for common mistakes we humans tend to make. Instead you are looking for clues in the code to determine where you should improve the docs and specs you fed into your agent, so the next time you run it, chances are it'll produce better code, as the article suggests.
And I think this is good. In time, we are going to be forced to think less technically and more semantically.
The answer is clear: I didn’t write the code, I didn’t read it, I have no idea what it does, and that’s why it has a bug.
Become a CTO, CEO or even a venture investor. "Here's $100K worth of tokens, analyze the market, review various proposals from Agents, invest tokens, maximize profit".
You know why not? Because it will be more obvious it doesn't work as advertised.
Also, creating software is much more testable and verifiable than what a CEO does. You can usually tell when the code isn't right because it doesn't work or doesn't pass a test. How can you verify that your AI CEO is giving you the right information or planning its business strategy effectively?
It's one of the biggest reasons that software development and art are the two domains in which AI excels. In software you can know when it's right, and in art it doesn't matter if it's right.
Tests (as usually written, in unit-test form) only tell you that it's not completely broken, they're not a good indicator of it working well otherwise "vibecoded slop" wouldn't be a thing. And the tests themselves are usually vibecoded too which doesn't help much in detecting issues off the happy path.
>How can you verify that your AI CEO is giving you the right information or planning its business strategy effectively
The same could be said for human CEOs. A lot of them don't really have good success rates either.
You can certainly end up with vibecoded slop that passes all the tests, but it won't pass other forms of evaluation (necessarily true, otherwise you could not identify it as vibecoded slop.)
> The same could be said for human CEOs. A lot of them don't really have good success rates either.
This is part of my point. The tight feedback loop that enables us to judge a model's efficacy in software, doesn't exist for the role of CEO.
It sounds like you're describing Manna by Marshall Brain
* AI can replace knowledge workers - most existing software engineers and managers of all levels will lose their jobs and have to re-qualify.
* AI requires human in the loop.
In the first scenario, I see no reason to waste time and should start building plan B now (remaining job markets will be saturated at that point).
In the second scenario, tech debt and zettabytes of slop will harm the companies that relied on it heavily. In the age of failing giants and crumbling infrastructure, engineers and startups that can replace a gigawatt-burning data center with a few-kilowatt rack, by manually coding a shell script that replaces Hadoop, will flourish.
Most probably it will be a spectrum - some roles can be replaced, some not.
Which is perhaps what they should do, of course. Any transition is a chance to get ahead and redefine yourself.
Agents are a quality/velocity tradeoff (which is often good), if you can't debug stuff without them that's a problem as you'll get into holes, but that doesn't mean you have to write the code by hand.
Note though we're talking about "not reading code" in context, not the writing of it.
Parent post sounds like a very accurate description.
What it's trying to express is that the (T)PM job should still be safe because they can just team-lead a dozen agents instead of software developers.
Take with a grain of salt when it comes to relevance for "coding", or the future role breakdown in tech organizations.
I'm not trying to express that my particular flavor of career is safe. I think that the ability to produce software is much less about the ability to hand-write code, and that's going to continue as the models and ecosystem improve, and I'm fascinated by where that goes.
So basically a return to waterfall design.
Rather than YOLO planning (agile), we go back to YOLO implementation (farming it out to dozens of replaceable peons, but this time they're even worse).
You can guess what kind of software he is building.
When you read the 100th blog post about how AI is changing software development, just remember that these are the authors.
I think these tools are great for allowing non-technical people like OP to create landing pages and small prototypes, but they're useless for anything serious. That said, I applaud OP for embodying the "when in a gold rush, sell shovels" mentality.
I'm mostly developing my own apps and working with startups.