When I use a framework, it's because I believe that the designers of that framework are i) probably better at software engineering than I am, and ii) have encountered all sorts of problems and scaling issues (both in terms of usage and actual codebase size) that I haven't encountered yet, and have designed the framework to ameliorate those problems.
Those beliefs aren't always true, but they're often true.
Starting projects is easy. You often don't get to the really thorny problems until you're already operating at scale and under considerable pressure. Trying to rearchitect things at that point sucks.
You can build things this way, and they may work for a time, but you don't know what you don't know (and experience teaches you that you only find most stuff by building/struggling; not sipping a soda while the AI blurts out potentially secure/stable code).
The hubris around AI is going to be hard to watch unwind. When that moment comes I can't predict (nor do I care to), but there will be a shift when all of these vibe-code-only folks get cooked in a way that's closer to existential than benign.
Good time to be in business if you can see through the bs and understand how these systems actually function (hint: you won't have much competition soon, as most people won't care until it's too late and will "price themselves out of the market").
Anyway, this stuff makes me think of what it would be like if you had Tolkien around today using AI to assist him in his writing.
'Claude, generate me a paragraph describing Frodo and Sam having an argument over the trustworthiness of Gollum. Frodo should be defending Gollum and Sam should be on his side.'
'Revise that so that Sam is harsher and Frodo more stubborn.'
Sooner or later I look at that and think he'd be better off just writing the damned book instead of wasting so much time writing prompts.
CI is failing. It passed yesterday. Is there a flaky API being called somewhere? Did a recent commit introduce a breaking change? Maybe one of my third-party dependencies shipped a breaking change?
I was going to work on new code, but now I have to spend anywhere from 5 minutes to an hour or more (impossible to predict) solving this new frustration that just cropped up.
I love building things and solving new problems. I'd rather not have that time stolen from me by tedious issues like this... especially now that I can outsource the CI debugging to an agent.
These days if something flakes out in CI I point Claude Code at it and 90% of the time I have the solution a couple of minutes later.
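And, to be fair, nine times out of ten the fix it finds is boringly mechanical, e.g. wrapping the flaky third-party call in a retry with backoff so CI stops paging you. A minimal sketch (the function name and URL handling are hypothetical, not from any particular codebase):

    import time
    import requests

    def fetch_with_retry(url: str, attempts: int = 3) -> requests.Response:
        """GET a flaky third-party endpoint, retrying transient failures with backoff."""
        for attempt in range(attempts):
            try:
                resp = requests.get(url, timeout=10)
                resp.raise_for_status()
                return resp
            except (requests.ConnectionError, requests.Timeout, requests.HTTPError):
                if attempt == attempts - 1:
                    raise  # out of retries; let CI fail loudly
                time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, ...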
Same experience. I don't know why people keep saying the code was the easy part; that's only true when you're writing boilerplate, where the work is easy and the expectations are clear.
I agree code is easier than some other parts, but it's not the easiest; the industry employed millions of us to write that "easy" thing.
When working on large codebases, or building something while in flow, I just don't want to read all the OAuth2 scopes Google requires me to obtain. My experience was never: "now I will integrate Gmail, let me do gmail.FetchEmails(), cool, it works, on to the next thing"
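For contrast with the imaginary gmail.FetchEmails(), here is roughly what the real thing looks like. A minimal sketch with google-api-python-client, assuming you have already survived the OAuth consent flow and hold a token.json:

    from google.oauth2.credentials import Credentials
    from googleapiclient.discovery import build

    # The scope you must request (and justify to Google) just to read mail.
    SCOPES = ["https://www.googleapis.com/auth/gmail.readonly"]

    # Assumes token.json was produced by an earlier OAuth consent flow.
    creds = Credentials.from_authorized_user_file("token.json", SCOPES)
    service = build("gmail", "v1", credentials=creds)

    # The list call returns message refs, not bodies; each body is another request.
    resp = service.users().messages().list(userId="me", maxResults=10).execute()
    for ref in resp.get("messages", []):
        msg = service.users().messages().get(userId="me", id=ref["id"]).execute()
        print(msg["snippet"])

And that's the happy path, after the consent screen, the scope justification, and the verification process.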
Even just some AI-assisted development in the trickier parts of my code bases completely robs me of understanding. And those are the parts that need my understanding the most!
Probably less time, because you understood the details better.
I believe the argument from the other camp is that you don't need to understand the code anymore, just like you don't need to understand the assembly language.
If you only understand the code by talking to the AI, then you could ask it “how do we do a business feature” and it would spit out a detailed answer even for a codebase that just says “pretend there is a codebase here”. That is of course an extreme example, and you would probably notice it, but the same applies at every level.
No detail, anywhere, can be fully trusted. I believe everyone’s goal should be to prompt the AI such that the code is the source of truth, and to keep the code super readable.
If AI is so capable, it’s also capable of producing clean, readable code. And we should be reading all of it.
> We don’t have to look at assembly, because a compiler produces the same result every time.
This is technically true in the narrowest possible sense and practically misleading in almost every way that matters. Anyone who's had a bug that only manifests at -O2, or fought undefined behavior in C that two compilers handle differently, or watched MSVC and GCC produce meaningfully different codegen from identical source, or hit a Heisenbug that disappears when you add a printf, knows that the "deterministic compiler" is doing a LOT of work in that sentence, work that actual compilers don't deliver on.
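You don't even need a compiler to watch "identical source, different observable behavior" in action. A toy analogy in Python (not C, so not UB, but the same broken assumption): CPython randomizes string hashing per process, so set iteration order shifts between runs:

    # Run this twice; the printed order will usually differ between runs,
    # because CPython randomizes string hashing per process (PYTHONHASHSEED).
    items = {"alpha", "beta", "gamma", "delta"}
    print(list(items))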
Also what's with the "sides" and "camps?" ... why would you not keep your identity small here? Why define yourself as a {pro, anti} AI person so early? So weird!
Re: “other side” - I’m quoting the grandparent’s framing.
I guess it would be similar here: a small few people will hand write key parts of code, a larger group will inspect the code that’s generated, and a far larger group won’t do either. At least if AI goes the way that the “other side” says.
Then what stops anyone who can type in their native language from just ordering their own software, once LLMs are perfected, instead of using anybody else's (speaking about native apps like video games, mobile, desktop, etc.)?
Do they actually believe we'll need a bachelor's degree to prompt-program in a world where nobody cares about technical details, because the LLMs will be taking care of them? Actually, scratch that. Why would the companies pouring gorrilions of dollars of investment into this even give access to such power in an affordable way?
The deeper I look down the rabbit hole they think we're walking towards, the more issues I see.
Gave me way more control/understanding over what the AI would do, and the ability to iterate on it before actually implementing.
Sorry for being blunt, but if you tried once or twice and came to this conclusion, it is definitely a skill issue. I never got comfortable after writing 3 lines of Java, Python, Go, or any other language either; it took hundreds of hours spent doing nonsense, failing miserably, and finding out I was building things that already exist in the std lib.
I find it strange to compare the comment sections for AI articles with those about vim/emacs etc.
In the vim/emacs comments, people always state that typing in code hardly takes any time, and thinking hard is where they spend their time, so it's not worth learning to type fast. Then in the AI comments, they say that with AI writing the code, they are freed up to spend more time thinking and less time coding. If writing the code was the easy part in the first place, and wasn't even worth learning to type faster, then how much value can AI be adding?
Now, these might be disjoint sets of people, but I suspect (with no evidence of course) there's a fairly large overlap between them.
I wonder how much it comes down to that divide. I also wonder how true that is, or if they’re just more trusting that the function does what its name implies the way they think it should.
I suspect you, like me, feel more comfortable with code we’ve written than having to review totally foreign code. The rate limit is in the high level design, not in how fast I can throw code at a file.
It might be a difference in cognition, or maybe we just have a greater need to know precisely how something works instead of accepting a hand wavey “it appears to work, which is good enough”.
* auth + RBAC, known problem, just needs integration (see the sketch after this list)
* 3rd party integration, they have API, known problem, just needs integration
* make webpage responsive, millions of CSS lines
* even video gaming: most engines are already written, just add your character and call a couple of APIs to move them in 3D space
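Even those "solved" bullets hide real decisions, though. Take the RBAC item: "just needs integration" still means committing to some policy shape. A minimal sketch (the User type, POLICY table, and role names are all made up):

    from dataclasses import dataclass, field
    from functools import wraps

    @dataclass
    class User:
        name: str
        roles: set[str] = field(default_factory=set)

    # Hypothetical policy: which roles may invoke which action.
    POLICY = {"delete_invoice": {"admin", "billing"}}

    def require_role(action: str):
        def decorator(fn):
            @wraps(fn)
            def wrapper(user: User, *args, **kwargs):
                if not POLICY.get(action, set()) & user.roles:
                    raise PermissionError(f"{user.name} may not {action}")
                return fn(user, *args, **kwargs)
            return wrapper
        return decorator

    @require_role("delete_invoice")
    def delete_invoice(user: User, invoice_id: str) -> None:
        print(f"{user.name} deleted invoice {invoice_id}")

    delete_invoice(User("ana", {"admin"}), "inv-42")  # ok; a user without the role raises

The twenty lines are trivial; deciding who holds which role, and where that table lives, is the part that actually hurts later.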
You can only complain about creativity if you were actually being creative. 99.99999% of the industry was not.
But sure, for the 0.000001% of the industry coming up with new deep learning algorithms instead of just TF/PyTorch monkey-ing, maybe the LLMs won’t help as much as a good foundation in some pretty esoteric mathematics.
I tend to believe that, in all things, the quality of the output and how it is received is what matters and not the process that leads to producing the output.
If you use an LLM assisted workflow to write something that a lot of people love, then you have created art and you are a great artist. It's probable that if Tolkien was born in our time instead of his, he'd be using modern tools while still creating great art, because his creative mind and his work ethic are the most important factors in the creative process.
I'm not of the opinion that any LLM will ever provide quality that comes close to a master work by itself, but I do think they will be valuable tools for a lot of creative people in the grueling and unrewarding "just make it exist first" stage of the creative process, while genius will still shine as it always has in the "you can make it good later" stage.
Whether the ends justify the means is a well-worn disagreement/debate, and I think the only solid conclusion we've come to as a society is that it depends.
The discussion at hand is about purity and efficiency. Some people are process oriented, perfectionists, purists that take great pride in how they made something. Even if the thing they made isn't useful at all to anyone except to stroke their own ego.
Others are more practical and see a tool as a tool, not every hammer you make needs to be beautiful and made from the best materials money can buy.
Depending on the context either approach can be correct. For some things being a detail oriented perfectionist is good. Things like a web framework or a programming language or an OS. But for most things, just being practical and finding a cheap and clever way to get to where you want to go will outperform most over engineering.
Now, some programs may be considered art (e.g. code golf), or considered art by their creator. I consider my programs and code only a means to get the computer to do what I want, and there are easy ways to ensure that they do.
> Frodo and Sam having an argument over the trustworthiness of Gollum. Frodo should be defending Gollum and Sam should be on his side.
is exactly what programs are. Not the minutiae of the language within.
Take a tool like Gradle. A bigger pain in the ass than using an actual cactus as a desk chair. It has a staggering rate of syntax and feature churn with every version upgrade, sprawling documentation that is clearly written by space aliens, and every problem is completely ungoogleable because every single release does things differently and no advice stays valid for more than 25 minutes.
It's a comically torturous DevEx. You can literally spend days trying to get your code to compile again, and not a second of that time will be put toward anything productive. Sheer frustration. Just tears. Mad laughter. Rocking back and forth.
"Hey Claude, I've upgraded to this week's Gradle and now I'm getting this error I wasn't getting with last week's version, what could be going wrong?" makes all that go away in 10 minutes.
Yeah, this was always the easy part.
An LLM isn’t coming up with the Eye of Sauron, or the entire backstory of the Ring, or Gollum, etc etc
The LLM can’t know Tolkien had a whole universe built in his head that he worked for decades to get on to paper.
I’m so tired of this whole “an LLM just does what humans already do!” And then conflating that with “fuck all this LLM slop!”
I guess if you specialise in maintaining a code base with a single language and a fixed set of libraries then it becomes easier to remember all the details, but for me it will always be less effort to just search the names for whatever tools I want to include in a program at any point.
Similar to how you outlined multi-language vs specialist, I wonder if "full stack" vs "niche" work unspokenly underlies some of the camps of "I just trust the AI" vs "it's not saving me any time".
There's a joke that's not entirely a joke that the job of a Google SWE is converting from one protobuf to another. That's generally not very fun code, IMO (which may differ from your opinion and that's why they're opinions!). Otoh, figuring out and writing some interesting logic catches my brain in a way that dealing with formats and interoperability stuff doesn't usually.
We all do both, but we all probably have things we like more than others.
But those aren't the stories you hear about with people coding with AI, which is what prompted my response.
I do agree that this:
> What’s gone is the tearing, exhausting manual labour of typing every single line of code.
seems more than a little overblown. But I do sympathize with not feeling motivated to write a lot of glue and boilerplate, and that "meh" often derails me on personal projects where it's just my internal motivation competing against my internal de-motivation. LLMs have been really good there, especially since many of those are cases where only I will run or deal with the code and it won't be exposed to the innertubes.
Maybe the author can't touch type, but that's a separate problem with its own solution. :)
“He’s a liar and a sneak, Mr. Frodo, and I’ll say it plain — he’d slit our throats in our sleep if he thought he could get away with it,” Sam spat, glaring at the hunched figure scrabbling over the stones ahead. “Every word out of that foul mouth is poison dressed up as helpfulness, and I’m sick of pretending otherwise.” Frodo stopped walking and turned sharply, his eyes flashing with an intensity that made Sam take half a step back. “Enough, Sam. I won’t hear it again. I have decided. Sméagol is our guide and he is under my protection — that is the end of it.” Sam’s face reddened. “Protection! You’re protecting the very thing that wants to destroy you! He doesn’t care about you, Mr. Frodo. You’re nothing to him but the hand that carries what he wants!” But Frodo’s expression had hardened into something almost unrecognizable, a cold certainty that brooked no argument. “You don’t understand what this Ring does to a soul, Sam. You can’t understand it. I feel it every moment of every day, and if I say there is still something worth saving in that creature, then you will trust my judgment or you will walk behind me in silence. Those are your choices.” Sam opened his mouth, then closed it, stung as if he’d been struck. He fell back a pace, blinking hard, and said nothing more — though the look he fixed on Gollum’s retreating back was one of pure, undisguised loathing.
Your post feels like the last generation lamenting the new generation. Why can't we just use radios and slide rules?
If you've ever enjoyed the sci-fi genre, do you think the people in those stories are writing C and JavaScript?
There's so much plumbing and refactoring bullshit in writing code. I've written years of five nines high SLA code that moves billions of dollars daily. I've had my excitement setting up dev tools and configuring vim a million ways. I want starships now.
I want to see the future unfold during my career, not just have it be incrementalism until I retire.
I want robots walking around in my house, doing my chores. I want a holodeck. I want to be able to make art and music and movies and games. I will not be content with twenty more years of cellphone upgrades.
God, just the thought of another ten years of the same is killing me. It's so fucking mundane.
The future is exciting.
Bring it.
I don't see agentic programming coming to take my lunch any time soon.
What I do see it threatening is repetitive quasi carbon copy development work of the kind you've mentioned - like building web applications.
Nothing wrong with using these tools to deal with that, but I do think that a lot of the folks from those domains lack experience with heavier work, and falsely extrapolate the impact it's having within their domain to be applicable across the board.
> The future is exciting.
Not the GP, but I honestly wanted to be excited about LLMs. And they do have good uses. But you quickly start to see the cracks in them, and they just aren't nearly as exciting as I thought they'd be. And a lot of the coding workflows people are using just don't seem that productive or valuable to me. AI just isn't solving the hard problems in software development. Maybe it will some day.
I don't read the OP as saying that: to me they're saying you're still going to have plumbing and bullshit, it's just your plumbing and bullshit is now going to be in prompt engineering and/or specifications, rather than the code itself.
Then make them. What's stopping you?
Got a prescription for that too?
I've made films for fifteen years. I hate the process.
Every one of my friends and colleagues that went to film school found out quickly that their dreams would wither and die on the vine due to the pyramid nature of studio capital allocation and expenditure. Not a lot of high autonomy in that world. Much of it comes with nepotism.
There are so many things I wish to do with technology that I can't because of how much time and effort and energy and money are required.
I wish I could magic together a P2P protocol that replaced centralized social media. I wish I could build a completely open source GPU driver stack. I wish I could make Rust compile faster or create an open alternative to AWS or GCP. I wish for so many things, but I'm not Fabrice Bellard.
I don't want to constrain people to the shitty status quo. Because the status quo is shitty. I want the next generation to have better than the bullshit we put up with. If they have to suffer like we suffered, we failed.
I want the future to climb out of the pit we're in and touch the stars.
To go off the deep end… I actually think this LLM assistant stuff is a precondition to space exploration. I can see the need for an offline compressed corpus of all human knowledge that can do tasks and augment the humans aboard the ship. You’ll need it because the latency back to Earth is a killer even for a “simple” interplanetary trip to Mars: roughly 3 to 22 minutes one way! Hell, even the Moon has enough latency to be annoying.
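That latency figure is just distance divided by the speed of light. A quick back-of-the-envelope in Python, using approximate closest/farthest Earth-Mars distances:

    C_KM_S = 299_792.458           # speed of light, km/s
    MARS_NEAR_KM = 54_600_000      # approx. closest Earth-Mars distance
    MARS_FAR_KM = 401_000_000      # approx. farthest Earth-Mars distance

    for label, d_km in [("closest", MARS_NEAR_KM), ("farthest", MARS_FAR_KM)]:
        print(f"{label}: {d_km / C_KM_S / 60:.1f} min one way")
    # closest: 3.0 min one way
    # farthest: 22.3 min one way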
Granted, right now the hardware requirements and rapid evolution make it infeasible to really “install it” on some beefcake system, but I’m almost positive the general form of Moore's law will kick in and we’ll have SOTA models on our phones in no time. These things will be pervasive, and we will rely on them heavily while out in space and on other planets for every conceivable random task.
They’ll have to function reliably offline (no web search), which means they probably need to be absolutely massive models. We’ll have to find ways to selectively compress knowledge. For example, we might allocate more of the model weights to STEM topics and perhaps less to, I dunno, the fall of the Roman Empire, Greek gods, or the career trajectory of Pauly Shore. But perhaps not, because who knows: maybe a deep familiarity with Bio-Dome is what saves the colony on Kepler-452b.
People on Reddit posting AI art are getting death threats. It's absurd.
The alternative is that your bespoke solution has undiscovered security vulnerabilities, probably no security community, and no easy fix for either of those.
You get the privilege of patching Node.js.
Similarly, as a hiring manager, you can hire a React developer. You can't hire a "proprietary AI coded integrated project" developer.
This piece seems to say more about React than it says about a general shift in software engineering.
Don't like React? Easiest it's ever been not to use it.
Don't like libraries, abstractions and code reuse in general? Avoid them at your peril. You will quickly reach the frontier of your domain knowledge and resourcing, and start producing bespoke square wheels without a maintenance plan.
It's quite easy to make things without React; it's not our fault that business leaders don't let devs choose how to solve problems. But hey, who am I to complain? React projects allow me to pay my bills! I've never seen a good React project yet, and I've been working professionally with React since before class components were a thing.
Every React codebase has its own unique failures due to the npm ecosystem, and this will never change. In fact, the best way to anticipate what kind of patterns are in a given React project is to look at its package.json.
It's worth repeating, too, that not everything needs to be a React project. I understand the author enjoys the "vibe", but that doesn't make it ground truth. AI can be a great accelerator, but we should be very cognizant of what we abdicate to it.
In fact, I would argue that the post reads as though the developer is used to mostly working alone, and often choosing the wrong tool for the job. It certainly doesn't support the claim of the title.
The trend of copying code from Stack Overflow has simply evolved into the AI era.
I also expect people will attempt complete rewrites of systems without fully understanding the implications or putting safeguards in place.
AI simply becomes another tool that is misused, like many others, by inexperienced developers.
I feel like nothing has changed on the human side of this equation.
In recent months, we have MCPs, helping lots of people realize that huh, when services have usable APIs, you can connect them together!
In the current case: AI can do the tedious things for me -> huh, discarding vast dependency trees (which I only had because I previously wanted the tedious stuff done for me too) lessens my risk surface!
They really are discovered truths, but nothing forces the discovery to come with an understanding of the tradeoffs being made.
Which are often the top reason to use a framework at all.
I could re-implement a web framework in Python if I needed to, but then I would lose all the testing, documentation, and middleware, and, worst of all, the next person would have to show up and re-learn everything I did and understand my choices.
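To be concrete about what "re-implement" buys you: the core routing of a web framework really is a weekend project, and a minimal WSGI sketch makes that obvious. Everything it lacks (middleware, parameterized routes, testing, docs) is exactly what you'd be losing:

    from wsgiref.simple_server import make_server

    ROUTES = {}

    def route(path):
        """Register a handler for an exact path. No params, no methods, no middleware."""
        def decorator(fn):
            ROUTES[path] = fn
            return fn
        return decorator

    @route("/hello")
    def hello(environ):
        return "200 OK", b"hello, world\n"

    def app(environ, start_response):
        handler = ROUTES.get(environ["PATH_INFO"])
        status, body = handler(environ) if handler else ("404 Not Found", b"nope\n")
        start_response(status, [("Content-Type", "text/plain")])
        return [body]

    if __name__ == "__main__":
        make_server("", 8000, app).serve_forever()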
The benefits of frameworks were always having something well tested that you knew would do the job, and that after a bit of use you'd be familiar with, and the same still stands.
LLMs still aren't AGI, and they learn by example. The reason they are decent at writing React code is because they were trained on a lot of it, and they are going to be better at generating based on what they were trained on, than reinventing the wheel.
As the human-in-the-loop, having the LLM generate code for a framework you are familiar with (or at least other people are familiar with) also lets you step in and fix bugs if necessary.
If we get to a point, post-AGI, where we accept AGI writing fully custom code for everything (but why would it - if it has human-level intelligence, wouldn't it see the value in learning and using well-debugged and optimized frameworks?!), then we will have mostly lost control of the process.
https://theorg.com/org/unobravo-telehealth-psychology-servic...
It is interesting how many telehealth and crypto people are promoting AI (David Sacks being the finest of all specimens).
The article itself is of course an AI assisted mashup of all propaganda talking points. People using Unobravo should take note.
> I still have to deeply think about every important aspect of what I want to build. The architecture, the trade offs, the product decisions, the edge cases that will bite you at 3am.
Without a significant development period of this:
> What’s gone is the tearing, exhausting manual labour of typing every single line of code.
A professional mathematician should use every computer aid at their disposal if it's appropriate. But a freshman math major who isn't spending most of their time with just a notebook or chalk board is probably getting in the way of their own progress.
Granted, this was already an issue, to a lesser extent, with the frameworks that the author scorns. It's orders of magnitude worse with generative AI.
What I don't know is what state I'd be in right now, if I'd had AI from the start. There are definitely a ton of brain circuits I wouldn't have right now.
Counterpoint: I've actually noticed them holding me back. I have 20 years of intuition built up now for what is hard and what is easy, and most of it became wrong overnight, and is now limiting me for no real reason.
The hardest part of staying current isn't learning, but unlearning. You must first empty your cup, and all that.
It's not like we haven't heard that one before. Things have changed, but it's been a steady march. The sudden magic shift, at a different point for everyone, is in the individual mind.
Regarding the epiphany... people have been heavily overusing frameworks for non-technical reasons, making their projects more complex, more brittle, more disorganized, and more difficult to maintain, and they aren't going to stop just because LLMs make frameworks less necessary; the overuse wasn't necessary in the first place.
Perhaps unnecessary framework usage will drop, though, as the new hype replaces the old hype. But projects won't be better designed, better organized, better thought-through.
I agree, though, that there are many non-critical libraries that could be replaced with helper methods. It also coincides with more awareness of supply chain risks.
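For example, a slugify dependency is the kind of thing that, for many apps, can be a ten-line helper instead of another supply-chain entry. A rough sketch (deliberately not Unicode-complete the way a real library would be):

    import re
    import unicodedata

    def slugify(text: str) -> str:
        """Lowercase, strip accents, and reduce to hyphen-separated ASCII words."""
        text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()
        text = re.sub(r"[^a-z0-9]+", "-", text.lower())
        return text.strip("-")

    assert slugify("Héllo, Wörld!") == "hello-world"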
If you use a well regarded library, you can trust that most things in it were done with intention. If an expectation is violated, that's a learning opportunity.
With the AI firehose, you can't really treat it the same way. Bad patterns don't exactly stand out.
Maybe it'll be fine but I still expect to see a lot of code bases saddled with garbage for years to come.