Posted by alainrk 3 hours ago

Coding agents have replaced every framework I used (blog.alaindichiappari.dev)
118 points | 131 comments
ipsento606 30 minutes ago|
> Software engineers are scared of designing things themselves.

When I use a framework, it's because I believe that the designers of that framework are i) probably better at software engineering than I am, and ii) have encountered all sorts of problems and scaling issues (both in terms of usage and actual codebase size) that I haven't encountered yet, and have designed the framework to ameliorate those problems.

Those beliefs aren't always true, but they're often true.

Starting projects is easy. You often don't get to the really thorny problems until you're already operating at scale and under considerable pressure. Trying to rearchitect things at that point sucks.

feastingonslop 8 minutes ago|
And there was a time when using libraries and frameworks was the right thing to do, for that very reason. But LLMs have the equivalent of way more experience than any single programmer, and can generate just the bit of code that you actually need, without having to include the whole framework.
mnicky 1 minute ago||
Critically, they will also enable faster future migration to a framework in case it proves useful.
rglover 3 minutes ago||
A significant number of developers and businesses are going to have an absolutely brutal rude awakening in the not too distant future.

You can build things this way, and they may work for a time, but you don't know what you don't know (and experience teaches you that you only find most stuff by building/struggling; not sipping a soda while the AI blurts out potentially secure/stable code).

The hubris around AI is going to be hard to watch unwind. What the moment is I can't predict (nor do I care to), but there will be a shift when all of these vibe code only folks get cooked in a way that's closer to existential than benign.

Good time to be in business if you can see through the bs and understand how these systems actually function (hint: you won't have much competition soon as most people won't care until it's too late and will "price themselves out of the market").

abcde666777 2 hours ago||
It's strange to me when articles like this describe the 'pain of writing code'. I've always found that the easy part.

Anyway, this stuff makes me think of what it would be like if you had Tolkien around today using AI to assist him in his writing.

'Claude, generate me a paragraph describing Frodo and Sam having an argument over the trustworthiness of Gollum. Frodo should be defending Gollum and Sam should be on his side.'

'Revise that so that Sam is harsher and Frodo more stubborn.'

Sooner or later I look at that and think he'd be better off just writing the damned book instead of wasting so much time writing prompts.

simonw 27 minutes ago||
Have you really never found writing code painful?

CI is failing. It passed yesterday. Is there a flaky API being called somewhere? Did a recent commit introduce a breaking change? Maybe one of my third-party dependencies shipped a breaking change?

I was going to work on new code, but now I have to spend between 5 minutes and an hour+ - impossible to predict - solving this new frustration that just cropped up.

I love building things and solving new problems. I'd rather not have that time stolen from me by tedious issues like this... especially now I can outsource the CI debugging to an agent.

These days if something flakes out in CI I point Claude Code at it and 90% of the time I have the solution a couple of minutes later.

throwaw12 22 minutes ago||
> I point Claude Code at it and 90% of the time I have the solution a couple of minutes later.

Same experience. I don't know why people keep saying code was the easy part; sure, only when you're writing boilerplate that's easy and the expectations are clear.

I agree code is easier than some other parts, but it's not the easiest; the industry employed millions of us to write that "easy" thing.

When working on large codebases or building something in the flow, I just don't want to read all the OAuth2 scopes Google requires me to obtain. My experience was never: "now I will integrate Gmail, let me do gmail.FetchEmails(), cool it works, on to the next thing"
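
To illustrate (a rough sketch with placeholder file names, not code from any real project of mine), even the "happy path" with google-auth-oauthlib and google-api-python-client looks something like this, before you've dealt with token refresh, pagination, or getting the consent screen approved:

    from google_auth_oauthlib.flow import InstalledAppFlow
    from googleapiclient.discovery import build

    # One of the many scopes you have to pick through and justify to Google.
    SCOPES = ["https://www.googleapis.com/auth/gmail.readonly"]

    # Interactive consent flow; "credentials.json" is a placeholder for your OAuth client file.
    flow = InstalledAppFlow.from_client_secrets_file("credentials.json", SCOPES)
    creds = flow.run_local_server(port=0)

    service = build("gmail", "v1", credentials=creds)

    # "Fetch emails" is really: list message IDs, then fetch each message separately.
    listing = service.users().messages().list(userId="me", maxResults=10).execute()
    for ref in listing.get("messages", []):
        msg = service.users().messages().get(userId="me", id=ref["id"]).execute()
        print(msg["snippet"])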

capyba 1 hour ago|||
Your last sentence describes my thoughts exactly. I try to incorporate Claude into my workflow, just to see what it can do, and the best I’ve ended up with is - if I had written it completely by myself from the start, I would have finished the project in the same amount of time but I’d understand the details far better.

Even just some AI-assisted development in the trickier parts of my code bases completely robs me of understanding. And those are the parts that need my understanding the most!

jatora 56 minutes ago|||
I don't really understand how this is possible. I've built some very large applications, and even a full LLM data curation, tokenizer, pretrain, post-train SFT/DPO pipeline with LLMs, and it most certainly took far less time than if I had done it manually. Sure, it isn't all optimal... but it most certainly isn't subpar, and it is fully functional
Ocha 44 minutes ago||
So you skipped the code review and just checked that it does what you needed it to do?
enraged_camel 19 minutes ago||
I don't know how anyone can make this assumption in good faith. The poster did not imply anything along those lines.
wtetzner 1 hour ago||||
> I would have finished the project in the same amount of time

Probably less time, because you understood the details better.

dvfjsdhgfv 1 hour ago||||
> if I had written it completely by myself from the start, I would have finished the project in the same amount of time but I’d understand the details far better.

I believe the argument from the other camp is that you don't need to understand the code anymore, just like you don't need to understand the assembly language.

hakunin 1 hour ago|||
Of all the points the other side makes, this one seems the most incoherent. Code is deterministic, AI isn’t. We don’t have to look at assembly, because a compiler produces the same result every time.

If you only understand the code by talking to AI, you would've been able to ask AI "how do we do a business feature" and the AI would spit out a detailed answer, for a codebase that just says "pretend there is a codebase here". This is of course an extreme example, and you would probably notice that, but this applies at all levels.

Any detail, anywhere, cannot be fully trusted. I believe everyone's goal should be to prompt AI such that code is the source of truth, and to keep the code super readable.

If AI is so capable, it's also capable of producing clean, readable code. And we should be reading all of it.

ctoth 36 minutes ago||
> other side???

> We don’t have to look at assembly, because a compiler produces the same result every time.

This is technically true in the narrowest possible sense and practically misleading in almost every way that matters. Anyone who's had a bug that only manifests at -O2, or fought undefined behavior in C that two compilers handle differently, or watched MSVC and GCC produce meaningfully different codegen from identical source, or hit a Heisenbug that disappears when you add a printf ... the "deterministic compiler" is doing a LOT of work in that sentence that actual compilers don't deliver on.

Also what's with the "sides" and "camps?" ... why would you not keep your identity small here? Why define yourself as a {pro, anti} AI person so early? So weird!

hakunin 28 minutes ago|||
You just described deterministic behavior. Bugs are also deterministic. You don’t get different bugs every time you compile the same code the same way. With LLMs you do.

Re: “other side” - I’m quoting the grandparent’s framing.

danny_codes 26 minutes ago|||
GCC is, I imagine, several orders of magnitude more deterministic than an LLM.
hakunin 21 minutes ago||
It’s not _more_ deterministic. It’s deterministic, period. The LLMs we use today are simply not.
dkersten 1 hour ago||||
People who really care about performance still do look at the assembly. Very few people write assembly anymore, a larger number do look at assembly every so often. It’s still a minority of people though.

I guess it would be similar here: a small few people will hand write key parts of code, a larger group will inspect the code that’s generated, and a far larger group won’t do either. At least if AI goes the way that the “other side” says.

Thanemate 22 minutes ago||||
>I believe the argument from the other camp is that you don't need to understand the code anymore

Then what stops anyone who can type in their native language from, ultimately, when LLMs are perfected, just ordering their own software instead of using anybody else's (speaking about native apps like video games, mobile phones, desktop, etc.)?

Do they actually believe we'll need a bachelor's degree to prompt-program in a world where nobody cares about technical details, because the LLMs will be taking care of them? Actually, scratch that. Why would the companies that are pouring gorillions of dollars of investment into this even give access to such power in an affordable way?

The deeper I look in the rabbit hole they think we're walking towards the more issues I see.

testuser312 1 hour ago||||
At least for me, the game-changer was realizing I could (with the help of AI) write a detailed plan up front for exactly what the code would be, and then have the AI implement it in incremental steps.

Gave me way more control/understanding over what the AI would do, and the ability to iterate on it before actually implementing.

scrame 1 hour ago|||
For quite a bit of software you would need to understand the assembly. Not everything is web services.
throwaw12 26 minutes ago|||
Skill issue.

Sorry for being blunt, but if you have tried once or twice and came to this conclusion, it is definitely a skill issue. I never got comfortable by writing 3 lines of Java, Python, Go, or any other language; it took me hundreds of hours spent doing nonsense, failing miserably, and finding out that I was building things that already exist in the std lib.

wtetzner 1 hour ago|||
> It's strange to me when articles like this describe the 'pain of writing code'.

I find it strange to compare the comment sections for AI articles with those about vim/emacs etc.

In the vim/emacs comments, people always state that typing in code hardly takes any time, and thinking hard is where they spend their time, so it's not worth learning to type fast. Then in the AI comments, they say that with AI writing the code, they are freed up to spend more time thinking and less time coding. If writing the code was the easy part in the first place, and wasn't even worth learning to type faster, then how much value can AI be adding?

Now, these might be disjoint sets of people, but I suspect (with no evidence of course) there's a fairly large overlap between them.

falkensmaize 1 hour ago|||
What I never understand is that people seem to think the conception of the idea and the syntactical nitty gritty of the code are completely independent domains. When I think about “how software works” I am at some level thinking about how the code works too, not just high level architecture. So if I no longer concern myself with the code, I really lose a lot of understanding about how the software works too.
geetee 1 hour ago||||
Writing the code is where I discover the complexity I missed while planning. I don't truly understand my creation until I've gone through a few iterations of this. Maybe I'm just bad at planning.
thwarted 43 minutes ago||||
At first I thought you were referring to the debates over using vim or using emacs, but I think you mean to refer to the discussions about learning to use/switching to powerful editors like vim or emacs. If you learn and use a sharp, powerful editor and learn to type fast, the "burden" of editing and typing goes away.
everforward 1 hour ago|||
I was talking to a coworker who really likes AI tooling, and it came up that they feel stronger at reading unfamiliar code than at writing code.

I wonder how much it comes down to that divide. I also wonder how true that is, or if they’re just more trusting that the function does what its name implies the way they think it should.

I suspect you, like me, feel more comfortable with code we’ve written than having to review totally foreign code. The rate limit is in the high level design, not in how fast I can throw code at a file.

It might be a difference in cognition, or maybe we just have a greater need to know precisely how something works instead of accepting a hand wavey “it appears to work, which is good enough”.

kmac_ 30 minutes ago|||
Current models won't write anything new; they are "just" great at matching, qualifying, and copying patterns. They bring a lot of value right now, but there is no creativity.
throwaw12 18 minutes ago||
95% of the industry wasn't creating creative value; it was repetitive.

* auth + RBAC, known problem, just needs integration (see the sketch after this list)

* 3rd party integration, they have an API, known problem, just needs integration

* make a webpage responsive, millions of CSS lines

* even video gaming, most engines are already written, just add your character and call a couple of APIs to move them in the 3D space
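
To make the first item concrete, something like the sketch below is what I mean by "just needs integration" (the roles and permission names are made up purely for illustration):

    from functools import wraps

    # Made-up roles and permissions, purely for illustration.
    ROLE_PERMISSIONS = {
        "admin":   {"users:read", "users:write", "billing:read"},
        "support": {"users:read"},
    }

    def require(permission):
        """Decorator that checks the user's role grants the given permission."""
        def decorator(fn):
            @wraps(fn)
            def wrapper(user, *args, **kwargs):
                allowed = ROLE_PERMISSIONS.get(user.get("role"), set())
                if permission not in allowed:
                    raise PermissionError(f"role {user.get('role')!r} lacks {permission!r}")
                return fn(user, *args, **kwargs)
            return wrapper
        return decorator

    @require("users:write")
    def deactivate_user(user, target_id):
        print(f"deactivating user {target_id}")

    deactivate_user({"role": "admin"}, 42)      # allowed
    # deactivate_user({"role": "support"}, 42)  # raises PermissionError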

kmac_ 5 minutes ago|||
That's why they bring a lot of value. Plus, new models and methods enable solutions that weren't available a decade ago.
bilbo0s 3 minutes ago|||
So true.

You can only complain about creativity if you were actually being creative. 99.99999% of the industry was not.

But sure, for the 0.00001% of the industry coming up with new deep learning algorithms instead of just TF/PyTorch monkey-ing, maybe the LLMs won't help as much as a good foundation in some pretty esoteric mathematics.

jesse_dot_id 1 hour ago|||
People are different. Some are painters and some are sculptors. Andy Warhol was a master draftsman but he didn't get famous off of his drawings. He got famous off of screen printing other people's art that he often didn't own. He just pioneered the technique and because it was new, people got excited, and today he's widely considered to be a generational artistic genius.

I tend to believe that, in all things, the quality of the output and how it is received is what matters and not the process that leads to producing the output.

If you use an LLM assisted workflow to write something that a lot of people love, then you have created art and you are a great artist. It's probable that if Tolkien was born in our time instead of his, he'd be using modern tools while still creating great art, because his creative mind and his work ethic are the most important factors in the creative process.

I'm not of the opinion that any LLM will ever provide quality that comes close to a master work by itself, but I do think they will be valuable tools for a lot of creative people in the grueling and unrewarding "just make it exist first" stage of the creative process, while genius will still shine as it always has in the "you can make it good later" stage.

thwarted 55 minutes ago||
> I tend to believe that, in all things, the quality of the output and how it is received is what matters and not the process that leads to producing the output.

Whether the ends justify the means is a well-worn disagreement/debate, and I think the only solid conclusion we've come to as a society is that it depends.

jack_pp 18 minutes ago||
That's a moral debate, not suitable for this discussion.

The discussion at hand is about purity and efficiency. Some people are process oriented, perfectionists, purists that take great pride in how they made something. Even if the thing they made isn't useful at all to anyone except to stroke their own ego.

Others are more practical and see a tool as a tool, not every hammer you make needs to be beautiful and made from the best materials money can buy.

Depending on the context either approach can be correct. For some things being a detail oriented perfectionist is good. Things like a web framework or a programming language or an OS. But for most things, just being practical and finding a cheap and clever way to get to where you want to go will outperform most over engineering.

Aperocky 59 minutes ago|||
Tolkien's book is art; programs are supposed to do something.

Now, some programs may be considered art (e.g. code golf), or considered art by their creator. I consider my programs and code only the means to get the computer to do what I want, and there are also easy ways to ensure that they do what we want.

> Frodo and Sam having an argument over the trustworthiness of Gollum. Frodo should be defending Gollum and Sam should be on his side.'

Is exactly what programs are. Not the minutiae of the language within.

alainrk 1 hour ago|||
I agree with your point. My concern is more about the tedious aspects. You could argue that tedium is part of what makes the craft valuable, and there's truth to that. But it comes down to trade-offs: what could I accomplish with that saved time, and would I get more value from those other pursuits?
estimator7292 1 hour ago|||
If you're gonna take this track, at least be honest with yourself. Does your boss get more value out of you? You aren't going to get a kickback from being more productive, but your boss sure will.
milowata 1 hour ago||||
I had this moment recently with implementing Facebook OAuth. I don't need to spend mental cycles figuring that out, doing the back and forth with their API, pulling my hair out at their docs, etc. I just want it to work so I can build my app. AI just did that part for me and I could move on.
normie3000 31 minutes ago||
Integrating auth code is probably a good example of code you want to understand, rather than just seeing that it appears to work.
marginalia_nu 50 minutes ago||||
I honestly think the stuff AI is really good at is the stuff around the programming that keeps you from the actual programming.

Take a tool like Gradle. Bigger pain in the ass than using an actual cactus as a desk chair. It has a staggering rate of syntax and feature churn with every version upgrade, sprawling documentation that is clearly written by space aliens, and every problem is completely ungoogleable as every single release does things differently and no advice stays valid for more than 25 minutes.

It's a comically torturous DevEx. You can literally spend days trying to get your code to compile again, and not a second of that time will be put toward anything productive. Sheer frustration. Just tears. Mad laughter. Rocking back and forth.

"Hey Claude, I've upgraded to this week's Gradle and now I'm getting this error I wasn't getting with last week's version, what could be going wrong?" makes all that go away in 10 minutes.

normie3000 33 minutes ago||
I'm glad to hear the gradle experience hasn't changed in the decade since I started avoiding it.
wtetzner 1 hour ago|||
I think it's still an open question if it's actually a net savings of time.
strange_quark 23 minutes ago|||
The absence of evidence is evidence in its own way. I don’t understand how there haven’t been more studies on this yet. The one from last year that showed AI made people think they were faster but were actually slower gets cited a lot, and I know that was a small study with older tools, but it’s amazing that that hasn’t been repeated. Or maybe it has and we don’t know because the results got buried.
chasd00 1 hour ago|||
One thing I’ve noticed is that effort may be saved but not as much time. The agent can certainly type faster than me but I have to sit there and watch it work and then check its work when done. There’s certainly some time savings but not what you think.
FeteCommuniste 1 hour ago||
Another thing I've noticed is that using AI, I'm less likely to give existing code another look to see if there's already something in it that does what I need. It's so simple to get the AI to spin up a new class / method that gets close to what I want, so sometimes I end up "giving orders first, asking questions later" and only later realizing that I've duplicated functionality.
dkersten 57 minutes ago|||
“ What’s gone is the tearing, exhausting manual labour of typing every single line of code.”

Yeah, this was always the easy part.

mycall 1 hour ago|||
Isn't that what Tolkien did in his head? Write something, learn what he liked/didn't like then revise the words? Rinse/repeat. Same process here.
irishcoffee 38 minutes ago||
If Tolkien had not lived an entire life, fought in a war, been buddies with other authors, and also been a decent writer, the story wouldn't exist. And an LLM won't come up with it.

An LLM isn't coming up with the Eye of Sauron, or the entire backstory of the Ring, or Gollum, etc. etc.

The LLM can’t know Tolkien had a whole universe built in his head that he worked for decades to get on to paper.

I’m so tired of this whole “an LLM just does what humans already do!” And then conflating that with “fuck all this LLM slop!”

n4r9 1 hour ago|||
Pain can mean tedium rather than intellectual challenge.
wtetzner 1 hour ago||
I really struggle to understand how people can find coding more tedious than prompting. To each their own I guess.
TuringTest 1 hour ago|||
I can only speak for myself but for me, it's all about the syntax. I am terrible at recalling the exact name of all the functions in a library or parameters in an API, which really slows me down when writing code. I've also explored all kinds of programming languages in different paradigms, which makes it hard to recall the exact syntax of operators (is comparison '=' or '==' in this language? Comments are // or /*? How many parameters does this function take, and in what order...) or control structures. But I'm good at high level programming concepts, so it's easy to say what I want in technical language and let the LLM find the exact syntax and command names for me.

I guess if you specialise in maintaining a code base with a single language and a fixed set of libraries then it becomes easier to remember all the details, but for me it will always be less effort to just search the names for whatever tools I want to include in a program at any point.

gertlex 50 minutes ago||
I agree with a bunch of this (I'm almost exclusively doing python and bash; bash is the one I can never remember more than the basics of). I will give the caveat that I historically haven't made use of fancy IDEs with easy lookup of function names, so would semi-often be fixing "ugh I got the function name wrong" mistakes.

Similar to how you outlined multi-language vs specialist, I wonder if "full stack" vs "niche" work unspokenly underlies some of the camps of "I just trust the AI" vs "it's not saving me any time".

dgacmu 1 hour ago|||
Some code is fun and some sucks?

There's a joke that's not entirely a joke that the job of a Google SWE is converting from one protobuf to another. That's generally not very fun code, IMO (which may differ from your opinion and that's why they're opinions!). Otoh, figuring out and writing some interesting logic catches my brain in a way that dealing with formats and interoperability stuff doesn't usually.

We're all different, but we all probably have things we like more than others.
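
For anyone who hasn't lived it: the joke is about endless mapping code like the rough sketch below (plain Python dataclasses standing in for generated protobuf classes, with made-up field names):

    from dataclasses import dataclass
    from typing import Optional

    # Stand-ins for two generated message types with slightly different shapes.
    @dataclass
    class StorageUser:
        user_id: int
        display_name: str
        email: str

    @dataclass
    class ApiUser:
        id: str
        name: str
        contact_email: Optional[str] = None

    def storage_to_api(u: StorageUser) -> ApiUser:
        # The whole job: rename fields, convert types, decide what to do with gaps.
        return ApiUser(
            id=str(u.user_id),
            name=u.display_name,
            contact_email=u.email or None,
        )

    print(storage_to_api(StorageUser(7, "Ada", "ada@example.com")))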

wtetzner 1 hour ago||
I mean, I agree if it's really just "machine translate this code to use the approved method of doing this thing". That seems like a perfect use case for AI. Though one would think Google would already have extensive code mod infrastructure for that kind of thing.

But those aren't the stories you hear about with people coding with AI, which is what prompted my response.

dgacmu 1 hour ago||
They do and I think a lot of that is LLM'd these days, though that's just what I hear third-hand.

I do agree that this:

> What’s gone is the tearing, exhausting manual labour of typing every single line of code.

seems more than a little overblown. But I do sympathize with not feeling motivated to write a lot of glue and boilerplate, and that "meh" often derails me on personal projects where it's just my internal motivation competing against my internal de-motivation. LLMs have been really good there, especially since many of those are cases where only I will run or deal with the code and it won't be exposed to the innertubes.

Maybe the author can't touch type, but that's a separate problem with its own solution. :)

franze 1 hour ago|||
Claude Opus 4.6:

“He’s a liar and a sneak, Mr. Frodo, and I’ll say it plain — he’d slit our throats in our sleep if he thought he could get away with it,” Sam spat, glaring at the hunched figure scrabbling over the stones ahead. “Every word out of that foul mouth is poison dressed up as helpfulness, and I’m sick of pretending otherwise.” Frodo stopped walking and turned sharply, his eyes flashing with an intensity that made Sam take half a step back. “Enough, Sam. I won’t hear it again. I have decided. Sméagol is our guide and he is under my protection — that is the end of it.” Sam’s face reddened. “Protection! You’re protecting the very thing that wants to destroy you! He doesn’t care about you, Mr. Frodo. You’re nothing to him but the hand that carries what he wants!” But Frodo’s expression had hardened into something almost unrecognizable, a cold certainty that brooked no argument. “You don’t understand what this Ring does to a soul, Sam. You can’t understand it. I feel it every moment of every day, and if I say there is still something worth saving in that creature, then you will trust my judgment or you will walk behind me in silence. Those are your choices.” Sam opened his mouth, then closed it, stung as if he’d been struck. He fell back a pace, blinking hard, and said nothing more — though the look he fixed on Gollum’s retreating back was one of pure, undisguised loathing.

Calavar 57 minutes ago||
Claude already knows who the characters Frodo, Sam, and Gollum are, what their respective character traits are, and how they interacted with each other. This isn't the same as writing something new.
echelon 1 hour ago||
Please forgive me for being blunt, I want to emphasize how much this strikes me.

Your post feels like the last generation lamenting the new generation. Why can't we just use radios and slide rules?

If you've ever enjoyed the sci-fi genre, do you think the people in those stories are writing C and JavaScript?

There's so much plumbing and refactoring bullshit in writing code. I've written years of five nines high SLA code that moves billions of dollars daily. I've had my excitement setting up dev tools and configuring vim a million ways. I want starships now.

I want to see the future unfold during my career, not just have it be incrementalism until I retire.

I want robots walking around in my house, doing my chores. I want a holodeck. I want to be able to make art and music and movies and games. I will not be content with twenty more years of cellphone upgrades.

God, just the thought of another ten years of the same is killing me. It's so fucking mundane.

The future is exciting.

Bring it.

abcde666777 1 hour ago|||
I think my take on the matter comes from being a games developer. I work on a lot of code for which agentic programming is less than ideal - code which solves novel problems and sometimes requires a lot of precise performance tuning, and/or often has other architectural constraints.

I don't see agentic programming coming to take my lunch any time soon.

What I do see it threatening is repetitive quasi carbon copy development work of the kind you've mentioned - like building web applications.

Nothing wrong with using these tools to deal with that, but I do think that a lot of the folks from those domains lack experience with heavier work, and falsely extrapolate the impact it's having within their domain to be applicable across the board.

wtetzner 1 hour ago||||
> Your post feels like the last generation lamenting the new generation.

> The future is exciting.

Not the GP, but I honestly wanted to be excited about LLMs. And they do have good uses. But you quickly start to see the cracks in them, and they just aren't nearly as exciting as I thought they'd be. And a lot of the coding workflows people are using just don't seem that productive or valuable to me. AI just isn't solving the hard problems in software development. Maybe it will some day.

objclxt 1 hour ago||||
> Your post feels like the last generation lamenting the new generation [...] There's so much plumbing and refactoring bullshit in writing code [...] I've had my excitement

I don't read the OP as saying that: to me they're saying you're still going to have plumbing and bullshit, it's just your plumbing and bullshit is now going to be in prompt engineering and/or specifications, rather than the code itself.

creata 1 hour ago||||
> I want to be able to make art and music and movies and games.

Then make them. What's stopping you?

echelon 1 hour ago||
I want to live forever and set foot on distant planets in other galaxies.

Got a prescription for that too?

I've made films for fifteen years. I hate the process.

Every one of my friends and colleagues that went to film school found out quickly that their dreams would wither and die on the vine due to the pyramid nature of studio capital allocation and expenditure. Not a lot of high autonomy in that world. Much of it comes with nepotism.

There are so many things I wish to do with technology that I can't because of how much time and effort and energy and money are required.

I wish I could magic together a P2P protocol that replaced centralized social media. I wish I could build a completely open source GPU driver stack. I wish I could make Rust compile faster or create an open alternative to AWS or GCP. I wish for so many things, but I'm not Fabrice Bellard.

I don't want to constrain people to the shitty status quo. Because the status quo is shitty. I want the next generation to have better than the bullshit we put up with. If they have to suffer like we suffered, we failed.

I want the future to climb out of the pit we're in and touch the stars.

nradov 36 minutes ago||
Computing technology always becomes cheaper and more powerful over time. But it's a slow process. The rate of improvement for LLMs is already decreasing. You will die of old age before the technology that you seem to be looking for arrives.
cruffle_duffle 25 minutes ago||||
> If you've ever enjoyed the sci-fi genre, do you think the people in those stories are writing C and JavaScript?

To go off the deep end… I actually think this LLM assistant stuff is a precondition to space exploration. I can see the need for an offline compressed corpus of all human knowledge that can do tasks and augment the humans aboard the ship. You'll need it because the latency back to earth is a killer even for a "simple" interplanetary trip to Mars; that is 4 to 24 minutes round trip! Hell, even the moon has enough latency to be annoying.

Granted, right now the hardware requirements and rapid evolution make it infeasible to really "install it" on some beefcake system, but I'm almost positive the general form of Moore's law will kick in and we'll have SOTA models on our phones in no time. These things will be pervasive, and we will rely on them heavily while out in space and on other planets for every conceivable random task.

They'll have to function reliably offline (no web search), which means they probably need to be absolutely massive models. We'll have to find ways to selectively compress knowledge. For example, we might allocate more of the model weights to STEM topics and perhaps less to, I dunno, the fall of the Roman Empire, Greek gods, or the career trajectory of Pauly Shore. But perhaps not, because who knows, maybe a deep familiarity with Bio-Dome is what saves the colony on Kepler-452b.

estimator7292 1 hour ago|||
Burn the planet to the ground because your life is boring. Extremely mature stance you've got there.
echelon 1 hour ago||
This is 1960's era anti-nuclear all over again.

People on Reddit posting AI art are getting death threats. It's absurd.

avidiax 1 hour ago||
The author seems to mistake having to update Node.js for a security patch to be a curse rather than a blessing.

The alternative is that your bespoke solution has undiscovered security vulnerabilities, probably no security community, and no easy fix for either of those.

You get the privilege of patching Node.js.

Similarly, as a hiring manager, you can hire a React developer. You can't hire a "proprietary AI coded integrated project" developer.

This piece seems to say more about React than it says about a general shift in software engineering.

Don't like React? Easiest it's ever been not to use it.

Don't like libraries, abstractions and code reuse in general? Avoid them at your peril. You will quickly reach the frontier of your domain knowledge and resourcing, and start producing bespoke square wheels without a maintenance plan.

FeteCommuniste 1 hour ago||
Yeah, I really don't get it. So instead of using someone else's framework, you're using an AI to write a (probably inferior and less thoroughly tested and considered) framework. And your robot employee is probably pulling a bunch of stuff (not quite verbatim, of course) from existing relevant open source frameworks anyway. Big whoop?
zelphirkalt 1 hour ago||
It's not really easy to not use React, since it was hyped to no end and now is entrenched. Try to get a frontend job without knowing React.
shimman 57 minutes ago||
That's a different complaint.

It's quite easy to make things without React; it's not our fault that business leaders don't let devs choose how to solve problems, but hey, who am I to complain? React projects allow me to pay my bills! I've never seen a good "react" project yet, and I've been working professionally with React since before class components were a thing.

Every React code base has its own unique failures due to the npm ecosystem, and this will never change. In fact, the best way to anticipate what kind of patterns are in a given React project is to look at their package.json.

pixelat3d 1 hour ago||
I fail to see the obvious wisdom in having AI re-implement chunks of existing frameworks without the real-world battle testing, without the supporting ecosystem, and without the common parlance and patterns -- all of which are huge wins if you ever expand development beyond a single person.

It's worth repeating too, that not everything needs to be a react project. I understand the author enjoys the "vibe", but that doesn't make it a ground truth. AI can be a great accelerator, but we should be very cognizant of what we abdicate to it.

In fact I would argue that the post reads as though the developer is used to mostly working alone, and often choosing the wrong tool for the job. It certainly doesn't support the claim of the title.

gtirloni 1 hour ago||
> re-implement chunks of existing frameworks without the real-world battle testing

The trend of copying code from StackOverflow has just evolved to the AI era now.

I also expect people will attempt complete rewrites of systems without fully understanding the implications or putting safeguards in place.

AI simply becomes another tool that is misused, like many others, by inexperienced developers.

I feel like nothing has changed on the human side of this equation.

Lalabadie 1 hour ago|||
AI has a lot of "leaders" currently working through a somewhat ignorant discovery of existing domain knowledge (ask me how being a designer has felt in the last 15 years of UX Leadership™ slowly realizing there's depth to the craft).

In recent months, we have MCPs, helping lots of people realize that huh, when services have usable APIs, you can connect them together!

In the current case: AI can do the tedious things for me -> Huh, discarding vast dependency trees (because I previously wanted the tedious stuff done for me too) lessens my risk surface!

They really are discovered truths, but no one's forcing them to come with an understanding of the tradeoffs happening.

tempest_ 1 hour ago||
> the supporting ecosystem, ... the common parlance and patterns

Which are often the top reason to use a framework at all.

I could re-implement a web framework in Python if I needed to, but then I would lose all the testing, documentation, and middleware, and worst of all the next person would have to show up and relearn everything I did and understand my choices.

HarHarVeryFunny 41 minutes ago||
I would think that frameworks make more sense than ever with LLMs.

The benefits of frameworks were always having something well tested that you knew would do the job, and that after a bit of use you'd be familiar with, and the same still stands.

LLMs still aren't AGI, and they learn by example. The reason they are decent at writing React code is that they were trained on a lot of it, and they are going to be better at generating based on what they were trained on than at reinventing the wheel.

As the human-in-the-loop, having the LLM generate code for a framework you are familiar with (or at least other people are familiar with) also lets you step in and fix bugs if necessary.

If we get to a point, post-AGI, where we accept AGI writing fully custom code for everything (but why would it - if it has human-level intelligence, wouldn't it see the value in learning and using well-debugged and optimized frameworks?!), then we will have mostly lost control of the process.

toddmorey 36 minutes ago|
It's fun to ask the models for their input. I was working on diagrams and was sure Claude would want some Python/JS framework to handle layout and nodes and connections. It said "honestly I find it easiest to just write the svg code directly".
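
For what it's worth, the direct approach looks roughly like the toy sketch below (my own illustration, not Claude's actual output):

    # Emit a tiny two-node diagram as raw SVG, no layout library involved.
    def node(x, y, label, w=120, h=40):
        return (f'<rect x="{x}" y="{y}" width="{w}" height="{h}" rx="6" '
                'fill="none" stroke="black"/>'
                f'<text x="{x + w / 2}" y="{y + h / 2 + 5}" '
                f'text-anchor="middle">{label}</text>')

    def edge(x1, y1, x2, y2):
        return f'<line x1="{x1}" y1="{y1}" x2="{x2}" y2="{y2}" stroke="black"/>'

    svg = (
        '<svg xmlns="http://www.w3.org/2000/svg" width="400" height="120">'
        + node(20, 40, "Client")
        + node(260, 40, "Server")
        + edge(140, 60, 260, 60)
        + "</svg>"
    )

    with open("diagram.svg", "w") as f:
        f.write(svg)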
tsunagatta 32 minutes ago||
That is fun, but it doesn't mean that the model finds it easier or will actually work better that way; it just means that in its training data many people said something like "honestly I find it easiest to just write the svg code directly" in response to similar questions
thagra 10 minutes ago||
If the author is this Alain di Chiappari, he works for a telehealth and psychology site:

https://theorg.com/org/unobravo-telehealth-psychology-servic...

It is interesting how many telehealth and crypto people are promoting AI (David Sacks being the finest of all specimens).

The article itself is of course an AI assisted mashup of all propaganda talking points. People using Unobravo should take note.

jazzyb 1 hour ago||
My biggest concern with AI is that I'm not sure how a software engineer can build up this sort of high-level intuition:

> I still have to deeply think about every important aspect of what I want to build. The architecture, the trade offs, the product decisions, the edge cases that will bite you at 3am.

Without a significant development period of this:

> What’s gone is the tearing, exhausting manual labour of typing every single line of code.

A professional mathematician should use every computer aid at their disposal if it's appropriate. But a freshman math major who isn't spending most of their time with just a notebook or chalk board is probably getting in the way of their own progress.

Granted, this was already an issue, to a lesser extent, with the frameworks that the author scorns. It's orders of magnitude worse with generative AI.

andai 1 hour ago|
I'm not sure. I don't know about deep expertise and mastery, but I can attest that my fluency in several languages skyrocketed as a result of AI, simply because the friction involved in writing them went down by orders of magnitude. So I am writing way more code now in domains that I previously avoided, and I've noticed that I am now much more capable there even without the AI.

What I don't know is what state I'd be in right now, if I'd had AI from the start. There are definitely a ton of brain circuits I wouldn't have right now.

Counterpoint: I've actually noticed them holding me back. I have 20 years of intuition built up now for what is hard and what is easy, and most of it became wrong overnight, and is now limiting me for no real reason.

The hardest part to staying current isn't learning, but unlearning. You must first empty your cup, and all that.

jmull 31 minutes ago||
> Since [a few months ago], things have dramatically changed...

It's not like we haven't heard that one before. Things have changed, but it's been a steady march. The sudden magic shift, at a different point for everyone, is in the individual mind.

Regarding the epiphany... since people have been heavily overusing frameworks -- making their projects more complex, more brittle, more disorganized, more difficult to maintain -- for non-technical reasons, people aren't going to stop just because LLMs make them less necessary; the overuse wasn't necessary in the first place.

Perhaps unnecessary framework usage will drop, though, as the new hype replaces the old hype. But projects won't be better designed, better organized, or better thought-through.

netrem 2 hours ago|
Using a framework gives you some assurance that the underlying methods are well designed. If you don't know how to spot issues in auth design, then using an LLM instead of a library is a bad idea.

I agree, though, that there are many non-critical libraries that could be replaced with helper methods. It also coincides with more awareness of supply chain risks.
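
For example (my own toy illustration, nothing from the article), a slugify dependency is often just a few lines you can own yourself:

    import re
    import unicodedata

    def slugify(text: str) -> str:
        """Turn 'Héllo, World!' into 'hello-world' without pulling in a package."""
        # Drop accents, then collapse runs of non-alphanumerics into single dashes.
        text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode("ascii")
        text = re.sub(r"[^a-zA-Z0-9]+", "-", text).strip("-")
        return text.lower()

    print(slugify("Héllo, World!"))  # hello-world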

jayd16 22 minutes ago||
I think this is a subtle but important point.

If you use a well regarded library, you can trust that most things in it were done with intention. If an expectation is violated, that's a learning opportunity.

With the AI firehose, you can't really treat it the same way. Bad patterns don't exactly stand out.

Maybe it'll be fine but I still expect to see a lot of code bases saddled with garbage for years to come.
