Posted by stevekrouse 8 hours ago

Reports of code's death are greatly exaggerated (stevekrouse.com)
88 points | 89 comments
lateforwork 48 minutes ago|
Chris Lattner, inventor of the Swift programming language, recently took a look at a compiler written entirely by Claude AI. Lattner found nothing innovative in the code generated by the AI [1]. And this is why humans will be needed to advance the state of the art.

AI tends to accept conventional wisdom. Because of this, it struggles with genuine critical thinking and cannot independently advance the state of the art.

AI systems are trained on vast bodies of human work and generate answers near the center of existing thought. A human might occasionally step back and question conventional wisdom, but AI systems do not do this on their own. They align with consensus rather than challenge it. As a result, they cannot independently push knowledge forward. Humans can innovate with help from AI, but AI still requires human direction.

You can prod AI systems to think critically, but they tend to revert to the mean. When a conversation moves away from consensus thinking, you can feel the system pulling back toward the safe middle.

As Apple’s “Think Different” campaign in the late 90s put it: the people crazy enough to think they can change the world are the ones who do—the misfits, the rebels, the troublemakers, the round pegs in square holes, the ones who see things differently. AI is none of that. AI is a conformist. That is its strength, and that is its weakness.

[1] https://www.modular.com/blog/the-claude-c-compiler-what-it-r...

thesz 9 minutes ago||

  > ...generate answers near the center of existing thought.
This is right there in Wikipedia's article on the universal approximation theorem [1].

[1] https://en.wikipedia.org/wiki/Universal_approximation_theore...

"n the field of machine learning, the universal approximation theorems (UATs) state that neural networks with a certain structure can, in principle, approximate any continuous function to any desired degree of accuracy. These theorems provide a mathematical justification for using neural networks, assuring researchers that a sufficiently large or deep network can model the complex, non-linear relationships often found in real-world data."

And then: "Notice also that the neural network is only required to approximate within a compact set K. The proof does not describe how the function would be extrapolated outside of the region."

NNs, LLMs included, are interpolators, not extrapolators.

And the region an NN approximates within can be quite complex and not easily defined as "X:R^N drawn from N(c,s)^N", as SolidGoldMagikarp [2] clearly shows.

[2] https://github.com/NiluK/SolidGoldMagikarp
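
To make the interpolate-vs-extrapolate point concrete, here's a minimal sketch (assuming scikit-learn's MLPRegressor; the network size and test points are arbitrary). A small net fit to sin(x) on [-pi, pi] does fine inside that interval, and gives essentially arbitrary answers outside it:

  # Minimal sketch: train a tiny MLP on sin(x) over [-pi, pi], then
  # compare predictions inside vs. outside the training region.
  import numpy as np
  from sklearn.neural_network import MLPRegressor

  rng = np.random.default_rng(0)
  X_train = rng.uniform(-np.pi, np.pi, size=(2000, 1))
  y_train = np.sin(X_train).ravel()

  model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
  model.fit(X_train, y_train)

  X_in = np.array([[0.5], [1.0], [2.0]])    # inside the training interval
  X_out = np.array([[5.0], [8.0], [12.0]])  # well outside it
  print("inside: ", model.predict(X_in), "vs true", np.sin(X_in).ravel())
  print("outside:", model.predict(X_out), "vs true", np.sin(X_out).ravel())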

slopinthebag 29 seconds ago||
Yeah I think he had a pretty sane take in that article:

>CCC shows that AI systems can internalize the textbook knowledge of a field and apply it coherently at scale. AI can now reliably operate within established engineering practice. This is a genuine milestone that removes much of the drudgery of repetition and allows engineers to start closer to the state of the art.

And also

> The most effective engineers will not compete with AI at producing code, but will learn to collaborate with it, by using AI to explore ideas faster, iterate more broadly, and focus human effort on direction and design. Lower barriers to implementation do not reduce the importance of engineers; instead, they elevate the importance of vision, judgment, and taste. When creation becomes easier, deciding what is worth creating becomes the harder problem. AI accelerates execution, but meaning, direction, and responsibility remain fundamentally human.

pacman128 1 hour ago||
In a chat bot coding world, how do we ever progress to new technologies? The AI has been trained on numerous people's previous work. If there is no prior art for, say, a new language or framework, the AI models will struggle. How will the vast amounts of new training data they require ever be generated if there is not a critical mass of developers?
justonceokay 1 hour ago||
Most art forms do not have a wildly changing landscape of materials and mediums. In software we are seeing things slow down in terms of tooling changes because the value provided by computers is becoming more clear and less reliant on specific technologies.

I figure that all this AI coding might free us from NIH syndrome and reinventing relational databases for the 10th time, etc.

sd9 1 hour ago|||
LLMs are very much NIH machines
ffsm8 57 minutes ago||
I'd go one step further: they're going to turbocharge the NIH syndrome and treat every code file as a separate "here".
sd9 9 minutes ago|||
Claude Code is incredibly keen to create new parallel implementations of all sorts of things. I agree entirely.
j_bum 18 minutes ago|||
For others like me who know “NIH” to be “National Institutes of Health”…

“NIH” here refers to “Not Invented Here” Syndrome, or a bias against things developed externally.

realusername 1 hour ago|||
The bar to create the new X framework has just been lowered so I expect the opposite, even more churn.
derrak 1 hour ago|||
Maybe you’re right about modern LLMs. But you seem to be making an unstated assumption: “there is something special about humans that allows them to create new things, and computers don’t have this thing.”

Maybe you can’t teach current LLM-backed systems new tricks. But do we have reason to believe that no AI system can synthesize novel technologies? What reason do you have to believe humans are special in this regard?

adamiscool8 1 hour ago|||
After thousands of years of research we still don’t fully understand how humans do it, so what reason (besides a sort of naked techno-optimism) is there to believe we will ever be able to replicate the behavior in machines?
derrak 55 minutes ago|||
The Church-Turing thesis comes to mind. It would at least suggest that humans aren’t capable of doing anything computationally beyond what can be instantiated in software and hardware.

But sure, instantiating these capabilities in hardware and software is beyond our current abilities. It seems likely that it is possible, though, even if we don’t know how to do it yet.

sophrosyne42 41 minutes ago||
The Church-Turing thesis is about following well-defined rules. It is not about the system that creates such rules, or decides whether or not to follow them. Such a system (the human mind) must exist for rules to be followed, yet that system must be outside mere rule-following, since it embodies a function which does not exist in rule-following itself, e.g., the faculty of deciding which rules are to be followed.
derrak 35 minutes ago||
We can keep our discussion about church turing here if you want.

I will argue that the following capacities are themselves controlled by rules: 1. creating rules, and 2. deciding to follow rules (or not).

ben_w 49 minutes ago||||
That humans come in various degrees of competence at this, rather than an, ahem, boolean have/don't have; plus the fact that we can already do a bad approximation of it, in a field whose rapid improvements hint that there is still a lot of low-hanging fruit, is a reason for techno-optimism.
stravant 34 minutes ago|||
Thousands of years?

We've only had the tech to be able to research this in some technical depth for a few decades (both scale of computation and genetics / imaging techniques).

sophrosyne42 45 minutes ago||||
It's not an assumption; it is a fact about how computers function today. LLMs interpolate, they do not extrapolate. Nobody has shown a method to get them to extrapolate. The insistence to the contrary involves an unstated assumption that technological progress toward human-like intelligence is in principle possible. In reality, we do not know.
derrak 38 minutes ago||
As long as agnosticism is the attitude, that’s fine. But we shouldn’t let mythology about human intelligence/computational capacity stop us from making progress toward that end.

> unstated assumption that technological progress towards human-like intelligence is in principle possible. In reality, we do not know.

For me this isn’t an assumption, it’s a corollary that follows from the Church-Turing thesis.

danaris 1 hour ago|||
That's irrelevant.

The claim being made is not "no computer will ever be able to adapt to and assist us with new technologies as they come out."

The claim being made is "modern LLMs cannot adapt to and assist us with new technologies until there is a large corpus of training data for those technologies."

Today, there exists no AI or similar system that can do what is being described. There is also no credible way forward from what we have to such a system.

Until and unless that changes, either humans are special in this way, or it doesn't matter whether humans are special in this way, depending on how you prefer to look at it.

derrak 47 minutes ago||
Note that I prefaced my comment by saying the parent might be right about LLMs.

> That's irrelevant.

My comment was relevant, if a bit tangential.

Edit: I also want to say that our attitude toward machine vs. human intelligence does matter today because we’re going to kneecap ourselves if we incorrectly believe there is something special about humans. It will stop us from closing that gap.

jedberg 1 hour ago|||
People are doing this now. It's basically what skills.sh and its ilk are for -- to teach AIs how to do new things.

For example, my company makes a new framework, and we have a skill we can point an agent at. Using that skill, it can one-shot fairly complicated code using our framework.

The skill itself is pretty much just the documentation and some code examples.
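
For what it's worth, a minimal sketch of that idea only (the file names here are hypothetical stand-ins, not the actual skills.sh mechanism): the "skill" amounts to documentation plus worked examples assembled in front of the task before it goes to the model.

  # Hypothetical sketch: a "skill" as docs + examples prepended to the task.
  from pathlib import Path

  def build_context(task: str) -> str:
      docs = Path("skill/framework_docs.md").read_text()  # hypothetical paths
      examples = "\n\n".join(
          p.read_text() for p in sorted(Path("skill/examples").glob("*.py"))
      )
      return f"{docs}\n\n# Worked examples\n{examples}\n\n# Task\n{task}"

  prompt = build_context("Write a job that ingests a CSV with our framework.")
  # The assembled prompt then goes to whatever agent or model you use.
  print(len(prompt), "characters of skill context + task")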

majormajor 7 minutes ago|||
Isn't the "skill" just stuff that gets put into the context? Usually with a level of indirection like "look at this file in this situation"?

How long can you keep adding novel things into the start of every session's context and get good performance, before it loses track of which parts of that context are relevant to what tasks?

IMO for working on large codebases sticking to "what the out of the box training does" is going to scale better for larger amounts of business logic than creating ever-more not-in-model-training context that has to be bootstrapped on every task. Every "here's an example to think about" is taking away from space that could be used by "here is the specific code I want modified."

The sort of framework you mention in a different reply - "No, it was created by our team of engineers over the last three years based on years of previous PhD research." - is likely a bit special, if you gain a lot of expressibility for the up-front cost, but this is very much not the common situation for in-house framework development, and could likely get even more rare over time with current trends.

andrei_says_ 1 hour ago||||
The question is, who made the new framework? Was it vibe coded by someone who does not understand its code?
jedberg 1 hour ago||
No, it was created by our team of engineers over the last three years based on years of previous PhD research.
NewJazz 1 hour ago|||
A framework is different than a paradigm shift or new language.
jedberg 1 hour ago||
Yes and no. How does a human learn a new language? They use their previous experience and the documentation to learn it. Oftentimes the way someone learns a new language is by taking something in an old language and rewriting it.

LLMs are really good at doing that. Arguably better than humans at RTFM and then applying what's there.

NewJazz 40 minutes ago||
And LLMs will get retrained eventually. So writing one good spec and a great harness (or multiple) might be enough, eventually.
kstrauser 1 hour ago|||
That’s factually untrue. I’m using models to work on frameworks with nearly zero preexisting examples to train on, doing things no one’s ever done with them, and I know this because I know the ecosystem around these young frameworks.

Models can RTFM (and code) and do novel things, demonstrably so.

allthetime 1 hour ago||
Yeah. I work with bleeding-edge Zig. If you just ask Claude to write you a working TCP server with the new Io API, it doesn’t have any idea what it’s doing and the code doesn’t compile. But if you give it some minimal code examples, point it to the recent blog posts about it, and paste in relevant parts of std, it does incredibly well and produces code that it has not been trained on.
lifis 46 minutes ago|||
You can have the LLM itself generate it based on the documentation, just like a human early adopter would
pklausler 1 hour ago|||
This would also mean that we should design new programming languages out of sight of LLMs in case we need to hide code from them.
danielbln 1 hour ago|||
Inject the prior art into the (ever increasing) context window, let in-context learning do its thing, and go?
charcircuit 49 minutes ago|||
You can just have AI generate its own synthetic data to train AI with, if you want knowledge about how to use it to be in the model itself.
CamperBob2 1 hour ago||
> In a chat bot coding world, how do we ever progress to new technologies?

Funny, I'd say the same thing about traditional programming.

Someone from K&R's group at Bell Labs, straight out of 1972, would have no problem recognizing my day-to-day workflow. I fire up a text editor, edit some C code, compile it, and run it. Lather, rinse, repeat, all by hand.

That's not OK. That's not the way this industry was ever supposed to evolve, doing the same old things the same old way for 50+ years. It's time for a real paradigm shift, and that's what we're seeing now.

All of the code that will ever need to be written already has been. It just needs to be refactored, reorganized, and repurposed, and that's a robot's job if there ever was one.

bitwize 2 minutes ago|||
We were almost there, back in the 80s.

A vice president at Symbolics, the Lisp machine company at their peak during the first AI hype cycle, once stated that it was the company's goal to put very large enterprise systems within the reach of small teams to develop, and anything smaller within the reach of a single person.

And had we learned the lessons of Lisp, we could have done it. But we live in the worst timeline where we offset the work saved with ever worse processes and abstractions. Hell, to your point, we've added static edit-compile-run cycles to dynamic, somewhat Lisp-like languages (JavaScript)! And today we cry out "Save us, O machines! Save us from the slop we produced that threatens to make software development a near-impossible, frustrating, expensive process!" And the machines answer our cry by generating more slop.

badc0ffee 1 hour ago||||
You're probably using an IDE that checks your syntax as you type, highlighting keywords and surfacing compiler warnings and errors in real time. Autocomplete fills out structs for you. You can hover to get the definition of a type or a function prototype, or you can click and dig in to the implementation. You have multiple files open, multiple projects, even.

Not to mention you're probably also using source control, committing code and switching between branches. You have unit tests and CI.

Let's not pretend the C developer experience is what it was 30 years ago, let alone 50.

CamperBob2 53 minutes ago||
I disagree that any of those things are even slightly material to the topic. It's like saying my car is fundamentally different from a 1972 model because it has ABS, airbags, and a satnav.

Reply due to rate limiting:

K&R didn't know about CI/CD, but everything else you mention has either existed for 30+ years or is too trivial to argue about.

Conversely, if you took Claude Code or similar tools back to 1996, the developers there would grab a crucifix and scream for an exorcist.

badc0ffee 52 minutes ago||
You said C developers are doing things the "same old way" as always.

I think you're taking for granted the massive productivity boost that happened even before today's era of LLM agents.

sophrosyne42 38 minutes ago||||
If all problems were solved, we would already have found a paradise with nothing left to want for. Your editing workflow being similar to that of a 1970s-era language has no relevance to that question.
rustystump 1 hour ago|||
While I don't disagree with the larger point here, I do disagree that all the code we will ever need has been written. There are still soooooo many new things to uncover in that domain.
CamperBob2 1 hour ago||
Like what?
woeirua 20 minutes ago||
The argument here seems to be “you need AGI to write good code. Good code is required for… reasons. AGI is far away. Therefore code is not dead.”

First, I disagree that good code is required in any sense. We have decades of experience proving that bad code can be wildly successful.

Second, has the author not seen the METR plot? We went from "LLMs can write a function" to "agents can write working compilers" in less than a year. Anyone who thinks AGI is far away deserves to be blindsided.

anematode 12 minutes ago|
I agree in principle, but the compiler is a terrible example given the amount of scaffolding afforded to the LLMs: literally hundreds of thousands of test cases covering all kinds of esoteric corners.

Also (and this is coming from someone who thinks it's quite close) "AGI" is not implied by the ability to implement very-long-horizon software tasks. That's not "general" at all.

idopmstuff 1 hour ago||
I don't know that people are saying code is dead (or at least the ones who have even a vague understanding of AI's role) - more that humans are moving up a level of abstraction in their inputs. Rather than writing code, they can write specs in English and have AI write the code, much in the same way that humans moved from writing assembly to writing higher-level code.

But of course writing code directly will always maintain the benefit of specificity. If you want to write instructions to a computer that are completely unambiguous, code will always be more useful than English. There are probably a lot of cases where you could write an instruction unambiguously in English, but it'd end up being much longer because English is much less precise than any coding language.
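
A trivial illustration of that precision gap (plain Python, arbitrary example): the English instruction "round 2.5 to the nearest integer" is ambiguous, while code is forced to commit to one behavior.

  # "Round 2.5" is ambiguous in English; code has to pick a rule.
  from decimal import Decimal, ROUND_HALF_UP

  print(round(2.5))                                            # 2 - banker's rounding
  print(Decimal("2.5").quantize(Decimal("1"), ROUND_HALF_UP))  # 3 - round half up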

I think we'll see the same in photo and video editing as AI gets better at that. If I need to make a change to a photo, I'll be able to ask a computer, and it'll be able to do it. But if I need the change to be pixel-perfect, it'll be much more efficient to just do it in Photoshop than to describe the change in English.

But much like with photo editing, there'll be a lot of cases where you just don't need a high enough level of specificity to use a coding language. I build tools for myself using AI, and as long as they do what I expect them to do, they're fine. Code's probably not the best, but that just doesn't matter for my case.

(There are of course also issues of code quality, tech debt, etc., but I think that as AI gets better and better over the next few years, it'll be able to write reliable, secure, production-grade code better than humans anyway.)

cactusplant7374 1 hour ago|
> But of course writing code directly will always maintain the benefit of specificity. If you want to write instructions to a computer that are completely unambiguous, code will always be more useful than English.

Unless the defect rate for humans is greater than LLMs at some point. A lot of claims are being made about hallucinations that seem to ignore that all software is extremely buggy. I can't use my phone without encountering a few bugs every day.

idopmstuff 1 hour ago|||
Yeah, I don't really accept the argument that AI makes mistakes and therefore cannot be trusted to write production code (in general, at least - obviously depends on the types of mistakes, which code, etc.).

The reality is we have built complex organizational structures around the fact that humans also make mistakes, and there's no real reason you can't use the same structures for AI. You have someone write the code, then someone does code review, then someone QAs it.

Even after it goes out to production, you have a customer support team and a process for them to file bug tickets. You have customer success managers to smooth over the relationships when things go wrong. In really bad cases, you've got the CEO getting on a plane to go take the important customer out for drinks.

I've worked at startups that made a conscious decision to choose speed of development over quality. Whether or not it was the right decision is arguable, but the reality is they did so knowing that meant customers would encounter bugs. A couple of those startups are valued at multiple billions of dollars now. Bugs just aren't the end of the world (again, in most cases - I worked on B2B SaaS, not medical devices or what have you).

Fishkins 59 minutes ago||
> humans also make mistakes

This is broadly true, but not comparable when you get into any detail. The mistakes current frontier models make are more frequent, more confident, less predictable, and much less consistent than mistakes from any human I'd work with.

IME, all of the QA measures you mention are more difficult and less reliable than understanding things properly and writing correct code from the beginning. For critical production systems, mediocre code has significant negative value to me compared to a fresh start.

There are plenty of net-positive uses for AI: throwaway prototyping, certain boilerplate migration tasks, or anything you can easily add automated deterministic checks for that fully cover all of the behavior you care about. Most production systems are complicated enough that those QA techniques are insufficient to determine the code has the properties you need.

bdangubic 49 minutes ago||
> The mistakes current frontier models make are more frequent, more confident, less predictable, and much less consistent than mistakes from any human I'd work with.

My experience is literally 180 degrees from this statement. And you don’t normally get to choose the humans you work with; you may be involved in the interview process for some, but that doesn’t tell you much. I have seen so much human-written code in my career that, in the right hands, I’ll take (especially latest-frontier) LLM-written code over average human code any day of the week and twice on Sunday.

bryanrasmussen 1 hour ago|||
most human bugs are caused by failures in reasoning though, not by just making something up to leap to the conclusion considered most probable, so I'm not sure the comparison makes sense.
wiseowise 1 hour ago||
> most human bugs are caused by failures in reasoning though

Citation needed.

bryanrasmussen 59 minutes ago||
sorry, that is just taken from my experience, and perhaps I am considering reasoning to be a broader category than others might.

To be lenient, I will separate out bugs caused by insufficient knowledge as not being failures in reasoning. Do you have forms of bugs that you think are more common and are not arguably failures in reasoning that should be considered?

on edit: insufficient knowledge that I might not expect a competent developer to have is not a failure in reasoning, but a bug caused by insufficient knowledge that I would expect a competent developer in the problem space to have is a failure in reasoning, in my opinion on things.

soumyaskartha 2 hours ago||
Every few years something is going to kill code and here we are. The job changes, it does not disappear.
suzzer99 1 hour ago|
For future greenfield projects, I can see a world where the only jobs are spec-writer and test-writer, with maybe one grumpy expert coder (aka code janitor) who occasionally has to go into the code to figure out super gnarly issues.
cratermoon 1 hour ago|||
A good spec-writer, as the article notes, is writing code.
drzaiusx11 1 hour ago|||
This is already happening; many days I am that grumpy "code janitor" yelling at the damn kids to improve their slop after shit blows up in prod. I can tell you it's not "fun", but hopefully we'll converge on a scalable review system eventually that doesn't rely on a few "olds" to clean up. GenAI systems produce a lot of "mostly ok" code that has subtle issues you only catch with some experience.

Maybe I should just retire a few years early and go back to fixing cars...

suzzer99 1 hour ago||
Yeah I imagine it has to be utterly thankless being the code janitor right now when all the hype around AI is peaking. You're basically just the grumpy troll slowing things down. And God forbid you introduce a regression bug trying to clean up some AI slop code.

Maybe in the future us olds will get more credit when apps fall over and the higher ups realize they actually need a high-powered cleaner/fixer, like the Wolf in Pulp Fiction.

allthetime 1 hour ago||
I’ve got a “I haven’t written a line of code in one year” buddy whose startup is gaining traction and contracts. He’s rewritten the whole stack twice already after hitting performance issues and is now hiring cheap juniors to clean up the things he generates. It is all relatively well-defined CRUD that he’s just slapped a bunch of JS libs on top of, and it works well enough to sell, but I’m curious to see the long-term effects of these decisions.

Meanwhile I’m moving at about half the speed with a more hands-on approach (still using the bots, obviously), but my code quality and output are miles ahead of where I was last year, without sacrificing maintainability and performance for dev speed.

picafrost 55 minutes ago||
So much of society's intellectual talent has been allocated toward software. Many of our smartest are working on ad-tech, surveillance, or squeezing as much attention out of our neighbors as possible.

Maybe the current allocation of technical talent is a market failure and disruption to coding could be a forcing function for reallocation.

oblio 54 minutes ago|
Those are business goals that don't just go away because tech changes.
picafrost 49 minutes ago||
Of course. But LLMs may remove the need for top talent to be working on them.
flitzofolov 50 minutes ago||
r0ml's third law states that: “Any distributed system based on exchanging data will be replaced by a system based on exchanging programs.”

I believe the same pattern is inevitable for these higher level abstractions and interfaces to generate computer instructions. The language use must ultimately conform to a rigid syntax, and produce a deterministic result, a.k.a. "code".

Source: https://www.youtube.com/watch?v=h5fmhYc4U-Y

cratermoon 19 minutes ago||
I can't tell if the author's "when we get AGI" is sarcasm or genuine.
erichocean 1 hour ago||
> If you know of any other snippet of code that can master all that complexity as beautifully, I'd love to see it.

Electric Clojure: https://electric.hyperfiddle.net/fiddle/electric-tutorial.tw...

stevekrouse 1 hour ago|
Sick!!! Great example! I'm actually a longtime friend of and angel investor in Dustin, but I hadn't seen this.
deadbabe 2 hours ago|
My problem is that while I know “code” isn’t going away, everyone seems to believe it is, and that’s influencing how we work.

I have not really found anything that shakes these people down to their core. Any argument or example is handwaved away by claims that better use of agents or advanced models will solve these “temporary” setbacks. How do you crack them? Especially upper management.

oooyay 1 hour ago||
> I have not really found anything that shakes these people down to their core. Any argument or example is handwaved away by claims that better use of agents or advanced models will solve these “temporary” setbacks. How do you crack them? Especially upper management.

You let them play out. Shift-left was similar to this and ultimately ended in part disaster, part non-accomplishment, and part success. Some percentage of the industry walked away from shift-left greatly more capable than the rest, a larger chunk left the industry entirely, and some people never changed. The same thing will likely happen here. We'll learn a lot of lessons, the Overton window will shift, the world will be different, and it will move on. We'll have new problems and topics to deal with as AI and how to use it shifts away from being a primary topic.

noident 43 minutes ago|||
Shift-left was a disaster? A large number of my day-to-day problems at work could be described as failing to shift left, even in the face of overwhelmingly obvious benefits.
oblio 47 minutes ago|||
Shift left?

Edit: I've googled it and I can't find anything relevant. I've been working in software for 20+ years and read a myriad things and it's the first time I hear about it...

stalfie 18 minutes ago|||
Well, to be fair, judging by the shift in the general vibes of the average HN comment over the past 3 years, better use of agents and advanced models DID solve the previous temporary setbacks. The techno-optimists were right, and the nay-sayers wrong.

Over the course of about 2 years, the general consensus has shifted from "it's a fun curiosity" to "it's just better stackoverflow" to "some people say it's good" to "well it can do some of my job, but not most of it". I think for a lot of people, it has already crossed into "it can do most of my job, but not all of it" territory.

So unless we have finally reached the mythical plateau, if you just go by the trend, in about a year most people will be in the "it can do most of my job but not all" territory, and a year or two after that most people will be facing a tool that can do anything they can do. And perhaps if you factor in optimisation strategies like the Karpathy loop, a tool that can do everything but better.

Upper management might be proven right.

idopmstuff 1 hour ago|||
As a former PM, I will say that if you want to stop something from happening at your company, the best route is to come off very positive about it initially. This is critical because it gives you credibility. After my first few years of PMing, I developed a reflex that any time I heard a deeply stupid proposal, I would enthusiastically ask if I could take the lead on scoping it out.

I would do the initial research/planning/etc. mostly honestly and fairly. I'd find the positives, build a real roadmap and lead meetings where I'd work to get people onboard.

Then I'd find the fatal flaw. "Even though I'm very excited about this, as you know, dear leadership, I have to be realistic that in order to do this, we'd need many more resources than the initial plan because of these devastating unexpected things I have discovered! Drat!"

I would then propose options. Usually three, which are: Continue with the full scope but expand the resources (knowing full well that the additional resources required cannot be spared), drastically cut scope and proceed, or shelve it until some specific thing changes. You want to give the specific thing because that makes them feel like there's a good, concrete reason to wait and you're not just punting for vague, hand-wavy reasons.

Then the thing that we were waiting on happens, and I forget to mention it. Leadership's excited about something else by that point anyway, so we never revisit the dumb project again.

Some specific thoughts for you:

1. Treat their arguments seriously. If they're handwaving your arguments away, don't respond by handwaving their arguments away, even if you think they're dumb. Even if they don't fully grasp what they're talking about, you can at least concede that agents and models will improve and that will help with some issues in the future.

2. Having conceded that, they're now more likely to listen to you when you tell them that while it's definitely important to think about a future where agents are better, you've got to deal with the codebase right now.

3. Put the problems in terms they'll understand. They see the agent that wrote this feature really quickly, which is good. You need to pull up the tickets that the senior developers on the team had to spend time on to fix the code that the agent wrote. Give the tradeoff - what new features were those developers not working on because they were spending time here?

4. This all works better if you can position yourself as the AI expert. I'd try to pitch a project of creating internal evals for the stuff that matters in your org to try with new models when they come out. If you've volunteered to take something like that on and can give them the honest take that GPT-5.5 is good at X but terrible at Y, they're probably going to listen to that much more than if they feel like you're reflexively against AI.

pixl97 1 hour ago||
It's even better when you guide them into finding the fatal flaw for themselves.
idopmstuff 1 hour ago||
Hahaha yes, this is absolutely true, but oftentimes so much more work.
cratermoon 59 minutes ago||
To an extent, these people have found their religion, and rational discussion does not come into play. As with previous tech Holy Wars over operating systems, editors, and programming languages, their self-image is tied to the technology.

Where the tech argument doesn't apply to upper management, business practices, the need to "not be left behind" and leap at anything that promises reducing headcount without reducing revenue, money talks. As long as it's possible to slop something together, charge for it, and profit, slop will win.

More comments...