Posted by yuedongze 6 days ago

AI should only run as fast as we can catch up(higashi.blog)
198 points | 181 comments
donatj 6 days ago|
I have been a developer for twenty years now. For me to trust code, I want to understand every single line. Working on team projects, I learned long ago that that's impossible for a single person on a large project. I learned to trust that someone understands the code, and between git blame and Slack I can almost always hunt that person down.

More and more often while doing code review, I find I don't understand something, I ask, and the "author" clearly has no idea what it is doing either.

I find it quite troubling how little actual human thought is going into things. The AI's context window is not nearly large enough to fully understand the entire scope of any decently sized application's ecosystem. It just takes small peeks at bits and makes decisions based on a tiny slice of the world.

It's a powerful tool and as such needs to be guided with care.

MLgulabio 6 days ago||
Software becomes legacy very fast.

I have seen so many projects where the people who understood all of it are just gone. They moved on, did something else, etc.

As soon as this happens, you no longer have anyone 'getting it'. You have to handle so many people adding/changing very thin lines across all components, and you can only hope that the original people had enough foresight to add enough unit tests for the core decisions.

So I really don't mind AI here anymore.

rnewme 5 days ago||
Not sure why this is dead, but in nearly all of my consulting gigs, sooner or later I ended up having to check on a project/service that is effectively abandoned. Last time was this morning. Luckily I had Claude Code and CLI tools to go through a few dozen repos and millions of LOC to find some obscure endpoints and data structures, since there wasn't even anyone to ask what to look for.
DANmode 2 days ago|||
One of the most powerful takeaways I had from a builders’ conference this week was:

“Whatever code you commit - you own it - no matter who (or what) wrote it.”

Make this your top-down directive, and fire people who insist on throwing trash over the fence into your yard.

nradov 5 days ago|||
We might have to give up on trust and understanding in complex domains. To draw an analogy from another field, pharmaceutical researchers often don't understand the exact mechanism of action for drugs they develop. Biological systems are too complex and much of the basic research hasn't been done yet. So they rely on rigorous testing to verify that new drugs are safe and effective. It isn't a perfect system (sometimes drugs get recalled or have warnings added later), but it works well enough.
fragmede 5 days ago|||
Can humans, though? There's a reason we don't just lump everything into one giant file and a singleton class named DoIt(). Who hasn't come back around to some bit of code in a project and wondered what dumbass wrote this, only for the logs to tell you that it was you who wrote it, years ago. If AI results in code that's more modular, in smaller, digestible, understandable chunks, I'm not hearing that as a bad thing!
BeFlatXIII 5 days ago||
> I learned to trust that someone understands the code and between blames and Slack I can almost always hunt that person down.

Does your company not have many retirements, firings, or employees who quit to work elsewhere?

yuedongze 6 days ago||
It's nice to see a wide array of discussions under this! Glad that I didn't give up on this thought and ended up writing it down.

I want to stress that the main point of my article is not really about AI coding, it's about letting AI perform any arbitrary tasks reliably. Coding is an interesting one because it seems like it's a place where we can exploit structure and abstraction and approaches (like TDD) to make verification simpler - it's like spot-checking in places with a very low soundness error.

I'm encouraging people to look for tasks other than coding to see if we can find similar patterns. The more of these cost asymmetries we can find (easier to verify than to do), the more we can harness AI's real potential.

felipeerias 6 days ago||
Thinking about the relationship between creation and verification is a good way to develop productive workflows with AI tools.

One that works particularly well in my case is test-driven development followed by pair programming (a sketch of the first step below):

• “given this spec/context/goal/… make test XYZ pass”

• “now that we have a draft solution, is it in the right component? is it efficient? well documented? any corner cases?…”
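
To make the first step concrete, here's the kind of failing test I might hand over (a toy sketch in C# with xUnit; Slug.Slugify is a made-up target, not a real library):

  using Xunit;

  public class SlugTests
  {
      // The test is the spec: the tool's job is to make it pass,
      // and mine is only to verify that it does.
      [Theory]
      [InlineData("Hello, World!", "hello-world")]
      [InlineData("  spaces   everywhere ", "spaces-everywhere")]
      public void Slugify_NormalizesTitles(string title, string expected)
      {
          Assert.Equal(expected, Slug.Slugify(title));
      }
  }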

Yoric 6 days ago||
Note that in the case of coding, there is an entire branch of computer science dedicated to verification.

All the type systems (and model-checkers) for Rust, Ada, OCaml, Haskell, TypeScript, Python, C#, Java, ... are based on such research, and these are all rather weak in comparison to what research has created in the last ~30 years (see Rocq, Idris, Lean).
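
To give a flavor of that stronger end (a minimal Lean 4 sketch of my own, not tied to any particular project): the type is the specification, and the definition only compiles if it provably satisfies it.

  -- A head function that provably cannot be called on an empty list:
  -- the caller must supply evidence `h : xs ≠ []`, checked at compile time.
  def safeHead : (xs : List α) → xs ≠ [] → α
    | [],     h => absurd rfl h
    | x :: _, _ => x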

This goes beyond code, as some of these mechanisms have been applied to mathematics, but also to some aspects of finance and law (I know of at least mechanisms to formally prove implementations of banking contracts and tax management).

So there is lots to do in the domain. Sadly, like every branch of CS other than AI (and in fact pretty much every branch of science other than AI), this branch of computer science is underfunded. But that can change!

charcircuit 6 days ago||
Considering how useful I've found AI at finding and fixing bugs in proportion to the effort I put in, I question your claim that it's underfunded. While I have learned languages like Idris, in the end I was never able to practically use them to reduce bugs in the software I was writing, unlike AI. It's possible that the funding towards these types of languages is actually distracting people from more practical solutions, which could mean the field is overfunded with regard to program verification.
Yoric 3 days ago||
I'm speaking of verification, which is about making code (provably) airtight with respect to specifications, now and forever.

You're answering with finding bugs, which is about fixing one issue at a time.

Both are useful, but we're not speaking of the same scale.

seanmcdirmid 6 days ago||
Make the AI go to lots of meetings. It won’t stand a chance in keeping up its productivity.
zerosizedweasle 6 days ago|
https://www.reuters.com/graphics/USA-ECONOMY/AI-INVESTMENT/g...
HPsquared 6 days ago|||
1.6 trillion, why that's almost as much as the F-35 program!
timpera 6 days ago|||
Great visualization!
blauditore 6 days ago||
All these engineers who claim to write most code through AI - I wonder what kind of codebase that is. I keep trying, but it always ends up producing superficially okay-looking code that gets nuances wrong. It also fails to fix them (it just changes random stuff) when pointed to said nuances.

I work on a large product with two decades of accumulated legacy, maybe that's the problem. I can see though how generating and editing a simple greenfield web frontend project could work much better, as long as actual complexity is low.

bob1029 6 days ago||
I have my best successes by keeping things constrained to method-level generation. Most of the things I dump into ChatGPT look like this:

  public static double ScoreItem(Span<byte> candidate, Span<byte> target)
  {
     //TODO: Return the normalized Levenshtein distance between the 2 byte sequences.
     //... any additional edge cases here ...
  }
I think generating more than one method at a time is playing with fire. Individual methods can be generated by the LLM and tested in isolation. You can incrementally build up and trust your understanding of the problem space by going a little bit slower. If the LLM is operating over a whole set of methods at once, it is like starting over each time you have to iterate.
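
For illustration, here's roughly the kind of completed body that comes back, which I can then test in isolation (a sketch of one plausible completion; normalizing by the longer sequence is an assumption on my part, not part of the prompt):

  public static double ScoreItem(Span<byte> candidate, Span<byte> target)
  {
      // Only case where the normalization below would divide by zero.
      if (candidate.Length == 0 && target.Length == 0) return 0.0;

      // Classic two-row dynamic programming for Levenshtein distance.
      var previous = new int[target.Length + 1];
      var current = new int[target.Length + 1];
      for (int j = 0; j <= target.Length; j++) previous[j] = j;

      for (int i = 1; i <= candidate.Length; i++)
      {
          current[0] = i;
          for (int j = 1; j <= target.Length; j++)
          {
              int substitutionCost = candidate[i - 1] == target[j - 1] ? 0 : 1;
              current[j] = Math.Min(
                  Math.Min(previous[j] + 1, current[j - 1] + 1),
                  previous[j - 1] + substitutionCost);
          }
          (previous, current) = (current, previous);
      }

      // Normalize by the longer sequence so the score lands in [0, 1].
      return (double)previous[target.Length]
          / Math.Max(candidate.Length, target.Length);
  }

A single method like this is trivial to sanity-check against a handful of known distances, which is the whole point of keeping the unit of generation small.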
theshrike79 6 days ago|||
"Dumping into ChatGPT" is by far the worst way to work with LLMs, then it lacks the greater context of the project and will just give you the statistical average output.

Using an agentic system that can at least read the other bits of code is more efficient than copypasting snippets to a web page.

bob1029 6 days ago|||
> then it lacks the greater context of the project

This is the point. I don't want it thinking about my entire project. I want it looking at a very specific problem each time.

theshrike79 5 days ago||
But why?

Most code is about patterns, specific code styles and reusing existing libraries. Without context none of that can be applied to the solution.

If you put a programmer in a room and give them a piece of paper with a function and say OPTIMISE THAT! - is it going to be their best work?

samdoesnothing 6 days ago|||
I do this but with copilot. Write a comment and then spam opt-tab and 50% of the time it ends up doing what I want and I can read it line-by-line before tabbing the next one.

Genuine productivity boost, but I don't feel like it's AI slop; sometimes it feels like it's actually reading my mind and just preventing me from having to type...

jerf 6 days ago||
I've settled in on this as well for most of my day-to-day coding. A lot of extremely fancy tab completion, using the agent only for manipulation tasks I can carefully define. I'm currently in a "write lots of code" mode, which affects that, I think. In a maintenance mode I could see doing more agent prompting. It gives me a chance to catch things early and then put in a correct pattern for it to continue forward with. And honestly, for a lot of tasks it's not particularly slower than the "ask it to do something, correct its five errors, tweak the prompt" workflow.

I've had net-time-savings with bigger agentic tasks, but I still have to check it line-by-line when it is done, because it takes lazy shortcuts and sometimes just outright gets things wrong.

Big productivity boost, it takes out the worst of my job, but I still can't trust it at much above the micro scale.

I wish I could give a system prompt for the tab complete; there's a couple of things it does over and over that I'm sure I could prompt away but there's no way to feed that in that I know of.

CuriouslyC 6 days ago|||
It's architecture dependent. A fairly functional modular monolith with good documentation can be accessible to LLMs at the million line scale, but a coupled monolith or poorly instrumented microservices can drive agents into the ground at 100k.
yuedongze 6 days ago||
I think it's definitely an interesting subject for Verification Engineering. The more precisely we can task AI to do work, the easier we can check its work.
CuriouslyC 6 days ago||
Yup. Codebase structure for agents is a rabbit hole I've spent a lot of time going down. The interesting thing is that it's mostly the same structure that humans tend to prefer, with a few tweaks: agents like smaller files/functions (more precise reads/edits), strongly typed functional programming, doc-comments with examples and hyperlinks to additional context, smaller directories with semantic subgroups, long/distinct variable names, etc.
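
A tiny C# illustration of the flavor (all names and the doc link are hypothetical; assumes .NET 6+ for DateOnly):

  /// <summary>
  /// Converts a UTC timestamp to the tenant's local business date.
  /// Example: 2024-01-01T03:00Z with offset -05:00 maps to 2023-12-31.
  /// See docs/billing/business-dates.md for the cutoff rules.
  /// </summary>
  public static DateOnly ToTenantBusinessDate(
      DateTimeOffset utcTimestamp,
      TimeSpan tenantUtcOffset)
  {
      // Small, pure, and strongly typed: an agent can read and edit it
      // precisely without pulling in unrelated context.
      DateTimeOffset tenantLocalTime = utcTimestamp.ToOffset(tenantUtcOffset);
      return DateOnly.FromDateTime(tenantLocalTime.DateTime);
  }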
lukan 6 days ago||
Aren't those all things humans also tend to prefer to read?

I like to read descriptive variable names, I just don't like to write them all the time.

hathawsh 6 days ago|||
I think your intuition matches mine. When I try to apply Claude Code to a large code base, it spends a long time looking through the code and then it suggests something incorrect or unhelpful. It's rarely worth the trouble.

When I give AI a smaller or more focused project, it's magical. I've been using Claude Code to write code for ESP32 projects and it's really impressive. OTOH, it failed to tell me about a standard device driver I could be using instead of a community device driver I found. I think any human who works on ESP-IDF projects would have pointed that out.

AI's failings are always a little weird.

seanmcdirmid 6 days ago|||
Have you tried having the AI build up documentation on the code first, then correcting it where its understanding is wrong, then running code changes with the docs in the context? You can even separate it out per module if you are daring. AI still takes a lot of hand-holding to be productive with, which means our jobs are safe for now, until they start learning about SWE principles somehow.
manmal 6 days ago||||
In large projects you need to actually point it to the interesting files, because it has no way of knowing what it doesn't know. Tell it to read this and that, creating summary documents, then clear the context and point it at those summaries. A few of those passes and you'll get useful results. A gap in its knowledge of relevant code will lead to broken functionality. Cursor and others have been trying to solve this with semantic search (embeddings), but IMO this just can't work, because the relevance of a piece of code to a task is not determinable from any of its traits.
Yoric 6 days ago||
But in the end, do you feel that it has saved you time?

I find hand-holding Claude a permanent source of frustration, except in the rare case that it helps me discover an error in the code.

manmal 6 days ago||
I've had a similar feeling before Opus 4.5. Now it suddenly clicks with me, and it has passed the shittiness threshold into the "often useful" area. I suspect that's because Apple is partnering with Anthropic and they will have improved Swift support.

E.g. it's great for refactoring now; it often updates the README along with renames without me asking. It's also really good at rebasing quickly, but only by cherry-picking inside a worktree. Churning out small components I don't want to add a new dependency for, those are usually good on the first try.

For implementing whole features, the space of possible solutions is way too big to always hit something that I'll be satisfied with. Once I have an idea of how to implement something in broad strokes, I can give it a very error-ridden first draft as a stream of thoughts, let it read all required files, and have it make an implementation plan. Usually that's not too far off, and doesn't take that long. Once that's done, Opus 4.5 is pretty good at implementing that plan. Still, I read every line if it's going to production.

divan 6 days ago|||
I start new projects "AI-first": start with docs, and refine them on the go, with multiple CLAUDE.md files in different folders (to give the right context where it's needed). This alone increases the chances of it getting tasks right tenfold. Plus I almost always verify all the produced code myself.

Ironically, this would be the best workflow with humans too.

moomoo11 6 days ago|||
You need to realize when you’re being marketed to and filter out the nonsense.

Now I use agentic coding a lot with maybe 80-90% success rate.

I'm on greenfield projects (my startup), and maintaining strict .md files with architecture decisions and examples helps a lot.

I barely write code anymore, and mostly code review and maintain the documentation.

In existing pre-AI codebases I think it's near impossible, because I've never worked anywhere that maintained documentation. It was always a chore.

freedomben 6 days ago|||
I've tried it extensively, and have the same experience as you. AI is also incredibly stubborn when it wants to go down a path I reject. It constantly tries to do it anyway and will slip things in.

I've tried vibe coding and usually end up with something subtly or horribly broken, with excessive levels of complexity. Once it digs itself a hole, it's very difficult to extricate it even with explicit instruction.

qudat 6 days ago|||
Are you using it only on massive codebases? It's much better with smaller codebases where it can put most of the code in context.

Another good use case is to use it for knowledge searching within a codebase. I find that to be incredibly useful without much context "engineering"

eloisant 6 days ago||
It's also good on massive codebases that include a lot of "good practices" examples.

Say you want to add new functionality, for example plugging into the shared user service that already exists in another service in the same monorepo: the AI will be really good at identifying an example and applying it to your service.

daliusd 5 days ago|||
I use AI successfully in two projects:

* My 5-year-old project: a monorepo with a backend, 2 front-ends and 2 libraries

* A 10+-year-old company project: about 20 various packages in a monorepo

In both cases I successfully give Claude Code or OpenCode instructions either at package level or monorepo level. Usually I prefer package level.

E.g. just now I gave my personal project the instruction: "Invoice styles in /app/settings/invoice should be localized". It figured out that the unlocalized strings come from a library package, added the strings to the code and messages files (adding missing translations), but it did not clean up the hardcoded strings from the library. Since I know the code, I wrote an extra prompt, "Maybe INVOICE_STYLE_CONFIGS can be cleaned-up in such case", and it cleaned up what I expected, then ran tests and linting.

wubrr 6 days ago|||
I've generally had better luck when using it on new projects/repos. When working on a large existing repo it's very important to give it good context/links/pointers to how things currently work/how they should work in that repo.

Also - Claude (~the best coding agent currently, imo) will make mistakes, sometimes many of them. Tell it to test the code it writes and make sure it's working; I've generally found it's pretty good at debugging/testing and fixing its own mistakes.

mrtksn 6 days ago|||
So far I've found that AI is very good at writing code, as in translating English to computer code.

Instead of dealing with the intricacies of directly writing the code, I explain to the AI what we are trying to achieve next and what approach I prefer. This way I am still on top of it, I am able to understand the quality of the code it generated, and I'm the one who integrates everything.

So far I've found the tools that are supposed to be able to edit the whole codebase at once to be useless. I instantly lose perspective when the AI IDE fiddles with multiple code blocks and does some magic. The chatbot interface is superior for me, as control stays with me and I still follow the code writing step by step.

bojan 6 days ago|||
> I work on a large product with two decades of accumulated legacy, maybe that's the problem.

I'm in a similar situation, and for the first time ever I'm actually considering if a rewrite to microservices would make sense, with a microservice being something small enough an AI could actually deal with - and maybe even build largely on its own.

vanviegen 6 days ago||
If you're creating microservices that are small enough for a current-gen LLM to deal with well, that means you're creating way too many microservices. You'll be reminiscing about your two decades of accumulated legacy monolith with fondness.
themafia 6 days ago|||
> as long as actual complexity is low.

You can start there. Does it ever stay that way?

> I work on a large product with two decades of accumulated legacy

Survey says: No.

silisili 6 days ago|||
> I work on a large product with two decades of accumulated legacy, maybe that's the problem

Definitely. I've found Claude at least isn't so good at working in large existing projects, but great at greenfielding.

Most of my use these days is having it write specific functions and tests for them, which in fairness, saves me a ton of time.

rprend 6 days ago|||
<1 year old startup with fullstack javascript monorepo. Hosted with a serverless platform with good devex, like cloudflare workers.

That’s the typical “claude code writes all my code” setup. That’s my setup.

This does require you to fit your problem to the solution. But when you do, the results are tremendous.

tuhgdetzhh 6 days ago|||
Yes, unfortunately those who jumped on the microservices hype train over the past 15 years or so are now getting the benefits of Claude Code, since their entire codebases fit into the context window of Sonnet/Opus and can be "understood" by the LLM to generate useful code.

This is not the case for most monoliths, unless they are structured into LLM-friendly components that resemble patterns the models have seen millions of times in their training data, such as React components.

manmal 6 days ago||
Well structured monoliths are modularized just like microservices. No need to give each module its own REST API in order to keep it clean.
bccdee 6 days ago|||
Conversely, poorly-structured microservices are just monoliths where most of the code is in other repositories.
Yoric 6 days ago||||
I guess the benefit of monoliths in this context is that they (often) live in distinct repositories, which makes it easier for Claude to ingest them entirely, or at least not get lost looking at the wrong directory.
randomtoast 6 days ago|||
One problem is that the idea of being "well-structured" has gone overboard at some point over the past 20 years in many companies. As a result, many companies now operate highly convoluted monolithic systems that are extremely difficult to replace.

In contrast, a poorly designed microservice can be replaced much more easily. You can identify the worst-performing and most problematic microservices and replace them selectively.

tuhgdetzhh 6 days ago||
> One problem is that the idea of being "well-structured" has gone overboard at some point over the past 20 years

That's exactly my experience. While a well-structured monolith is a good idea in theory, and I'm sure such examples exist in practice, that has never been the case in any of my jobs. Friends working at other companies report similar experiences.

cogman10 6 days ago|||
Honestly, if you've ever looked at a claude.md file, it seems like absolute madness. I feel like I'm reading affirmations from AA.
manmal 6 days ago|||
It's magical incantations that might or might not protect you from bad behavior Claude learned from underqualified RL instructors. A classic instruction I have in CLAUDE.md is "Never delete a test. You are only allowed to replace it with a test that covers the same branches." and another one is "Never mention Claude in a commit message". Of course those sometimes fail, so I do have a message hook that enforces a certain style of git messages.
Havoc 6 days ago|||
> "Never mention Claude in a commit message". Of course those sometimes fail

It's hardcoded into the system prompt, which is why your CLAUDE.md approach fails. I ended up intercepting it out via a proxy.

manmal 6 days ago||
Thanks for this idea!
HWR_14 6 days ago|||
Why would it be bad to mention Claude in a commit message?
manmal 5 days ago||
Just because Claude ran the commit command, doesn’t mean it wrote the code. That’s just a nasty marketing hack from Anthropic.
theshrike79 6 days ago|||
Way too many agent prompt files are just fan fiction or D&D character background documents that have no actual effect on what the agent does =)
junkaccount 6 days ago||
Can you prove it in a blog post and share it here that you write better code snippets than AI? If you're asking "what kind of codebase", you should be able to use some codebase from GitHub to prove it.
gradus_ad 6 days ago||
The proliferation of nondeterministically generated code is here to stay. Part of our response must be more dynamic, more comprehensive and more realistic workload simulation and testing frameworks.
OptionOfT 6 days ago||
I disagree. I think we're testing it, and we haven't seen the worst of it yet.

And I think it's less about nondeterministic code (the code itself is actually still deterministic) and more about this new-fangled tool out there that finally allows non-coders to generate something that looks like it works. And in many cases it does.

Like a movie set. Viewed from the right angle it looks just right. Peek behind the curtain and it's all wood, thinly painted, and it's usually easier to rebuild from scratch than to add a layer on top.

Yoric 6 days ago|||
Exactly that.

I suspect that we're going to witness a (further) fork within developers. Let's call them the PM-style developers on one side and the system-style developers on the other.

The PM-style developers will be using popular loosely/dynamically-typed languages because they're easy to generate and they'll give you prototypes quickly.

The system-style developers will be using stricter languages and type systems and/or lots of TDD because this will make it easier to catch the generated code's blind spots.

One can imagine that these will be two clearly distinct professions with distinct toolsets.

OptionOfT 6 days ago||
I actually think that direct usage of AI will decline in the system-style group (if it was ever large there).

There is a non-trivial cost in taking apart the AI code to ensure it's correct, even with tests. And I think it's easy for that to end up slower than writing it from scratch.

Yoric 3 days ago||
FWIW, I'm clearly system-style. Given that my current company has an AI product, I'm dogfooding it, and I've found good uses for it, mostly for running quick experiments, as a rubber duck, or for solving simple issues in config files, Makefiles, etc.

It doesn't get to generate much of the code I'm shipping, though.

Angostura 6 days ago|||
I just wanted to say how much I like that simile. I'm going to nick it for sure.
wasmainiac 6 days ago|||
Code has always been nondeterministic. Which engineer wrote it? What was their past experience? This just feels like we are accepting subpar quality because we have no good way to ensure the code we generate is reasonable and won't mayyyybe rm -rf our server as a fun easter egg.
mort96 6 days ago||
Code written by humans has always been nondeterministic, but generated code has always been deterministic before now. Dealing with nondeterministically generated code is new.
nowittyusername 6 days ago|||
Determinism vs. nondeterminism is not, and has never been, the issue. Also, all LLMs are 100% deterministic; what is nondeterministic are the sampling parameters used by the inference engine, which, by the way, can easily be made 100% deterministic by simply turning off things like batching.

This is a matter for cloud-based API providers, as you as the end user don't have access to the inference engine; if you run any of your models locally in llama.cpp, turning off some server startup flags will get you deterministic results. Cloud-based API providers have no choice but to keep batching on, as they are serving millions of users, and wasting precious VRAM slots on a single user is wasteful and stupid.

See my code and video as evidence if you want to run any local LLM 100% deterministically: https://youtu.be/EyE5BrUut2o?t=1
nazgul17 6 days ago||
That's not an interesting difference, from my point of view. The black box we all use is nondeterministic, period. It doesn't matter where inside the system it stops being deterministic: if I hit the black box twice, I get two different replies. And that doesn't even matter, which you also said.

The more important property is that, unlike compilers, type checkers, linters, verifiers and tests, the output is unreliable. It comes with no guarantees.

One could be pedantic and argue that bugs affect all of the above. Or that cosmic rays make everything unreliable. Or that people are non deterministic. All true, but the rate of failure, measured in orders of magnitude, is vastly different.

nowittyusername 6 days ago||
My man, did you even check my video? Did you even try the app? This is not "bug related"; nowhere did I say it was a bug. Batch processing is a FEATURE that is intentionally turned on in the inference engine by large-scale providers. That does not mean it has to be on. If they turn off batch processing, all LLM API calls will be 100% deterministic, but it will cost them more money to provide the service, as now you are stuck providing 1 API call per GPU. "If I hit the black box twice, I get two different replies": what you are saying here is 100% verifiably wrong. Just because someone chose to turn on a feature in the inference engine to save money does not mean LLMs are nondeterministic. LLMs are stateless; their weights are frozen. You never "run" an LLM, you can only sample it, just like a hologram. And the inference sampling settings you use are what determine the outcome.
pegasus 6 days ago||
Correct me if I'm wrong, but even with batch processing turned off, they are still only deterministic as long as you set the temperature to zero? Which also has the side-effect of decreasing creativity. But maybe there's a way to pass in a seed for the pseudo-random generator and restore determinism in this case as well. Determinism, in the sense of reproducible. But even if so, "determinism" means more than just mechanical reproducibility for most people - including parent, if you read their comment carefully. What they mean is: in some important way predictable for us humans. I.e. no completely WTF surprises, as LLMs are prone to produce once in a while, regardless of batch processing and temperature settings.
nowittyusername 6 days ago||
You can change ANY sampling parameter once batch processing is off and you will keep the deterministic behavior: temperature, repetition penalty, etc. I've got to say I'm a bit disappointed to see this on Hacker News, as I expect this from Reddit. The whole matter was brought to you on a silver platter: the video describes in detail how any sampling parameter can be used, and I provide the whole code open source so anyone can try it themselves without taking my claims as hearsay. Well, you can lead a horse to water, as they say...
wasmainiac 4 days ago|||
> generated code has always been deterministic

Technically you are right… but in practice, no. Ask an LLM to do any reasonably complex task twice and you will get different results. This is because the model changes periodically and we have no control over the host system's source of entropy. It's effectively nondeterministic.

glitchc 6 days ago|||
Agreed. It's a new programming paradigm that will put more pressure on API and framework design, to protect vibe developers from themselves.
yuedongze 6 days ago|||
I've seen a lot of startups that use AI to QA human work. How about the idea of using humans to QA AI work? A lot of interesting things might follow.
hn_acc1 6 days ago|||
This feels a lot like the "humans must be ready at any time to take over from FSD" that Tesla is trying to push. With presumably similar results.

If it works 85% of the time, how soon do you catch that it is moving in the wrong direction? Are you having a standup every few minutes for it to review (edit) its work with you? Are you reviewing hundreds of thousands of lines of code every day?

It feels a bit like pouring cement or molten steel really fast: at best, it works, and you get things done way faster. Get it just a bit wrong, and your work is all messed up, as well as a lot of collateral damage. But I guess if you haven't shipped yet, it's ok to start over? How many different respins can you keep in your head before it all blends?

adventured 6 days ago||||
A large percentage (at least 50%) of the market for software developers will shift to lower paid jobs focused on managing, inspecting and testing the work that AI does. If a median software developer job paid $125k before, it'll shift to $65k-$85k type AI babysitting work after.
mjr00 6 days ago||
It's funny that I heard exactly this when I graduated university in the late 2000s:

> A large percentage (at least 50%) of the market for software developers will shift to lower paid jobs focused on managing, inspecting and testing the work that outsourced developers do. If a median software developer job paid $125k before, it'll shift to $65k-$85k type outsourced developer babysitting work after.

__loam 6 days ago||||
No thanks.
Aldipower 6 days ago||||
Sounds inhuman.
quantummagic 6 days ago|||
As an industry, we've been doing the same thing to people in almost every other sector of the workforce, since we began. Automation is just starting to come for us now, and a lot of us are really pissed off about it. All of a sudden, we're humanitarians.
Terr_ 6 days ago||
> Automation is just starting to come for us now

This argument is common and facile: Software development has always been about "automating ourselves out of a job", whether in the broad sense of creating compilers and IDEs, or in the individual sense that you write some code and say: "Hey, I don't want to rewrite this again later, not even if I was being paid for my time, I'll make it into a reusable library."

> the same thing

The reverse: What pisses me off is how what's coming is not the same thing.

Customers are being sold a snake-oil product, and its adoption may well ruin things we've spent careers de-crappifying by making them consistent and repeatable and understandable. In the aftermath, some portion of my (continued) career will be diverted to cleaning up the lingering damage from it.

A4ET8a8uTh0_v2 6 days ago|||
Nah, sounds like management, but I am repeating myself. In all seriousness, I have found myself having to carefully rein in some similar decisions. I don't want to get into details, but there are times I wonder if they understand how things really work, or if people need some 'floor'-level exposure before they just decree stuff.
colechristensen 6 days ago|||
Yes, but not like what you think. Programmers are going to look more like product managers with extra technical context.

AI is also great at looking for its own quality problems.

Yesterday on an entirely LLM generated codebase

Prompt: > SEARCH FOR ANTIPATTERNS

Found 17 antipatterns across the codebase:

And then what followed was a detailed list, about a third of them I thought were pretty important, a third of them were arguably issues or not, and the rest were either not important or effectively "this project isn't fully functional"

As an engineer, I didn't have to find code errors or fix code errors, I had to pick which errors were important and then give instructions to have them fixed.

mjr00 6 days ago|||
> Programmers are going to look more like product managers with extra technical context.

The limit of product manager, as "extra technical context" approaches infinity, is programmer. Because the best, most specific way to specify extra technical context is just plain old code.

LPisGood 6 days ago||
This is exactly why no code / low code solutions don’t really work. At the end of the day, there is irreducible technical complexity.
manmal 6 days ago|||
Yeah, don't rely on the LLM finding all the issues. Complex code like Swift concurrency tooling is just riddled with issues. I usually need to increase to 100% line coverage and then let it loop on hanging tests until everything _seems_ to work.

(It’s been said that Swift concurrency is too hard for humans as well though)

colechristensen 6 days ago||
I don't trust programmers to find all the issues either and in several shops I've been in "we should have tests" was a controversial argument.

A good software engineering system built around the top LLMs today is definitely competitive in quality to a mediocre software shop and 100x faster and 1000x cheaper.

energy123 6 days ago||
Nondeterministic isn't the right word because LLM outputs are deterministic and the tokens created from those outputs can also be deterministic.
Yoric 6 days ago||
I agree that non-deterministic isn't the right word, because that's not the property we care about, but unless I'm strongly missing something LLM outputs are very much non-deterministic, both during the inference itself and when projecting the embeddings back into tokens.
energy123 6 days ago||
I agree it isn't the main property we care about, we care about reliability.

But at least in its theoretical construction the LLM should be deterministic. It outputs a fixed probability distribution across tokens with no rng involvement.

We then sample from that fixed distribution non-deterministically for better performance or we use greedy decoding and get slightly worse performance in exchange for full determinism.

Happy to be corrected if I am wrong about something.
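
Concretely, a toy sketch of the distinction in C# (illustrative only, not any real engine's code):

  using System;
  using System.Linq;

  static class Decoding
  {
      // Greedy decoding: always pick the argmax token.
      // Same logits in, same token out: fully deterministic.
      public static int Greedy(double[] logits) =>
          Array.IndexOf(logits, logits.Max());

      // Temperature sampling: draw from the softmax distribution.
      // Deterministic only if the RNG seed (and the logits) are fixed.
      public static int Sample(double[] logits, double temperature, Random rng)
      {
          double max = logits.Max();
          double[] weights = logits
              .Select(l => Math.Exp((l - max) / temperature)) // stable softmax
              .ToArray();
          double draw = rng.NextDouble() * weights.Sum();

          double cumulative = 0;
          for (int i = 0; i < weights.Length; i++)
          {
              cumulative += weights[i];
              if (draw <= cumulative) return i;
          }
          return weights.Length - 1; // guard against rounding at the edge
      }
  }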

Yoric 3 days ago||
Ah, I realize that I had misunderstood your earlier comment, my apologies and thanks for clarifying!

We're leaving my area of confidence, so take everything I write with a pinch of salt.

As far as I understand, indeed, each layer transforms a set of inputs into a probability distribution. However, if you wanted to compute entirely with probability distributions, you'd need the ability to compose these distributions across layers. Mathematically, it doesn't feel particularly complicated, but computationally, it feels like this adds several orders of magnitude of both space and time.

delis-thumbs-7e 6 days ago||
> A very good example of the first category is image (and video) generation. Drawing/rendering a realistic looking image is a crazily hard task. Have you tried to make a slide look nicer? It will take me literally hours to center the text boxes to make it look “good”. However, you really just need to take a look at the output of Nano Banana and you can tell if it’s a good render or a bad one based on how you feel.

The writer could be very accomplished when it comes to developing - I don't know - but they clearly don't understand a single thing about visual arts or culture. I could probably center those text boxes after fiddling with them for maybe ten seconds; I have studied art since I was a kid. My bf could do it instantly, without thinking a second; he is a graphic designer. You might think that you are able to see what "looks good" since, hey, you have eyes, but no you can't. There are a million details you will miss, or maybe you'll feel something is off but cannot quite say why. This is why you have graphic designers, who are trained to do that. They can also use generative tools to make something genuinely stunning, unlike most of us. Why? Skills.

This is the same difference as why the guy in the story who can't code can't code even with an LLM, whereas the guy who can is able to code even faster with these new tools. If you use LLMs for basically auto-completion (what transformer models really are for), you can work with a familiar codebase very quickly, I'm sure. I've used it to generate SQL call statements, which I can't be bothered to type myself, and it was perfect. If I try to generate something I don't really understand or know how to do, I'm lost, staring at some horrible gobbledygook that is never going to work. Why? Skills.

There is no verification engineering. There are just people who know how to do things, who have studied their whole life to get those skills. And no, you will not replace a real hardcore professional with an LLM. LLMs are just tools, nothing else. A tractor replaced a horse in turning the field, but you still need a farmer to drive it.

louthy 6 days ago||
> You might think that you are able to see what « looks good » since, hey you have eyes, but no you can’t.

I'm sure lots of people will reply to you stating the opposite, but for what it's worth, I agree. I am not a visual artist... well, not any more, I was really into it as a kid and had it beaten out of me by terrible art teachers, but I digress... I am creative (music), and have a semblance of understanding of the creative process.

I ran a SaaS company for 20 years and would be constantly amazed at how bad the choices of software engineers would be when it came to visual design. I could never quite understand whether they just didn't care or just couldn't see. I always believed (hoped) it was the latter. Even when I explained basic concepts like consistent borders, grid systems, consistent fonts and font-sizing, less visual clutter, etc. they would still make the same mistakes over and over.

To the trained eye they immediately see it and see what's right and what's wrong. And that's why we still need experts. It doesn't matter what is being generated, if you don't have expertise to know whether it's good or not, the chances are glaring errors will be missed (in code and in visual design)

vbezhenar 6 days ago|||
> A tractor replaced a horse in turning the field, bit you still need a farmer to drive it.

Before mechanisation, something like 50x more people worked in the agricultural sector compared to today. So tractors certainly left a huge number of people without work. Our society adapted to this change and sucked these people into the industrial sector.

If LLMs were to work like a tractor, they would force 49 out of 50 programmers (or, more generically, blue-collar workers) to leave their industry. Is there a place for them to work instead? I don't know.

delis-thumbs-7e 4 days ago||
Fair point. The farms also began to produce exponentially more food. If LLMs prove as revolutionary as the Spinning Jenny and the mechanisation of farm labour (which I don't believe for a second), we could provide an easier life for billions of people, cure illnesses and poverty, provide education for countless children… The farm hands and their families moved to cities into factory work, which at least in England was a Dickensian horror of poverty and slums, but in many other countries (Nordic ones, for instance) created urbanisation and a new meaning of life, as well as upwards social mobility. Many computer scientists here had a farmer as a grandfather or great-grandfather.

But none of this changed how food grows, or that you need somebody who bloody well knows what they are doing to produce it. Especially with how mechanised it is today.

However, I do not believe LLM to be a tractor. More like a slightly different hammer. You still need to hit the nail.

MLgulabio 6 days ago|||
I have learned a little bit of Photoshop, and 10 years ago Maya too.

But I'm a software engineer by trade, and I do not struggle with telling you that this thing has to move left for reason XY; I would struggle with the random tools capable of doing that particular thing for me.

And it does not matter here how I did it, if the result is the same.

In software engineering this is just not always the case, because often enough you need to verify that what you get is the thing you expect (did the report actually take the right numbers?), or security. Security is the biggest risk in all AI coding out there. Security is already so hard because people don't see it; they ignore it because they don't know.

You have so many non-functional requirements in software which just don't exist in art. If I need that image, that's it. The most complex thing here? Perhaps color calibration and color profiles. Resolution.

If we talk about 3D it gets a little bit more complicated again, because now we're talking about the right 3D model, the right way to rig, etc.

Also, if someone says "I need a picture for X" and is happy with it, the risk is fewer customers. But if someone needs a new feature and tomorrow all your customer data is exposed or the company's product stops working because of a basic bug, the company might be gone a week later.

jstanley 6 days ago|||
Centering text boxes in competent design software is easy because it has a tool to align things to the centre of other things.

For example, Inkscape has this and it is easy to use.

wongarsu 6 days ago|||
Though it's notable that sometimes this will produce "wrong" results because it centers on the geometric middle point of the box, while the correct thing is often more like bringing the center of gravity into the middle

I'm more of a fan of aligning to an edge anyways. But some designers love to get really deep into these kinds of things, often in ways they can't really articulate

delis-thumbs-7e 6 days ago|||
I meant just by eye, mate. But it is a pretty bad example anyway, obvs it is something that any program can do better than us. Better would be layout, or maybe typography. Even professionals mess it up all the time.

Point is, even basic visual design is far from intuitive.

HWR_14 6 days ago|||
Centering the text on a slide is such a trivial thing. It is the default behavior.
nradov 5 days ago||
We literally have self driving tractors now.

https://www.deere.com/en/autonomous/

aryehof 6 days ago||
> “AI always thinks and learns faster than us, this is undeniable now”

No, it neither thinks nor learns. It can give an illusion of thinking, and an AI model itself learns nothing. Instead it can produce a result based on its training data and context.

I think it important that we do not ascribe human characteristics where not warranted. I also believe that understanding this can help us better utilize AI.

jascha_eng 6 days ago||
Verification is key, and the issue is that almost all AI generated code looks plausible so just reading the code is usually not enough. You need to build extremely good testing systems and actually run through the scenarios that you want to ensure work to be confident in the results. This can be preview deployments or other AI generated end to end tests that produce video output that you can watch or just a very good test suite with guard rails.

Without such automation and guard rails, AI generated code eventually becomes a burden on your team because you simply can't manually verify every scenario.

yuedongze 6 days ago||
Indeed, I see verification debt outweighing traditional tech debt very, very soon...
bigbuppo 6 days ago|||
And with any luck, they don't vibe code their tests that ultimately just return true;
jopsen 6 days ago|||
I would rather write the code and have AI write the tests :)

And I have on occasion found it useful.

catigula 6 days ago||
I can automatically generate suites of plausible tests using Claude Code.

If you can make "no AI for tests" a rule, then you can simply make the rule "no AI", or just learn to cope with it.

adxl 6 days ago||
I remember a junior dev who thought he was done when his code compiled without syntax errors.
WhyOhWhyQ 6 days ago|
"AI always thinks and learns faster than us, this is undeniable now. "

Sort of a nitpick, because what's written is true in some contexts (I get it, web development is like the ideal context for AI for a variety of reasons), but this is currently totally false in lots of knowledge domains very much like programming. AI is currently terrible at the math niches I'm interested in. Since there's no economic incentive to improve things and no mountain of literature on those topics, unless AI really becomes self-learning / improves in some real way, I don't see the situation ever changing. AI has consistently gotten effectively a 0% score on my personal benchmarks for those topics.

It's just aggravating to see someone write "totally undeniable" when the thing is trivially denied.

jakeydus 5 days ago|
> It's just aggravating to see someone write "totally undeniable" when the thing is trivially denied.

You've described AI hype bros in a nutshell, I think.
