
Posted by simonw 2 hours ago

StrongDM's AI team build serious software without even looking at the code (simonwillison.net)
30 points | 39 comments
CuriouslyC 23 minutes ago|
Until we solve the validation problem, none of this stuff is going to be more than flexes. We can automate code review, set up analytic guardrails, etc, so that looking at the code isn't important, and people have been doing that for >6 months now. You still have to have a human who knows the system to validate that the thing that was built matches the intent of the spec.

There are higher and lower leverage ways to do that, for instance reviewing tests and QA'ing software via use vs reading original code, but you can't get away from doing it entirely.

kaicianflone 2 minutes ago||
I agree with this almost completely. The hard part isn’t generation anymore, it’s validation of intent vs outcome, especially once decisions are high-stakes or irreversible; think package updates or large-scale transactions.

What I’m working on (open source) is less about replacing human validation and more about scaling it: using multiple independent agents with explicit incentives and disagreement surfaced, instead of trusting a single model or a single reviewer.

Humans are still the final authority, but consensus, adversarial review, and traceable decision paths let you reserve human attention for the edge cases that actually matter, rather than reading code or outputs linearly.

Until we treat validation as a first-class system problem (not a vibe check on one model’s answer), most of this will stay in “cool demo” territory.
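
Roughly what I mean, as a minimal Python sketch (the Verdict/reviewer interface and the threshold are invented for illustration, not what I've actually built):

    # Fan a change out to N independently prompted reviewer agents, surface
    # disagreement, and only pull in a human when they don't converge.
    from dataclasses import dataclass

    @dataclass
    class Verdict:
        agent: str
        approves: bool
        reason: str

    def review_with_consensus(diff, reviewers, approval_threshold=1.0):
        # `reviewers` is a non-empty list of callables (diff) -> Verdict; each one
        # is assumed to be a separately incentivized agent (intent-mismatch hunter,
        # security reviewer, etc.).
        verdicts = [reviewer(diff) for reviewer in reviewers]
        agreement = sum(v.approves for v in verdicts) / len(verdicts)
        if agreement >= approval_threshold:
            return {"decision": "auto-approve", "trace": verdicts}
        # Disagreement is the signal: human attention goes to exactly these cases.
        return {"decision": "needs-human", "trace": verdicts}

The point is the traceable decision path: every verdict and reason is kept, so the human reviewing the edge case can see why the agents disagreed.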

sonofhans 51 seconds ago||
“Anymore?” After 40 years in software I’ll say that validation of intent vs. outcome has always been a hard problem. There are and have been no shortcuts other than determined human effort.
cronin101 16 minutes ago|||
This obviously depends on what you are trying to achieve but it’s worth mentioning that there are languages designed for formal proofs and static analysis against a spec, and I have suspicions we are currently underutilizing them (because historically they weren’t very fun to write, but if everything is just tokens then who cares).

And “define the spec concretely” (and how to exploit emerging behaviors) becomes the new definition of what programming is.
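
Not full formal verification, but even property-based testing gets you part of the way to an executable spec; a minimal Python sketch using Hypothesis (the discount function and the property are made up for illustration):

    # State a slice of the spec as a property that holds for all inputs,
    # rather than as a handful of hand-picked examples.
    from hypothesis import given, strategies as st

    def discounted_price(price, percent):
        # Stand-in implementation; imagine an agent generated this.
        return price * (1 - percent / 100)

    # Spec, stated concretely: for any price >= 0 and discount in [0, 100],
    # the result is never negative and never exceeds the original price.
    @given(price=st.floats(min_value=0, max_value=1e6),
           percent=st.floats(min_value=0, max_value=100))
    def test_discount_stays_within_bounds(price, percent):
        result = discounted_price(price, percent)
        assert 0 <= result <= price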

varispeed 2 minutes ago|||
AI also quickly goes off the rails, even the Opus 2.6 I am testing today. The proposed code is very much rubbish, but it passes the tests. It wouldn't pass skilled human review. Worst thing is that if you let it, it will just grow tech debt on top of tech debt.
simianwords 16 minutes ago||
did you read the article?

>StrongDM’s answer was inspired by Scenario testing (Cem Kaner, 2003).

CuriouslyC 7 minutes ago||
Tests are only rigorous if the correct intent is encoded in them. Perfectly working software can be wrong if the intent was inferred incorrectly. I leverage BDD heavily, and there a lot of little details it's possible to misinterpret going from spec -> code. If the spec was sufficient to fully specify the program, it would be the program, so there's lots of room for error in the transformation.
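
A toy example of the kind of detail I mean (the spec sentence is invented; both readings are internally consistent):

    # Spec sentence: "Free shipping for orders over $50."
    # Two different but self-consistent readings of the same words:

    def qualifies_strict(total):
        return total > 50    # "over" read strictly: a $50.00 order does not qualify

    def qualifies_loose(total):
        return total >= 50   # "over" read loosely: a $50.00 order qualifies

    # An agent can generate code plus tests that agree with either reading and
    # everything stays green; the tests only prove internal consistency, not
    # which reading the spec author actually intended.
    assert qualifies_strict(50.00) is False
    assert qualifies_loose(50.00) is True
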
simianwords 5 minutes ago||
Then I disagree with you

> You still have to have a human who knows the system to validate that the thing that was built matches the intent of the spec.

You don't need a human who knows the system to validate it if you trust the LLM to do the scenario testing correctly. And from my experience, it is very trustable in these aspects.

Can you describe a case where an LLM would get the scenario testing wrong?

politelemon 27 seconds ago|||
I do not trust the LLM to do it correctly. We do not have the same experience with them, and should not assume everyone does. To me, your question makes no sense to ask.
CuriouslyC 2 minutes ago|||
The whole point is that you can't 100% trust the LLM to infer your intent accurately from lossy natural language. Having it write tests doesn't change this; it only asserts that its view of what you want is internally consistent, and that view is just as likely to be an incorrect interpretation of your intent.
codingdave 58 minutes ago||
> If you haven’t spent at least $1,000 on tokens today per human engineer, your software factory has room for improvement

At that point, outside of FAANG and their salaries, you are spending more on AI than you are on your humans. And they consider that level of spend to be a metric in and of itself. I'm kinda shocked the rest of the article just glossed over that one. It seems to be a breakdown of the entire vision of AI-driven coding. I mean, sure, the vendors would love it if everyone's salary budget just got shifted over to their revenue, but such a world is absolutely not my goal.

simonw 34 minutes ago||
Yeah I'm going to update my piece to talk more about that.

Edit: here's that section: https://simonwillison.net/2026/Feb/7/software-factory/#wait-...

dixie_land 48 minutes ago|||
This is an interesting point but if I may offer a different perspective:

Assuming 20 working days a month: that's $20k x 12 == $240k a year. So about a fresh grad's TC at FAANG.

Now I've worked with many junior to mid-level SDEs and sadly 80% do not do a better job than Claude. (I've also worked with staff-level SDEs who write worse code than AI, but they usually offset that with domain knowledge and TL responsibilities.)

I do see AI transforming software engineering into even more of a pyramid, with very few humans on top.

mejutoco 3 minutes ago|||
Original claim was:

> At that point, outside of FAANG and their salaries, you are spending more on AI than you are on your humans

You say

> Assuming 20 working days a month: that's $20k x 12 == $240k a year. So about a fresh grad's TC at FAANG.

So you both are in agreement on that part at least.

bobbiechen 43 minutes ago|||
Important too: a fully loaded salary costs the company far more than the actual salary that the employee receives. That would tip this balancing point towards $120k salaries, which is well into the realm of non-FAANG.
elicash 22 minutes ago|||
It reminds me of how people talk about the proper ratio of employees to supervisor. (With AI being the employees in this example.)
dewey 54 minutes ago|||
It would depend on the speed of execution: if you can do the same amount of work in 5 days by spending $5k on tokens, versus spending a month and $5k on a human, the math makes more sense.
verdverm 28 minutes ago||
You won't know which path has larger long-term costs. For example, what if the AI version costs 10x to run?
kaffekaka 54 minutes ago|||
If the output is (dis)proportionately larger, the cost trade-off might be the right one to make.

And it might be that tokens will become cheaper.

obirunda 8 minutes ago||
Tokens will actually become significantly more expensive in the short term. This is not stemming from some sort of anti-AI sentiment. There are two ramps that are going to drive this: 1. Increased demand, linear growth at least, but likely already exponential. 2. Scaling laws demand, well, more scale.

Future, better models will demand both higher compute use AND more energy. We should not underestimate the slowness of energy production growth, and also the supply chains required for simply hooking things up. Some labs are commissioning their own power plants on site, but that does not truly get around the limits on power grid growth: you're using the same supply chain to build your own power plant.

If inference cost is not dramatically reduced, and models don't start meaningfully helping with innovations that make energy production faster and inference/training less power-hungry, the only way to control demand is to raise prices. Current inference pricing does not cover training costs. The labs can probably keep covering that gap with funding alone, but once the demand curve hits the power production limits, only one thing can slow demand, and that's raising the cost of use.

philipp-gayret 38 minutes ago|||
$1,000 is maybe $5 per workday. I measure my own usage and am on the way to $6,000 for a full year. I'm still at the stage where I like to look at the code I produce, but I do believe we'll head to a state of software development where one day we won't need to.
gipp 31 minutes ago||
Maybe read that quote again. The figure is $1,000 per day.
verdverm 25 minutes ago||
The quote is if you haven't spent $1000 per dev today

which sounds more like if you haven't reached this point you don't have enough experience yet, keep going

At least that's how I read the quote

delecti 5 minutes ago||
Scroll further down (specifically to the section titled "Wait, $1,000/day per engineer?"). The quote in the quoted article (so from the original source in factory.strongdm.ai) could potentially be read either way, but Simon Willison (the direct link) absolutely is interpreting it as $1000/dev/day. I also think $1000/dev/day is the intended meaning in the strongdm article.
japhyr 1 hour ago||
> That idea of treating scenarios as holdout sets—used to evaluate the software but not stored where the coding agents can see them—is fascinating. It imitates aggressive testing by an external QA team—an expensive but highly effective way of ensuring quality in traditional software.

This is one of the clearest takes I've seen that starts to get me to the point of possibly being able to trust code that I haven't reviewed.

The whole idea of letting an AI write tests was problematic because they're so focused on "success" that `assert True` becomes appealing. But orchestrating teams of agents that are incentivized to build, and teams of agents that are incentivized to find bugs and problematic tests, is fascinating.
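
Here's roughly how I picture the holdout mechanism, as a hypothetical Python sketch (the directory layout, scenario format, and command are my guesses, not details from the article):

    # Holdout scenarios live outside the coding agents' workspace and are only
    # executed by a separate evaluation step; the builders never see them.
    import json
    import subprocess
    from pathlib import Path

    HOLDOUT_DIR = Path("/evals/holdout-scenarios")  # not mounted into the agent sandbox
    BINARY = Path("./workspace/build/app")          # artifact produced by the builder agents

    def run_holdout_scenarios():
        """Run every holdout scenario against the built binary; return the pass rate."""
        scenarios = sorted(HOLDOUT_DIR.glob("*.json"))
        passed = 0
        for path in scenarios:
            scenario = json.loads(path.read_text())
            # Each scenario is assumed to describe a CLI invocation plus expected output.
            result = subprocess.run(
                [str(BINARY), *scenario["args"]],
                capture_output=True, text=True, timeout=60,
            )
            if result.stdout.strip() == scenario["expected"].strip():
                passed += 1
        return passed / len(scenarios) if scenarios else 0.0

The builder agents only ever see the aggregate score, never the scenarios themselves, which is what makes it behave like a QA team they can't game.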

I'm quite curious to see where this goes, and more motivated (and curious) than ever to start setting up my own agents.

Question for people who are already doing this: How much are you spending on tokens?

That line about spending $1,000 a day on tokens is pretty off-putting. For commercial teams it's an easy calculation. It's also depressing to think about what this means for open source. I sure can't afford to spend $1,000 a day supporting teams of agents to continue my open source work.

Lwerewolf 19 minutes ago||
Re: $1k/day on tokens - you can also build a local rig, nothing "fancy". There was a recent thread here re: the utility of local models, even on not-so-fancy hardware. Agents were a big part of it - you just set a task and it's done at some point, while you sleep or you're off to somewhere or working on something else entirely or reading a book or whatever. Turn off notifications to avoid context switches.

Check it: https://news.ycombinator.com/item?id=46838946

verdverm 23 minutes ago||
Do you know what those holdout tests should look like before thoroughly iterating on the problem?

I think people are burning money on tokens letting these things fumble about until they arrive at some working set of files.

I'm staying in the loop more than this, building up rather than tuning out

simianwords 11 minutes ago||
I like the idea but I'm not so sure this problem can be solved generally.

As an example: imagine someone writing a data pipeline for training a machine learning model. Anyone who's done this knows that such a task involves lots of data wrangling work like cleaning data, changing columns, and some ad hoc stuff.

The only way to verify that things work is if the eventual model that is trained performs well.

In this case, scenario testing doesn't scale, because the feedback loop is extremely long: you have to wait until the model is trained and tested on holdout data.

Scenario testing clearly cannot work on the smaller parts of the work, like data wrangling.
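
To make that concrete (column names and steps invented for illustration):

    # A cleaning step can pass every local check and still hurt the model.
    import pandas as pd

    def clean(df: pd.DataFrame) -> pd.DataFrame:
        # Passes schema and shape checks just fine...
        df = df.dropna(subset=["user_id"])
        df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")
        # ...but silently coercing malformed dates to NaT may throw away exactly
        # the signal the model needed. No scenario on this function reveals that.
        return df

    # The only check that really closes the loop is the slow end-to-end one:
    #   model = train(build_features(clean(raw)))
    #   score = evaluate(model, holdout)   # hours or days later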

rileymichael 11 minutes ago||
> In rule form:
> - Code must not be written by humans
> - Code must not be reviewed by humans

As a previous strongDM customer, I will never recommend their offering again. For a core security product, this is not the flex they think it is.

Also, mimicking other products' behavior and staying in sync is a fool's task. You certainly won't be able to do it just off the API documentation. You may get close, but never perfect, and you're going to experience constant breakage.

d0liver 32 minutes ago||
> As I understood it the trick was effectively to dump the full public API documentation of one of those services into their agent harness and have it build an imitation of that API, as a self-contained Go binary. They could then have it build a simplified UI over the top to help complete the simulation.

This is still the same problem -- just pushed back a layer. To the extent the generated API is wrong, the QA outcomes will be wrong, too. Also, QAing things is an effective way to ensure that they work _after_ they've been reviewed by an engineer. A QA tester is not going to test for a vulnerability like a SQL injection unless they're guided by engineering judgement, which comes from an understanding of the properties of the code under test.

The output is also essentially the definition of a derivative work, so it's probably not legally defensible (not that that's ever been a concern with LLMs).

wrs 41 minutes ago||
On the cxdb “product” page, one reason they give against rolling your own is that it would be “months of work”. Slipped into an archaic, off-brand mindset there, no?
verdverm 16 minutes ago|
We make this great, just don't use it to build the same thing we offer

Heat death of the SaaSiverse

CubsFan1060 50 minutes ago||
I can't tell if this is genius or terrifying given what their software does. Probably a bit of both.

I wonder what the security teams at companies that use StrongDM will think about this.

verdverm 15 minutes ago|
I doubt this would be allowed in regulated industries like healthcare
g947o 50 minutes ago|
Serious question: what's keeping a competitor from doing the same thing and doing it better than you?
simonw 44 minutes ago|
That's a genuine problem now. If you launch a new feature and your competition can ship their own copy a few hours later, the competitive dynamics get really challenging!

My hunch is that the thing that's going to matter is network effects and other forms of soft lock-in. Features alone won't cut it - you need to build something where value accumulates to your users over time in a way that discourages them from leaving.

CubsFan1060 38 minutes ago|||
The interesting part is that both of those things require time to get started.

If I launch a new product and competitors pop up 4 hours later, there's not enough time to build network effects or lock-in.

I'm guessing what is really going to be needed is something that can't simply be copied: non-public data, business contracts, something outside of software.

verdverm 13 minutes ago|||
Marketing and brand are still the most important, though I personally hope for a world where business is more indie and less winner-take-all.

You can see the first waves of this trend on HN's "new" page.

andersmurphy 21 seconds ago||
Wouldn't the incumbents, with their fantastic distribution channels, capital, and their own models, just wipe the floor with everyone once talent no longer matters?