Posted by mellosouls 4 hours ago
The only github I could find is: https://github.com/strongdm/attractor
Building Attractor
Supply the following prompt to a modern coding agent
(Claude Code, Codex, OpenCode, Amp, Cursor, etc):
codeagent> Implement Attractor as described by
https://factory.strongdm.ai/
Canadian girlfriend coding is now a business model.
Edit: I did find some code. Unfortunately the commit history has been squashed: https://github.com/strongdm/cxdb
There's a bunch more under the same org but it's years old.
(I'm continuing to try to learn Rust!)
For those of us working on building factories, this is pretty obvious: you immediately need shared context across agents/sessions and an improved ID + permissions system to keep track of who is doing what.
PS: TIL about "Canadian girlfriend", thanks!
The worst part is they got simonw (perhaps unwittingly, or via social engineering) to vouch and stealth-market for them.
And $1000/day/engineer in token costs at current market rates? It's a bold strategy, Cotton.
But we all know what they're going for here. They want to make themselves look amazing to convince the boards of the Great Houses to acquire them. Because why else would investors invest in them and not in the Great Houses directly.
(Two people whose opinions I respect said "yeah, you really should accept that invitation"; otherwise I probably wouldn't have gone.)
I've been looking forward to being able to write more details about what they're doing ever since.
EDIT nvm just saw your other comment.
I wrote a bunch more about that this morning: https://simonwillison.net/2026/Feb/7/software-factory/
This one is worth paying attention to. They're the most ambitious team I've seen exploring the limits of what you can do with this stuff. It's eye-opening.
> If you haven’t spent at least $1,000 on tokens today per human engineer, your software factory has room for improvement
Seems to me like if this is true, I'm screwed whether I "embrace" the "AI revolution" or not. There's no way my manager is going to approve blowing $1,000 a day on tokens; they budgeted $40,000 for our team to explore AI for the entire year.
And from a personal perspective I'm screwed too: I don't have $1,000 a month in the budget to blow on tokens, thanks to pesky things that also demand financial resources, like a mortgage and food.
At this point it seems like damned if I do, damned if I don't. Feels bad man.
I don't think you need to spend anything like that amount of money to get the majority of the value they're describing here.
Edit: added a new section to my blog post about this: https://simonwillison.net/2026/Feb/7/software-factory/#wait-...
I would expect cost to come down over time, using approaches pioneered in the field of manufacturing.
I built a tool that writes (non shit) reports from unstructured data to be used internally by analysts at a trading firm.
It cost between $500 and $5,000 per day per seat to run.
It could have cost a lot more but latency matters in market reports in a way it doesn't for software. I imagine they are burning $1000 per day per seat because they can't afford more.
Another skill called skill-improver, which tries to reduce skill token usage by finding deterministic patterns in another skill that can be scripted, and writes and packages the script.
Putting them together, the container-maintenance thingy improves itself every iteration, validated with automatic testing. It works perfectly about 3/4 of the time, another half of the time it kinda works, and fails spectacularly the rest.
It’s only going to get better, and this fit within my Max plan usage while coding other stuff.
If the tokens that need to attend to each other are on opposite ends of the code base, the only way to do that is to read in the whole code base and hope for the best.
If you're very lucky, you can chunk the code base in such a way that the chunks pairwise fit in your context window, and extract the relevant tokens hierarchically.
If you're not? Well, get reading, monkey.
Agents, md files, etc. are band-aids that hide this fact. They work great until they don't.
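For what it's worth, the hierarchical extraction described above can be sketched roughly like this. Everything here is an illustrative assumption: `summarize` stands in for a model call (it just truncates to half the budget, so each level shrinks and the loop terminates), and sizes are measured in characters rather than tokens:

```python
def chunk(parts: list[str], budget: int) -> list[list[str]]:
    """Greedily pack strings into chunks whose combined size fits the budget."""
    chunks: list[list[str]] = []
    current: list[str] = []
    size = 0
    for p in parts:
        if size + len(p) > budget and current:
            chunks.append(current)
            current, size = [], 0
        current.append(p)
        size += len(p)
    if current:
        chunks.append(current)
    return chunks

def summarize(text: str, budget: int) -> str:
    """Stand-in for an LLM summarization call: truncate to half the budget.

    Returning at most budget // 2 characters guarantees any two summaries
    fit together in one window, so the hierarchy below always converges.
    """
    return text[: budget // 2]

def hierarchical_extract(files: list[str], budget: int) -> str:
    """Summarize chunks, then summarize the summaries, until one window suffices."""
    if not files:
        return ""
    level = files
    while True:
        merged = ["\n".join(c) for c in chunk(level, budget)]
        summaries = [summarize(m, budget) for m in merged]
        if len(summaries) == 1:
            return summaries[0]
        level = summaries
```

The point of the sketch is the failure mode, not the code: if the relevant tokens are spread across chunks, each summarization level is another chance to drop them on the floor.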
As for me, we get Cursor seats at work, and at home I have a GPU, a cheap Chinese coding plan, and a dream.
Right in the feels
Make a "systemctl start tokenspender.service" and share it with the team?
To be fair, I’ll bet many of those embracing concerning advice like that have never worked at the same company for a full year.
I didn't read that as you need to be spending $1k/day per engineer. That is an insane number.
EDIT: re-reading... it's ambiguous to me. But perhaps they mean per day, every day. This will only hasten the elimination of human developers, which I presume is the point.
At home on my personal setup, I haven't even had to move past the cheapest codex/claude code subscription because it fulfills my needs ¯\_(ツ)_/¯. You can also get a lot of mileage out of the higher tiers of these subscriptions before you need to start paying the APIs directly.
Takes like this are just baffling to me.
For one engineer that is ~260k a year.
The thing with AI is that it ranges from net-negative to easily brute-forcing tedious things we would never have considered wasting human time on. We can't figure out where the leverage is unless all the subject-matter experts in their various organizational niches really check their assumptions and get creative about experimenting with things that may never have crossed their minds before. Obviously, over time, best practices will emerge and get socialized, but at the rate AI has been improving lately, it makes a lot of sense to just give employees carte blanche to explore. Soon enough there will be more scrutiny and optimization, but that doesn't really make sense without a better understanding of what is possible.
1) Engineering investment at companies generally pays off in multiples of what is spent on engineering time. Say you pay 10 engineers $200k / year each and the features those 10 engineers build grow yearly revenue by $10M. That’s a 4x ROI and clearly a good deal. (Of course, this only applies up to some ceiling; not every company has enough TAM to grow as big as Amazon).
2) Giving engineers near-unlimited access to token usage means they can create even more features, in a way that still produces positive ROI per token. This is the part I disagree with most. It’s complicated. You cannot just ship infinite slop and make money. It glosses over massive complexity in how software is delivered and used.
3) Therefore (so the argument goes) you should not cap tokens and should encourage engineers to use as many as possible.
Like I said, I don’t agree with this argument. But the key thing here is step 1. Engineering time is an investment to grow revenue. If you really could get positive ROI per token in revenue growth, you should buy infinite tokens until you hit the ceiling of your business.
Of course, the real world does not work like this.
But my point is more that calling $1k a day cheap is ridiculous, even for a company that expects an ROI on that investment. There are risks involved and, as you said, diminishing returns on software output.
I find the AI bros’ view of the economics of AI usage strange. It’s reasonable to say you think it’s a good investment, but saying it’s cheap is a whole different thing.
The best you can say is “high cost but positive ROI investment.” Although I don’t think that’s true beyond a certain point either, certainly not outside special cases like small startups with a lot of funding trying to build a product quickly. You can’t just spew tokens about and expect revenue to increase.
That said, I do reserve some special scorn for companies that penny-pinch on AI tooling. Any CTO or CEO who thinks a $200/month Claude Max subscription (or equivalent) for each developer is too much money to spend really needs to rethink their whole model of software ROI and costs. You’re often paying your devs >$100k/yr and you won’t pay ~$2,400/yr to make them more productive? I understand there are budget and planning-cycle constraints, blah blah, but… really?!
The moats here are around mechanism design and values (to the extent they differ). The frontier labs are doomed in this world; the commons locked up behind paywalls gets hyper-mirrored; value accrues in very different places; and it's not a nice orderly exponential from a sci-fi novel. It's nothing like what the talking heads at Davos say. Anthropic aren't in the top five groups I know of in terms of being good at it, and it'll get written off as fringe until one day it happens in, like, a day. So why be secretive?
You get on the ladder by throwing out Python and JSON and learning Lean 4; you tie property tests to Lean theorems via FFI when you have to; you start building out from `rfl` to pretty-printers of proven AST properties.
And yeah, the droids run out ahead in little firecracker VMs reading from an effect/coeffect attestation graph and writing back to it. The result is saved, useful results are indexed. Human review is about big picture stuff, human coding is about airtight correctness (and fixing it when it breaks despite your "proof" that had a bug in the axioms).
Programming jobs are impacted, but not as much as people think: droids mostly do what David Graeber called bullshit jobs, and beyond that they're savants (not polymath geniuses) at a few things. In reverse engineering and infosec they'll just run you over; they're fucking going in the CIC.
This is about formal methods just as much as AI.
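The "tie property tests to Lean theorems" step could look something like this minimal Lean 4 sketch (my illustration, not anything from the comment): a property that a test suite would only ever sample gets stated and discharged once, and concrete cases close definitionally with `rfl`.

```lean
-- Illustrative: the kind of property a QuickCheck-style test would
-- sample at random, stated and proved once instead.
-- (Core's `List.length_append` is the simp lemma doing the work.)
theorem append_length (xs ys : List Nat) :
    (xs ++ ys).length = xs.length + ys.length := by
  simp

-- Concrete instances are definitional equalities and close by `rfl`.
example : ([1, 2] ++ [3]).length = 3 := rfl
```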
Their page looks to me like a lot of invented jargon and pure narrative. Every technique is just a renamed existing concept. Digital Twin Universe is mocks, Gene Transfusion is reading reference code, Semport is transpilation. The site has zero benchmarks, zero defect rates, zero cost comparisons, zero production outcomes. The only metric offered is "spend more money".
Anyone working honestly in this space knows 90% of agent projects are failing.
The main page of HN now has three or four posts daily with no substance, just agentic-AI marketing dressed up as engineering insight.
With Google, Microsoft, and others spending $600 billion over the next year on AI, panicking to get a return on that capex, and now paying influencers over $600K [1] to manufacture AI enthusiasm to justify the infrastructure spend, I won't engage with any AI thought leadership that lacks a clear disclosure of financial interests and reproducible claims backed by actual data.
Show me a real production feature built entirely by agents with full traces, defect rates, and honest failure accounting. Or stop inventing vocabulary and posting vibes charts.
I will reformulate my question and instead ask whether the page is still 100% correct or needs an update.
Apart from being an absolutely ridiculous metric, this is a bad approach, at least with current-generation models. In my experience, the less you inspect what the model does, the more spaghetti-like the code becomes. And the flying spaghetti monster eats tokens faster than you can blink! Put more concretely: implementing a feature costs a lot more tokens in a messy code base than in a clean one. It's not (yet) enough to just tell the agent to refactor and make it clean; you have to give it hints on how to organise the code.
I'd go so far as to say that if you're burning a thousand dollars a day per engineer, you're getting very little bang for your tokens.
And your engineers probably look like this: https://share.google/H5BFJ6guF4UhvXMQ7
If everyone can do this, there won't be any advantage (or profit) to be had from it very soon. Why not buy your own hardware and run local models, I wonder.
No local model out there is as good as the SOTA right now.
You should have led with that. I think that's actually more impressive; anyone can spend tokens.
If their focus is only to show off their productivity/AI system, without having built anything meaningful with it, it feels like one of those scammy life coaches/productivity gurus who talk about how they got rich by selling their courses.
Oh, to have the luxury of redefining success and handwaving away hard learned lessons in the software industry.
What we have instead are many people creating hierarchies of concepts, a vast “naming” of their own experiences, without rigorous quantitative evaluation.
I may be alone in this, but it drives me nuts.
Okay, so with that in mind, it amounts to hearsay (“these guys are doing something cool”). Why not put up or shut up, with either (a) an evaluation of the ideas in a rigorous, quantitative way, or (b) applying the ideas to produce a “hard” artifact (analogous, e.g., to the Anthropic C compiler or the Cursor browser) with a reproducible pathway to generation?
The answer seems to be that (b) is impossible (as long as we’re on the teat of the frontier labs, which disallow the kind of access that would make it possible), and the answer for (a) is “we can’t wait, we have to get our names out there first.”
I’m disappointed to see these types of posts on HN. Where is the science?
There are plenty of papers out there that look at LLM productivity and every one of them seems to have glaring methodology limitations and/or reports on models that are 12+ months out of date.
Have you seen any papers that really elevated your understanding of LLM productivity with real-world engineering teams?
Further, I’m not sure this elevates my understanding: I’ve read many posts in this space that could be viewed as analogous to this one (this one is more tempered, of course). Each one has the same flaw: someone telling me I need to make an “organization” out of agents, and positive things will follow.
Without a serious evaluation, how am I supposed to validate the author’s ontology?
Do you disagree with my assessment? Do you view the claims in this content as solid and reproducible?
My own view is that these are “soft ideas” (GasTown, Ralph fall into a similar category) without the rigorous justification.
What this amounts to is “synthetic biology” with billion-dollar probability distributions, where the incentives are set up so that companies are incentivized to convey that they have the “secret sauce” … for massive amounts of money.
To that end, it’s difficult to trust a word out of anyone’s mouth — even if my empirical experiences match (along some projection).
Taking the time to point a coding agent towards the public (or even private) API of a B2B SaaS app to generate a working (partial) clone is effectively "unblocking" the agent. I wouldn't be surprised if a "DTU-hub" eventually gains traction for publishing and sharing these digital twins.
I would love to hear more about your learnings from building these digital twins. How do you handle API drift? Also, how do you handle statefulness within the twins? Do you test for divergence? For example, do you compare responses from the live third-party service against the Digital Twin to check for parity?
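A parity check like the one asked about above could be as simple as diffing normalized responses. This is a hypothetical sketch, not anything from the thread: the `volatile_keys` list and the flat-dict response shape are assumptions.

```python
def normalize(response: dict, volatile_keys=("timestamp", "request_id")) -> dict:
    """Drop fields expected to differ on every call before comparing."""
    return {k: v for k, v in response.items() if k not in volatile_keys}

def diverges(live: dict, twin: dict) -> list[str]:
    """Return the keys where the twin's response no longer matches the live API."""
    live_n, twin_n = normalize(live), normalize(twin)
    return sorted(k for k in set(live_n) | set(twin_n)
                  if live_n.get(k) != twin_n.get(k))
```

Run on a schedule against a sample of recorded requests, any non-empty result flags drift between the twin and the real service, which is one answer to the API-drift question.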