Posted by derek 13 hours ago

Building a Personal AI Factory(www.john-rush.com)
220 points | 121 comments
simonw 11 hours ago|
My hunch is that this article is going to be almost completely impenetrable to people who haven't yet had the "aha" moment with Claude Code.

That's the moment when you let "claude --dangerously-skip-permissions" go to work on a difficult problem and watch it crunch away by itself for a couple of minutes running a bewildering array of tools until the problem is fixed.

I had it compile, run and debug a Mandelbrot fractal generator in 486 assembly today, executing in Docker on my Mac, just to see how well it could do. It did great! https://gist.github.com/simonw/ba1e9fa26fc8af08934d7bc0805b9...
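
For anyone who hasn't seen it, the whole thing is a single unattended invocation, something like this (the exact prompt here is paraphrased, and -p/print mode is just one way to run it):

    # Let Claude Code loose; it decides on its own when to reach for Docker,
    # the assembler, a debugger, etc. Run it in a sandbox you trust.
    claude --dangerously-skip-permissions \
      -p "Write a Mandelbrot generator in 486 assembly, then build and run it in Docker until it works"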

low_common 9 hours ago||
That's a pretty trivial example for one of these IDEs to knock out. Assembly is certainly in their training sets, and obviously docker is too. I've watched cursor absolutely run amok when I let it play around in some of my codebase.

I'm bullish it'll get there sooner rather than later, but we're not there yet.

simonw 8 hours ago|||
I think the hardest problem in computer science right now may be coming up with an LLM demo that doesn't get called "pretty trivial".
1dom 2 hours ago|||
I'm very pro LLM and AI, but I completely agree with the comment about how many pieces praising LLMs do so with trivial examples. "Trivial" might not be the right word, and I can't think of a better one without a negative connotation, but this shouldn't read as negative: your examples are good and useful, and capture a bunch of tasks a software engineer would do.

I'd say your mandelbrot debug and the LLVM patch are both "trivial" in the same sense: they're discrete, well-defined tasks with clear success criteria that could be assigned to any mid/senior software engineer in a relevant domain, who could chip through them in a few weeks.

Don't get me wrong, that's an insane power and capability of LLMs, I agree. But ultimately it's just doing a day job that millions of people can do sleep deprived and hungover.

Non-trivial examples are things that would take a team with different specialist skillsets months to create. One obvious reason there are so few non-trivial AI examples is that non-trivial examples take a non-trivial amount of time to generate and verify.

A non-trivial example isn't one where you can look at the output and say "yup, AI's done well here". It requires someone to spend time going into what's been produced, assessing it, essentially redesigning it as a human to untangle all the complexity of a modern non-trivial system and confirm the AI actually did all that stuff correctly.

An in-depth audit of a complex software system can take months or even years, and is a thorough and tedious task for a human. The Venn diagram of humans thinking "I want to spend more time doing thorough, tedious code tasks" and humans thinking "I want to mess around with AI coding" is two separate circles.

sundache 1 hour ago|||
I only see 148 lines of assembly and a Dockerfile that's 7 lines long. Am I missing something, or should that take a human less than several weeks?
dotancohen 4 minutes ago||
Depends on what's in those 148 lines.
sroussey 1 hour ago||||
LLMs are best demonstrated with greenfield examples.
j45 1 hour ago||
Plus, applying non-deterministic algorithms to deterministic work won't always produce the same result twice. The software developers are also changing the frames and terms of reference as they go.
sokoloff 2 hours ago||||
> ultimately it's just doing a day job that millions of people can do sleep deprived and hungover.

Doing for < $10 and under an hour what could be done in a few weeks by $10K+ worth of senior staff time is pretty valuable.

1dom 1 hour ago||
If it's something a single senior staff member can do, then, personally, I'd consider it not complex but relatively trivial: it can be done by literally a single person.

I'm pro AI, I'm not saying it's not valuable for trivial things. But that's a distinct discussion to the trivial nature of many LLM examples/demos in relation to genuinely complex computer systems.

j45 1 hour ago|||
There is a scale somewhere in these types of articles that will emerge.

It might be the difference between something actually new (cutting edge), something merely new to the person, and the human mind wanting it to be novel and different enough to rival the experience of using ChatGPT 4 for the first time.

There is also the wiring-up of non-deterministic software frameworks and architectures, compared to the deterministic-only software development we're used to.

The former is a different thing than the latter.

sroussey 1 hour ago||||
Convert react-stockcharts to react v19. I’ve tried Claude Code and Cursor but only ended up with hilariously bad results.
cranium 2 hours ago||||
Instead of "pretty trivial", I'd say it's "well-defined and generally understood".

The implicit decisions it had to make were also inconsequential, e.g. the selection of ASCII chars, color or not, the bounds of the domain, ...

However, it shows that agents are powerful translators / extractors of general knowledge!

jkhdigital 3 hours ago||||
No, the hardest problem is teaching CS undergrads. I just started this year (no background in academia, just 75% of a PhD and well-rounded life experience) and I've basically torn up the entire curriculum they handed to me and started vibe-teaching.
j45 1 hour ago||||
Many big problems are made up of small problems.
th0ma5 3 hours ago||||
Maybe you should try something other than demos? Have you tried creating a reliable system?
skydhash 7 hours ago||||
Because they are trivial in the sense that you could go on GitHub and copy one of them, once you stop pretending an LLM isn't a mashup of the internet.

What people agree is non-trivial is working on a real project. There are a lot of open-source projects that could benefit from a useful code contribution. But all they've gotten is slop thrown at them.

simonw 7 hours ago||
How about landing a compiler optimization in LLVM? https://simonwillison.net/2025/Jun/30/llvm/

(Someone on here already called that a "tinkertoy greenfield project" yesterday.)

skydhash 6 hours ago||
I took the time to investigate the work being done there (all those years learning assembly and computer architecture come in handy), and it confirms (to me) that the key aspect of using an LLM is pattern matching. Meaning you know there's a solution out there (in this case, anything involving multiplying/dividing by a power of 2 can use such a trick), and by framing your problem (intentionally or not) you'll get derived text that contains a possible solution.

But there's nothing truly novel in the result. The key aspect is being similar enough to something that's already in the training data so that the LLM can extrapolate the rest. The hint can be quite useful, and sometimes you get something that shortens the implementation time, but you have to have at least some basic understanding of the domain in order to recognize the signs.

The issue is that the result is always tainted by your prompt. The signs may be there because of your prompt, and not because there's some kind of data that needs to be explored further. And sometimes it's a bad fit: similar, but different (what you want vs. what you get). So for the few domains that are valuable to me, I prefer to construct my own mental database that can lead me to concrete artifacts (books, articles, blog posts, ...) that exist outside the influence of my query.

ADDENDUM

I can use LLMs with great results and I've done so. But it's more rewarding (and more useful to me) to actually think through the problem and learn from references. Instead of getting a perfect (or wobbly, or wrong-category) circle that fits my query, I go find a strange polygon formed (by me) from other strange polygons. Then, because I know I need a circle, I only need to find its center and its radius.

It's slower, but the next time I need another circle (or a square) from the same polygon, it's going to be faster and faster.

fragmede 8 hours ago|||
I think Cloudflare's oauth library qualifies https://news.ycombinator.com/item?id=44159166
gen6acd60af 5 hours ago||
This one?

>Claude's output was thoroughly reviewed by Cloudflare engineers with careful attention paid to security and compliance with standards.

>To emphasize, this is not "vibe coded". Every line was thoroughly reviewed and cross-referenced with relevant RFCs, by security experts with previous experience with those RFCs.

Some time later...

https://github.com/advisories/GHSA-4pc9-x2fx-p7vj / CVE-2025-4143

>The OAuth implementation in workers-oauth-provider that is part of MCP framework https://github.com/cloudflare/workers-mcp, did not correctly validate that redirect_uri was on the allowed list of redirect URIs for the given client registration.

Havoc 49 minutes ago|||
I suspect personal tools are as close as we're going to get to this mythical demo that satisfies all critics, i.e. "here is a list of problems I've solved with just AI."

It strikes a balance between simplicity and real-world usefulness.

barrenko 3 hours ago|||
I'd like to add this video from the AI Engineer conf by the folks from Dagger (the person behind Docker), which may also be impenetrable: https://youtu.be/bUBF5V6oDKw
csomar 6 hours ago|||
That's a very simple example/context that I suspect most LLMs will be able to knock out with minimal frustration. I had a much more complex Rust dependency upgrade done over 30+ iterations on very custom code (wasm stuff where training data is probably scarce). Claude would ping context7 and mcp-lsp to get details. You do find its limits after a while, though, as you push it harder.
nico 6 hours ago||
> That's a very simple example/context that I suspect most LLMs will be able to knock out with minimal frustration

Yes and no. You are right that it's a relatively small project. However, I've had really bad experiences trying to get ChatGPT (any of their models) to write small arm64 assembly programs that compile and run on Apple silicon.

CjHuber 1 hour ago|||
Is it that much better than Codex?
zackify 10 hours ago|||
If it helps anyone else. I downgraded from Claude max to pro for $20 and the usage limits are really good.

I think they’re trying to compete with Gemini cli and now I’m glad I’m paying less

csomar 6 hours ago|||
I am on Max and burning daily (per ccusage) roughly my monthly subscription. It is not clear whether the API is very overpriced or we are getting aggressively subsidized. I can afford $100-200/month but not $3,000. Let's hope this lasts for a good while, as GitHub Copilot turned off the tap on unlimited usage very recently.
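
(If you want to check your own burn rate: ccusage reads Claude Code's local logs, no API key needed. Something like:)

    npx ccusage@latest    # daily token usage plus the API-equivalent cost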
ffsm8 6 hours ago|||
You will run through the Pro rate limit within an hour if you work the way the article lays out.

But yeah, if you're babysitting a single agent and only applying changes after reading what it wants to do ... you'll be fine for 3-4 hours before hitting the token limit, which refreshes after the 5th hour.

zxexz 4 hours ago|||
I've heard that if you have several relatively active separate sessions open, the limit is a little less restrictive. Especially if you do a /clear and continue your session on a different project. Honestly, a lot of Claude Code seems vibecoded if you look at the client side, too. Can't tell if I'm surprised that the backend has an element of that, too. Hey, dogfood tastes good - I respect them for that.
stpedgwdgfhgdd 4 hours ago|||
Same experience. One terminal window with Pro is okay; multiple CC instances running in parallel are not.

We'll most likely implement a policy that starters in our company can use Pro, while power users need Max.

com2kid 3 hours ago|||
> That's the moment when you let "claude --dangerously-skip-permissions" go to work on a difficult problem and watch it crunch away by itself for a couple of minutes running a bewildering array of tools until the problem is fixed.

Eh, I just watched Claude spend an hour trying to incorrectly fix code. Eventually I realized what was happening, stepped in and asked it to write a bunch of unit tests first, get the code working against those unit tests, and then get back to me.

Claude Code is amazing, but I still have to step in and give it basic architectural guidance again and again.

gerdesj 10 hours ago||
Crack on - this is YC!

Why are you not already a unicorn?

sussmannbaka 2 hours ago|||
As it turns out, the VC potential of Mandelbrot and HelloWorld.py is quite limited :o)
addandsubtract 1 hour ago||
Bakeries have been in business for thousands of years. Should be pretty easy to sell Mandelbrot everywhere around the world.
lucubratory 10 hours ago|||
An LLM wrapper does not have serious revenue potential. Being able to do very impressive things with Claude Code has a pretty strict ceiling on valuation because at any point Anthropic could destroy your business by removing access, incorporating whatever you're doing into their core feature set, etc.
petesergeant 9 hours ago||
Having worked with some serious pieces of enterprise software, I don't think this is right. Anthropic is not going to perfect multi-vendor integrations, spin up a support team, and solution architect your problems for you. Enterprise software gets into the walls, and can be very hard to displace once deployed. If you build an LLM-wrapper resume parser, once you've got it into your client's workflows, they're going to find it hard to unembed it to replace it with raw Anthropic.
ffsm8 6 hours ago||
But if you did become a unicorn, it would suddenly become very easy for Anthropic to replace you, because they're the ones actually providing the sauce and can just replicate your efforts. So your window of opportunity is to stay too small for Anthropic to notice and get interested. That can't be called a unicorn.

That was the point he was making, at least as I understood it.

photon_garden 13 hours ago||
It’s hard to evaluate setups like this without knowing how the resulting code is being used.

Standalone vibe coded apps for personal use? Pretty easy to believe.

Writing high quality code in a complex production system? Much harder to believe.

9cb14c1ec0 10 hours ago||
Exactly. I use claude code as a major speedup in coding, but I stay in the loop on every code change to make sure it is creating an optimal system. The few times that I've just let it run have resulted in bugs that customers had to deal with.
stpedgwdgfhgdd 4 hours ago|||
I noticed that I hardly look anymore at the generated Go code… I do give a lot of attention to the tests: I let CC write some failing tests, implement, run against some real scenarios, find bugs, let it write more tests, fix, and iterate.

Writing this, I realise I should more clearly separate the functional tests from the implementation-oriented unit tests.

Aeolun 7 hours ago|||
I think you can probably get a pretty decent thing going if you have models review output they haven’t written themselves (not still in context anyway)
kasey_junk 11 hours ago||
I don’t really understand this article or the workflow it’s describing as it’s kind of vague.

But I use multiple agents talking to each other, async agents, git worktrees, etc. on complex production systems as my day-to-day workflow. I wouldn't say I go so far as to never change the outputs, but when I don't get the outputs I want, I certainly treat that as a signal that I need to work on my workflow.

webprofusion 8 hours ago||
The basic idea is that you can continuously document what your system should do (high-level and detailed features), how it should prove it has done that, and optionally how you want it to do it (architecture, code style, etc.).

The multi-model AI part is just the (current) tool to help avoid bias and make fine tuned selections for certain parts of the task.

Eventually large complex systems will be built and re-built from a set of requirements and software will finally match the stated requirements. The only "legacy code" will be legacy requirements specifications. Fix your requirements, not the generated code.

qiine 1 hour ago|
sorry but again...

https://i.pinimg.com/736x/03/af/06/03af0602a8caa51507717edd6...

dgunay 4 hours ago||
I am experimenting with a similar workflow and thought I'd share my experience.

I might be a little too hung up on the details compared to a lot of these agent cluster testimonials I've read, but unlike the author I'll be open and say that the codebase I work on is several hundred thousand lines of Go and currently does serve a high 5 to low 6 figure number of real, B2C users. Performance requirements are forgiving but correctness and reliability are very important. Finance.

Currently I use a very basic setup of scripts that clone a repo, configure an agent, and then run it against a prompt in a tmux session. I rely mainly on codex-cli since I am only given an OpenAI key to work with. The codex instances ping me in my system notifications when it's my turn, and I can easily quake-mode my terminal into view and then attach to the session (with a bit of help from fzf). I haven't gotten into MCP yet but it's on my radar.
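
Stripped down, the core of those scripts is something like this (the repo URL, flags, and notification command are illustrative, not exactly what I run):

    #!/usr/bin/env bash
    # One fresh checkout per task, then codex runs non-interactively in tmux.
    task="$1"; prompt="$2"
    dir="$(mktemp -d)/$task"
    git clone git@example.com:org/repo.git "$dir"
    tmux new-session -d -s "$task" -c "$dir" \
      "codex exec --full-auto '$prompt'; notify-send codex 'task $task wants attention'"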

I can sort of see the vision. For those small but distracting tasks, they are very helpful and I (mostly) passively produce a lot more small PRs to clean up papercuts around our codebase now. The "cattle not pets" mentality remains relevant - I just fire off a quick prompt when I feel the urge to get sidetracked on something minor.

I haven't gotten as much out of them for more involved tasks. Maybe I haven't really got enough of a context flywheel going yet, but I do typically have to intervene most of the time. Even on a working change, I always read the generated code first and make any edits for taste before submitting it for code review since I still view the output as my complete responsibility.

I still mostly micromanage the change control process too (branching, making commits, and pushing). I've dabbled in tools that can automate this but haven't gotten around to it.

I 100% resonate with the "fix the inputs, not the outputs" mindset as well. It's incredibly powerful even without AI, and our industry has been slowly but surely adopting it in more places (static typing, DevOps, IaC, etc.). With nondeterministic processes like LLMs, though, it feels a lot harder to achieve; more like practice than science.

barrenko 3 hours ago|
There's been a lot of talk recently (with "recently" being measured in days for the agents field) about context management, but I'm having the hardest time managing my own context when using these methods.
marviel 11 hours ago||
Thanks for the writeup!

I talked about a similar, but slightly simpler workflow in my post on "Vibe Specs".

https://lukebechtel.com/blog/vibe-speccing

I use these rules in all my codebases now. They essentially cause the AI to do two things differently:

(1) ask me questions first, and (2) create a `spec.md` doc before writing any code (see the sketch below).

Seems not too dissimilar from yours, but I limit it to a single LLM.
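
The heart of it is only a few lines of project rules. A sketch (not the exact wording from the post):

    # Append a spec-first rule to the project rules file (CLAUDE.md,
    # .cursorrules, etc. -- adapt to your tool).
    cat >> CLAUDE.md <<'EOF'
    Before writing any code:
    1. Ask me clarifying questions about requirements and edge cases.
    2. Write the agreed design to spec.md and wait for my approval.
    EOF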

rolha-capoeira 9 hours ago||
I guess a lot of us are trying this (naturally) as solo devs, where we can take an engineering-first mindset and build a machine or factory that spits out gizmos. I haven't gotten to the finish line, mostly because for me, the holy grail is code confidence via e2e tests that the agent generated (separately, not alongside the implementation).
marviel 9 hours ago||
Totally. Yeah I think your approach is a solid take!
myflash13 3 hours ago|||
Claude Code now handles this natively with “plan mode”. Bit slow and annoying to do it manually with .md files in my opinion.
geekymartian 10 hours ago||
ADHD coding, brute forcing product generation until you get it right? Just freaking write the code that you can expand and modify in the future instead of increasing your carbon footprint.
cube00 10 hours ago||
The end goal is to remove the developer from this equation.

Business owner asks for a new CRUD app and there it is in production.

Of course it's full of bugs, slow as syrup, and saves to a public unauthed database, but that's none of my business *gulps scalding hot tea*

6510 6 hours ago||
You have users fill out bug reports then throw some buckets of money at it.

You could even add a magic button for when things don't work that reruns the same prompt and possibly get better results.

A slot machine animation while waiting would be cool.

danielbln 4 hours ago||
It's like a salt mine in here. Go ahead and hand-weave your copper-cable code while the world moves on and accelerates. Will there be slop along the way? Oh hell yes.

The Model T car was notorious for blowing out tires left and right, to the point that a carriage might have been less hassle at times. Yet here we are.

NitpickLawyer 1 hour ago|||
> Just freaking write the code that you can expand and modify in the future instead of

Why is it always this argument? Is it that hard to believe that you can get recent coding assistants to write expandable and maintainable code zero-shot? Have you tried just ... asking for that type of code?

stavros 3 hours ago||
Man, programming has changed forever, and the sooner you realize that, the better for you. Saying "write the code" is like telling people to shoe their own horses instead of dealing with them newfangled cars that can break down.
voidUpdate 1 hour ago||
Didn't they say "programming has changed forever" about web3 stuff? I've not heard much about that recently
stavros 53 minutes ago||
I don't know who "they" is, but I never said that.
namuol 7 hours ago||
No real mention of results that aren’t self-referential.

I guess vibe-coding is on its way to becoming the next 3D printing: Expensive hobby best suited for endless tinkering. What’s today’s vibe coding equivalent of a “benchy”? Todo apps?

SchemaLoad 7 hours ago||
3D printing actually is useful though. Basically everyone designing products or any kind of engineering is using it. The only reason it never took off for the average consumer is that every pre designed piece of plastic junk you could ever want to download and print is already available from Amazon.

In a pre online shopping world 3D printing would be far more useful for the average person. Going forward it looks like it's only really useful for people who can design their own files for actually custom stuff you can't buy.

namuol 7 hours ago||
Yeah I’m not saying either aren’t useful, just that they can both be a trap for tinkerers.
steveklabnik 13 hours ago||
I'd love to see more specifics here, that is, how Claude and o3 talk to each other, an example session, etc.
schmookeeg 12 hours ago||
I use Zen MCP and OpenRouter. Every once in a while, my instance of claude code will "phone a friend" and use Gemini for a code review. Often unprompted, sometimes when I ask for "analysis" or "ultrathink" about a thorny feature where I doubt the proposed implementation will work out, or worry it'll cause footguns.
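
(The wiring, roughly, for anyone curious. The install command here is from memory, so double-check it against the Zen MCP README:)

    # Point Zen MCP at OpenRouter, then register it with Claude Code.
    export OPENROUTER_API_KEY=sk-or-...
    claude mcp add zen -- uvx --from git+https://github.com/BeehiveInnovations/zen-mcp-server.git zen-mcp-server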

It's wild to see in action when it's unprompted.

For planning, I usually do a trip out to Gemini to check our work, offer ideas, research, and ratings of completeness. The iterations seem to be helpful, at least to me.

Everyone in these sorta threads asks for "proofs" and I don't really know what to offer. It's like 4 cents for a second opinion on what claude's planning has cooked up, and the detailed response has been interesting.

I loaded 10 bucks onto OpenRouter last month and I think I've pulled it down by like 50 cents. Meanwhile I'm on Claude Max @ $200/mo and GPT Plus for another $20. The OpenRouter stuff seems like less than couch change.

$0.02 :D

Uehreka 8 hours ago|||
> Everyone in these sorta threads asks for "proofs" and I don't really know what to offer

I’ve tried building these kinds of multi agent systems a couple times, and I’ve found that there’s a razor-thin edge between a nice “humming along” system I feel good about and a “car won’t start” system where the first LLM refuses to properly output JSON and then the rest of them start reading each other’s <think> thoughts.

The difference seems to often come down to:

- Which LLM wrappers are you using? Are they using/exposing features like MCP, tools and chain-of-thought correctly for the particular models you’re using?

- What are your prompts? What are the 5 bullet points with capital letters that need to be in there to keep things in line? Is there a trick to getting certain LLMs to actually use the available MCP tools?

- Which particular LLM versions are you using? I’ve heard people say that Claude Sonnet 4 is actually better than Claude Opus 4 sometimes, so it’s not always an intuitive “pick the best model” kind of thing.

- Is your system capable of “humming along” for hours or is this a thing where you’re doing a ton of copy-paste between interfaces? If it’s the latter then hey, whatever works for you works for you. But a lot of people see the former as a difficult-to-attain Holy Grail, so if you’ve figured out the exact mixture of prompts/tools that makes that happen people are gonna want to know the details.

The overall wisdom in the post about inputs mattering more than outputs etc is totally spot on, and anyone who hasn’t figured that out yet should master that before getting into these weeds. But for those of us who are on that level, we’d love to know more about exactly what you’re getting out of this and how you’re doing it.

(And thanks for the details you’ve provided so far! I’ll have to check out Zen MCP)

steveklabnik 11 hours ago||||
It’s not about proof: it’s that at this point I’m a fairly heavy Claude Code user and I’d like to up my game, but I’m also not so up on many of these details that I can just figure out how to give this a try just from the description of it. I’m already doing plan-up-front workflows with just Claude, but haven’t figured out some of this more advanced stuff.

I have two MCPs installed (playwright and context7) but it never seems like Claude decides to reach for them on its own.

I definitely appreciate why you’re not posting code, as you said in another comment.

Aeolun 7 hours ago||
> I have two MCPs installed (playwright and context7) but it never seems like Claude decides to reach for them on its own.

Not even when you add ‘memories’ that tell it to always use those tools in certain situations?

My admonitions to always run repomix at the start of coding, and always run the build command before crying victory seem to be followed pretty well anyway.
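
Mine are literally just bullet points in the project's CLAUDE.md, along these lines (paraphrased):

    cat >> CLAUDE.md <<'EOF'
    - Always run repomix at the start of a coding session to get a repo overview.
    - Use the playwright MCP tool for anything that needs a browser.
    - Always run the build and confirm it passes before crying victory.
    EOF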

manmal 5 hours ago|||
What do you tell Claude to do with repomix? Get an overview into the context?
steveklabnik 6 hours ago|||
I have not done that, maybe that's the missing bit. Thanks!
conradev 11 hours ago|||
proof -> show the code if you can!

Then engineers can judge for themselves

schmookeeg 11 hours ago||
Yeahhhhhh I've been to enough code reviews / PR reviews to know this will result in 100 opinions about what color the drapes should be and what a catastrophe we've vibe coded for ourselves. If I shoot something to GH I'll highlight it for others, but nothing yet. I can appreciate this makes me look like I'm shilling.

It makes usable code for my projects. It often gets into the weeds and makes weird tesseracts of nonsense that I need to discover, tear down, and re-prompt it to not do that again.

It's cheap or free to try. It saves me time, particularly in languages I am not used to daily driving. Funnily enough, I get madder when I have it write ts/py/sql code since I'm most conversant in those, but for fringe stuff that I find tedious like AWS config and tests -- it mostly just works.

Will it rot my brain? Maybe? If this thing turns me from an engineer to a PM, well, I'll have nobody to blame but myself as I irritate other engineers and demand they fibonacci-size underdefined jira tix. :D

I think there's going to be a lot of momentum in this direction in the coming year. I'm fortunate that my clients embrace this stuff and we all look for the same hallucinations in the codebase and shut them down and laugh together, but I worry that I'm not exactly justifying my rate by being an LLM babysitter.

breckenedge 13 hours ago|||
I presume via Goose via MCP in Claude Code:

> I also have a local mcp which runs Goose and o3.

steveklabnik 13 hours ago||
Ah, I skimmed the docs for Goose but I couldn't figure out exactly what it is that it does, which is a common issue for docs.

For example: https://block.github.io/goose/docs/category/tutorials/ I just want to see an example workflow before I set this up in CI or build a custom extension to it!

breckenedge 12 hours ago||
Classic Steve Klabnik comment.
steveklabnik 10 hours ago|||
It's true that I deeply care about docs! Turns out they're good for both humans and LLMs :)
IncreasePosts 12 hours ago|||
An uncommon Aaron Breckenridge comment
web3aj 13 hours ago||
[flagged]
nilirl 3 hours ago||
"Fix inputs" => The assumption is there exists some perfect input that will give you exactly what you want.

It probably works well for small inputs and tasks well-represented in the training data (like writing code for well-represented domains).

But how does this work for old code, large codebases, and emergencies?

- Do you still "learn" the system like you used to before?

- How do you think of refactoring if you don't get a feel for the experience of working through the code base?

Overall: I like it. I think this adds speed for code that doesn't need to be reinvented. But new domains, new tools, new ways to model things, the parts that are fun to a developer, are still our monsters to slay.

myflash13 3 hours ago|
> But how does this work for old code, large codebases, and emergencies?

Have you actually tried Claude Code? It works pretty well on my old code, a medium-size SaaS codebase. I’ve had it build entire features end to end (backend, front end, data migrations, tests) in one or two prompts.

dkdcio 11 hours ago|
I went down this path (and even built a bit of internal web tooling). For me it's like playing multiple games of online poker at once (instead of the Factorio analogy here)

it’s really promising, but I found focusing on a single task and doing it well is still more efficient for now. excited for where this goes
