Posted by kiwieater 11 hours ago

The 100 hour gap between a vibecoded prototype and a working product (kanfa.macbudkowski.com)
213 points | 281 comments
makingstuffs 7 hours ago|
I’m sure someone else has probably coined the term before me (or it’s just me being dumb, often the case) but I’ve started calling this phase of SWE ‘Ricky Bobby Development’.

So many people are just shouting ‘I wanna go fast’ and completely forgetting the lessons learned over the past few decades. Something is going to crash and burn, eventually.

I say this as a daily LLM user, albeit a user with a very skeptical view of anything the LLM puts in front of me.

nunez 6 hours ago|
I love this!
redgridtactical 5 hours ago||
The 100 hours number feels about right for a solo project. What people underestimate is that the last 20% isn't just polish — it's the boring defensive stuff that makes an app not crash on someone else's phone.

I shipped a React Native app recently and probably 30% of the total dev time was wrapping every async call in try/catch with timeouts, handling permission denials gracefully, making sure corrupted AsyncStorage doesn't brick the app, and testing edge cases on old devices. None of that is the fun part. None of it shows up in a demo. But it's the difference between "works on my machine" and "works in production."

Vibecoding gets you to the demo. The gap is everything after that.

shepherdjerred 5 hours ago||
> probably 30% of the total dev time was wrapping every async call in try/catch with timeouts, handling permission denials gracefully, making sure corrupted AsyncStorage doesn't brick the app

This is the exact kind of task that LLMs excel at

croisillon 5 hours ago|||
c'm'on, drop that
johnfn 3 hours ago||
This comment is written by an LLM, right?

Edit: It's interesting how I am getting downvoted here when pangram confirms my suspicions that this is 100% AI generated.

dielll 9 hours ago||
I have had the experience with creating https://swiftbook.dev/learn

Used Codex for the whole project. At first I used Claude for the architecture of the backend, since that's where I usually work and have experience. The code runner and API endpoints were easy to create for the first prototype. But then it got to the UI, and here's where sh1t got real. The first UI was in React, though I had specifically told it to use Vue. The code editor and output window were a mess in terms of height: there was too much space between the editor and the output window, and no matter how much time I spent prompting and explaining, it just never got it right. I got tired, opened Figma, and used it to refine the design to what I wanted. I shared the generated code to GitHub, cloned it locally, then told Codex to copy the design, and it finally got it right.

Then came the hosting, where I wanted the code runner endpoint in a Docker container for security purposes, since someone could execute malicious code that took over the server if I hosted it without some protection, and here it kept selecting out-of-date Docker images. I had to manually guide it again on what I needed. Finally deployed and got it working, complete with a domain name. Shared it with a few friends, and they suggested some UI fixes, which took some time.

For the runner security hardening I used DeepSeek and Claude to generate a list of code I could run to surface potential issues, and despite Codex insisting all was fine, I was able to uncover a number of issues. Here is where it got weird: it started arguing with me despite being shown all the issues present. So I compiled all the issues into one document and shared the Dockerfile, the Linux seccomp config file, and the issues document with Claude. It gave me a list of fixes for the Dockerfile to help with security hardening, which I shared back with Codex, and that's when it fixed them.

Currently most of the issues are resolved, but the whole process took me a whole week and I am still not done; I was working most evenings. So I agree that you cannot create a usable product, used by lots of users, in 30 minutes, unless it's some static website. It's too much work of constant testing and iteration.

tqwhite 5 hours ago||
I have had things like your React-instead-of-Vue problem. I solved it by always having Claude write a full implementation spec/plan in markdown, which I give to a fresh-context Claude to implement. Typically, I comment on it and make it revise until I am happy.

It has basically eliminated surprises like that.

tom_ 7 hours ago||
You can say "shit" here if you like.
itomato 4 hours ago||
Nobody is saying they're ready for production in 30 minutes, just that there is something real where an idea used to be.

Something much closer to production SDLC patterns than a Figma mockup.

stillpointlab 8 hours ago||
I came across the following yesterday: "The Great Way is not difficult for those who have no preferences," a famous Zen teaching from the Hsin Hsin Ming by Sengstan

As we move from tailors to big box stores I think we have to get used to getting what we get, rather than feeling we can nitpick every single detail.

I'd also be more interested in how his 3rd, 4th or 5th vibe coded app goes.

fixxation92 7 hours ago||
What I really want to know is... as a software developer for 25+ years, when using these AI tools, is it still called "vibecoding"? Or is "vibecoding" reserved for people with no/little software development background who are building apps? Genuine question.
DennisP 7 hours ago||
Steve Yegge has been a dev for several decades with lead spots at Amazon and Google, has completely converted to using AI, wrote a book about using it effectively for large production-ready projects, and still calls it vibe coding.
fixxation92 5 hours ago||
I don't think I'll ever adopt this term; I'm not a fan of it at all. I find myself saying "I was working with AI" and just leave it at that. It is a collaboration after all.
newsoftheday 7 hours ago||
As a software developer of over 30 years: AI is not a tool, it is not deterministic, it is an aide.
tqwhite 5 hours ago||
Don't have it do things for you. Have it do things with you.
hashmap 5 hours ago||
If something like a popup appears that I didn't ask the page for, I snap the page closed and never look at it again.
nemo44x 9 hours ago||
The 80/20 rule doesn’t go away. I am an AI true believer and I appreciate how fast we can get from nothing to 80% but the last “20%” still takes 80%+ of the time.

The old rules still apply mainly.

tossandthrow 8 hours ago||
Yes, so 80% of 100 hours is considerably less than 80% of 600 hours
iamcalledrob 8 hours ago||
In my experience, the last 20% tends to be the stuff that's less obvious, too, by its very nature.

The details and pitfalls that are unique to your specific scenario, that you only discover by running into them.

And yet this less obvious, more uncommon stuff is also what AI will be weakest at.

jimnotgym 8 hours ago|
I have not been coding for a few years now. I was wondering if vibe coding could unstick some of my ideas. Here is my question, can I use TDD to write tests to specify what I want and then get the llm to write code to pass those tests?
kantselovich 1 hour ago||
TDD helps a lot, but it's no guarantee: the LLM is smart enough to "fake" the code to pass tests.

I'm working on a project, a password manager, where I have full end-to-end test harnesses: the CLI client makes changes, syncs them to the server, and then the data is observed in the iOS app running in the emulator. More than once I noticed Codex had just hard-coded expected values from the test harnesses directly into the UI layout in the iOS app to make the test pass…

Similar issues in the crypto layer: tests were written first, then the code. During review I noticed that the code was made to just pass the test; the logic checked whether a signature value exists instead of checking whether the cryptographic signature is valid.

An LLM can help with code reviews as well, but it has to be guided specifically on what to look for. This is with the Codex 5.4 model.

_heimdall 8 hours ago|||
That's a great approach, though I'd also recommend setting up a strong basis for linting, type checking, compilation, etc., depending on the language. An LLM given a full test suite and the guard rails of basic code style rules will likely do a pretty good job.

I would find it a bit tricky to write a full test suite for a product without any code though. You'd need to understand the architecture a bit and likely end up assuming, or mocking, what helpers, classes, config, etc will be built.

potro 8 hours ago|||
You absolutely can. This is one of the recommended directions with agentic coding. But you can go further and ask the LLM to write tests too. Then review/approve them.
mlaretallack 8 hours ago|||
Yes, I mostly do spec-driven development, and at the design stage I always add in tests. I repeat this pattern for any new features or bug fixes: get the agent to write a test (unit, integration, or Playwright-based), reproduce the issue, then implement the change and retest, using all the other tests as well.
linsomniac 8 hours ago|||
To expand on the "Yes": the AI tools work extremely well when they can test for success. Once you have the tests as you'd like them, you may want to tell the LLM not to modify the tests because you can run into situations where it'll "fix" the tests rather than fixing the code.
__mp 8 hours ago|||
Yes. Depending on the tech stack, your experience might be better or worse. HTML/CSS/React/Go worked great, but it struggled with Swift (which I had no experience in).
faeyanpiraat 8 hours ago||
Yes