
Posted by dropbox_miner 10 hours ago

I'm going back to writing code by hand (blog.k10s.dev)
418 points | 203 comments | page 2
viceconsole 7 hours ago|
> Vibe-coding makes you feel like you have infinite implementation budget. You don't. You have infinite LINE budget (the AI will generate as much code as you want). But you have the same finite complexity budget as always.

This is a special case of a general fundamental point I'm struggling with.

Let's assume AI has reduced the marginal cost of code to zero. So our supply of code is now infinite.

Meanwhile, other critical factors continue to be finite: time in a day, attention, interest, goodwill, paying customers, money, energy.

So how do you choose what to build?

Like a genie, the tools give us the power to ask for whatever we want. And like a genie, it turns out we often don't really know what we want.

TranquilMarmot 6 hours ago||
Right - knowing what to actually build always has been and always will be the limiting factor to actual success. I could spend months and hundreds of dollars generating the absolute BEST todo list that's out there but nobody wants that.
ozim 5 hours ago||
I have vibe coded 3 applications I never had time to code but always wanted.

The difference is that now I don’t have time to use those apps.

That’s a joke.

But I do believe it answers the question of “what to build?”. If you didn’t have time for it before LLM-assisted coding, you still don’t have time for it. You most likely already know, by heart or from some measurements, what gets used and what doesn’t.

ninjahawk1 1 hour ago||
A problem often ignored is that while AI is trained on human-written code, the code it writes is different in practice.

Will that improve or get worse? One could argue that LLMs in general are drastically more competent now than they were a couple of years ago, and much better at coding too. We’re likely just entering the era where they can code but still fall short of what you’d fully expect: someone with absolutely no coding knowledge can’t yet use them to code at the level of someone who does.

Maybe that changes as the models improve, maybe it doesn’t, only time will tell.

simon84 3 hours ago||
Personally, I've taken a serious step back from 'unsupervised' vibe-coding. When the codebase is clean and you want some additional fix or small feature, Claude is quite good at mimicking your style and does a pretty good job.

When asking for a major new feature, even with hard guidelines and context (which eat half your context window), it quickly ships bloat. The foundations aren't well organized, and this is where you're reminded it's all next-word prediction under the hood.

Overall, I've wasted more time reviewing the PR and trying to steer it properly than I expected. So multi-layer agent vibe coding is no longer the way to go *for me*. Maybe with unlimited tokens and a better prompt, to be investigated...

yason 2 hours ago||
We're still in the early ages, and we have to work hard to discern what AI is good for, what it can maybe do, what it could potentially do, and what it just can't do, and move those threshold marks very conservatively. AI is also cheap enough that experiments are worth a shot. As long as you don't truly rely on AI, it's easy to test the capabilities of this new conversational autocomplete, and the random gains it offers can be magnificent (except when they aren't, of course).

What has generally worked for me is paraphrasing the old adage "Write the data structures and the code will follow" over to AI. Design your data, consider the design immutable, and let the AI try to fill in the necessary code (well, with some guidance). If it finds the data structures aren't enough, have it prompt you instead of making changes on its own. AI can do a lot of the low-hanging fruit, and often the harder stuff too, as long as it's bound to something.

Yet, for now, AI at best has been something that relieves me from having to write a long string of boring code: it's not sustainable to keep developing stuff relying on AI alone. It's also great when quality is not an issue; for any serious work, AI has not sped me up noticeably. I still need to think through the hard parts, and whatever I gain in generating code I lose in managing the agents. But I can parallelise code generation, trying new approaches, and exploring, because AI is cheap. AI is also pretty good at going through the codebase and reasoning about dependencies, whether in the context of adding a new feature or fixing a bug: I often let AI create a proof-of-concept change, then extract the important bits out of it and usually trim the diffs down to a third or less.

AI further helps with non-work, i.e. tasks you have to do to fulfill external demands and requirements, not to create anything solid and new. I can imagine AI creating various reports and summaries and documentation, perhaps mostly to be consumed and condensed by another AI at the receiving end. Sadly, most of this isn't worth doing anyway.

Overall, I cringe at all the hype that's been heaped on AI: it's a new tool still looking for its box or niche, not a revolution.

ktzar 1 hour ago|
I don't think we're in the early ages... LLM technology has essentially stagnated since GPT-3.5; we just have bigger models that can handle more context. We're trying to compensate for the lack of progress in the underlying technology by coming up with contraptions of multiple models stuck together: Mixture-of-Experts, reviewer models, PM models...
Havoc 1 hour ago||
That's a strange definition of "code by hand"
shahbaby 9 hours ago||
This reads too much like it was LLM generated. I can't say for sure if it was but I have an allergic reaction to the short snappy know-it-all LLM writing style.
TranquilMarmot 6 hours ago||
AI;DR
baxtr 7 hours ago|||
Writing code by hand, but blog posts are written by LLMs?
fromwilliam 7 hours ago||
yeah, it set off my llm radar too
AntiUSAbah 2 hours ago||
I'm currently exploring whether I should split a project into a framework part and the game itself (2D idle game).

The framework could be an isolation layer against vibe rot, but I'm not sure it's necessary for my small project, something I always wanted to do and never got around to.

For another tool, I will try a different approach: start with a deep investigation and a spec written together with the AI, then lay out the core architecture, then add features.

So instead of just prompting "write a golang project with an http server serving xy, and these top 3 features", I will prompt "create a basic golang scaffold for build and test" -> "create a basic http server with a basic library doing xy" -> "define api spec" -> "write feature x"

There is a kind of skill and depth to vibe coding, though.

throwaway2027 2 hours ago||
I'm thoroughly enjoying using AI to write code, but it's paying off because of years of doing things the hard way first. I was already a so-called "10x developer", if I say so myself. I'm doing things even faster now with AI.
zem 3 hours ago||
I don't bother trying to give the LLM a set of dos and don'ts for how to write the code; that becomes a frustrating game of whack-a-mole. I find it a lot more efficient to have it write some code, look it over, and if I'm not happy with some of the decisions, give it specific instructions for how to fix that one part. As a bonus, I end up reinforcing my knowledge of the codebase in the process.
archleaf 10 hours ago|
So what you really mean is you are going to do better and more detailed skills files so you can get an architecture that you've thought through rather than something random?
dropbox_miner 10 hours ago|
Partly, but the order matters. The CLAUDE.md constraints only work if you designed the architecture first. They're just how you communicate it to the AI. The mistake I made wasn't writing bad skills files, it was not designing anything at all and expecting the AI to make coherent structural decisions across 30 sessions.

The rewrite is me sitting down with a blank doc and drawing the boxes before any code exists. Then the CLAUDE.md enforces what I already decided. Whether that actually holds up as the project grows, I genuinely don't know yet.
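As a hedged illustration of that ordering (the decisions come first; the CLAUDE.md only restates them for the AI), a fragment might look like the following. The module names and rules are invented for the example, not from the original post.

```markdown
# CLAUDE.md — restates decisions already made in the design doc

## Architecture (fixed; do not change)
- `store/` owns all persistence; nothing else touches the database.
- `api/` depends on `store/`, never the reverse.

## Rules
- New features go behind an interface defined in `core/`.
- If a change seems to require a new top-level package, stop and ask first.
```

The constraints only carry weight because a human drew the boxes first; on their own they can't substitute for a design.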

cpncrunch 10 hours ago||
Are you really saving any time using AI, then? If you have to write the architecture for it, write all the rules you want it to follow, check everything it's written, and then reprompt it because it's not how you want it?
SpicyLemonZest 9 hours ago||
Yes. I do all of this and I'd estimate 50-100% coding time savings. A lot of that comes from better multitasking over single-workstream throughput, which I suppose might compromise the gains depending on what you're doing. For me it amplifies the speedup by allowing some of my "coding time" to be spent on non-coding tasks too.
cpncrunch 9 hours ago||
But even if coding time is reduced by half, is that worth the downsides? Coding has never really been a major percentage of my time.
SpicyLemonZest 7 hours ago||
I could be wrong in some subtle way I'm not seeing, but I believe the model we're working in avoids the downsides. I actually think my review bar is slightly higher now, because I don't feel as much pressure to compromise my standards when I know Claude is capable of writing the code I want.
More comments...