
Posted by dropbox_miner 11 hours ago

I'm going back to writing code by hand (blog.k10s.dev)
472 points | 228 comments
archleaf 11 hours ago|
So what you really mean is you are going to do better and more detailed skills files so you can get an architecture that you've thought through rather than something random?
dropbox_miner 11 hours ago|
Partly, but the order matters. The CLAUDE.md constraints only work if you designed the architecture first. They're just how you communicate it to the AI. The mistake I made wasn't writing bad skills files, it was not designing anything at all and expecting the AI to make coherent structural decisions across 30 sessions.

The rewrite is me sitting down with a blank doc and drawing the boxes before any code exists. Then the CLAUDE.md enforces what I already decided. Whether that actually holds up as the project grows, I genuinely don't know yet.
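As a sketch of what I mean (module names invented here, not the actual k10s layout), the enforcement section of a CLAUDE.md might look like:

```markdown
## Architecture constraints (decided before any code)

- All mutable state lives in `core::state`; UI modules receive
  read-only snapshots.
- Modules communicate only via the message types in `core::messages`;
  no direct cross-module calls.
- No file may exceed ~400 lines; split along the module boundaries above.
- Do not introduce new top-level modules without asking first.
```

The point is that every line records a decision already made by a human; the file communicates the design, it doesn't produce it.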

cpncrunch 11 hours ago||
Are you really saving any time using AI at all, then? You have to write the architecture for it, write all the rules you want it to follow, check everything it's written, and then reprompt it when it's not what you want.
SpicyLemonZest 10 hours ago||
Yes. I do all of this and I'd estimate 50-100% coding time savings. A lot of that comes from better multitasking over single-workstream throughput, which I suppose might compromise the gains depending on what you're doing. For me it amplifies the speedup by allowing some of my "coding time" to be spent on non-coding tasks too.
cpncrunch 10 hours ago||
But even if coding time is reduced by half, is that worth the downsides? Coding has never really been a major percentage of my time.
SpicyLemonZest 8 hours ago||
I could be wrong in some subtle way I'm not seeing, but I believe the model we're working in avoids the downsides. I actually think my review bar is slightly higher now, because I don't feel as much pressure to compromise my standards when I know Claude is capable of writing the code I want.
erelong 11 hours ago||
Can't you just ask the AI to break up large files into smaller ones and explain how the code works so you can understand it, instead of starting over from scratch?
dropbox_miner 10 hours ago||
That was actually the first thing I tried. It did a good job of explaining the codebase mess and the architecture. Then I ran 3-4 refactor attempts. Each one broke things in ways that were harder to debug than the original mess. The god object had so many implicit dependencies that pulling one thread unraveled something else. And each attempt burned through my daily Claude usage limit before the refactor was stable.

And I'm sure the rewrite is going to teach me a whole different set of lessons...

tres 9 hours ago||
What's your test coverage like?

Not sure why good coverage wouldn't mitigate risk in a refactor...

My mantra whenever I'm working with AI is that I want it to know what "point b" looks like and be able to tell by itself whether it's gotten there...

If you have a working implementation, it sounds like you have a basis for automated tests to be written... once you have that (assuming that the tests are written to test the interface rather than the implementation), then it should be fairly direct to have an agent extract and decompose...
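To make the interface-vs-implementation point concrete, here's a minimal sketch (the `Paginator` interface and its implementation are invented for illustration). The tests only touch the public interface, so an agent can decompose the body freely and the suite tells it by itself whether it has reached "point b":

```typescript
// Hypothetical public interface extracted from a working implementation.
interface Paginator<T> {
  page(items: T[], pageNum: number, pageSize: number): T[];
}

// The "legacy" implementation being decomposed. The tests below never
// reach inside this object, so its body can be rewritten or split
// across modules without touching the test suite.
const paginator: Paginator<number> = {
  page(items, pageNum, pageSize) {
    const start = (pageNum - 1) * pageSize; // pages are 1-indexed
    return items.slice(start, start + pageSize);
  },
};
```

Characterization tests like `page([1,2,3,4,5], 1, 2)` returning `[1, 2]` pin down observable behavior only; that's the property that makes an automated extract-and-decompose pass checkable.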

striking 10 hours ago|||
I'm currently working on the discovery phase of a larger refactor and have pretty quickly realized that AI can actually often be pretty useless even if you've encoded the rules in an unambiguous, programmatic way.

For example, consider a lint rule that bans Kysely queries on certain tables from existing outside of a specific folder. You'd write a rule like this in an effort to pull reads and writes on a certain domain into one place, hoping you can just hand the lint violations to your AI agent and it would split your queries into service calls as needed.
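A toy version of that boundary rule, as a standalone check rather than a full ESLint plugin (the table names, folder, and `db.` call shape below are made up for illustration; a real rule would walk the AST, not pattern-match source text):

```typescript
// Toy boundary lint: Kysely-style queries on the "orders" domain tables
// may only live under src/services/orders/.
const BANNED_TABLES = ["orders", "order_items"]; // hypothetical domain tables
const ALLOWED_DIR = "src/services/orders/";      // hypothetical service folder

interface Violation {
  file: string;
  table: string;
}

function findViolations(files: Record<string, string>): Violation[] {
  const violations: Violation[] = [];
  for (const [path, source] of Object.entries(files)) {
    if (path.startsWith(ALLOWED_DIR)) continue; // queries are allowed here
    for (const table of BANNED_TABLES) {
      // Matches Kysely-style calls like db.selectFrom("orders")
      const pattern = new RegExp(
        "\\.(selectFrom|insertInto|updateTable|deleteFrom)\\(\\s*['\"]" +
          table +
          "['\"]"
      );
      if (pattern.test(source)) violations.push({ file: path, table });
    }
  }
  return violations;
}
```

The violations list is exactly what you'd hand to the agent, which is why the rule feels unambiguous and programmatic; the failure mode described below is in the rewrites, not in the detection.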

And at first, it will appear to have Just Worked™. You are feeling the AGI. Right up until you start to review the output carefully. Because there are now little discrepancies in the new queries written (like not distinguishing between calls to the primary vs. the replica, missing the point of a certain LIMIT or ORDER BY clause, failing to appropriately rewrite a condition or SELECT, etc.) You run a few more reviewer agent passes over it, but realize your efforts are entirely in vain... because even if the reviewer agent fixes 10 or 20 or 30 of the issues, you can still never fully trust the output.

As someone with experience doing this kind of thing before AI, I went back to doing it the old way: using a codemod to rewrite the code automatically via a series of rules. AI can write the codemod, and AI can help me evaluate the results, but having it apply all of the few hundred changes by hand meant I couldn't trust the output. And I suspect that will continue to be true for some time.
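The appeal of the codemod route is determinism: every change traces back to a named rule. A minimal sketch of the idea (real codemods work on the AST via jscodeshift or ts-morph; the regexes and service names here are illustrative only):

```typescript
// Toy rule-based codemod: deterministic rewrite rules applied
// mechanically, so every change in the diff is explainable by a rule.
interface RewriteRule {
  description: string;
  pattern: RegExp;
  replacement: string;
}

const rules: RewriteRule[] = [
  {
    description: "route reads on orders through the service layer",
    pattern: /db\.selectFrom\((['"])orders\1\)/g,
    replacement: "ordersService.query()",
  },
  {
    description: "route writes on orders through the service layer",
    pattern: /db\.insertInto\((['"])orders\1\)/g,
    replacement: "ordersService.insert()",
  },
];

function applyRules(source: string): { output: string; applied: string[] } {
  const applied: string[] = [];
  let output = source;
  for (const rule of rules) {
    const next = output.replace(rule.pattern, rule.replacement);
    if (next !== output) {
      applied.push(rule.description); // record which rules fired
      output = next;
    }
  }
  return { output, applied };
}
```

Because the transformation is a pure function of the rules, reviewing means auditing a handful of rules plus the diff, rather than re-deriving the intent behind a few hundred individually generated edits.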

This industry needs a "verification layer" that, as far as I know, it does not have yet. Some part of me hopes that someone will reply to this comment with a counterexample, because I could sorely use one.

joshuanapoli 10 hours ago|||
A rewrite following a new architecture plan could get finished pretty quickly, treating the original as a prototype.
SpicyLemonZest 10 hours ago||
When people talk about codebases being "incomprehensible", it's not always hyperbole. Sometimes the architecture literally cannot be broken up or understood.
whattheheckheck 10 hours ago||
I find that really hard to believe. It's not like curing cancer
pixl97 9 hours ago|||
When you see some legacy C++ codebase with millions of lines of code, catching cancer and slowly dying from it is more humane than trying to unscrew that mess.

A really screwed code base blows out your context window and just starts burning tokens as the AI works out a way to kill -9 itself to escape the hell you're subjecting it to.

NichoPaolucci 9 hours ago||||
While I mostly agree - science is built on truths, while code has a large amount of creativity and freedom baked into its decisions. Some codebases are documented and follow rigorous conventions and deliberate design decisions. Others will just be an absolute legacy mess of 20 years of odd decisions made by people who may not have known what they were doing. Like an art piece that you don’t really “understand”.
chamomeal 10 hours ago||||
No but it can be a rube goldberg machine of insanity
SpicyLemonZest 8 hours ago|||
[flagged]
throwaway2027 2 hours ago||
I'm thoroughly enjoying using AI to write code, but it paid off by years of doing things the hard way before. I already was a so called "10x developer" if I speak for myself. I'm doing things even faster now with AI.
zem 4 hours ago||
I don't bother trying to give the LLM a set of dos and don'ts for how to write the code; that becomes a frustrating game of whack-a-mole. I find it a lot more efficient to have it write some code, look it over, and if I'm not happy with some of the decisions, give it specific instructions for how to fix that one part. As a bonus, I end up reinforcing my knowledge of the code base in the process.
radicalbyte 5 hours ago||
I don't understand the people who "get the agent to do everything" for them. It just makes a mess if you do that. Yet if I spend a little bit of time setting a project up properly (including telling my minions exactly what to do) I can then get it to do the boring things for me.

The very worst things you can do in a codebase are (a) not deeply understand how it works (have it be magic) and (b) be lazy and mess up the structure.

How do you fix a problem which happens at 2:00am and takes your system down if you don't have an excellent understanding of how it works?

We're already bad at (a) because most developers hate writing documentation, so that knowledge is invariably lost over time.

tvbusy 6 hours ago||
I don't think the prompts the author proposes will actually work. Including final scope and non-scope is good, but it's more of a reaction to what the AI already did. These prompts are basically suitable for a rewrite, since it's unlikely anyone would have them ready when starting out.

I have found small iterations to have the best results. I'm not giving AI any chance to one shot it. For example, I won't tell it to "create a fleet view" but something more like "extract key binding to a service" so that I can reuse it in another view before adding another view. Basically, talk to the AI as an engineer talking to another engineer at the nitty gritty level that we need to deal with everyday, not a product person wishing for a business selling point to magically happen.

Havoc 2 hours ago||
That's a strange definition of "code by hand"
mindaslab 1 hour ago||
I'm going back to writing algorithms on paper.
binyu 11 hours ago||
> I'm rewriting k10s in Rust. Not because Rust is better, but because it's the language I can steer. I've written enough of it to feel when something's wrong before I can articulate why. That instinct is the one thing vibe-coding can't replace. The AI hands you plausible-looking code. You need a nose for when it's garbage.

Isn't Golang relatively easier to read than Rust? I was under the impression that Rust is a more complex language syntactically.

> The other change is simpler: I'm doing the design work myself, by hand, before any code gets written. Not a vague doc. Concrete interfaces, message types, ownership rules. The architecture decisions that the AI kept making wrong are now made in writing before the first prompt.

This post is good to grasp the difference between "vibe-coding" and using the AI to help with design and architectural choices done by a competent programmer (I am not saying you are not one). Lately I feel that Opus 4.7 involves the user a lot more, even when given a prompt to one-shot a particular piece of software.

dropbox_miner 10 hours ago||
Go reads fine whether the architecture is good or bad, and I couldn't tell the difference until I was in trouble. Rust is harder to read but harder to misuse. The borrow checker would have caught that data race at compile time. I've also just written more Rust. That familiarity matters separately.

+1 on Opus 4.7 involving the user a lot more. Right now I'm trying to get to a state where I can codify my design and decision preferences as agent personas and push myself out of the dev loop.

ok_dad 6 hours ago|||
Buddy, that k10s code was never good. Go vs. Rust is not the issue here; it’s the fact that the project was vibe-coded without anyone reading anything. It’s hilarious to even think that a god object was caused by anything other than someone who let the bot choose too much.

Good architecture in any language is obvious to someone who is experienced and cares.

Go is actually great for bots to write if you’re actually thinking.

binyu 10 hours ago|||
Gotcha, that implies you are going to read the code that the AI produces anyway.

> Go reads fine whether the architecture is good or bad

Were you reading the Golang code all along and got fooled or did you review it after it failed? Sorry I admit I didn't read the whole article.

williamstein 10 hours ago||
He was NOT reading the code: "For 7 months I'd been prompting and shipping without ever sitting down and actually reading the code Claude wrote."
binyu 10 hours ago||
Right, thank you. Personally I think reading all the code that the AI produces is impossible and kind of defeats the purpose of using it. The key is to devise a structured way to interact with it (skills and similar) and use extensive testing along the way to verify the work at all steps.
cortesoft 9 hours ago||
> Isn't Golang relatively easier to read than Rust? I was under the impression that Rust is a more complex language syntactically

It sounds like the author knows Rust, and might not be as familiar with Go.

A language that you are proficient in is always going to be easier to read than one you don’t know, even if the latter is objectively an easier language to read in general.

travisgriggs 7 hours ago||
In a world where juniors (or seniors in new territories) are incentivized to publish or perish, how will any of us gain proficiency any more? I can see the agent assisted journey accelerating some familiarity, but not proficiency.

I’ve used AI tools to do i18n translations to Spanish and Portuguese (somewhat ashamed to admit this). I’ve grown more familiar with the structure of these languages, and come to recognize some of the common vocabulary for our agtech domain. If anything, I feel more clueless about both languages now than I did before, when it comes to any sort of proficiency.

pjmlp 6 hours ago|
I am still mostly coding by hand, aside from meeting the company's KPIs on AI use: required trainings, use of agents, and whatever else.

Eventually, like every hype wave, the dust will settle, and let's see where we stand.

By now all the AI companies have consumed all human knowledge, so either the models learn to actually think for themselves, or that is it.

Either way, that won't change the ongoing layoffs while trying to pursue the AI dream from management point of view.

0xpgm 5 hours ago|
> Either way, that won't change the ongoing layoffs while trying to pursue the AI dream from management point of view.

I think most companies doing layoffs were bloated to begin with; AI is just the scapegoat.

pjmlp 4 hours ago||
I am aware of layoffs that are really caused by AI.

Translation and asset generation teams for enterprise CMS, whose role has now been taken by AI.

Likewise traditional backend development, which was already reduced via SaaS products, serverless, and iPaaS low-code/no-code tooling, and is now further reduced via agent workflow tooling doing orchestration via tools (serverless endpoints).
