Top
Best
New

Posted by dropbox_miner 9 hours ago

I'm going back to writing code by hand (blog.k10s.dev)
418 points | 203 comments
pron 37 minutes ago|
Yep. The only people I've heard saying that generated code is fine are those who don't read it.

The problem is that the mitigations offered in the article also don't work for long. When designing a system or a component we have ideas that form invariants. Sometimes the invariant is big, like a certain grand architecture, and sometimes it’s small, like the selection of a data structure. You can tell the agent what the constraints are with something like "Views do NOT access other views' state" as the post does.
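As an illustration (not from the article; all names here are hypothetical), an invariant like "views do not access other views' state" can be enforced structurally rather than by prompt, by making a shared store the only channel between views:

```python
class Store:
    """Single owner of shared state; views talk only through it."""
    def __init__(self):
        self._data = {}
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, key, value):
        self._data[key] = value
        for callback in self._subscribers:
            callback(key, value)

    def get(self, key, default=None):
        return self._data.get(key, default)


class CounterView:
    def __init__(self, store):
        self._store = store
        self._clicks = 0  # private state; no other view reads this directly

    def click(self):
        self._clicks += 1
        self._store.publish("clicks", self._clicks)


class LabelView:
    def __init__(self, store):
        self.text = "clicks: 0"
        store.subscribe(self._on_change)  # reacts to the store, not to CounterView

    def _on_change(self, key, value):
        if key == "clicks":
            self.text = f"clicks: {value}"


store = Store()
counter = CounterView(store)
label = LabelView(store)
counter.click()
counter.click()
print(label.text)  # -> clicks: 2
```

The point is that here the invariant lives in the structure (no view holds a reference to another view), not in a sentence the agent may or may not honor.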

Except, eventually, you'll want to add a feature that clashes with that invariant. At that point there are usually three choices:

- Don’t add the feature. The invariant is a useful simplifying principle and it’s more important than the feature; it will pay dividends in other ways.

- Add the feature inelegantly or inefficiently on top of the invariant. Hey, not every feature has to be elegant or efficient.

- Go back and change the invariant. You’ve just learnt something new that you hadn’t considered and puts things in a new light, and it turns out there’s a better approach.

Often, only one of these is right. Often, one of these is very, very wrong, and with bad consequences.

Picking among them isn’t a matter of context. It’s a matter of judgment, and the models - not the harnesses - get this judgment wrong far too often. I would say no better than random chance.

Even if you have an architecture in mind, and even if the agent follows it, sooner or later it will need to be reconsidered, and if you don't read what the agent does very carefully - more carefully than human-written code - you will end up with the same "code that devours itself".

benguild 9 minutes ago||
“The only people I've heard saying that generated code is fine are those who don't read it.” Are you sure these people aren’t busy working rather than chatting? (haha)

But in all seriousness, it depends on what you're doing with it. Writing a quick tool using an LLM is much easier than context switching to write it yourself. If you need the tool, that's very valuable.

sevenzero 6 minutes ago||
Also, as a webdev: it writes basic CRUD pretty well. I am tired of having to build forms myself, and LLMs are usually really good at that.

Been building a new app with lots of policies and whatnot, and instructing an LLM is just much faster than doing the same repetitive shit over and over myself.

21asdffdsa12 23 minutes ago||
And the solution is the same as when development was outsourced: the "patch" was to fix it by writing a spec. Thus I conclude my TED talk with the statement: LLMs are the new outsourcing, and they run into the same problems.
i_love_retros 20 minutes ago||
Don't outsource either then
jwpapi 1 hour ago||
That’s the same story I had.

The swindle goes like this: AI on a good codebase can build a lot of features. You think it's faster; it even seems safer and more accurate at times, especially in domains you don't know everything about.

This goes on for a while whilst the codebase gets bigger, exploration takes longer, and the failure rate increases. You don't want it to be true and try harder, so you only stop after it has become practically impossible to make any changes.

You look at the code again, and there is so much of it that spaghetti is an understatement; it's the Great Wall of China.

You start working… and you realize what was going on.

I deleted 75,000 of 140,000 lines of code, and I honestly feel like the 3 months I went hard into agentic coding were wasted. I failed my users by building useless features, increasing bugs, losing the mental model of my code, and not finding the problems I didn't know about: the kind of hard decisions you only see when you're in the code, the stuff that wanders in your mind for days.

mjburgess 1 hour ago||
I think this is true, but I imagine there's a workflow solution to this which isn't to drop AI.

E.g., treating AI-generated code as immediately legacy, with tight encapsulation boundaries, well-defined interfaces, etc., and integrating it in a more manual workflow.

There's a range from single-shot prompts to inline code generation, and which makes more sense depends on the problem and where in the codebase it is.

Single-shot stuff is going to make more sense for a prototyping phase with extensive spec iteration. Once that prototype is in place, you then probably want to drop down into per-module/per-file generation and be more systematic -- always maintaining a reasonably good mental model at this layer.

herrherrmann 56 minutes ago|||
That workflow just sounds exhausting to me. Would I always need to consider how much of a blast radius my AI-generated code might have? Sounds like there’s so much extra management going into these micro decisions that it ultimately defeats the purpose of generating code altogether.

I could see value in using it during the prototyping phase, but wouldn’t like to work like you described for a serious project for end users.

embedding-shape 18 minutes ago|||
I just don't like to type code anymore. If I can accomplish the same by describing the code, and get the same results as if I typed it myself, I'll opt for not typing so damn much. I've done so much typing in my career, that typing ~80% less to get the same results, makes a pretty big difference in how likely I am to set out to accomplish something.

I care more about code quality now, because typing no longer limits if I feel like it's worth to refactor something or not.

meetingthrower 17 minutes ago|||
And you have discovered the job of managers! There has always been a lot of hate for managers. Wonder if the robots hate us just as much? (I often feel a weird guilt when I tell an agent to do something I know I am going to throw away but will serve as an interesting exploration...I know if I did that to a human they would be pissed...)
anal_reactor 47 minutes ago|||
> treating AI code generated as immediately legacy, with tight encapsulation boundaries, well-defined interfaces etc.

This is good advice regardless of whether you're using AI, yet in real life "let's have well-defined boundaries and interfaces" always loses against "let's keep having meetings for years and then duct-tape whatever works once the situation gets urgent".

dxdm 56 minutes ago|||
I find it interesting that this outcome is a surprise. I don't want this to sound smug; I'm genuinely curious what the initial expectations are and where they come from.

They seem to be different for LLMs, because would anyone be surprised if they handed summary feature descriptions to some random "developer" they'd only ever met online, and got back an absolute dung pile of half-broken implementation?

For some reason, people seem to expect miracles from some machine that they would not expect of other humans, especially not ones with a proven penchant for rambling hallucinations every once in a while.

I'd like to know, ideally from people who've been there, why they think that is. Where does the trust come from?

doginasuit 37 minutes ago||
I've never relied on an LLM to build a large section of code but I can see why people might think it is worth a try. It is incredible for finding issues in the code that I write, arguably its best use-case. When I let it write a function on its own, it is often perfect and maybe even more concise and idiomatic than I would have been able to produce. It is natural to extrapolate and believe that whatever intelligence drives those results would also be able to handle much more.

It is surprising how bad it is at taking the lead given how effective it is with a much more limited prompt, particularly if you buy into all the hype that it can take the place of human intelligence. It is capable of applying an incredible amount of knowledge while having virtually no real understanding of the problem.

chorsestudios 1 hour ago|||
Were you auto-committing everything without reading the generated code? And if you read it but didn't understand it, why not just ask for detailed comments for each output? Knowing that a larger codebase causes it to struggle means the output needs to be increasingly scrutinized as it becomes more complex.
skydhash 24 minutes ago||
I don’t think it’s about what the code does. I think it’s more about how the code fits into its whole context: how useful it is in solving the overarching problem (of the whole software), and how well it follows the paradigm of the platform and the codebase.

You can have very good diffs and then find that the whole codebase is a collection of slightly disjointed parts.

gchamonlive 1 hour ago|||
I haven't had the chance to work on large codebases, but isn't it possible to somehow adapt the workflow of Working Effectively with Legacy Code, building islands of higher quality code, using the AI to help reconstruct developer intention and business rules and building seams and unit tests for the target modules?

AI doesn't necessarily have to increase your throughput; it can also serve as a flexible exploration and refactoring tool that supports either later hand-crafted code or agentic implementation.

deadbabe 32 minutes ago||
I don’t think you truly captured the worst part:

There comes a realization, to many engineers' horror, that AI won't be able to save them and they will have to manually comprehend and possibly write a ton of code by hand to fix major issues, all while upper management is breathing down their necks, furious as to why the product has become a piece of shit and customers are leaving for competitors.

The engineers who sink further into denial thrash around with AI, hoping they are a few prompts or orchestrations away from everything being fixed again.

But the solution doesn’t come. They realize there is nothing they can do. It’s over.

baddash 5 hours ago||
I've set a few rules for working with coding agents:

1. If I use a coding agent to generate code, it should be something I am absolutely confident I can code correctly myself given the time (gun to my head test).

2. If it isn't, I can't move on until I completely understand what it is that has been generated, such that I would be able to recreate it myself.

3. I can create debt (I believe this is being called Cognitive Debt) by breaking rule 2, but it must be paid in full for me to declare a project complete.

Accumulating debt increases the chances that code I generate afterwards is of lower quality, and it also feels like the debt is compounding.

I'm also not really sure how these rules scale to serious projects. So far I've only been applying these to my personal projects. It's been a real joy to use agents this way though. I've been learning a lot, and I end up with a codebase that I understand to a comfortable level.

jimsojim 5 hours ago||
While this is a legitimate set of rules to follow for maintaining code sanity and a solid mental model of how a codebase may grow, it’s always challenging to stick to them in a workplace where expectations around delivery speed have changed drastically with the onset of AI. The sweet spot lies in striking a balance between staying connected to the codebase and not becoming a limiting factor for the team at the same time.
baddash 4 hours ago||
That's kind of what I figured, sadly. I haven't experienced it personally yet since I got let go from my last job about 14 months ago, but it makes so much sense given how management is so willing to sacrifice quality for speed.
jimsojim 3 hours ago||
Another frustrating thing that has emerged from this is where managers “vibe code” half-baked ideas for a couple of hours and then hand it off as if they’ve meaningfully contributed to the implementation. Suddenly you’re expected to reverse engineer incoherent prompts, inconsistent code, and random abstractions that nobody fully understands.

In their mind they’ve already done the “architectural heavy lifting” and accelerated the team. More often than not it just adds cognitive overhead where you spend more time deciphering and cleaning up garbage than actually building the thing properly from scratch.

6LLvveMx2koXfwn 2 hours ago|||
I am lucky to have never worked in a team where my manager wouldn't expect strong push back in this scenario. Many of the corporate environments described on here seem dystopian, this included.
KronisLV 1 hour ago|||
Vouching for this comment because my friend confided in me a week ago that her manager also does this and is like “oh yeah, here’s 80% done, you just do the rest so we can ship it” when a large part of it is slop that needs to be rewritten, due to not enough guidance and pushback during generation.
eCa 1 hour ago||
That’s when you ask it to write tests to good coverage, and then have it reimplement everything with the tests still passing…
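A minimal sketch of that approach, sometimes called characterization testing (the function names and numbers below are made up for illustration): pin the current behavior with tests, then check the rewrite against the same tests.

```python
def legacy_discount(price, code):
    # Imagine this is the messy generated code we want to replace.
    if code == "VIP":
        return price - price * 10 // 100
    return price

# Characterization tests: record what the code does today, warts and all.
assert legacy_discount(100, "VIP") == 90
assert legacy_discount(100, "NONE") == 100

def rewritten_discount(price, code):
    """Clean reimplementation; it must keep the pinned tests passing."""
    percent = 10 if code == "VIP" else 0
    return price * (100 - percent) // 100

assert rewritten_discount(100, "VIP") == 90
assert rewritten_discount(100, "NONE") == 100
```

Note this only pins behavior; it says nothing about whether that behavior was right in the first place.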
KronisLV 27 minutes ago||
Unless the tests are written against logic that is in and of itself subtly wrong, and even the structure of the code and what methods exist is wrong - so even your unit tests would have to be rewritten, because the units are structured badly.

It’s a valid direction to look in; it just doesn’t address the root issue of throwing slop over the wall while also having unrealistic expectations due to not knowing any better.

skydhash 4 minutes ago||
Yep. It’s very healthy to be suspicious of code. Any code. Whether generated or not. That’s where the bugs are.

If there’s one thing that’s disturbing about AI proponents, it’s how trusting they are of code. One change in the business domain and most of the code may turn from useful to actively harmful. Which you then have to rewrite. Good luck doing that well if you’re not really familiar with the code.

IanCal 2 hours ago|||
This is fine if it’s more enjoyable for you, that’s what’s important in personal projects most of the time.

But we don’t apply the same standards to dependencies, the work of colleagues, or external services: all the layers down to the silicon.

Why is AI suddenly different?

We just have to do this by risk and reward. What’s the downside if it’s wrong, and how likely is an error to be found in testing and review? What is the benefit gained if it’s all fine? This is the same for libraries and external services.

A complex financial set of rules in a non-updatable crypto contract with no testing?

A viewer for your internal log data to visualise something?

wccrawford 48 minutes ago|||
AI is different because it's a tool, and the user of the tool is responsible for the work performed.

An outsourced developer isn't a "tool". They're a human being, and responsible for their actions. They're being paid, and they either act responsibly or they get replaced.

A vibe coder is a human using a tool. The human is responsible for code quality, and if it's not good enough, they need to keep using the tool to make it better. That means understanding the tool's output.

If an artist used Photoshop to create a billboard ad that was ugly, they don't get to blame Photoshop. They have to keep using the tool until their output is good.

marginalia_nu 1 hour ago|||
It is and has always been immensely helpful to understand what you are doing in any context.

There are some programmers who treat the job as just plumbing together what is to them completely incomprehensible black boxes, who treat the computer as a mystery machine that just does things "somehow", but these programmers will almost always be hacks that spend their entire career producing mediocre code.

There are things such a programmer can build, but they are very limited by their lack of in depth understanding, and it is only a tiny fraction of what a more competent programmer can put together.

To get beyond being a hack, you need to understand the entire stack, including the code that you didn't write, including libraries, frameworks, and the OS, and including the hardware, the networking layers, and so forth. You don't have to be an expert at these things by any means, but you do need to understand them and be comfortable treating them as transparent boxes that you may have to go in and fiddle with at some point to get where you need to go. Sometimes you need to vendor a dependency and change it. Sometimes you need to drop it entirely and replace it with something more fit for purpose you built yourself.

brabel 4 hours ago|||
I was trying to follow similar rules, until one day I had to solve a hard mathematical problem. Claude is a PhD-level mathematician; I am not. I, however, know exactly the properties of the desired solution and how to test that it's correct. So I decided to keep Claude's solution over my basic, naive one. I mentioned that in the pull request, and everyone agreed that was the right call. Would you open exceptions like that in your rules? What if AI becomes so much better at coding than you, not just at doing advanced mathematics? Would you then stop writing code by hand completely, since that would be the less optimal option, despite losing your ability to judge the code directly at that point (though, as in my example, you can still judge tests, hopefully)? I think these are the more interesting questions right now.
Jweb_Guru 4 hours ago|||
> Claude is a phd level mathematician

Unfortunately, it is not, and many of its attempts at mathematical proofs have major flaws. You shouldn't trust its proofs unless you are already able to evaluate them, which I think is pretty much all the OP is saying.

adrianN 3 hours ago|||
To be fair, many of the proof attempts that mathematicians make also have major flaws. Most get caught before getting published.
IanCal 2 hours ago|||
Trust isn’t a binary, and I can trust things I don’t understand enough that I can use them. OP was talking about needing to understand, which is quite a bit above the level of being able to validate enough to use for a task.
auggierose 3 hours ago||||
I definitely wouldn't put math in my code I didn't understand just because Claude says so. I am not astonished that everyone agreed, that's why shit is going to hit the fan pretty badly pretty soon due to AI coding.

There is one exception to this: If the AI also delivers the proof of why the math is correct, in a machine-checked format, and I understand the correctness theorem (not necessarily its proof). Then I would use it without hesitation.

hennell 2 hours ago|||
I always found it weird, when helping people with Excel formulas, how few people even try to check maths they don't understand, let alone try to understand it.

I struggle to remember even relatively simple maths like working out "what percentage of X is Y" so if I write a formula like that I'll put in some simple values like 12 and 6 or 10,000 and 2,456 just to confirm I haven't got the values backwards or something. I've been shown sheets where someone put a formula in that they don't understand, checked it with numbers they can't easily eyeball and just assumed it was right as it's roughly in their ball park / they had no idea what the end result should be.
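That eyeball check is cheap to script, too. A throwaway sketch (the function here is hypothetical, not anyone's actual spreadsheet formula):

```python
def percentage_of(x, y):
    """What percentage of x is y?"""
    return y / x * 100

# Values simple enough to eyeball: 6 is half of 12, so the answer must be 50.
print(percentage_of(12, 6))   # -> 50.0

# Getting the values backwards yields 200.0 instead, which the
# eyeball check catches immediately.
print(percentage_of(6, 12))   # -> 200.0
```

The whole point is that the test inputs are chosen so a human can predict the answer without the formula.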

Then again, I've also seen sheets where a 10% discount column always had a larger number than the standard price, so even obviously wrong things aren't always checked.

ncruces 2 hours ago|||
I don't disagree, but whoever never put math they don't fully understand in their code gets to throw the first stone.

I've reached solutions by trial and error too, and tried to rationalize them later, quite a few times. And it's easier to rationalize a working solution, however adversarial you claim to be in your rationalization.

I don't see using gen AI for the (not so) “brute force” exploration of the solution space as that different from trial and error plus after-the-fact rationalization.

layer8 2 hours ago||||
How did you test that the solution is correct? Is the set of possible inputs a low-ish finite number?

Normally with mathematical problems you have to prove the solution correct. Testing is not sufficient, unless you can test all possible inputs exhaustively.

topranks 2 hours ago||||
How do you know what it spat out is correct though?

If it’s beyond our ability to review and we blindly trust it’s correct based on a limited set of tests… we’re asking for trouble.

boron1006 3 hours ago||||
> Claude is a phd level mathematician, I am not

I’m going to guess that this is Gell-Mann amnesia more than anything, and it’s going to get a lot of organizations into a lot of weird places.

imtringued 2 hours ago|||
You do realize you can ask Claude about the things you don't understand?

"PhD level" just means you finished a bachelor's and master's degree and are now doing a bit of original research as an employed research assistant.

Claude isn't "PhD level" anything; that claim shows a complete lack of understanding. Claude has read every single textbook in existence, so it can surface knowledge locked away in book chapters that people haven't read in years (nobody really reads those dense books on niche topics from start to finish).

Since Claude has infinite patience, you can just keep asking until you get it.

dathanb82 5 hours ago|||
I’ve also heard it being called “comprehension debt,” which I like a little more because I think it’s more precise: the specific debt being accrued is exactly a lack of comprehension of the code.
cassianoleal 2 hours ago|||
I think it’s both in fact.

Comprehension debt just sounds like there are things you don’t (yet) understand.

Cognition debt means your lack of understanding compounds and the cognition “space” required to clear it increases accordingly.

An increasing comprehension debt that can be paid off one bit at a time within reasonable cognition space takes linear time to clear.

Cognition debt takes exponential time to clear the more of it you have. If it reaches a point where you simply don’t have the space for the cognition overhead required to understand the problem, you probably need to start over from your specifications.

layer8 2 hours ago||||
I like that too. However, “cognitive debt” points to the possibility of cognitive overload, that the code can become so complex and inscrutable that it may become impossible to comprehend. “Comprehension debt” sounds a bit weaker in that respect, that it’s just a matter of catching up with one’s comprehension.
baddash 5 hours ago||||
Yeah I like that better too, gonna start using that
kortilla 2 hours ago|||
“You can outsource your thinking, but not your understanding.”
i_love_retros 9 minutes ago|||
It's not worth fighting it at work. If the idiots you work for want everything vibe coded and delivered at 5 * 2025 speed then just vibe code and try to leave the company ASAP. That's where I am right now. Of course I might end up somewhere just as ridiculous or maybe not be able to even find another job. Shitty times we live in right now.
TranquilMarmot 5 hours ago|||
This is great until the "gun to your head" is your skip-level manager demanding that a feature be implemented by the end of the week, and they know you can just "generate it with AI" so that timeline is actually realistic now whereas two years ago it would have required careful planning, testing, and execution.
fransje26 2 hours ago|||
Well, that's nice.

Your manager is unknowingly helping you create a form of job security for yourself, with all the technical debt and bugs being accumulated.

He might not understand it, and it might not be the type of work you want to do, but someone is going to have to fix those issues. And the longer they wait, the bigger the task gets.

2ndorderthought 20 minutes ago|||
The question is: is it a job you actually still want once the poo pile reaches critical mass, you are the only one with a shovel, and the deadline is "yesterday"?
anygivnthursday 36 minutes ago|||
That isn't new, though. Managers often pushed unrealistic timelines and showed a lack of care about tech debt well before vibe coding; it's just that the timelines were different, and the magnitude will be bigger this time. But we also have LLMs to help clean it up faster, I guess.
wccrawford 37 minutes ago||||
If the manager is unreasonable, you were always going to have a problem with them eventually. Nothing you can do will fix this.

If the manager is reasonable, you can explain to them that there isn't time to check the work of the AI, that it frequently makes obscure mistakes that need to be properly checked, and that checking takes time.

At this point, if they still insist you just ship the AI's work as-is, they've made a decision that is their fault. You've done what you can.

And when the shit hits the fan, we're back to whether they're reasonable or not. If they are, you explained what could happen and it did. If they force responsibility on you, they aren't reasonable and were never going to listen to you. That time bomb was always going to go off.

nertirs3 4 hours ago|||
I hate this current trend of managers deciding what tools developers have to use. Hopefully it ends soon.
nikau 3 hours ago||
Time will tell whether outages and defect-resolution times skyrocket, or whether AI can deal with it.
adrianN 3 hours ago||
Does that matter that much in practice? I bet lots of customers are okay with software that crashes 10x as much if it costs 10x less. There already is a ton of shitty software that still sells.
throwaway2027 1 hour ago|||
I already mostly followed those rules with StackOverflow, before AI.
whitefang 4 hours ago|||
I agree with this, though it also depends on the nature of the project.

Had a project idea which I coded with the help of AI, and it became quite large, to the point where I was starting to have uncharted areas in the code. Mostly because I reviewed it too shallowly or moved too fast.

It was a good thing, as that project never floated, but if I were to do such a thing on my breadwinning project, I would lose the joy.

gritzko 4 hours ago|||
I just had a Claude episode. Instead of trying to fix the bug, it edited the data to hide the bug in the sample run. This kind of BS behavior is not rare. Absolutely, if you do not understand every bit of what's going on, you end up with a pile of BS.
dzhiurgis 3 hours ago||
That’s why I love Gemini - none of this bullshit ever happens.
gritzko 1 hour ago||
I do not think Gemini can relieve a developer from knowing what he is doing.
2ndorderthought 18 minutes ago||
There's some really weird and unusual posts glazing Google in here today. Bot accounts out in force!
bmitc 4 hours ago||
This is about how I use it. I initially use it to carve out an architecture and iterate through various options. That saves a lot of time for me having to iterate through different language features and approaches. Once I get that, I have it scaffold out, and I go in and tidy things up to my personal liking and standards. From there, I start iterating through implementations. I generally have been implementing stuff myself, but I've gotten better at scaffolding out functions/methods through code instead of text. Then I ask it to finish things off. That falls into your first category of letting it implement stuff that I already know I could do. Not sure if it's faster. But it's lower cognitive load for me, since I can start thinking about the next steps without being concerned about straightforward code.

This all works pretty great. Where it starts going off the rails is if I let it use a library I'm not >=90% comfortable with. That's a good use of these tools, but if I let it plow through feature requests, I end up accumulating debt, as you pointed out.

For my uses, I'm still finding the right balance. I'm not terribly sure it makes me faster. What I do think it helps with is longer focused sections because my cognitive load is being reduced. So I can get more done but not necessarily faster in the traditional sense. It's more that I can keep up momentum easier, which does deliver more over time.

I'm interested in multi agent systems, but I'm still not sure of the right orchestration pattern. These AI tools still can go off the rails real quick.

20k 2 hours ago||
I always find these kinds of posts interesting, to compare the velocity that people seem to get with AI vs what I get by just coding by hand.

Coincidentally, I've been working on a project for about 7 months now: it's a 3D MMO. Currently it's playable, and people are having fun with it - it has decent (but needs work) graphics, and you can easily cram a few hundred people into the server. The architecture is pretty nice, and it's easy to extend and add features onto. Overall, I'm very happy with the progress, and it's on track to launch after probably a year's worth of development.

In 7 months of vibe coding, OP failed to produce a basic TUI. Maybe the feature velocity feels high, but this seems unbelievably slow for building a basic piece of UI like this - it's the kind of thing you could knock out in a few weeks by hand. There are tonnes of TUI libraries that are high quality at this point, and all you need to do is populate some tables with whatever data you're looking for. It's surprising that it's taking so long.

There seems to be a strong bias where using AI feels like you're making a lot of progress very quickly, but compared to manual coding it often seems to be significantly slower in practice. This seems to be backed up by the available productivity data, where AI users feel faster but produce less.

chromadon 1 hour ago||
This struck me as odd too. 7 months? It wouldn’t take that long to write it in a new language.

Another thing I don’t see mentioned is code quality.

Vibe-coded code bases are an excellent example of why LLMs aren't very good at writing code. It will often correct its own mistakes only to make them again immediately after, and its pattern use is inconsistent.

Recently Claude has been making some "interesting" code style choices, not in line with the codebase it's currently supposed to be working on.

echelon 2 minutes ago|||
This was made in two days of vibe coding. It has flaws, but it's impressive as hell:

https://tinyskies.vercel.app/

ZaoLahma 1 hour ago|||
> There seems to be a strong bias where using AI feels like you're making a lot of progress very quickly, but compared to manual coding it often seems to be significantly slower in practice.

This metric highly depends on who uses the AI to do what, where strong emphasis is on "who" and "what".

In my line of work (software developer) the biggest time sinks are meetings where people need to align proposed solutions with the expectations of stakeholders. From that aspect AI won't help much, or at all, so measuring the difference of man hours spent from solution proposal to when it ends up in the test loops with and without AI would yield... very disappointing results.

But for troubleshooting and fixing bugs, or actually implementing solutions once they have been approved? For me, I'm at least 10x'ing myself compared to before I was using AI. Not only in pure time, but also in my ability to reason around observed behaviors and investigating what those observations mean when troubleshooting.

But I also work with people who simply cannot make the AI produce valuable (correct) results. I think if you know exactly what you want and how you want it, AI is a great help. You just tell it to do what you would have done anyway, and it does it quicker than you could. But if you don't know exactly what you want, AI will be outright harmful to your progress.

21asdffdsa12 1 hour ago|||
It seems to be baked into the GPT: producing text, i.e. producing language and code, is its life and purpose. So the whole system is inherently biased towards "roll your own everything" unless spoken to in a "senior-dev" language that prevents these repetitions.
thot_experiment 2 hours ago||
It's more complex than that. I think the reality is that there's a lot of code that's just not that deep, bro. I have some purely personal projects with components that I don't understand anymore; I wrote that shit by hand, they still work, but I haven't touched them in years. There's a lot of code like that that AI can write for me: the stuff I would forget about even if I wrote it by hand. I think you have to have discipline in its use; it's a tool like any other.

AI, and especially agentic AI, can make you lose situational awareness over a codebase, and when you're doing deep work that SUUUUCKS. But it's not useless, you just have to play to its strengths. Though my favorite hill to die on is telling people not to underestimate its value as autocomplete. Turns out 40 gigabytes of autocomplete makes for a fucking amazing autocomplete. Try it with llama.vim + qwen coder 30b; it feels like the editor is reading your mind sometimes, and the latency is so low.

snowe2010 8 hours ago||
> The other change is simpler: I'm doing the design work myself, by hand, before any code gets written. Not a vague doc. Concrete interfaces, message types, ownership rules.

That’s the hard part of coding. If you have an architecture then writing the code is dead simple. If you aren’t writing the code, you aren’t going to notice when you architected an API that allows nulls but your database doesn’t. Or that it does allow them, but there’s some other small issue you never accounted for.
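A minimal Rust sketch of the kind of mismatch described here (all names are hypothetical, invented for illustration): the API contract makes a field nullable while the database schema does not, and the bug only surfaces where the two layers meet.

```rust
// API payload: `email` is optional per the (hypothetical) API contract.
struct SignupPayload {
    name: String,
    email: Option<String>,
}

// DB row mirroring an assumed `email TEXT NOT NULL` column.
struct UserRow {
    name: String,
    email: String, // never null on the DB side
}

// The conversion is where the two contracts collide. Making it fallible
// forces the mismatch to be handled here instead of at insert time.
fn to_row(p: SignupPayload) -> Result<UserRow, String> {
    match p.email {
        Some(email) => Ok(UserRow { name: p.name, email }),
        None => Err("email is required by the database schema".into()),
    }
}

fn main() {
    let ok = to_row(SignupPayload { name: "a".into(), email: Some("a@x".into()) });
    assert!(ok.is_ok());
    let bad = to_row(SignupPayload { name: "b".into(), email: None });
    assert!(bad.is_err());
}
```

The point is that noticing the need for this conversion at all requires actually reading both layers, which is exactly what gets skipped when the code is generated.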

I do not know how you can write this article and not realize the problem is the AI. Not that you let it architect, but that you weren’t paying attention to every single thing it does. It’s a glorified code generator. You need to be checking every thing it does.

The hard part of software engineering was never writing code. Junior devs know how to write code. The hard part is everything else.

Philip-J-Fry 3 hours ago||
Yes, I think there are two kinds of developer: those who think the code is the hard part, and those who don't.

The developers who think coding is hard are the ones that absolutely love AI coding. It's changed their world because things they used to find hard are now easy.

Those who think coding is easy don't have such an easy time, because coding to them is all about the abstractions, the maintainability, and the extensibility. They want to lay sensible foundations to allow the software to scale. This is the hard part. When you discover the right abstractions everything becomes relatively easy, but getting there is the hard part. These people find AI coding a useful tool, but not the crazy amazing magical tool it is to the people who struggle with coding.

The OP is definitely in the second camp since they could spot and realise the shortcomings of the AI. They spotted the problem, and that problem is that the AI can't do the hard bit.

jfim 2 hours ago|||
I'd say there's another camp: the camp of people who know that code isn't the hard part, but that it's still time consuming to write code. AI coding is pretty useful for that, when you can nail the design but you just need a set of hands to implement it.
Philip-J-Fry 1 hour ago||
I'm classing that as the second camp. Because you don't find it hard to do, it's just time consuming. It means you still know what you're doing and you're just using AI as a tool to accelerate your delivery. That's the optimal way to use in my experience if you want to actually deliver well architected software.
seer 2 hours ago||||
But isn’t AI doing the same thing to project management as to coding?

PMs can now cross reference and organize tickets with just a few keystrokes. Organisational knowledge, business knowledge, design systems and patterns, etc all of it is encoded in LLM consumable artefacts. For PMs it is the same switch - instead of having to do it by hand you direct lower level employees to handle the details and inconsistencies and you just do vibe and vision.

When all of the pieces successfully connect and execute reliably, what is left for humans to do? Just direct and consume?

And AI companies with their huge swaths of data are soon gonna be in the situation of being able to do the directing themselves

jcgrillo 40 minutes ago||
Such a person is just pushing a giant pile of cleanup work onto their colleagues. Unless they actually checked, the "cross references" are probably wrong in places or just entirely made up. Lower level employees by definition don't have the experience to correct the more subtle inconsistencies, so you've basically just constructed a high pass filter that lets only the worst failures through. Moreover, you're absolutely guaranteed to lose the respect of those lower level employees--forcing someone else to clean up your sloppy work is just cruel, and people resent being treated cruelly.
byzantinegene 2 hours ago|||
this pretty much sums up what i feel about AI currently. It made my life significantly easier for most tasks I already breeze through, yet tasks I used to struggle with are still the equally difficult
mikepurvis 7 hours ago|||
I agree with what you're saying, but I think we do have a problem right now with definitions where there's a lot of people basically getting supercharged tab completions or running a chatbot or two in a parallel pane, but still clearly reviewing everything; and on the other side of things is freaking Steve Yegge pitching a whole new editor that lets you orchestrate a dozen or more agents all vibing away on code you're apparently never going to read more than a line or two of: https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...

The first group are still thinking fairly deeply about design and interfaces and data structures, and are doing fairly heavy review in those areas. The second group are not, and those are the ones that I find a bit more worrisome.

RossBencina 7 hours ago|||
> The first group are still thinking fairly deeply about design and interfaces and data structures, and are doing fairly heavy review in those areas.

I can't speak for others, but I'd go further and say that LLMs allow me to go deeper on the design side. I can survey alternative data structures, brainstorm conversationally, play design golf, work out a consistent domain taxonomy and from there function, data structure and field names, draft and redraft code, and then rewrite or edit the code myself when the AI cost/benefit trade off breaks down.

barrell 4 hours ago||||
That’s a little bit of a No True Scotsman. Yes there are people who do not review anything; but even people who are reviewing every line from an LLM do not have the same understanding as someone who wrote it themselves.

I’m not making a judgement call about which is better, but it was widely accepted in tech before the advent of LLMs that you just fundamentally lack a sense of understanding as a reviewer vs an author. It was a meme that engineers would rather just rewrite a complicated feature than fix a bug, because understanding someone else’s code was too much effort.

bmitc 4 hours ago||||
> and on the other side of things is freaking Steve Yegge pitching a whole new editor that lets you orchestrate a dozen or more agents all vibing away on code you're apparently never going to read more than a line or two of

I find it useful to not listen to people who just talk.

imtringued 1 hour ago||||
That blog post is surreal. It's like cryptocurrencies and the whole web3 nonsense. Cryptocurrencies basically don't work, so there have been a hundred aimless attempts at fixing self inflicted problems caused by deficiencies of cryptocurrencies with no actual goal that has any impact on the real world.

It's the same thing here. AI has dropped the cost of software development, so developers are now fooling themselves into producing low or zero value software. Since the value of the software is zero or near zero, it doesn't really matter whether you get it right or not. This freedom from external constraints lets you crank up development velocity, which makes you feel super productive, while effectively accomplishing less than if you had to actually pay a meaningful cost to develop something.

Like, what is the purpose of Gas Town? It looks to me like the purpose of Gas Town is to build Gas Town.

skydhash 6 hours ago|||
> The first group are still thinking fairly deeply about design and interfaces and data structures, and are doing fairly heavy review in those areas

I worry about the first group too, because interfaces and data structures are the map, not the territory. When you create a glossary, it is to compose a message that transmits a specific idea. I invariably find that people who focus that much on code often forget the main purpose of the program in favor of small features (the ticket). And that has accelerated with LLM tooling.

I believe most of us who are not so keen on AI tooling are always thinking about the program first, then the various parts, then the code. If you focus on a specific part, you make sure that you have well defined contracts with the other parts that guarantee the correctness of the whole. If you need to change a contract, you change it with regard to the whole thing, not the specific part.

The issue with most LLM tools is that they’re linear. They can follow patterns well, and agents can have feedback loops that correct them. But contracts are multi-dimensional forces that shape a solution. That solution emerges more like a collapsing wave function than a linear prediction.

seer 4 hours ago|||
I’ve noticed that agents almost always fail at the planning vs execution stage.

I follow the plan -> red/green/refactor approach and it is surprisingly good, and the plans it produces all look super well reasoned and grounded, because the agent will slurp all the docs and forums with discussions and the like.

Trouble is, once it starts working there is inevitably a point where the docs and the implementation differ - either some combination of tools that haven't been used in that way, some outdated docs, or just plain old bugs.

But if the goals of the project/feature are stated clearly enough it is quite capable of iterating itself out of an architectural dead end, that is if it can run and test itself locally.

It goes as deep as inspecting the code of dependencies and libraries and suggesting upstream fixes etc., all things that I would personally do in a deep debugging session.

And I’m super happy with that approach, as I’m more directing and supervising rather than doing the drudgery of it.

Trouble is, a lot of my teammates _don’t_ actually go this deep when addressing architectural problems; their usual modus operandi is “escalate to the architect”.

This will not end well for them in the long run, I feel, but I'm not sure what they can do about it themselves - the window of being able to run and understand everything seems to be rapidly closing.

Maybe that’s not super bad - I don’t know exactly what the compiler is doing to translate things to machine code, and I definitely don’t get how the assembly itself is executed to produce the results I want at scale - that is a level of magic and wizardry I can only admire (branch prediction and caching on modern CPUs are super impressive - like, how is all of this even producing correct responses reliably at such a scale…)

Anyway - maybe all of this is ok. We will build new tools and frameworks to deal with all of this; human ingenuity and the desire for improvement, measured in likes, references or money, will still be there.

tripledry 4 hours ago|||
This is the only way for me to use Agents without completely hating and failing at it. Think about the problem, design structures and APIs and only then let AI implement it.
staplers 7 hours ago|||

  You need to be checking every thing it does.
This is what seems to be lost on so many. As someone with relatively little code experience, I find myself learning more than ever by checking the results and what went right/wrong.

This is also why I don't see it getting better anytime soon. So many people ask me "how do you get your claude to have such good output?" and the answer is always "I paid attention and spotted problems and asked claude to fix them." And it's literally that simple but I can see their eyes already glazing over.

Just as google made finding information easier, it didn't fix the human element of deciphering quality information from poor information.

krilcebre 3 hours ago|||
How do you know what good output should look like with little code experience?
brabel 4 hours ago|||
Reading code looking for errors is hard to do well over a large amount of code. A better approach is to ensure tests cover all the important cases and many edge cases. Looking at the code may still be a good idea, but mostly to check the design. I think that once you get Claude to test the code it writes well, trying to find errors by reading the code is a waste of time. I’ve made the mistake of thinking Claude was wrong many times despite the tests passing, just to be humbled by breaking the tests with my “improvements”!
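A minimal sketch of that approach (the function and its behavior are hypothetical, not from the thread): rather than eyeballing a generated function line by line, pin its behavior down with tests that hit the edges where generated code tends to go wrong.

```rust
// Hypothetical function an agent might have generated: parse a TCP port,
// tolerating surrounding whitespace and rejecting port 0.
fn parse_port(s: &str) -> Option<u16> {
    s.trim().parse::<u16>().ok().filter(|&p| p != 0)
}

fn main() {
    // Happy path plus the edge cases that expose subtle bugs.
    assert_eq!(parse_port("8080"), Some(8080));
    assert_eq!(parse_port(" 443 "), Some(443)); // whitespace tolerated
    assert_eq!(parse_port("0"), None);          // reserved port rejected
    assert_eq!(parse_port("65536"), None);      // overflows u16
    assert_eq!(parse_port("abc"), None);        // not a number
}
```

If the tests encode the contract, a wrong "fix" breaks them immediately, which is the humbling experience described above.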
skydhash 7 hours ago||
And when you get familiar with the other parts, you realize that writing code is the most enjoyable one. More often than not, you’re either balancing trade-offs or researching what factors you missed in the previous balancing. When you get to writing code, it’s with a sigh of relief, as it means you understand the problem well enough to try a possible solution.

You can skip that and go directly to writing code. But that means you’ve replaced a few hours of planning with a few weeks of coding.

jcgrillo 23 minutes ago||
Very well said. When I'm working on a hard problem I'll often spend a few weeks sweating details like algorithms, API shapes, wire formats, database schemas, etc. These things are all really easy to change while they're just in a design document. Once you start implementing, big sweeping edits get a lot more difficult. So better to frontload as much of that as possible in the design phase. AI coding agents don't change this dynamic. However all that frontloaded work pays off big when it does come time to implement, because the search space has been narrowed considerably.
larusso 10 minutes ago||
I ran into the same issues quite early with my Rust pet projects. Single structs with tons of Option<T> and validation methods, enums for type fields combined with, say, optional fields in the same layer, so accessor methods all return Option<T>.

I now add a long list of instructions on how to work with the type system, and some do’s and don’ts. I don’t see myself as a vibe coder. I actually read the damn code and instruct the AI to get to my level of taste.
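A hypothetical sketch of the refactor being described (invented types, not from any real project): instead of one struct with a type field plus a pile of Option<T>s and a validation method, push the variant-specific data into the enum so the invalid combinations can't be constructed at all.

```rust
// Before (the anti-pattern): every accessor returns Option<T> and a
// validate() method has to police which fields go with which kind.
//
// struct Event { kind: String, url: Option<String>, code: Option<u16> }

// After: the type system carries the invariant, so no validation method
// and no Option-returning accessors are needed.
enum Event {
    Redirect { url: String },
    Error { code: u16 },
}

impl Event {
    fn describe(&self) -> String {
        match self {
            Event::Redirect { url } => format!("redirect -> {url}"),
            Event::Error { code } => format!("error {code}"),
        }
    }
}

fn main() {
    let e = Event::Redirect { url: "https://example.com".into() };
    assert_eq!(e.describe(), "redirect -> https://example.com");
}
```

This is the "make invalid states unrepresentable" style that LLMs tend to miss without explicit instructions, since the flat-struct-plus-Options shape is far more common in training data.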

plastic041 8 hours ago||
Title says

> back to writing code by hand

But what they are doing is

> doing the __design work__ myself, by hand, before any code gets written.

So... Claude still is generating the code I guess?

And seriously, I can't understand that they thought their vibe-coded project worked fine, and even bought a domain for it, without ever looking at the source code it generated, FOR 7 MONTHS??

0xpgm 3 hours ago||
In short, it is simply a click-bait title.

And the goal of the article is to draw attention to their project.

kdheiwns 1 hour ago||
It's the same thing every time.

> Claude (c) by Anthropic (R) is the best thing since sliced bread and I'm Lovin' It(tm)! Here's a breakdown of you too can live a code free life for 10 easy payments of $99.99 a month if you subscribe now!

> Step one in your journey to code free life: code the whole damn project and put it together yourself

It's so much fluff and baloney and every single article is identical. And every single one is just over the top praise of Claude that doesn't come off as remotely authentic. There's always mentions of Claude "one shotting"(tm) something.

dewey 7 hours ago||
I bought domains for projects minutes after the idea.

I don’t think it’s that weird to not look at the code if it’s a side project and you follow along incrementally via diffs. It’s definitely a different way of working but it’s not that crazy.

bayarearefugee 5 hours ago||
> I don’t think it’s that weird to not look at the code if it’s a side project and you follow along incrementally via diffs.

Its not weird to not look at the code, as long as you're looking at the code? (diffs?)

Uh, ok

retsibsi 4 hours ago||
The article explicitly says that the author looked at the diffs; it distinguishes this from "sitting down and actually reading the code", which they didn't do. So when plastic041 says the author spent 7 months vibe coding "without ever looking at source code", it's not unreasonable for dewey to assume that "looking at source code", in this context, actually means something stronger and excludes just looking at the diffs.
IanCal 2 hours ago||
I feel like I’m watching developers speed run project and product management learnings.

We’ve moved to seeing that specs are useful and that having someone write lots of wrong code doesn’t make the project move faster (lots of times devs get annoyed at meetings and discussions because it hinders the code writing, but often those are there to stop everyone writing more of the wrong thing)

We’ve seen people find out that task management is useful.

Now more I’m seeing talk of fully doing the design work upfront. And we head towards waterfall style dev.

Then we’ll see someone start naming the process of prototyping, then I’m sure something about incremental features where you have to manage old vs new requirements. Then talk of how the customer really needs to be involved more.

Genuinely, look at what projects and product managers do. They have been guiding projects where the product is code yet they are not expected to read the code and are required to use only natural language to achieve this.

meetingthrower 2 hours ago|
So right. All these guys have never been managers. Do you think humans don't write things that break? Or that teams sometimes take a wrong path and burn a week of work? Or months? Well now you can experience all of that in 30 minutes of vibecoding. As a former tech product manager, it feels EXACTLY the same.
yakshaving_jgt 1 hour ago||
Except it isn't the same because the cost is different, which allows discovery that we couldn't afford previously.
meetingthrower 30 minutes ago||
Yes x 1000. I find it amazing.
neals 17 minutes ago||
I'm moving very slowly into AI coding. I'm not comfortable enough to let Claude do anything big. What I do is this: I set out the general architecture, create function stubs, and add comments on how to implement things. Then I let Claude do 10 minutes of work and I check everything and refactor some of it. It saves me the boring implementation stuff (like, is this an array, move an index here or there, check whether whatever exists or not, put it in the db).
xantronix 8 hours ago|
So you're not actually writing code by hand? I'm very confused by the difference between the title and the conclusion here.
rane 6 hours ago||
The point was to come up with a sensationalistic headline that HN eats up, so the post flies to the front page.
Towaway69 2 hours ago||
I wonder whether the title was generated/suggested by an AI?
More comments...