Posted by svara 23 hours ago
Ask HN: How is AI-assisted coding going for you professionally?
If you've recently used AI tools for professional coding work, tell us about it.
What tools did you use? What worked well and why? What challenges did you hit, and how (if at all) did you solve them?
Please share enough context (stack, project type, team size, experience level) for others to learn from your experience.
The goal is to build a grounded picture of where AI-assisted development actually stands in March 2026, without the hot air.
Tasks where, in the past, I've thought "if I had a utility to do x it would save me y time" (and then either started and gave up, or spent much longer than y) are now super easy. Create a directory, run claude, "create an app to do x". So simple.
Other areas of success have been just offloading the typing/prototyping. I know exactly what the code should look like, so I rarely run into issues.
Stack: Go, Python. Team size: 8. Experience: mixed.
I'm using a code review agent which sometimes catches a critical bug humans miss, so that is very useful.
Using it to get to know a code base is also very useful. Questions like "which functions touch this table" or "describe the flow of this API endpoint" are usually answered correctly. This is a huge time saver when I need to work on a code base I'm less familiar with.
For coding, agents are fine for simple, straightforward tasks, but I find the tools very myopic: they prefer very local changes, adding new helper functions all over the place even when such helpers already exist.
For harder problems I find agents get stuck in loops, and coming up with the right prompts and guardrails can be slower than just writing the code.
I also hate how slow and unpredictable the agents can be. At times it feels like gambling. Will the agent actually fix my tests, or fuck up the code base? Who knows, let's check in 5 minutes.
IMO the worst thing is that juniors can now come up with large change sets that look good at a glance but turn out to be fundamentally flawed, and it takes tons of time to review them.
I've become somewhat addicted to using coding agents, in the sense that I feel I can finally realize a lot of the fantasies about code cleanup and modernization I've had over the past decade, and also fulfill user requests, without spending a lot of time writing code and debugging. For the last few months I've been spending my weekends prompting and learning the ropes. I've been using GPT 5.x, and GPT 4 before that.
I've tried giving it both big cleanup tasks and big design tasks. It was OK but mentally very exhausting, especially as it tends to stick to my original prompt, which included a lot of known unknowns, even after I told it I'd settled on a design decision. Then I have to go over its generated code line by line and verify that decisions I had already rejected aren't slipping back into the code. In some instances I've had to tell it again and again that the code it's working on is greenfield and no backwards compatibility should be kept. In others I had to tell it not to touch the public API.
Also, a lot of things I take for granted aren't done, such as writing a detailed comment above each piece of code that exists because of a design constraint or an obscure legacy reason, even though I explicitly prompt it to do so.
Hand-holding it is a chore. It's like coaching a junior dev, on top of the 4 actual real-life junior devs sending me PRs to review each week. It's mentally exhausting. At least I know it won't take offense when I belittle its overly complicated code and bad design decisions (which I NEVER do when reviewing PRs from the actual junior devs, so in this sense I get something to vent my aggression on).
I have tried using it for 3 big tasks in the last 5 months. I shelved the first one (modernizing an ancient codebase written more than 20 years ago), as it still didn't work even after I spent about a week on it, and I can't spare any more time. The second one (getting another huge C# codebase to stop rebuilding the world on every compilation) seemed promising and in fact did work, but I ended up shelving it after discovering its solution broke auto-complete in Visual Studio. An MS bug, but still.
The 3rd big task is actually a user-facing one, involving a new file format, a managed reader and a backend writer. I gave it a more-or-less detailed design document. It went pretty OK, especially after I made the jump to GPT 5.2 and now 5.4. Both still tended to hallucinate too much once the code size passed a certain threshold.
I don't use it for bug fixing or small features, since those require a lot of explaining and it's not worth it. Our system has a ton of legacy requirements and backwards compatibility guarantees that would take many days to specify properly.
I became disillusioned last week. It's all for the best: now that my addiction has lessened, maybe I can have my weekends back.
An example from last week:
Me: Do this.
AI: OK.
<Brings me code that looks like it accomplishes the task but after looking at it it’s accomplishing it in a monkey’s paw/spiteful genie kind of way.>
Me: Not quite, you didn’t take this into account. But I made the same mistake while learning so I can pull it back on track.
AI: OK
<It’s worse, and why are all the values hardcoded now?>
…
40 minutes go by. The simplest, smallest bit of code is almost right.
Me: Alright, abstract it into a Sass mixin.
AI: OK.
<Has no idea how to do it. It installed Sass, but with no understanding of what it’s working on so the mixin implementation looks almost random. Why is that the argument? What is it even trying to accomplish here?>
At which point I just give up and hand code the thing in 10 minutes.
It would be neat if AI worked. It doesn’t.