
Posted by svara 23 hours ago

Ask HN: How is AI-assisted coding going for you professionally?

Comment sections on AI threads tend to split into "we're all cooked" and "AI is useless." I'd like to cut through the noise and learn what's actually working and what isn't, from concrete experience.

If you've recently used AI tools for professional coding work, tell us about it.

What tools did you use? What worked well and why? What challenges did you hit, and how (if at all) did you solve them?

Please share enough context (stack, project type, team size, experience level) for others to learn from your experience.

The goal is to build a grounded picture of where AI-assisted development actually stands in March 2026, without the hot air.

328 points | 522 comments
ed_elliott_asc 16 hours ago|
It is definitely making me more productive.

Tasks where, in the past, I thought “if I had a utility to do x it would save me y time” and then either gave up or spent much longer than y building it are now super easy: create a directory, run claude “create an app to do x”. So simple.

PerryStyle 17 hours ago||
I work in HPC and I’ve found it very useful in creating various shell scripts. It really helps if you have linters such as shellcheck.
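For example (my own minimal illustration, not from the thread), the classic issue shellcheck flags in generated scripts is an unquoted variable expansion, warning SC2086, where a path containing a space silently word-splits into two arguments:

```shell
#!/bin/sh
# shellcheck would flag `ls -ld $dir` as SC2086: "/tmp/run 1"
# word-splits into the two arguments "/tmp/run" and "1".
dir="/tmp/run 1"

mkdir -p "$dir"   # quoted: creates one directory named "run 1"
ls -ld "$dir"     # quoted: lists that single directory
```

Running shellcheck before executing generated scripts catches this whole class of bug mechanically, which is why it pairs so well with LLM-written shell.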

Other areas of success have been offloading the typing and prototyping. I know exactly what the code should look like, so I rarely run into issues.

synthc 19 hours ago||
Very hit or miss.

Stack: Go, Python. Team size: 8. Experience: mixed.

I'm using a code review agent which sometimes catches a critical bug humans miss, so that is very useful.

Using it to get to know a code base is also very useful. Questions like 'which functions touch this table' or 'describe the flow of this API endpoint' are usually answered correctly. This is a huge time saver when I need to work on a code base I'm less familiar with.

For coding, agents are fine for simple, straightforward tasks, but I find the tools very myopic: they prefer very local changes (adding new helper functions all over the place, even when such helpers already exist).

For harder problems I find agents get stuck in loops, and coming up with the right prompts and guardrails can be slower than just writing the code.

I also hate how slow and unpredictable the agents can be. At times it feels like gambling. Will the agents actually fix my tests, or fuck up the code base? Who knows, let's check in 5 minutes.

IMO the worst thing is that juniors can now come up with large change sets that seem good at a glance but turn out to be fundamentally flawed, and it takes tons of time to review them.

mikelevins 16 hours ago||
It's going pretty well, though it took at least six months to get there. I'm helped by knowing the domain reasonably well, and working with a principal investigator who knows it well and who uses LLMs with caution. At this stage I use Claude for coding and research that does not involve sensitive matters, and local-only LLMs for coding and research that does. I've gradually developed some regular practices around careful specification, boundaries, testing, and review, and have definitely seen things go south a few times. Used cautiously, though, I can see it accelerating progress in carefully-chosen and -bounded work.
mannyv 15 hours ago||
It's fun, but testing has become more of a PITA. When I write code I test and understand each piece. With AI generated code I need to figure out how it works and why it isn't working.
seudxs 15 hours ago||
I started AI-assisted coding quite a while ago with a "query for code to copy and paste" approach, which was slow. Things shifted dramatically once LLMs started being used as agents: AI with access to your project's source code, the internet, and technical docs that refine it. You can instruct the agent to change snippets of code just by mentioning them in the chat; this is how it works in tools like cursor, antigravity, and llmanywhere. An instruction roughly maps onto CRUD (Create, Read, Update, Delete). An update instruction looks like "change the code that does this to do that", or a more precise one: "change the timeout of the request to ycombinator.com to 10".

Having a good memory definitely helps here, but forgetting isn't the end of development, nor does it force you to start reading the source code yourself to figure out where an instruction should target. If you've lost the big picture of the project because you came back from a break or something, you can ask for a summary of the project's interconnected source files (I say "interconnected" because, in my experience with cursor for example, it generates lots of source files, such as test cases, that aren't used in production but are still part of the project).

I've only used an AI agent for my last langgraph solo project, which involved Python and Go, git, and cursor, so take my advice with a grain of salt :)
block_dagger 18 hours ago||
I am having a blast at work. I've been leaning hard into AI (as directed by leadership) while others are falling far, far behind. I am building new production features, often solo or with one or two other engineers, at lightning speed, and being recognized across the org for it. This is an incredible opportunity for many engineers, and it won't last. I'm trying to make the most of it. It will be sad when software is no longer a useful pastime for humans. I'm thinking another three years and most of us will be unemployed, or our jobs will have been transformed into something that would have been unrecognizable a few short years ago.
olvy0 3 hours ago||
I work on an ancient codebase, C# and C++ code spanning 3 major repos and 5 minor ones. I'm a senior engineer and tech lead of my team, but I also do a lot of actual coding and code reviews. It's somewhat critical internal infra. I'm intimately familiar with most of the code.

I've become somewhat addicted to using coding agents, in the sense that I feel I can finally realize a lot of fantasies about code cleanup and modernization I've had over the past decade, and also fulfill user requests, without spending a lot of time writing code and debugging. During the last few months I've been spending my weekends prompting and learning the ropes. I've been using GPT 5.x, and GPT 4 before that.

I've tried giving it both big cleanup tasks and big design tasks. It was OK but mentally very exhausting, especially as it tends to stick to my original prompt, which included a lot of known unknowns, even after I've told it I've settled on a design decision; then I have to go over its generated code line by line and verify that earlier decisions I had already rejected aren't slipping back into the code. In some instances I had to tell it again and again that the code it was working on was greenfield and no backwards compatibility should be kept. In other instances I had to tell it that it shouldn't touch the public API.

Also, a lot of things I take for granted don't get done, such as writing detailed comments above each piece of code that exists because of a design constraint or an obscure legacy reason, even though I explicitly prompt it to do so.

Hand-holding it is a chore. It's like coaching a junior dev, and this is on top of the 4 actual real-life junior devs sending me PRs to review each week. It's mentally exhausting. At least I know it won't take offense when I belittle its overly complicated code and bad design decisions (which I NEVER do when reviewing PRs from the actual junior devs, so in this sense it gives me something to throw my aggression at).

I have tried using it to tackle 3 big tasks in the last 5 months. I shelved the first one (modernizing an ancient codebase written more than 20 years ago), as it still didn't work even after I spent about a week on it, and I can't spare any more time. The second one (getting another huge C# codebase to stop rebuilding the world on every compilation) seemed promising and in fact did work, but I ended up shelving it after discovering its solution broke auto-complete in Visual Studio. A MS bug, but still.

The 3rd big task is actually a user-facing one, involving a new file format, a managed reader, and a backend writer. I gave it a more-or-less detailed design document. It went pretty OK, especially after I made the jump to GPT 5.2 and now 5.4. Both still tended to hallucinate too much once the code size passed a certain threshold.

I don't use it for bug fixing or small features, since that requires a lot of explaining and isn't worth it. Our system has a ton of legacy requirements and backwards compatibility guarantees that would take many days to specify properly.

I've become disillusioned last week. It's all for the best. Now that my addiction has lessened maybe I can have my weekends back.

keithnz 17 hours ago||
Pretty good. We have a huge number of projects, some more modern than others. For the older legacy systems it's been hugely useful: not perfect, and it needs a bit more babysitting, but a lot easier than doing it solo. The newer things can mostly be done solely by AI, so more time is spent speccing and designing the system than coding. And every week we work out better and better ways of working with AI, so it's an evolving process at the moment.
miiiiiike 13 hours ago|
It’s like working with the dumbest, most arrogant intern you could imagine. It has perfect recall of the docs but no understanding of them.

An example from last week:

Me: Do this.

AI: OK.

<Brings me code that looks like it accomplishes the task but after looking at it it’s accomplishing it in a monkey’s paw/spiteful genie kind of way.>

Me: Not quite, you didn’t take this into account. But I made the same mistake while learning so I can pull it back on track.

AI: OK

<It’s worse, and why are all the values hardcoded now?>

…

40 minutes go by. The simplest, smallest bit of code is almost right.

Me: Alright, abstract it into a Sass mixin.

AI: OK.

<Has no idea how to do it. It installed Sass, but with no understanding of what it’s working on so the mixin implementation looks almost random. Why is that the argument? What is it even trying to accomplish here?>

At which point I just give up and hand code the thing in 10 minutes.

It would be neat if AI worked. It doesn’t.
