
Posted by svara 11 hours ago

Ask HN: How is AI-assisted coding going for you professionally?

Comment sections on AI threads tend to split into "we're all cooked" and "AI is useless." I'd like to cut through the noise and learn what's actually working and what isn't, from concrete experience.

If you've recently used AI tools for professional coding work, tell us about it.

What tools did you use? What worked well and why? What challenges did you hit, and how (if at all) did you solve them?

Please share enough context (stack, project type, team size, experience level) for others to learn from your experience.

The goal is to build a grounded picture of where AI-assisted development actually stands in March 2026, without the hot air.

234 points | 391 comments
ok_coo 2 hours ago|
I use Gemini, and rarely ChatGPT (usually once or twice a day). I ask very narrow, pointed questions about something specific I would like an answer to. I typically will verify that the solution is good/accurate because I've been burned in the past by receiving what I'd characterize as a bad solution or "wrong" answer.

I think it's a useful tool, but whenever I have an LLM attempt to develop an entire feature for me, the solution becomes a pain to maintain (because I don't have the mental model around it, or the solution has subtle issues).

Maybe people who are really deep into using AI are using Claude? Perhaps it's way better, I don't know.

VoidWhisperer 3 hours ago||
It has definitely made me more productive. That said, that productivity isn't coming from using it to write business logic (I prefer to have an in-depth understanding of the logical parts of the codebases I'm working on. I've also seen cases in my work codebases where code was obviously AI generated and ended up with gaping security or compliance issues that no one seemed to catch at the time).

The productivity comes from three main areas for me:

- Having the AI coding assistant write unit tests for my changes. This used to be by far my least favorite part of writing software, mostly because instead of solving problems, it was the monotonous process of gathering mock data to exercise specific pathways, trying to make sure I was covering all the cases, and then debugging the tests. AI assistance means I just have to review the tests to make sure they cover all the cases I can think of and that there aren't any overtly wrong assumptions

- Research. It has been extraordinarily helpful in giving me insight into how to design some larger systems when I have extremely specific requirements but don't necessarily have the experience to architect them myself - I know enough to tell whether the system will correctly accomplish the requirements, but not necessarily enough to have come up with the architecture as a whole

- Quick test scripts. It has been extremely useful for generating quick SQL data for testing things, along with quick one-off scripts to test things like external provider APIs
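
To illustrate the last point, here is the kind of throwaway test-data script this covers: a short generator that emits SQL INSERT statements. The table and column names (`users`, `id`, `name`, `active`) are hypothetical, purely for illustration.

```python
# Hypothetical example: generate quick SQL test data for a "users" table.
# All table and column names are made up for illustration.
import random

FIRST_NAMES = ["Alice", "Bob", "Carol", "Dave"]

def make_inserts(n):
    """Return n INSERT statements with lightly varied test data."""
    rows = []
    for i in range(1, n + 1):
        name = random.choice(FIRST_NAMES)
        active = "TRUE" if i % 2 == 0 else "FALSE"
        rows.append(
            f"INSERT INTO users (id, name, active) "
            f"VALUES ({i}, '{name}-{i}', {active});"
        )
    return rows

if __name__ == "__main__":
    print("\n".join(make_inserts(5)))
```

The value is less the code itself than not having to write it: it gets pasted into a test database once and thrown away.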

ivraatiems 3 hours ago|
> Research. It has been extraordinarily helpful in giving me insight into how to design some larger systems when I have extremely specific requirements but don't necessarily have the experience to architect them myself - I know enough to tell whether the system will correctly accomplish the requirements, but not necessarily enough to have come up with the architecture as a whole.

I agree, this is where coding agents really shine for me. Even if they get the details wrong, they often pinpoint where things happen and how quite well.

They're also great for rapid debugging, or assisted bug fixing. Often, I will manually debug a problem, then tell the AI, "This exception occurs in place Y because thing X is happening, here's a stack trace, propose a fix", and then it will do the work of figuring out where to put the fix for me. I already usually know WHAT to do, it's just a matter of WHERE in context. Saves a lot of time.

Likewise, if I have something where I want thing X to do Y, and X already does Z, then I'll say, "Implement a Y that works like Z but for A B C", and it'll usually get it really close on the first try.

piker 5 hours ago||
I am working on a sub 100KLOC Rust application and can't productively use the agentic workflows to improve that application.

On the other hand, I have tried them a number of times in greenfield situations with Python and the web stack and experienced the simultaneous joy and existential dread of others. They can really stand new projects up quick.

As a founder, this leaves me with what I describe as the "generation ship" problem. Is it possible that the architecture we have chosen for my project is so far out of the training data that it would be faster to ditch the project and reimplement it from scratch in a Claude-yolo style? So far, I'm convinced not because the code I've seen in somewhat novel circumstances is fairly mid, but it's hard to shake the thought.

I do find chatting with the models incredibly helpful in all contexts. They are also excellent at configuring services.

koyote 2 hours ago||
If what you are doing is novel then I don't think yolo'ing it will help either. Agents don't do novel. I've even noticed this in meeting summaries produced by AI: A prioritisation meeting? AI's summary is concise, accurate, useful. A software algorithm design meeting, trying to solve a domain-specific issue? AI did not understand a word of what we discussed and the summary is completely garbled rubbish.

If all you're doing is something that already exists, but you decided to architect it in a novel way (for no tangible benefit), then I'd say starting from scratch and making it look more like existing stuff is going to help AI be more productive for you. Otherwise you're on your own, unless you can give the AI a really good description of what you are doing, how things are tied together, etc. And even then it will probably end up going down the wrong path more often than not.

cyanydeez 4 hours ago||
I'm surprised there aren't more attempts to stabilize around a base model, like in Stable Diffusion, and then augment those models with LoRAs for various frameworks and other routine patterns. There's so much going into trying to build these omnimodels, when the technology is there to mold the models into more useful paradigms around frameworks and coding patterns.

Especially, now that we do have models that can search through code bases.

jebarker 5 hours ago||
I work in an R&D team as research scientist/engineer.

Cursor and Claude Code have undoubtedly accelerated certain aspects of my technical execution. In particular, root causing difficult bugs in a complicated codebase has been accelerated through the ability to generate throwaway targeted logging code and just generally having an assistant that can help me navigate and understand complex code.

However, overall I would say that AI coding tools have made my job harder in two other ways:

1. There’s an increased volume of code that requires more thorough review and/or testing or is just generally not in keeping with the overall repo design.

2. The cost is lowered for prototyping ideas so the competitive aspect of deciding what to build or which experiment to run has ramped up. I basically need to think faster and with more clarity to perform the same as I did before because the friction of implementation time has been drastically reduced.

sornaensis 6 hours ago||
I have good success using Copilot to analyze problems for me, and I have used it in some narrow professional projects to do implementation. It's still a bit scary how off track the models can go without vigilance.

I have a lot of worry that I will end up having to eventually trudge through AI generated nightmares since the major projects at work are implemented in Java and Typescript.

I have very little confidence in the models' abilities to generate good code in these or most languages without a lot of oversight, and even less confidence in many people I see who are happy to hand over all control to them.

In my personal projects, however, I have been able to get what feels like a huge amount of work done very quickly. I just treat the model as an abstracted keyboard-- telling it what to write, or more importantly, what to rewrite and build out, for me, while I revise the design plans or test things myself. It feels like a proper force multiplier.

The main benefit is actually parallelizing the process of creating the code, NOT coming up with any ideas about how the code should be made or really any ideas at all. I instruct them like a real micro-manager giving very specific and narrow tasks all the time.

j3k3 2 hours ago|
TBH it kinda makes sense why personal projects are where productivity jumps are much larger.

Working on projects within a firm is... messy.

xenadu02 3 hours ago||
Sometimes it produces useful output. A good base of tests to start with. Or some little tool I'd never take the time to make if I had to do it myself.

On the other hand, I tried to get help debugging a test failure, and Claude spat out paragraph after paragraph arguing with itself, going back and forth. Not only did it not help, none of the intermediate explanations were useful either. It ended up being a waste of time. If I hadn't already known better, I could easily have been sent on multiple wild goose chases.

drrob 9 hours ago||
I've only recently begun using Copilot auto-complete in Visual Studio with Claude (doing C# development/maintenance of three SaaS products). I've been a coder since 1999.

The suggestions are correct about 40% of the time, so I'm actually surprised when they're right, rather than becoming reliant on them. It saves me maybe 10 minutes a day.

scuff3d 8 hours ago|
The only part of AI auto-complete I've found I really like is when I have a function call that takes like a dozen arguments, and the auto-complete can just shove it all together for me. Such a nice little improvement.

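
For example, this is the shape of call where that helps (the function, its parameters, and the values are all hypothetical, just to show the pattern):

```python
# Hypothetical many-argument function; names are made up for illustration.
def create_report(title, author, start_date, end_date,
                  fmt="pdf", include_charts=True, page_size="A4",
                  locale="en_US", draft=False):
    """Return a dict describing the report configuration."""
    return {
        "title": title, "author": author,
        "start": start_date, "end": end_date,
        "format": fmt, "charts": include_charts,
        "page_size": page_size, "locale": locale, "draft": draft,
    }

# With context-aware completion, the whole argument list below is
# typically suggested in one shot instead of typed out by hand.
report = create_report(
    title="Q3 Sales", author="scuff3d",
    start_date="2026-01-01", end_date="2026-03-31",
    fmt="html", include_charts=False,
)
```
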
drrob 6 hours ago|||
My least favourite part of the auto-complete is how wordy the comments it wants to create are. I never use the comments it suggests.

queenkjuul 4 hours ago||
I have been begging Claude not to write comments at all since day 1 (it's in the docs, Claude.md, I say the words every session, etc.) and it just insists anyway. Then it started deleting comments I wrote!

Fucking robot lol

stephenr 8 hours ago|||
Do you mean suggesting arguments to provide based on name/type context?

scuff3d 4 hours ago||
Yeah, it usually gets the required args right based on various pieces of context. There's a big variation between extensions, though. If the extension can't pull context from the entire project (or at least parts of it), it becomes almost useless.

stephenr 2 hours ago||
The IntelliJ platform (JetBrains IDEs) has this functionality out of the box, without "AI", using regular code intelligence. If all your parameters are strings it may not work well, I guess, but if you're using types it works quite well IME.

scuff3d 2 hours ago||
Can't use JetBrains products at work. I also unfortunately do most of my coding at work in Python, which I think can confound things, since not everything is typed.

INTPenis 4 hours ago||
I'm always skeptical of new tech. I don't like how AI companies have reserved all the memory-chip supply for X years - that is definitely going to cause problems in society when regular businesses, like those in the health care sector, can't scale or repair their infra - and the environmental impact is also a discussion that I am not qualified to get into.

All I can say for sure is that it is absolutely useful, it has improved my quality of life without a doubt. I stick to the principle that it's here to improve my work life balance, not increase output for our owners.

And that it has done, so far. I can accomplish things that would have taken me weeks of stressful and hyperfocused work in just hours.

I use it very carefully, and sparingly, as a helpful tool in my toolbox. I do not let it run every command and look into every system, just focused efforts to generate large amounts of boilerplate code that would require me to have a lot of docs open if I were to do it myself.

I definitely don't let it read or write my e-mails, or write any text. Because I always loved writing, and will never stop loving it.

It's here to stay, because I'm not alone in feeling this way about it. So the staunch AI-deniers are just wasting their time. Just like any other tech, it's going to be used against humans, against the already oppressed.

I definitely recognize that the tech has made some people lose their minds. Managers and product owners are now vibe coding thinking they can replace all their developers. But their code base will rot faster than they think.

robbbbbbbbbbbb 4 hours ago||
Context: micro (5 person) software company with a mature SaaS product codebase.

We use a mix of agentic and conversational tools, just pick your own and go with it.

For Unity development (our main codebase and source of value) I give current gen tools a C- for effectiveness. For solving confined, well modularisable problems (eg refactor this texture loader; implement support for this material extension) it’s good. For most real day to day problems it’s hopelessly confused by the large codebase full of state, external dependency on chunks of Unity, implicit hardware-dependent behaviours, etc. It has no idea how to work meaningfully with Unity’s scene graph or component model. I tried using MCP to empower it here: on a trivial test project it was fine. In a real project it got completely lost and broke everything after eating 30k tokens and 40 minutes of my time, mostly because it couldn’t understand the various (documented) patterns that straddled code files and scene structure.

For web and API development I give it an A, with just a little room for improvement. In this domain it’s really effective all the way down the stack, from architectural and deployment decisions to implementation details and debugging, including digging really deep into package version incompatibilities and figuring out problems in seconds that would take me hours. My one criticism would be the - now familiar - “junior developer” effect, where it’ll often run ahead with an over-engineered lump of machinery without spotting a simpler, more coherent pattern. As long as you keep an eye on it, it’s fine.

So in summary: if what you’re doing is all in text, nothing in binary, doesn’t involve geometric or numerical reasoning, and has billions of lines of Stack Overflow solutions behind it, you’ll be golden. Otherwise it’s still very hit and miss.

ChrisMarshallNY 5 hours ago|
Define "professional."

I write stuff for free. It's definitely "professional grade," and lots of people use the stuff I ship, but I don't earn anything for it.

I use AI every day, but I don't think that it is in the way that people here use it.

I use it as a "coding partner" (chat interface).

It has accelerated my work 100X. The quality seems OK. I have had to learn to step back, and let the LLM write stuff the way that it wants, but if I do that, and perform minimal changes to what it gives me, the results are great.

More comments...