Posted by svara 1 day ago

Ask HN: How is AI-assisted coding going for you professionally?

Comment sections on AI threads tend to split into "we're all cooked" and "AI is useless." I'd like to cut through the noise and learn what's actually working and what isn't, from concrete experience.

If you've recently used AI tools for professional coding work, tell us about it.

What tools did you use? What worked well and why? What challenges did you hit, and how (if at all) did you solve them?

Please share enough context (stack, project type, team size, experience level) for others to learn from your experience.

The goal is to build a grounded picture of where AI-assisted development actually stands in March 2026, without the hot air.

328 points | 522 comments
__mp 21 hours ago|
I enjoy Opus on personal projects. I don’t even bother to check the code. Go/JavaScript/TypeScript/CSS works very well for me. Swift not so much. I haven’t tried C/C++ yet. Scala was OK.

Professionally I hardly use the tools for coding, since I’m in an architecture role and mostly write design docs and do reviews. And I write the occasional prototype.

I have started building tools to integrate Copilot (Opus) better with $CORP. This way I can ask it questions across Confluence and GitHub.

Leveraging Claude for a project feels very addictive to me. I have to make a conscious effort to stop and I end up working on multiple projects at the same time.

shockwaverider 19 hours ago||
One thing I use Claude for is diagramming system architecture stuff in LaTeX, and it’s great: I just describe what I am visualizing and, kaboom, I get perfect output I can paste into Overleaf.
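For context, this kind of output is ordinary LaTeX/TikZ. A minimal hand-written sketch (not model output) of the sort of boxes-and-arrows architecture diagram being described, which compiles standalone and pastes into Overleaf:

```latex
\documentclass[tikz,border=5pt]{standalone}
\usetikzlibrary{positioning}
\begin{document}
\begin{tikzpicture}[
    box/.style={draw, rounded corners, minimum width=2.4cm, minimum height=1cm}]
  % Three-tier service diagram: client -> gateway -> service -> database
  \node[box] (client) {Client};
  \node[box, right=1.5cm of client] (api) {API Gateway};
  \node[box, right=1.5cm of api] (svc) {Service};
  \node[box, below=1.2cm of svc] (db) {Database};
  \draw[->] (client) -- (api);
  \draw[->] (api) -- (svc);
  \draw[->] (svc) -- (db);
\end{tikzpicture}
\end{document}
```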
gamerDude 18 hours ago||
I find it useful. It has been a big win from a motivation perspective. When I'm wading through bad API docs or getting started on a complex problem, it's easy to have the AI work through it with me as I describe it. The other positive is front-end design: I've always hated CSS and its derivatives, and AI now makes me decent at them.

The negatives are that AI clearly loves to add code, so I do need to coach it into making nice abstractions and keeping it on track.

theshrike79 18 hours ago||
I've shipped full features and bug fixes without touching an IDE for anything significant.

When I need to type stuff myself it's mostly just minor flavour changes like Claude adding docstrings in a silly way or naming test functions the wrong way - stuff that I fixed in the prompt for the next time.

And yes, I read and understand the code produced before I tag anyone to review the PR. I'm not a monster =)

esperent 13 hours ago||
> Comment sections on AI threads tend to split into "we're all cooked" and "AI is useless."

This comment section is exactly the same, of course.

> I'd like to cut through the noise

Me too, but it's not happening here.

seanmcdirmid 21 hours ago||
I’m transitioning from AI assisted (human in the loop) to AI driven (human on the loop) development. But my problems are pretty niche, I’m doing analytics right now where AI-driven is much more accessible. I’m in a team of three but so far I’m the only one doing the AI driven stuff. It basically means focusing on your specification since you are handing development off to the AI afterwards (and then a review of functionality/test coverage before deploying).

Mostly using Gemini Flash 3 at a FAANG.

agreezy 17 hours ago||
It allowed me to build my SaaS https://agreezy.app in 2 months (started in January, launched early February). A lot of back and forth between Claude and Qwen, but it's pretty polished. AI hallucinations are real, so I ended up writing more tests than normal.
pgt 22 hours ago||
I am getting disproportionately good results with the models by following a process: spec -> plan -> critique -> improve plan -> implement plan.
CharlesW 22 hours ago||
If I may "yes, and" this: spec → plan → critique → improve plan → implement plan → code review

It may sound absurd to review an implementation with the same model you used to write it, but it works extremely well. You can optionally crank the "effort" knob (if your model has one) to "max" for the code review.
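The loop described above can be sketched as plain orchestration code. This is a hypothetical illustration, not any tool's real API: `ask_model()` is a placeholder you would replace with an actual call to your model provider.

```python
# Sketch of the spec -> plan -> critique -> improve -> implement -> review
# pipeline. ask_model() is a placeholder for a real model API call;
# here it just echoes the prompt so the control flow can be demonstrated.

def ask_model(prompt: str) -> str:
    # Placeholder: swap in a real API call to your provider.
    return f"[model response to: {prompt[:40]}...]"

def run_pipeline(spec: str, critique_rounds: int = 2) -> dict:
    # 1. Turn the spec into an implementation plan.
    plan = ask_model(f"Write an implementation plan for this spec:\n{spec}")
    # 2. Critique and improve the plan in a loop before writing any code.
    for _ in range(critique_rounds):
        critique = ask_model(f"Critique this plan for gaps and risks:\n{plan}")
        plan = ask_model(f"Improve the plan using this critique:\n{critique}\n\nPlan:\n{plan}")
    # 3. Implement the final plan.
    code = ask_model(f"Implement this plan:\n{plan}")
    # 4. Review the implementation against the plan (ideally in a fresh session).
    review = ask_model(f"Code-review this implementation against the plan:\n{plan}\n\nCode:\n{code}")
    return {"plan": plan, "code": code, "review": review}

result = run_pipeline("A CLI tool that deduplicates lines in a file.")
```

The point of the structure is that the critique/improve loop runs to completion before implementation starts, and the review step gets both the plan and the code as input.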

frumiousirc 20 hours ago|||
A blanket follow-up "are you sure this is the best way to do it?"

Frequently returns, "Oh, you are absolutely correct, let me redo this part better."

jgilias 18 hours ago||
You should start a new session for the code review to make sure the context window is not polluted with the work on implementation itself.

At the end of the day it’s an autocomplete. So if you ask “are you sure?” then “oh, actually” is a statistically likely completion.

CharlesW 17 hours ago||
> You should start a new session for the code review to make sure the context window is not polluted with the work on implementation itself.

I'm just a sample size of one, but FWIW I didn't find that this noticeably improved my results.

Not having to completely recreate all the LLM context necessary to understand the literal context and the spectrum of possible solutions (which the LLM still "knows" before you clear the session) saves lots of time and tokens.

jgilias 12 hours ago||
Interesting, I definitely see better results on a clean session. On a “dirty” session it’s more likely to go with “this is what we implemented, it’s good, we could improve it this way”, whereas on a clean session it’s a lot more likely to find actual issues or things that were overlooked in the implementation session.
afroisalreadyin 22 hours ago|||
Can you give a little more detail how you execute these steps? Is there a specific tool you use, or is it simply different kinds of prompts?
ramoz 18 hours ago|||
I follow a very similar workflow, with manual human review of plans and continuous feedback loops with the plan iterations

See me in action here. It's a quick demo: https://youtu.be/a_AT7cEN_9I

pgt 20 hours ago|||
I wrote it down here: https://x.com/BraaiEngineer/status/2016887552163119225

However, I have since condensed this into 2 prompts:

1. Write plan in Plan Mode

2. (Exit Plan Mode) Critique -> Improve loop -> Implement.

4b11b4 21 hours ago||
similar approach
anonzzzies 16 hours ago|
We use our own scripts around Claude Code to create and maintain hundreds of products. We have products going back 30+ years, and clients are definitely happier since we adopted AI. We are more responsive to requests, at lower fees than before.