Posted by svara 1 day ago
Ask HN: How is AI-assisted coding going for you professionally?
If you've recently used AI tools for professional coding work, tell us about it.
What tools did you use? What worked well and why? What challenges did you hit, and how (if at all) did you solve them?
Please share enough context (stack, project type, team size, experience level) for others to learn from your experience.
The goal is to build a grounded picture of where AI-assisted development actually stands in March 2026, without the hot air.
Professionally I hardly use the tools for coding, since I’m in an architecture role and mostly write design docs and do reviews, plus the occasional prototype.
I have started building tools to integrate Copilot (Opus) better with $CORP. This way I can ask it questions across Confluence and GitHub.
Using Claude on a project feels very addictive to me. I have to make a conscious effort to stop, and I end up working on multiple projects at the same time.
The downside is that AI clearly loves to add code, so I do need to coach it into making nice abstractions and keep it on track.
When I do need to type something myself it's mostly minor flavour changes, like Claude writing docstrings in a silly way or naming test functions wrong - stuff I then fix in the prompt for next time.
And yes, I read and understand the code produced before I tag anyone to review the PR. I'm not a monster =)
This comment section is exactly the same, of course.
> I'd like to cut through the noise
Me too, but it's not happening here.
Mostly using Gemini Flash 3 at a FAANG.
It may sound absurd to review an implementation with the same model you used to write it, but it works extremely well. You can optionally crank the "effort" knob (if your model has one) to "max" for the code review.
Frequently returns, "Oh, you are absolutely correct, let me redo this part better."
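The write-then-self-review pattern above can be sketched roughly as below. This is a hypothetical illustration, not real Gemini API code: `ask_model` stands in for whatever chat API you call, and the `effort` parameter is an assumed name for a reasoning-effort knob.

```python
# Sketch of the self-review loop: same model writes the code,
# then reviews its own output at a higher effort setting.
# `ask_model` is a stub; a real version would call your model's API.

def ask_model(prompt: str, effort: str = "default") -> str:
    """Stub model. Pretends to concede when asked to re-review."""
    if "Review the following" in prompt:
        return "Oh, you are absolutely correct, let me redo this part better."
    return "def add(a, b):\n    return a + b"

def write_then_self_review(task: str) -> tuple[str, str]:
    # 1. Ask the model to write the implementation at normal effort.
    code = ask_model(f"Implement: {task}")
    # 2. Ask the *same* model to review its own output at max effort.
    review = ask_model(
        f"Review the following implementation critically:\n{code}",
        effort="max",
    )
    return code, review

code, review = write_then_self_review("an add function")
```

The key point is only the second call needs the cranked-up effort; the cheap pass does the drafting.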
At the end of the day it’s an autocomplete. So if you ask “are you sure?” then “oh, actually” is a statistically likely completion.
I'm just a sample size of one, but FWIW I didn't find that this noticeably improved my results.
Not having to recreate all the context the LLM needs to understand the problem and the spectrum of possible solutions (which it still "knows" before you clear the session) saves a lot of time and tokens.
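A toy sketch of why keeping the session alive saves tokens: with a running message history, the expensive context from the first turn rides along for free, instead of being re-sent from scratch. All names here are illustrative, and the model reply is a stub.

```python
# Running conversation history, as chat APIs typically keep it.
history: list[dict] = []

def send(user_msg: str, reply: str) -> int:
    """Append one turn and return the total size of the history.
    `reply` stubs the model's answer; character count stands in
    as a rough proxy for token cost."""
    history.append({"role": "user", "content": user_msg})
    history.append({"role": "assistant", "content": reply})
    return sum(len(m["content"]) for m in history)

cost_turn1 = send("Here is the whole codebase and the bug report ...", "Diagnosis ...")
cost_turn2 = send("Now fix it.", "Patch ...")

# The follow-up only adds its own short messages; the big context
# from turn 1 is reused, not recreated.
extra = cost_turn2 - cost_turn1
```

Clearing the session means turn 2 would have to pay the full turn-1 cost again just to get the model back to the same understanding.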
See me in action here. It's a quick demo: https://youtu.be/a_AT7cEN_9I
However, I have since condensed this into 2 prompts:
1. Write plan in Plan Mode
2. (Exit Plan Mode) Critique -> Improve loop -> Implement.