Posted by svara 17 hours ago
Ask HN: How is AI-assisted coding going for you professionally?
If you've recently used AI tools for professional coding work, tell us about it.
What tools did you use? What worked well and why? What challenges did you hit, and how (if at all) did you solve them?
Please share enough context (stack, project type, team size, experience level) for others to learn from your experience.
The goal is to build a grounded picture of where AI-assisted development actually stands in March 2026, without the hot air.
On my first day, the manager and a senior dev told me, "Don't try to write code yourself; you should be using AI." I was encouraged to use spec-driven development and frameworks like superpowers, gsd, etc.
I'm definitely moving faster using AI this way, but I legitimately have no idea what the fuck I am doing. I'm making PRs I don't know shit about. Because the emphasis is on speed, I don't understand how any of it works: instead of ramping up in languages and technologies I've never used, I'm just shipping a ton of code I didn't write and have no real way to vet, unlike someone who has worked with the stack regularly and actually mastered it.
This time last year I was still using AI, but as a pair-programming utility: it helped me learn things I didn't know, probe topics and concepts I needed exposure to, and reason through problems as they arose.
I can't control how these tools will evolve and be used, but I would love for someone to explain how I can continue to grow if this really is the future of development. Because while I am faster, the hope seems to be that AI / agents / LLMs will only ever get better, and I will never need to have an original thought or use critical thinking.
I have just about 4 years of professional experience, and only the first 10-12 months of my career were spent learning things with Google before LLMs became the sole focus.
I wake up every day with existential dread of what the future looks like.
The people forcing it on you do not care about the long-term ramifications.
So now a lot of different parts of the company are trying to replicate their workflow. The process is showing what works: you need AI-first documentation (a README with one line for each file, to help manage context), plus skills and steering docs for your codebase, code style, and so on. And it mostly works!
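For anyone picturing what "one line for each file" means in practice, it might look something like this (a sketch; the file names are invented for illustration):

```
# Repo map (AI-first README)
src/auth/session.ts     - issues and validates session tokens
src/auth/middleware.ts  - request guard; attaches user to context
src/billing/invoice.ts  - invoice generation and PDF export
docs/steering/style.md  - code style rules agents must follow
```

The point is that an agent can scan this map and open only the files it needs, instead of burning context window reading the whole tree.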
For me personally, it has drastically increased productivity. I can pick up something from our infinitely huge backlog, provide some context, and let the agent go ham on fixing it while I do whatever other stuff is assigned to me.
I've moved away from turning an LLM loose before I've figured out the specification; otherwise it's very, very risky to get seduced down a wrong rabbit hole by the LLM's "user engagement" training.
It works really well (using Claude Code and Opus 4.6 primarily). Incremental changes tend to be done well and mostly one-shotted, provided I use plan mode first, and larger changes are achievable with careful planning split into phases.
We have skills that map to different team roles, plus 5 different skills used for code review. This usually gets you 90% of the way there before opening a PR.
Adopting the tool made me more ambitious, in the sense that it lets me try approaches I would normally discard because of gaps in my knowledge and expertise. This doesn't mean blindly offloading work, but rather isolating the parts where I can confidently assess risk, then proceeding with radically different implementations guided by metrics. For example, we needed a way to extract redlines from PDF documents, and in a couple of days we went from a prototype with embedded Python to an embedded Rust version with a robust test oracle run against hundreds of documents.
I don't have multiple agents running at the same time on different worktrees, as I find that distracting. When the agent is implementing, I usually keep thinking about the problem at hand and consider other angles, which end up in subsequent revisions.
Other things I've tried that work well: sharing an Obsidian note with the agent and collaboratively iterating on it while working on a bug investigation.
I still write a percentage of the code by hand when I need to clearly visualise the implementation in my head (e.g. if I'm working on some algorithmic improvement), or if the agent loses its way halfway through because it's just spitballing ideas without much grounding (a rare occurrence).
I find Elixir very well suited for AI-assisted development because it's a relatively small language with strong idioms.
I also have Copilot and Cursor Bugbot reviews, and run them in a Ralph Wiggum loop with Claude Code. A few rounds overnight and the PR is perfect and ready for a final review before merging.
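For readers unfamiliar with the pattern: a "Ralph Wiggum loop" just reruns the same agent prompt round after round until the agent has nothing left to fix or a cap is hit. A minimal sketch, assuming your prompt asks the agent to print DONE when it's finished (the command and the DONE convention are assumptions, not built-in Claude Code features):

```shell
#!/bin/sh
# run_until_done: rerun one agent command until it reports completion
# or a round cap is reached.
run_until_done() {
  cmd=$1                             # e.g. claude -p "$(cat PROMPT.md)"
  max=$2                             # round cap for overnight safety
  round=1
  while [ "$round" -le "$max" ]; do
    out=$(sh -c "$cmd")
    # Convention assumed here: the prompt instructs the agent to print
    # DONE once review feedback is fully addressed.
    case "$out" in
      *DONE*) echo "converged after $round round(s)"; return 0 ;;
    esac
    round=$((round + 1))
  done
  echo "hit round cap without converging"
  return 1
}
```

In practice each round would invoke the agent against the PR branch, with the round cap keeping an unattended overnight run from spinning forever.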
I do run 4 CC sessions in parallel, but that's just one day a week. The rest of the week is spent figuring out the next set of features and fixes, operational things, meetings, feedback, etc.
So far, it's been fantastic. I can do more things for clients, much faster, than I ever dreamed would be possible when I've attempted work like this before.
I think the biggest problem with AI coding is that it simply doesn't fit well into existing enterprise structures. I couldn't imagine being able to do anything productive when I'm stuck having to rely on other teams or request access to stuff from the internet like I did in previous jobs.
I'm using `auggie` which is their CLI-based agentic tool. (They also have a VS Code integration - that became too slow and hung often the more I used it.) I don't use any prompting tricks, I just kind of steer the agent to the desired outcome by chatting to it, and switch models as needed (Sonnet 4.6 for speed and execution, GPT 5.1 for comprehension and planning).
My favorite recent interaction with Augment was to have one session write a small API and its specification within the old codebase, then have another session implement the API client entirely from the specification. As I discovered edge cases I had the first agent document them in the spec and the second agent read the updated spec and adjust the implementation. That worked much, much better than the usual ad hoc back and forth directly between me and one agent and also created a concise specification that can be tracked in the repo as documentation for humans and context for future agentic work.
I only use Claude Code with Opus 4.6 on High Effort.
I always, ALWAYS treat my “new job” as writing a detailed ticket for whatever it is I need to do.
I give the model access to a DB replica of my prod DB that I create manually.
I do NOT waste time with custom agents, Claude.md files or any of that stuff.
When I put ALL of the above together, the results ARE THE PROMISED LAND: I simply haven’t written a single line of code manually in the last 3 months.
I have been a coder since a very young age, and I am nearing the end of my career now. I still love writing code to problem-solve just as much as the day I first learnt to code. The thought of something taking that task away from me doesn't fill me with glee.
A parallel for me: if I enjoyed puzzle pages, and solving them with my own grey matter brought me joy and satisfaction, I just wouldn't find it interesting to have an agent fill in the answers, with me simply guiding it to the clues.
My solution was to write code to force the model down a deterministic path.
It’s open source here: https://codeleash.dev
It’s working! A ~200k-LOC Python/TypeScript codebase built from scratch as I’ve grown the framework out. I probably wrote 500-1000 lines of that myself, so ~99.5% was written by Claude Code. I commit 10k-30k LOC per week, code-reviewed and of industrial-strength quality (mainly thanks to rigid TDD).
I review every line of code, but the TDD enforcement and self-reflection have now put both the process and the continual improvement of that process more or less on autopilot.
It’s a software factory - I don’t build software any more, I walk around the machine with a clipboard optimizing and fixing constraints. My job is to input the specs and prompts and give the factory its best chance of producing a high quality result, then QA that for release.
I keep my operational burden minimal by using managed platforms - more info in the framework.
One caveat: I am a solo dev, and my cofounder isn’t writing code. So I can’t speak to how it is to use this stuff in a team of engineers.
No AI used.
Metaphorically speaking, you’re out there sprinting on the road while people who’ve made agentic coding work for them are sipping coffee in a limo.
People who haven’t made agentic coding work (but do it anyway) are sipping coffee in the back of a limo that has no brakes. No thanks to that.