
Posted by svara 20 hours ago

Ask HN: How is AI-assisted coding going for you professionally?

Comment sections on AI threads tend to split into "we're all cooked" and "AI is useless." I'd like to cut through the noise and learn what's actually working and what isn't, from concrete experience.

If you've recently used AI tools for professional coding work, tell us about it.

What tools did you use? What worked well and why? What challenges did you hit, and how (if at all) did you solve them?

Please share enough context (stack, project type, team size, experience level) for others to learn from your experience.

The goal is to build a grounded picture of where AI-assisted development actually stands in March 2026, without the hot air.

313 points | 504 comments
TrueSlacker0 15 hours ago|
I am no longer in software as a day job so i am not sure of my input applys. I traded that world for opening a small brewery back in 2013. So I am a bit outdated on many modern trends but I still enjoy programming. In the last fee months using both gemeni and now movong over to claude, I have created at least 5 (and growing) small apps that have radically transformed what i am able to do at the business. I totally improved automation of my bookkeeping (~16hrs a month categorizing everything down to 3ish), created an immense amount of better reports on production, sales and predictions from a system i had already been slowly writing all these years, I created a run club rewards tracking app instead of relying on our paper method, improved upon a previously written full tv menu display system that syncs with our website and on premis tvs and now i am working on a full productive maintenance trigger system and a personal phone app to trigger each of these more easily. Its been a game changer for me. I have so many more ideas planned and each one frees up more of my waste time to create more.
davemp 11 hours ago||
Professionally, sending our code off prem is not an option. Frankly I don’t understand why executives are okay with AI companies training LLMs on their IP. Unless they own a significant stake in the AI company I guess.

Personally, it’s been decent for generating tedious boilerplate. Though I’m not sure if reading the docs and just writing things myself would have been faster once it comes time to debug. I’m pretty fast at code editing with vim at this point. I’m also hesitant to feed back any fixes to the AI companies.

I’ve found “better Google” to be a much more comfortable, if not faster, way to use the tools. Give me the information, and I’ll build an understanding and see the big picture much better.

abcde666777 13 hours ago||
Two contexts:

1. Workplace, where I work on a lot of legacy code for a crusty old CRM package (Saleslogix/Infor), and a lot of SQL integration code between legacy systems (System21).

So far I've avoided using AI-generated code here, simply because the AI tools won't know the rules and internal functions of these sets of software, so the time spent wrangling them into an understanding would negate any benefits.

In theory, where available, I could probably feed a chunk of the documentation into an agent and get some kind of sensible output, but that's a lot of context to have to provide, and in some cases such documentation doesn't exist at all, so I'd have to write it all up myself - and would probably get quasi-hallucinatory output as a reward for my efforts.

2. Personally where I've been working on an indie game in Unity for four years. Fairly heavy code base - uses ECS, burst, job system, etc. From what I've seen AI agents will hallucinate too much with those newer packages - they get confused about how to apply them correctly.

A lot of the code's pretty carefully tuned for performance (thousands of active NPCs in game), which is also an area I don't trust AI coding at all, given it's a conglomeration of 'average code in the wild that ended up in the training set'.

At most I sometimes use it for rubber ducking or performance math. For example, at one point I needed a function to calculate the point in time at which two circles would collide (for NPC steering and avoidance), and it can be helpful for getting some grasp of the necessary math. But I'll generally still re-write the output by hand to tune it and make sure I fully grok it.
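For what it's worth, the circle-collision timing described above reduces to a quadratic in relative coordinates: with relative position p, relative velocity v, and combined radius R, solve |p + vt| = R for the smallest non-negative t. A minimal sketch in Python (the commenter's project is Unity/C#, so this is illustrative only; the function name and 2D-tuple interface are my own):

```python
import math

def time_to_circle_collision(p1, v1, r1, p2, v2, r2):
    """Earliest time t >= 0 at which two moving circles touch,
    or None if they never will. Positions/velocities are (x, y) tuples."""
    # Work in the frame of circle 1: relative position and velocity.
    px, py = p2[0] - p1[0], p2[1] - p1[1]
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]
    R = r1 + r2
    # |p + v*t| = R  ->  (v.v) t^2 + 2 (p.v) t + (p.p - R^2) = 0
    a = vx * vx + vy * vy
    b = 2.0 * (px * vx + py * vy)
    c = px * px + py * py - R * R
    if c <= 0.0:
        return 0.0          # already overlapping
    if a == 0.0:
        return None         # no relative motion, never collide
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None         # closest approach is still wider than R
    t = (-b - math.sqrt(disc)) / (2.0 * a)  # earlier of the two roots
    return t if t >= 0.0 else None          # collision in the past -> None
```

Two unit circles of radius 0.5 starting 4 units apart and closing at 2 units/s, for example, touch at t = 1.5.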

Also tried to use it recently to generate additional pixel art in a consistent style with the large amount of art I already have. Results fell pretty far short unfortunately - there's only a couple of pixel art based models/services out there and they're not up to snuff.

causalzap 13 hours ago||
I’ve been a web dev for 10+ years, and my professional pivot in 2026 has been moving away from "content-first" sites to "tool-led" content products. My current stack is Astro/Next.js + Tailwind + TypeScript, with heavy Python usage for data enrichment.

What’s working:

Boilerplate & Layout Shifting: AI (specifically Claude 4.x/5) is excellent for generating Astro components and complex Tailwind layouts. What used to take 2 hours of tweaking CSS now takes 15 minutes of prompt-driven iteration.

Programmatic SEO (pSEO) Analysis: I use Python scripts to feed raw data into LLMs to generate high-volume, structured analysis (300+ words per page). For zero-weight niche sites, this has been a massive leverage point for driving organic traffic.

Logic "Vibe Checks": When building strategy engines (like simulators for complex games), I use AI to stress-test my decision-making logic. It’s not about writing the core engine—which it still struggles with for deep strategy—but about finding edge cases in my "Win Condition" algorithms.

The Challenges:

The "Fragment" Syntax Trap: In Astro specifically, I’ve hit issues where AI misidentifies <> shorthand or hallucinates attribute assignments on fragments. You still need to know the spec inside out to catch these.

Context Rot: As a project grows, the "context window" isn't the problem; it's the "logic drift." If you let the AI handle too many small refactors without manual oversight, the codebase becomes a graveyard of "almost-working" abstractions.

The Solution: I treat AI as a junior dev who is incredibly fast but lacks a "mental model" of the project's soul. I handle the architecture and the "strategy logic," while the AI handles the implementation of UI components and repetitive data transformations.

Stack: Astro, TypeScript, Python scripts for data. Experience: 10 years, independent/solo.

GMoromisato 10 hours ago||
I'm working on a startup, mostly writing C++, and I'm using AI more and more. Over the last month I've had one machine running Codex working on a task while I work on a different machine.

I have to think like a micro-manager, coming up with discrete (and well-defined) tasks for the AI to do, and I periodically review the code to make it cleaner/more efficient.

But I'm confident that it is saving me time. And my love for programming has not diminished. I'm still driving the architecture and writing code, but now I have a helper who makes progress in parallel.

Honestly, I don't want to go back.

DefineOutside 12 hours ago||
I work at a large company that is contracted to build warehouses that automate the movement of goods with conveyors, retrieval systems, etc.

This is a key candidate for using AI, as we have built hundreds of warehouses in the past. We have a standard product spanning over a hundred thousand lines of code to build upon. Still, we rely on copying code from previous projects if features have been implemented before. We have stopped investing in the product to migrate everything to microservices, for some reason, so this code copying is increasingly common as projects keep getting more complex.

Teams to implement warehouses are generally around eight developers. We are given a design spec to implement, which usually spans a few hundred pages.

AI has more than doubled the speed at which I can write backend code. We've done the same tasks so many times before with previous warehouses that we have a gold mine of patterns AI can pick up on, if we give it a folder of previous projects to read. I also feel that the code I write is higher quality, though I have to think more about the design, as previously I would realize something wouldn't work while writing the code. For GWT though, it's hopeless, as there are almost no public GWT projects to train an AI on. It's also very helpful in tracing logs and debugging.

We use Cursor. I was able to use $1,300 worth of Claude Opus 4.6 tokens at a cost of $100 to the company. Sadly, Cursor discontinued its legacy pricing model due to it being unsustainable, so only the non-frontier models are priced low enough to use consistently. I'm not sure what I'm going to do when the new pricing model takes effect tomorrow; I guess I will have to go back to writing code by hand or figure out how to use models like Gemini 3.1. GPT models also write decent code, but they are always so paranoid and follow prompts so strictly that it works to their own detriment. Gemini just feels unstable and inconsistent, though it does write higher quality code.

I'm not being paid any more for doubling my output, so it's not the end of the world if I have to go back to writing code by hand.

lazystar 18 hours ago||
My team is anti-AI. My code review requests are ignored, or are treated more strictly than others'. It feels coordinated - I will have to push back the launch date of my project as a result.

Another teammate added a length check to an input field, and his request was merged near-instantly, even though it had zero unit testing. This team is incredibly cooked in the long term; I just need to ensure that I survive the short term somehow.

j3k3 18 hours ago||
> this team is incredibly cooked in the long term

They're not, actually.

People like you are sinking costs into workflows whilst the models are still evolving... they can just wait until the models reach a 'steady state' and then figure out the optimal workflow. They will have lost out on far less.

dude250711 16 hours ago|||
> another teammate added a length check to an input field, and his request was merged near instantly, even though it had zero unit testing

That sounds extremely reasonable though?

bitwize 15 hours ago||
Code that does not take a pre-existing unit test from failing to passing is by definition broken.
fastasucan 15 hours ago|||
No it's not.
jbxntuehineoh 13 hours ago||||
that is not what "by definition" means
jasbrg 15 hours ago|||
I take it you mean that in the "treat every gun as if it's loaded" sense, and not literally.
teg4n_ 17 hours ago|||
It sounds like you might have wasted your team's time previously, and now they don't trust the code you put up in a PR. Maybe you can do something to improve your relationship with them?

As a side note, I highly doubt they are cooked long-term. Using AI is not exactly skilled labor. If they want or need to, I'm sure they could learn the patterns/workflows in like an afternoon. As things go on, it will only get easier to use.

j3k3 17 hours ago||
Exactly. I find it hilarious that people down-voted my comment.

Like, yeah, sorry... not everyone has to be a risk-taker. Many people like to observe and wait to see what new techniques emerge that can be exploited.

block_dagger 14 hours ago||
I would start looking for a job at an AI-leaning firm.
spprashant 15 hours ago||
I only just started using it at work in the last month.

I am a data engineer maintaining a big data Spark cluster as well as a dozen Postgres instances - all self hosted.

I must confess it has made me extremely productive, if we measure in terms of writing code. I don't even do a lot of special AGENTS.md/CLAUDE.md shenanigans; I just prompt CC, work on a plan, and then manually review the changes as it implements them.

Needless to say this process only works well because: A) I understand my code base. B) I have a mental structure of how I want to implement it.

Hence it is easy to keep the model and me in sync about what's happening.

For other aspects of my job I occasionally run questions by GPT/Gemini as a brainstorming partner, but it seems a lot less reliable. I only use it as a sounding board. It does not seem to make me any more effective at my job than simply reading documents or browsing GitHub issues/Stack Overflow myself.

max_ 14 hours ago||
I use it as a research tool.

What it has done is replace my Googling, asking people, and looking up stuff on Stack Overflow.

It's also good for generating small boilerplate code.

I don't use the whole agents thing; there are so many edge cases that I always need to understand and be aware of, and I honestly think the AI cannot capture them.

mikelevins 13 hours ago|
It's going pretty well, though it took at least six months to get there. I'm helped by knowing the domain reasonably well, and working with a principal investigator who knows it well and who uses LLMs with caution. At this stage I use Claude for coding and research that does not involve sensitive matters, and local-only LLMs for coding and research that does. I've gradually developed some regular practices around careful specification, boundaries, testing, and review, and have definitely seen things go south a few times. Used cautiously, though, I can see it accelerating progress in carefully-chosen and -bounded work.