Posted by nreece 10/27/2025
I've found it takes significant time to find the right "mode" of working with AI. It's a constant balance between maintaining a high-level overview (the 'engineering' part) and still getting that velocity boost from the AI (the 'coding' part).
The real trap I've seen (and fallen into) is letting the AI just generate code at me. The "engineering" skill now seems to be more about ruthless pruning and knowing exactly what to ask, rather than just knowing how to write the boilerplate.
This makes me cringe because it's a lot harder to get LLMs to generate good code when you start with a crappy codebase. If you start with a good codebase, it's almost as if the codebase is writing itself. Trying to get the LLM to produce clean code in the former case is akin to mental torture; in the latter case it's highly pleasant.
Yes, they're bad now, but they'll get better in a year.
If the generative ability is good enough for small snippets of code, it's good enough for larger, better-organized software. Maybe the models don't have enough of the right kind of training data, or the agents don't have the right reasoning algorithms. But the underlying capability is there.
If we’re simply measuring model benchmarks, I don’t know if they’re much better than a few years ago… but if we’re looking at how applicable the tools are, I would say we’re leaps and bounds beyond where we were.
Also, use MCP servers like Context7 and agentic LLMs for more interactivity, instead of relying on just a raw model.
For example, you can pull the library's code into your working environment and set up the coding agent there as well. Then you can ask it to read specific files, or even every file in the library. In my experience, this significantly decreases the chance of hallucinated APIs.
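For concreteness, here's a rough sketch of that workflow done by hand, assuming you're not inside an agent that does it for you: clone the dependency locally, concatenate its real source files, and put them in front of the model so it answers from actual code instead of guessing. The repo URL, the file filter, and the ask_llm() helper are all hypothetical placeholders.

```python
# Sketch: give the model the library's real source instead of letting it guess.
import subprocess
from pathlib import Path

def gather_library_source(repo_url: str, workdir: str = "vendor") -> str:
    """Shallow-clone the library and concatenate its Python sources into one context blob."""
    name = repo_url.rstrip("/").split("/")[-1].removesuffix(".git")
    dest = Path(workdir) / name
    if not dest.exists():
        subprocess.run(["git", "clone", "--depth", "1", repo_url, str(dest)], check=True)

    chunks = []
    for path in sorted(dest.rglob("*.py")):
        chunks.append(f"# FILE: {path.relative_to(dest)}\n{path.read_text(errors='ignore')}")
    return "\n\n".join(chunks)

# Usage (hypothetical helper): prepend the real source before asking for integration code.
# context = gather_library_source("https://github.com/example/somelib.git")
# answer = ask_llm(f"{context}\n\nUsing only the APIs defined above, write ...")
```

An agent with file-reading tools does the same thing more selectively, but the principle is identical: grounding the model in the files that actually exist.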
"But AI can build this in 30min"