Posted by briankelly 2 days ago
Sometimes I liken the promise of AI to my experience with stereoscopic images (I have never been able to perceive them) -- I know there's something there but I frequently don't get it.
> For controllers, I might include a small amount of essential details like the route name: [code]
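(The actual snippet is elided above. Purely as a hypothetical sketch, and in Python/FastAPI rather than whatever stack the article's project uses: the "essential details" for a controller might be little more than the route name and handler signature, with the logic left out.)

    # Hypothetical sketch: minimal controller context for a prompt --
    # route name and signature only, no business logic.
    from fastapi import APIRouter

    router = APIRouter()

    @router.get("/reports/{report_id}", name="reports.show")
    async def show_report(report_id: int):
        """Return a single problem report; implementation intentionally omitted."""
        ...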
Commit history: https://github.com/dx-tooling/platform-problem-monitoring-co...
Look, I honestly think this is a fair article with some good examples, but what is with this inane "I didn't write any of it myself" claim, which is clearly false, that every one of these articles keeps bringing up?
What’s wrong with the fact you did write some code as part of it? You clearly did.
So weird.
What I wanted to express was that I didn’t do any of the implementation, that is, any logic.
I need to phrase this better.
Coding by prompt is the next lowering of the bar and vibe coding even more so. Totally great in some scenarios and adds noise in others.
The main message I want to bring across is two-fold:
1. Senior developers are in a great position to make productive use of AI coding tools
2. I have (so far) identified three measures that make AI coding sessions much more successful
I hope the reworked version makes these more central and clear. I'm not a native English speaker, so it's probably not possible for me to end up with an optimal version.
Still, I hope the new approach works a bit better for you — would love to receive another round of feedback.
I'm not convinced that the workflow of:
1. Clearly define requirements
2. Clearly sketch architecture
3. Set up the code tool suite
4. Let AI agent write the remaining code
is better price-performance than going lighter on 1-3 and, instead of 4, spending that time writing the code yourself with heavy input from LLM autocomplete, which is what LLMs are elite at.
The agent will definitely(?) write the code faster, but quality and understanding (tech debt) can suffer.
IOW the real takeaway is that knowing the requirements, architecture, and tooling is where the value is. LLM Agent value is dubious.
It also makes coding a lot less painful: since so much code autocompletes, I'm not making typos or weird errors, and I spend less time debugging too.
No, the article was just something about enjoying AI. It has hardly anything to do with senior software developer skills.
There is no such thing as a measured approach. You can either use LLM agents to abdicate your intellectual honesty and produce slop, or you can refuse their use.
Those lamenting the loss of manual programming: we are free to hone our skills on personal projects, but for corporate/consulting work, you cannot ignore a 5x speed advantage. It's over. AI-assisted coding won.
Otherwise, it can be 0.2x in some cases. And you should not use LLMs for anything security-related unless you are a security expert; otherwise you are screwed.
(this is SOTA as of April 2025; I expect things to improve in the near future)
If you know the programming language really well, that usually means you know which libraries are useful, have memorized common patterns, and have some project samples lying around. The actual speed improvement would be in typing the code, but that's usually the activity that requires the least time on any successful project. And unless you're a slow typist, I can't see 5x there.
If you're lacking in the fundamentals, then it's just a skill issue, and I'd be suspicious of the result.
Everything boring can be automated and it takes five seconds compared to half an hour.
> Given this code, extract all entities and create the database schema from these
Sometimes the best representation for storing and loading data is not the best for manipulating it, and vice versa. Directly mapping code entities to database relations (assuming it's SQL) is a sure way to land yourself in trouble later.
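A toy illustration (my own hypothetical Python example, not from the thread): the shape that is convenient to manipulate in code is often not the shape you want to persist.

    # Hypothetical example: convenient in-memory shape vs. storage-friendly schema.
    from dataclasses import dataclass, field

    @dataclass
    class Order:
        id: int
        # Convenient for manipulation: a nested list of (sku, qty) pairs.
        items: list[tuple[str, int]] = field(default_factory=list)

    # A naive one-to-one mapping would stuff `items` into a single serialized
    # column; a storage-friendly design normalizes it into its own table:
    SCHEMA = """
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY
    );
    CREATE TABLE order_items (
        order_id INTEGER REFERENCES orders(id),
        sku      TEXT NOT NULL,
        qty      INTEGER NOT NULL
    );
    """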
> write documentation for these methods
The intent of documentation is to explain how to use something and the multiple whys behind an implementation. Listing what is there can be done with a symbol explorer. Repeating what is obvious from a function's name is not helpful, and hallucinating something that is not there is harmful.
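To make the contrast concrete (hypothetical Python, not from the thread): a docstring that restates the name adds nothing a symbol explorer wouldn't show, while a useful one records the "why" that is invisible in the code.

    # Restates the name; a symbol explorer already tells you this much.
    def retry_delay(attempt: int) -> float:
        """Return the retry delay for the given attempt."""
        return min(2 ** attempt, 30)

    # Records the reasoning that cannot be read off the signature.
    def retry_delay_with_rationale(attempt: int) -> float:
        """Exponential backoff, capped at 30 seconds so a long failure
        streak cannot stall a worker indefinitely."""
        return min(2 ** attempt, 30)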
> write test examples
Again, the type of tests matters more than the amount. So unless you're sure that the tests are correct and that the test suite really ensures the code is viable, it's all for naught.
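A small hypothetical Python example of what "the type of tests matters" means: a test that mirrors the implementation proves nothing, while one pinned to an independently known expectation actually guards the behaviour.

    def apply_discount(price: float, percent: float) -> float:
        return price * (1 - percent / 100)

    def test_mirrors_implementation():
        # Worthless: recomputes with the same formula, so it passes even if
        # the business rule itself is wrong.
        assert apply_discount(200, 50) == 200 * (1 - 50 / 100)

    def test_known_expectation():
        # Meaningful: the expected value comes from outside the implementation.
        assert apply_discount(200, 50) == 100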
...
Your use cases assume that the output is correct. And as the hallucination risk of LLMs is non-zero, such an assumption is harmful.
As for the documentation part: I infer that you haven't used state-of-the-art models, have you? They do not write symbol docs mechanistically. They understand what the code is _doing_. Up to their context limits, which are now 128k tokens for most models. Feed them 128k of code and more often than not they will understand what it is about. In seconds (compared to hours for humans).