Posted by briankelly 4/3/2025
Coding by prompt is the next lowering of the bar, and vibe coding even more so. Totally great in some scenarios, and just adds noise in others.
Actually coding is a relatively small part of my job. I could use an LLM for the other parts, but my employer does not appreciate being given word salad.
I'm blasting through tickets, leaving more time to tutor and help junior colleagues and do refactoring. Guiding them has been a multiplier, and also a bit of an eye-opener about how little real guidance they'd been getting until now. I hadn't realised how resource-constrained we'd been as a team, leaving too little time for guiding and helping them.
I don't trust the tools with writing code very often, but they are very good at architecture questions, outputting sample code, etc. A supercharged Google.
As a generalist, I feel less overwhelmed
It's probably been the most enjoyable month at this job.
Those lamenting the loss of manual programming: we are free to hone our skills on personal projects, but for corporate/consulting work, you cannot ignore a 5x speed advantage. It's over. AI-assisted coding won.
Otherwise, it can be 0.2x in some cases. And you should not use LLMs for anything security-related unless you are a security expert, otherwise you are screwed.
(this is SOTA as of April 2025, I expect things to become better in the near future)
If you know the programming language really well, that usually means you know which libraries are useful, have memorized common patterns, and have some sample projects lying around. The actual speed improvement would be in typing the code, but that's usually the activity that requires the least time on any successful project. And unless you're a slow typist, I can't see 5x there.
If you're lacking in the fundamentals, then it's just a skill issue, and I'd be suspicious of the result.
Everything boring can be automated and it takes five seconds compared to half an hour.
> Given this code, extract all entities and create the database schema from these
Sometimes, the best representation for storing and loading data is not the best for manipulating it and vice-versa. Directly mapping code entities to database relations (assuming it's SQL) is a sure way to land yourself in trouble later.
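A toy Python sketch of that mismatch (the `Order` model and table shapes are hypothetical, just for illustration): the nested shape is convenient to manipulate in code, but storing it one-to-one, lines and all, in a single `orders` table would bury the line items in a blob; the relational shape wants them split out into their own table.

```python
from dataclasses import dataclass

# Hypothetical domain model: convenient for manipulation,
# because line items live nested inside the order object.
@dataclass
class OrderLine:
    sku: str
    qty: int

@dataclass
class Order:
    order_id: int
    lines: list[OrderLine]

# For SQL storage, the nested entity is flattened into two
# normalized row sets, so queries like "total qty of one SKU
# across all orders" stay cheap.
def to_rows(order: Order) -> tuple[dict, list[dict]]:
    order_row = {"order_id": order.order_id}
    line_rows = [
        {"order_id": order.order_id, "sku": line.sku, "qty": line.qty}
        for line in order.lines
    ]
    return order_row, line_rows

order = Order(1, [OrderLine("ABC", 2), OrderLine("DEF", 1)])
order_row, line_rows = to_rows(order)
```

An LLM asked to "create the schema from these entities" tends to mirror the in-memory shape directly, which is exactly the trap.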
> write documentation for these methods
The intent of documentation is to explain how to use something and the multiple whys behind an implementation. Listing what is there can already be done with a symbol explorer. Repeating what is obvious from the name of the function is not helpful, and hallucinating something that is not there is harmful.
> write test examples
Again, the kind of tests matters more than the amount. So unless you're sure that each test is correct and the test suite really ensures that the code is viable, it's all for naught.
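A minimal Python sketch of the difference (the `apply_discount` function is made up for illustration): a generated test often just restates the implementation, so both share any bug; a useful test pins independently known answers and edge cases.

```python
# Hypothetical function under test.
def apply_discount(price: float, pct: float) -> float:
    """Return price reduced by pct percent, rounded to cents."""
    return round(price * (1 - pct / 100), 2)

# Low-value test: it restates the formula, so a bug in the
# implementation would be mirrored here and still pass.
assert apply_discount(80, 25) == round(80 * (1 - 25 / 100), 2)

# Higher-value tests: pin independently known answers and edge
# cases the code must respect no matter how it's written.
assert apply_discount(80, 25) == 60.0
assert apply_discount(80, 0) == 80.0
assert apply_discount(80, 100) == 0.0
```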
...
Your use cases assume that the output is correct. And since the hallucination risk from LLMs is non-zero, that assumption is harmful.
As for the documentation part: I take it you haven't used state-of-the-art models? They do not write symbol docs mechanistically. They understand what the code is _doing_, up to their context limits, which are now 128k tokens for most models. Feed them 128k of code and more often than not they will understand what it is about. In seconds (compared to hours for humans).
What the code is doing is important only when you intend to modify it. Normally, what's important is how to use it. That's the whole point of design: presenting an API that hides how things happen in favor of making it easy (natural) to do something. The documentation should focus on that abstract design and its relation to the API. The concrete implementation rarely matters if you're on the other side of the API.
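A tiny Python sketch of that stance (the `retry` helper is hypothetical): the docstring states the contract callers rely on, and deliberately says nothing about how the retrying happens internally.

```python
def retry(fn, attempts=3):
    """Call `fn` until it returns without raising, at most `attempts` times.

    Contract for callers: the result of the first successful call is
    returned; if every attempt fails, the last exception propagates.
    How attempts are spaced or logged internally is not part of the
    documented API and is free to change.
    """
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise

# Usage: a flaky callable that succeeds on its third invocation.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ValueError("transient failure")
    return "ok"

result = retry(flaky)
```

Documentation generated from the function body tends to do the opposite: it narrates the loop and the exception handling, which is exactly the part a caller shouldn't need to read.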
Rob Pike has the right idea but the wrong execution. As the amount of second and third party code we use increases, the search time goes up, and we need better facilities to reduce the amount of time you need to spend looking at the internals of one package because you need that time to look at three others. So clarity and discoverability both need to matter, and AI has no answers here, only more problems.
IMO, a lot of the success of Java comes from having provided 80% of the source code with the JDK. You could spend so much time single stepping into code that was not yours to figure out why your inputs didn’t cause the outputs you expected. But those are table stakes now.
> For controllers, I might include a small amount of essential details like the route name: [code]
Commit history: https://github.com/dx-tooling/platform-problem-monitoring-co...
Look, I honestly think this is a fair article and some good examples, but what is with this inane “I didn’t write any of it myself” claim that is clearly false that every one of these articles keeps bringing up?
What’s wrong with the fact you did write some code as part of it? You clearly did.
So weird.
What I wanted to express was that I didn’t do any of the implementation, that is, any logic.
I need to phrase this better.
Sometimes I liken the promise of AI to my experience with stereoscopic images (I have never been able to perceive them) -- I know there's something there but I frequently don't get it.
We all know how big companies handle software: if it works, ship it. Basically, once this shit starts becoming very mainstream, companies will want to shift into their 5x modes (for their oh-so-holy investors that need to see the stock go up, obviously).
So once this sloppy prototype is seen as working, they will just ship the shit-sandwich prototype. And the developers won't know what the hell it means, so when something breaks in the future (and that's a when, not an if), they will need AI to fix it for them, because once again they do not understand what is going on.
What I’m seeing here is you proposing replacing one of your legs with AI and letting it do all the heavy lifting, just so you can lift heavier things for the moment.
Once this bubble crumbles, the technical debt will be big enough to sink companies. I won't feel sorry for any of the AI boosties, but I do for their families that will go into poverty.
When the good feeling fades and you need to up the dosage, you will find that your ability to function is declining and your dependency on the generative tools is increasing. Besides, no one is thinking about the end game. If (and it's a big if) this goes to plan and these generative tools can do everything, then at that point the only software needed is the generative tool itself, isn't it? There would be no need for anything else, so anyone building stuff on top of it, or using it to build stuff, would be SOL.
So best case, we all get addicted to a fundamentally flawed technology because our ability to function independently has eroded too far; worst case, only the foundation-model companies are left operating in software.