Posted by Anon84 5 days ago
A lot of places skip creation and maintenance of decent observability - that's code.
We can now easily use advanced, code-heavy testing techniques like property testing - code.
We can create environmental simulations to speed up and improve integration testing - code.
We can raise internal abstraction levels, replacing boilerplate with frameworks and DSLs - code.
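To make the property-testing point concrete, here is a minimal stdlib-only sketch of the technique: generate many random inputs and check an invariant, rather than asserting on a few hand-picked examples. (Real projects would use a library like Hypothesis, which adds shrinking and smarter generators.) The `run_length_encode` function and its round-trip invariant are hypothetical examples chosen for illustration, not something from the thread.

```python
import random
import string

def run_length_encode(s):
    """Toy function under test: collapse runs of repeated characters."""
    encoded = []
    for ch in s:
        if encoded and encoded[-1][0] == ch:
            encoded[-1] = (ch, encoded[-1][1] + 1)
        else:
            encoded.append((ch, 1))
    return encoded

def run_length_decode(pairs):
    return "".join(ch * n for ch, n in pairs)

def check_roundtrip_property(trials=500, seed=42):
    """Property: decoding any encoding returns the original string."""
    rng = random.Random(seed)
    for _ in range(trials):
        # Bias toward a small alphabet so runs of repeats actually occur.
        length = rng.randrange(0, 30)
        s = "".join(rng.choice("aab" + string.ascii_lowercase) for _ in range(length))
        assert run_length_decode(run_length_encode(s)) == s, f"counterexample: {s!r}"
    return trials

check_roundtrip_property()
```

A hand-written unit test would check two or three inputs; the property check exercises hundreds, and a failing run prints the counterexample so it can be added as a regression case.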
The flashing red dot on the web page is very annoying. Is there some design reason for that?
edit: I meant the <svg> inside `trail-map-container`
https://www.thetypicalset.com/blog/grammar-parser-maintenanc...
Solid red dots are articles you've visited.
I don't think this sentence speaks for me. This is the sort of thing I love to do.
Proving that the bottleneck was, in fact, the code. It's just that the AI wrote it now.
The person who thought "the bottleneck wasn't the code" already had the goal discussed and coherent in their mind.
Code as bottleneck doesn't have to mean "I wanted this feature but it took me many months to finally code it". It is also "I wanted this feature for 2 years, but the friction of sitting down to put it in code and spending 5-10 days on it put me off".
If the code wasn't the bottleneck, they could just sit down and write it themselves. But they didn't want to go through the effort and time of coding it themselves, as they knew it would take far longer than with the LLM.
(And even when you don't have a clear final spec in mind, the exploratory code+check+discard+retry-new-design loop is also faster with an LLM, precisely because the "code" part is.)
In other words, the code was the bottleneck.
The post appears AI-generated itself, just with instructions to avoid obvious constructions, which still makes for tedious reading.
The error in the reasoning is that while you can increase your resources tenfold and gain nothing in return, the inverse is not necessarily true.
This is merely speed of development, not the velocity of a company toward higher value. There are many PMs confidently writing elaborate requirements (using the same AI tools) without a clear, deep understanding of the user problems, of why the requirements will be adopted by their target users, or even of who the target users really are.
So yes, this will lead to faster end-to-end execution. But whether the product is used or sits unused will depend on things beyond the above.
I'm also skeptical that development velocity is so separate from all those other things (context, stakeholder alignment, etc.). It's much easier to get actionable feedback when you have a prototype.
The bottleneck is always decision making and human review when multiple humans are involved. This is especially true when we are all trying to build agentic / LLM-based systems where the outcomes are highly varied and it's impossible to write easy tests to automatically check quality or benchmark progress.
I'm not sure a business is helped by documentation that agents distill from (hopefully present) PR descriptions and comments in JIRA, or wherever this context is supposed to be reverse-engineered from.