Posted by jinhkuan 4 days ago

Coding assistants are solving the wrong problem (www.bicameral-ai.com)
188 points | 147 comments | page 2
b1temy 2 days ago|
> most tech debt isn’t actually created in the code, it’s created in product meetings. Deadlines. Scope cuts.

> When asked what would help most, two themes dominated

> Reducing ambiguity upstream so engineers aren’t blocked...

I do wonder how much LLMs would help here; this seems, to me at least, to be a uniquely human problem. Humans (managers, leads, owners, what have you) are the ones who interpret requirements and decide deadlines, features, and scope cuts, and they are the ones liable for those decisions.

What could an LLM do to reduce ambiguity upstream? If it were trained on information about the requirements, that same information could just be documented somewhere for engineers to refer to. And if it hallucinated or "guessed" an answer without asking a person for clarification, and that answer turned out to be wrong, who would be responsible for it? IMO, the bureaucracy of waiting for clarification mid-implementation is a necessary evil. Clever engineers, through experience, might try to implement things in an open-ended way that can easily absorb the changes they predict will happen.

As for the second point,

> A clearer picture of affected services and edge cases

> three categories stood out: state machine gaps (unhandled states caused by user interaction sequences), data flow gaps, and downstream service impacts.

I'd agree. Perhaps when a system is complex enough, and a developer is laser-focused on a single component of it, it is easy to miss gaps that only show up when other parts of the system are used in conjunction with it. I remember a while ago it was a popular take that LLMs were a useful tool for generating unit tests, because tests are usually repetitive and because LLMs were usually good at finding edge cases to cover, some of which a developer might have missed.
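To make the "state machine gap" idea concrete, here is a minimal sketch (mine, not from the article) of the kind of test an assistant might generate: a toy checkout state machine plus a test that enumerates every (state, event) pair and flags the transitions nobody handled. All of the states, events, and names are made up.

    // Toy checkout flow; variants and events are hypothetical.
    #[derive(Debug, Clone, Copy)]
    enum Checkout {
        Cart,
        PaymentPending,
        Paid,
        Cancelled,
    }

    #[derive(Debug, Clone, Copy)]
    enum Event {
        SubmitPayment,
        PaymentConfirmed,
        UserCancelled,
    }

    // Returns None for combinations the implementation never considered;
    // those are exactly the gaps a reviewer (or a test generator) wants surfaced.
    fn step(state: Checkout, event: Event) -> Option<Checkout> {
        use Checkout::*;
        use Event::*;
        match (state, event) {
            (Cart, SubmitPayment) => Some(PaymentPending),
            (PaymentPending, PaymentConfirmed) => Some(Paid),
            (Cart, UserCancelled) | (PaymentPending, UserCancelled) => Some(Cancelled),
            // e.g. (Paid, UserCancelled) falls through: an unhandled interaction sequence.
            _ => None,
        }
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn every_state_handles_every_event() {
            let states = [
                Checkout::Cart,
                Checkout::PaymentPending,
                Checkout::Paid,
                Checkout::Cancelled,
            ];
            let events = [
                Event::SubmitPayment,
                Event::PaymentConfirmed,
                Event::UserCancelled,
            ];
            let mut gaps = Vec::new();
            for &s in &states {
                for &e in &events {
                    if step(s, e).is_none() {
                        gaps.push((s, e));
                    }
                }
            }
            // This will fail until every (state, event) pair is handled;
            // each entry in `gaps` is something to triage, not necessarily a bug.
            assert!(gaps.is_empty(), "unhandled transitions: {gaps:?}");
        }
    }

The point is less the specific assertion than that mechanically enumerating combinations is exactly the kind of repetitive coverage work a model is decent at, and a human is prone to skip.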

---

I will say, it is refreshing to see a take on coding assistants being used for other aspects of development instead of just writing code, which, as the article pointed out, came with its own set of problems (increased inefficiencies in other parts of the development lifecycle, potential AI-introduced security vulnerabilities, etc.).

znnajdla 2 days ago||
It is entirely possible for someone to be 2x faster at coding with AI without increasing their throughput. I believe Claude Code has made me at least 2x as productive, but I can easily see why a 2x increase in individual development speed may not translate into any overall increase at the level of the whole organization: at most organizations the bottleneck is not code, it's everything else, from politics to external blockers to bikeshedding and people's egos. So a developer who is 2x faster at their work may just end up with more free time on their hands. The greatest increases in productivity and throughput come where there are no external blockers, like random side projects, which is exactly where people report the greatest productivity with AI.
helloplanets 4 days ago||
The writeup is a bit contrived, in my opinion, and it sort of misrepresents what users can do with tools like Claude Code.

Most coding assistant tools are flexible enough to support these kinds of workflows, which are even brought up in Anthropic's own examples of how to use Claude Code. Any experienced dev knows that the act of writing code itself is a small part of creating a working program.

zmmmmm 4 days ago||
This concept of bottlenecking on code review is definitely a problem.

Either you (a) don't review the code, (b) invest more resources in review, or (c) hope that AI assistance in the review process increases efficiency there enough to keep up with code production.

But if none of those work, all AI assistance does is bottleneck the process at review.

ozlikethewizard 4 days ago||
Also the thought of my job becoming more code review than anything else is enough to turn me into a carpenter.
ares623 4 days ago||
If companies truly believed that more code equals more productivity, they would remove all code review from their process and let ICs ship AI-generated code straight to prod, with the prompter as the only "reviewer".
rk06 4 days ago||
You mean to staging, right? Even non-AI code can't be trusted at the "straight to prod on Friday evening" level.
fpoling 4 days ago||
I have found it rather helpful to use Cursor to write in Rust what I would previously have written as a shell, Python, or jq script.

The datasets are big, and having the scripts written in a performant language to process them saves non-trivial amounts of time: waiting just 10 minutes versus an hour.

The initial code style in the scripts was rather ugly, with a lot of repeated code. But with enough prompting to reuse code, the generated scripts became readable enough that I could quickly check they were indeed doing what was required and alter them manually.
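For illustration, here is a rough sketch of the kind of script I mean: a small Rust binary that streams newline-delimited JSON from stdin and tallies records by a field. The input shape and the "status" field are invented, and it assumes serde_json is declared in Cargo.toml.

    use std::collections::HashMap;
    use std::io::{self, BufRead};

    fn main() -> io::Result<()> {
        let stdin = io::stdin();
        let mut counts: HashMap<String, u64> = HashMap::new();

        for line in stdin.lock().lines() {
            let line = line?;
            if line.trim().is_empty() {
                continue;
            }
            // Skip malformed lines instead of aborting a long batch run.
            let Ok(value) = serde_json::from_str::<serde_json::Value>(&line) else {
                continue;
            };
            let status = value
                .get("status")
                .and_then(|v| v.as_str())
                .unwrap_or("unknown")
                .to_owned();
            *counts.entry(status).or_insert(0) += 1;
        }

        // One "status<TAB>count" line per key, roughly what the jq version produced.
        for (status, n) in &counts {
            println!("{status}\t{n}");
        }
        Ok(())
    }

Streaming line by line keeps memory flat on big inputs, and the same structure extends to whatever aggregation the real dataset needs.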

But prompting it to make non-trivial changes to an existing code base was a time sink. It took too much time to explain and correct the output, and, critically, the prompts cannot be reused.

Havoc 3 days ago|
Same, though I lately discovered some rough edges in Rust with LLMs. Sticking a working app into a from-scratch container image seems particularly problematic, even if you give it the hint that it needs to link statically.
LorenPechtel 1 day ago||
I have long said that the fundamental job of a programmer is to translate sloppy requirements into bulletproof logic.

And AI has no concept of this.

williamcotton 3 days ago||
> Experienced developers were 19% slower when using AI coding assistants—yet believed they were faster

One paper sure is doing a lot of legwork these days...

jfyi 3 days ago|
You know, anecdotally...

When I first picked up an agentic coding assistant I was very interested in the process and paid way more attention to it than necessary.

I quickly caught myself treating it like a long compilation (getting up to grab a coffee) and had to self-correct that behavior.

I wonder how much the novelty of the tech and workflow plays into this number.

newswasboring 4 days ago||
Isn't this proposal a close match for the approach OpenSpec is taking? (Possibly other SDD toolkits too; I'm just familiar with this one.) I spend way more time making my spec artifacts (proposal, design, spec, tasks) than I do in code review. During generation of each of these artifacts the code is referenced, which surfaces at least some of the issues that are purely architectural.
28304283409234 4 days ago||
I barely use AI as a coding assistant. I use it as a product owner. Works wonders, especially in this age of clueless product owners.
tankenmate 3 days ago|
Some of the conclusions remind me of the "ha ha only serious" joke that most people (obviously not the Monks themselves) made about Perl: "write-only code". Maybe some of the lessons learnt about how to maintain Perl code are applicable in this space?