
Posted by yuedongze 7 days ago

AI should only run as fast as we can catch up(higashi.blog)
198 points | 181 comments
awesome_dude 7 days ago|
It's like a buffered queue: if the producer (the AI) is too fast for the consumer (the dev's brain), then the producer needs to block/stop/slow down, otherwise data will be lost (in this analogy, the data loss is the consumer no longer having a clear understanding of what the code is doing).
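The analogy maps cleanly onto a bounded queue with backpressure. A minimal sketch in Python (names and buffer size are invented for illustration): the producer blocks when the small buffer fills, so the slow consumer never drops anything.

```python
import queue
import threading
import time

# Small bounded buffer: stands in for comprehension capacity.
buf = queue.Queue(maxsize=3)

def producer(items):
    for item in items:
        buf.put(item)   # blocks when the queue is full (backpressure)
    buf.put(None)       # sentinel: no more items

def consumer(results):
    while True:
        item = buf.get()
        if item is None:
            break
        time.sleep(0.01)  # the slow part: actually understanding the code
        results.append(item)

results = []
t1 = threading.Thread(target=producer, args=(range(10),))
t2 = threading.Thread(target=consumer, args=(results,))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # every item arrives, because the producer was throttled
```

Drop the `maxsize` (an unbounded queue) and the producer never slows down; that is the "AI outruns the dev" failure mode in queue form.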

One day, when AI becomes reliable (which is still a while off, because AI doesn't yet understand what it's doing), the AI will replace the consumer (IMO).

FTR - AI is still at the "text matches another pattern of text" stage, not the "understands what concepts are being conveyed" stage, as demonstrated by AI's failures at basic arithmetic.

kristjank 7 days ago||
This feeling of verification >> generation anxiety resembles that moment when you're learning a foreign language: you speak a well-prepared sentence, and your correspondent says something back of which you understand only about a third.

In like fashion, when I start thinking of a programming statement (as a bad/rookie programmer) and an assistant completes my train of thought (as is default behaviour in VS Code for example), I get that same feeling that I did not grasp half the stuff I should've, but nevertheless I hit Ctrl-Return because it looks about right to me.

yuedongze 7 days ago|
> because it looks about right to me

This is something one can look into further. It's really probabilistically checkable proofs underneath: we naturally look for the places where the work needs to look right, and use those spot checks as a basis for assuming the work was done right.

Yoric 7 days ago||
> Maybe our future is like the one depicted in Severance - we look at computer screens with wiggly numbers and whatever “feels right” is the right thing to do. We can harvest these effortless low latency “feelings” that nature gives us to make AI do more powerful work.

Come to think of it... isn't this exactly what syntax coloring and proper indentation are all about? The ability to quickly pattern-spot errors, or at least smells, based on nothing but aesthetics?

I'm sure that there is more research to be done in this direction.

bitwize 6 days ago|
Our future is going to be like the one in Severance: no creation, no craftsmanship, just grooming the grids of numbers that do the actual work.
yannyu 7 days ago||
I think there's a lot of utility in current AI tools, but it's also clear we're in a very unsettled phase of this technology. We likely won't know for years where the technology lands in terms of capability, or what changes society and industry will make to accommodate it.

Somewhat unfortunately, the sheer amount of money being poured into AI means that it's being forced upon many of us, even if we didn't want it. Which results in a stark, vast gap like the author is describing, where things are moving so fast that it can feel like we may never have time to catch up.

And what's even worse, because of this, industry and individuals are now trying to have the tool correct and moderate itself, which intuitively seems wrong from both a technical and a societal standpoint.

pglevy 7 days ago||
I've been thinking about something like this from a UI perspective. I'm a UX designer working on a product with a fairly legacy codebase. We're vibe coding prototypes and moving towards making it easier for devs to bring in new components. We have a hard enough time verifying the UI quality as it is. And having more devs vibing on frontend code is probably going to make it a lot worse. I'm thinking about something like having agents regularly traversing the code to identify non-approved components (and either fixing or flagging them). Maybe with this we won't fall further behind with verification debt than we already are.
cousinbryce 6 days ago||
That’s just static code analysis with extra steps
huflungdung 6 days ago||
[dead]
gijoeyguerra 6 days ago||
Make it do TDD. That'll slow it down.
vjvjvjvjghv 6 days ago|
Make it do scrum with sprint planning, retrospectives and sprint demos. And then another AI as product owner and scrum master. Ideally this AI has only a vague idea of what the product needs to do, or of the technology, but still has decision power. That should really slow it down.
theshrike79 6 days ago||
This was done already, they're called "Agent networks" or whatever buzzword the dev decided to give their abomination :)
trjordan 7 days ago||
The verification asymmetry framing is good, but I think it undersells the organizational piece.

Daniel works because someone built the regime he operates in. Platform teams standardized the patterns, defined what "correct" looks like, built test infrastructure that makes spot-checking meaningful, and so on. That's not free.

Product teams are about to pour a lot more slop into your codebase. That's good! Shipping fast and messy is how products get built. But someone has to build the container that makes slop safe, and have levers to tighten things when context changes.

The hard part is you don't know ahead of time which slop will hurt you. Nobody cares if product teams use deprecated React patterns. Until you're doing a migration and those patterns are blocking 200 files. Then you care a lot.

You (or rather, platform teams) need a way to say "this matters now" and make it real. There's a lot of verification that's broadly true everywhere, but there's also a lot of company-scoped or even team-scoped definitions of "correct."

(Disclosure: we're working on this at tern.sh, with migrations as the forcing function. There's a lot of surprises in migrations, so we're starting there, but eventually, this notion of "organizational validation" is a big piece of what we're driving at.)

gaigalas 6 days ago||
Prompt engineering: just basic articulation skills.

Context engineering: just basic organization skills.

Verification engineering: just basic quality assurance skills.

And so on...

---

"Eric" will never be able to fully use AI for development because he lacks knowledge about even the most basic aspects of the developer's job. He's a PM after all.

I understand that the idea of turning everyone into instant developers is super attractive. However, you can't cheat learning. If you give an edge to non-developers for development tasks, it means you will give an even sharper edge to actual developers.

booleandilemma 6 days ago|
This is true. I've been anti-AI, but I started using it recently as an alternative to Stack Overflow (because Google is shoving it down my throat via search results). It's pretty effective. It does get things wrong from time to time, but then I just fix it up manually. I can't claim it's making me 100x more productive or anything like that. It's just a nice alternative to scrolling through SO answers looking for the one with the green checkmark.

I still find it sad when people use it for prose though.

gaigalas 6 days ago||
If an agent gets things wrong you should stop it and correct it instead.

Sometimes the correction will cost more than starting from scratch. In those cases, you start from scratch.

You do things manually only when novel work is required (the model is unlikely to be trained with the knowledge). The more novel the thing you're doing, the more manual things you have to do.

Identifying "cost of refactoring", and "is this novel?" are also developer skills, so, no formula here. You have to know.

bamboozled 6 days ago|
I'm starting to come to the realization that unless there is a bottom to the amount of work people want done, AI doesn't really change the picture. There just seems to be a never-ending supply of work, so I'm not sure how AI would resolve this.