Posted by speckx 19 hours ago

AI coding is gambling (notes.visaint.space)
330 points | 399 comments
irarrazaval26 16 hours ago|
[flagged]
lokimoon 17 hours ago||
h1b coding is ignorance.
extr 18 hours ago||
I mean, this completely falls apart when you're trying to do something "real". I am building a trading engine right now with Claude/Codex. I have not written a line of code myself. However I care deeply about making sure everything works well because it's my money on the line. I have to weigh carefully the prospect of landing a change that I don't fully understand.

Sometimes I can get away with 3K LoC PRs, sometimes I take a really long time on a +80 -25 change. You have to be intellectually honest with yourself about where to spend your time.

vermilingua 16 hours ago||
Not only is it gambling, it has the full force of the industry that built the attention market behind it. I find it extremely hard to believe that these tools have not been optimised to keep developers prompting the same way TikTok keeps people scrolling.
bensyverson 18 hours ago||
This "slot machine" metaphor is played out. If you're just entering a coin's worth of information and nudging it over and over in the hopes of getting something good, that's a you problem, not a Claude problem.

If, on the other hand, you treat it like a hyper-competent collaborator, and follow good project management and development practices, you're golden.

ctoth 18 hours ago||
> If, on the other hand, you treat it like a hyper-competent collaborator, and follow good project management and development practices, you're golden.

I am consistently using 100% of my weekly $200 max plan. I know how this thing works, I know how to get value out of it, and I wish what you said were true.

If you do all of these things? You are in a better spot. You are in a far better spot than if you hadn't! Setting up hooks to ensure notes get written? Massive win! Red-green TDD? Yes, please! But in terms of just ... well, being able to rely on the damn thing?
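
The red-green loop mentioned above is one of the few hard checks you can put around generated code: write a failing test first, then accept only the implementation that makes it pass. A minimal sketch (the function name `slugify` and its spec are invented for illustration, not from the thread):

```python
import re

# RED: write the test first and watch it fail before any
# implementation exists. An agent's output must satisfy it.
def test_slugify():
    assert slugify("AI Coding Is Gambling") == "ai-coding-is-gambling"
    assert slugify("  spaces  &  symbols! ") == "spaces-symbols"

# GREEN: the minimal implementation that makes the test pass,
# whether written by hand or by the model.
def slugify(title: str) -> str:
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

test_slugify()
```

The point isn't the function; it's that the test, written before the code, is the one artifact you still fully understand and can rely on.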

https://github.com/ctoth/claude-failures

bensyverson 17 hours ago||
[dead]
james2doyle 18 hours ago|||
_hyper-competent collaborator who may occasionally make things up completely and will sometimes give different answers to the same question_
bensyverson 17 hours ago||
So, indistinguishable from a human then
bigstrat2003 16 hours ago||
No. A competent human doesn't make things up, he admits ignorance. He also only very rarely changes answers he previously gave.
rustyhancock 18 hours ago||
Life is full of variable reward schemes. Probably why we evolved to be so enamoured by them.

In a healthy environment, we are harmed more by being totally risk averse than by accepting risk as part of life and work.

DiscourseFan 18 hours ago||
When code doesn't compile, it doesn't kill anyone. But if a Waymo suddenly veers off the road, it creates a real threat. Waymos had to be safer than human drivers for people to begin to trust them. Coding tools did not have to be better than humans to be adopted. It's entirely possible for a human to make a catastrophic error. I imagine in the future it will be more likely that a human makes such errors, just as it's more likely that a human will make more errors driving a car.
Verdex 17 hours ago|
My understanding is that Waymo has gone on the record to say that they have human operators who remotely drive the vehicle in scenarios where the automated system is confused.

Which I assert is semantically equivalent to saying: Human drivers (even when operating at the diminished capacity of not even being present in the car) are less likely to make errors driving a car than AIs.

krupan 17 hours ago||
This is getting off topic but they did not say the remote humans drive the cars. The cars always drive themselves, the remote humans provide guidance when the car is not confident in any of the decisions it could make. The humans define a new route or tell the car it's ok to proceed forward
CraftingLinks 18 hours ago||
I see whole teams, pushed by the C-level, going all in on spec-driven + TDD development. The devs hate it because they are literally forbidden to touch a single line of code. But the results speak for themselves: it just works, and the pressure has shifted to the product people to keep up. The whole tooling to enable this had to be worked out first. All in Cursor, with heavy use of a tool called Speckit, connected to Notion and Jira to pump out documentation.
RealityVoid 18 hours ago||
> literally forbidden to touch a single line of code.

That is extremely stupid. What does that ban get you? I react to this because a friend mentioned exactly this. And I was dumbfounded.

CraftingLinks 15 hours ago|||
I don't think it's implemented that harshly or enforced that hostilely, but they have these strict procedures now on how the code is to be developed. The procedure they follow is all centered around automated code generation. So they simply... don't write code anymore in practice; it is not part of the job description, so to speak. He wasn't happy, I can tell, but he also acknowledged it was working very well.
ryandrake 18 hours ago||||
It seems like just a CxO dick measuring exercise.

CEO1: "We allow our engineers to use AI for all work."

CEO2: "Oh yea? We mandate our engineers use AI for at least N% of their work!"

CEO3: "You think that's good? We mandate our engineers use AI for all code!!"

CEO4: "Pfff, amateurs. We don't even allow our engineers to open source code editors or even look at the LLM output..."

CraftingLinks 15 hours ago||
I also thought it was pushing it to the limit, but I think this is just the founder of a successful company deciding engineering was going to transform to this way of working. A huge bet, but the implementation didn't feel amateurish or ad hoc. Just not very pleasant for most devs to work that way. I'm sure some will look elsewhere. I know I would!
comboy 18 hours ago|||
> That is extremely stupid. What does that ban get you?

Confidence in firing coders, I presume...

CraftingLinks 15 hours ago||
They are hiring "architects", or do we call them analysts. The impression is we're going back to analysts drawing those old-school UML-like diagrams, etc. Also, a lot of the devs are on the brink of just quitting, because it's "not programming" anymore. So, not only will you still need devs, or people massaging those specs, you'll also need enough "product" people to keep that engine fed! If your management isn't lazy, I can see headcount continuing to rise within such companies. That doesn't mean the work will be ...satisfying for devs.
bigstrat2003 16 hours ago|||
> but the results speak for themselves, it just works

The results do speak for themselves, but it doesn't work.

rsoto2 16 hours ago||
yeah i'm not gonna be an AI company's guinea pig just because the c-suite wants to sign me up. "the results"? you mean AI psychosis and the Dunning-Kruger effect?
CraftingLinks 15 hours ago||
Like I said, devs don't like it. He said productivity went up 3-4x. "It works". There was no question of denying that as far as he was concerned. At the same time he was going to look for another job as it was just painful to work like that.
post-it 17 hours ago|
> But this doesn't really resemble coding. An act that requires a lot of thinking and writing long detailed code.

Does it? It did in the past. Now it doesn't. Maybe "add a button to display a colour selector" really is the canonical way to code that feature, and the 100+ lines of generated code are just a machine language artifact like binary.

> But it robs me of the part that’s best for the soul. Figuring out how this works for me, finding the clever fix or conversion and getting it working. My job went from connecting these two things being the hard and reward part, to just mopping up how poorly they’ve been connected.

Skill issue. Two nights ago, I used Claude to write an iOS app to convert Live Photos into gifs. No other app does it well. I'm going to publish it as my first app. I wouldn't have bothered to do it without AI, and my soul feels a lot better with it.