Posted by speckx 2 hours ago

AI coding is gambling (notes.visaint.space)
208 points | 231 comments
itsgrimetime 1 hour ago|
All of this new capability has made me realize that the reason I love programming _isn't_ the same as the OP's. I used to think (and tell others) that I loved understanding something deeply, wading through the details to figure out a tough problem. But actually, being able to will anything I can think of into existence is what I love about programming. I do feel for the people who were able to make careers out of falling in love w/ and getting good at picking problems & systems apart, breaking them down, and understanding them fully. I respect the discipline, curiosity, and intellect they have. But I'm also elated w/ where things are at/going. This feels absurd to say, but I finally feel like I'm _good_ at programming, which is insane, because I literally haven't written a line of code myself in months, but having tools that can finally match the speed my ideas come to me is intoxicating.
applfanboysbgon 50 minutes ago||
> but I finally feel like I'm _good_ at programming, which is insane

Yes, it is insane. You couldn't torture this confession out of me. But that's the drug they're selling you, isn't it? You don't even write code, but you're getting a self-inflated sense of worth. It must be addicting! Of course, whether or not the programs you prompt are actually good surely has no relation to whether you feel they're good, since you're not the one writing them, and apparently were not capable of writing them before so are not qualified to review them very much.

> having tools that can finally match the speed my ideas come to me

Anyone can be an "ideas guy". We laughed at those people, because having ideas is not the hard part. The hard part was in all of the hundreds and thousands of little details that go into building the ideas into something actually worthwhile, and that hasn't changed. LLMs can build an idea into a prototype in a weekend. I am still waiting to see LLMs build an idea into something other people use at scale, once, ever, other than LLM wrappers. Either every person who is all-in on vibes only has ideas that consist of making .md files and publishing them as a "meta agent framework", or LLMs are not actually doing a great job of translating ideas into tangibly useful software.

542458 42 minutes ago|||
> Anyone can be an "ideas guy".

I disagree with this. I've worked with amazing "ideas guys" who just cranked out customer insights and interesting concepts, and I've worked with lousy ones, who just kinda meandered and never had a focused vision beyond a milquetoast copy of the last thing they saw. There's a real skill to forming good concepts, and it's not a skill everyone has!

applfanboysbgon 20 minutes ago||
I do agree that having good ideas is a skill in its own right. But people with bad ideas are idea guys too! You see them all the time in the indie game development scene in particular. "I need a programmer, and an artist, and a composer, to build this amazing idea for me!", together with an 8-paragraph wall of text (the paragraph breaks are there if you're lucky) describing the idea, and as you'd expect from somebody who couldn't be bothered to develop a single skill, their game ideas are exactly as good as their programming, art, and music.

I find that the strength of people's ideas tends to be highly correlated with their overall skills. I don't know that you can develop the capability for good ideas without getting your hands dirty learning a field, experimenting, absorbing all kinds of information and understanding what really goes into the making of a good idea. In that way, the person with good ideas always ends up being more than just an ideas guy. They don't just have good ideas, they have good ideas and the skills to back them up. Whereas the "ideas guy" label is usually applied to people who have nothing to bring to the table other than their ideas, and wouldn't you know it, they aren't nearly as good as they think they are.

supern0va 23 minutes ago|||
>Anyone can be an "ideas guy".

I think there's way more nuance to this than you're willing to admit here. There's a significant difference between the guy who thinks "I'm going to make X app to do Y and get loaded." and the person who really understands the details of what they want to create and has a concrete vision of how to shape it.

I think that product shaping and detail oriented vision of how something should work and be used by people is genuinely challenging, wholly aside from the lower level technical skills required to execute it.

This is part of the reason why I wouldn't be surprised at all to see product manager types getting more hands-on, or seeing the software engineering profession evolve into more of a PM/SDE hybrid.

manmal 11 minutes ago|||
I've felt this exact same way until very recently. But in the end, it's slop that never quite does what it's supposed to. Anthropic is proud of themselves that they brute-forced the world's crappiest C compiler into existence. Guess what, nobody will use it.
strangattractor 1 hour ago|||
One size never fits all. I am old enough to remember what a game changer spreadsheets (VisiCalc) were. They made the personal computer into a Swiss Army knife for many people who could not justify investing large sums of money into software to solve a niche problem. Until that time PCs simply were not a big thing.

I believe AI will do something similar for programming. The level of complexity in modern apps is high and requires the use of many technologies that most of us cannot remotely claim to be expert in. Getting an idea and getting a prototype will definitely be easier. Production Code is another beast. Dealing with legacy systems etc will still require experts at least for the near future IMHO.

hungryhobbit 42 minutes ago||
I remember when my dev team included some people using Emacs, some using Eclipse (this was pre-VS Code), and some using IntelliJ.

Developers will always disagree on the best tool for X ... but we should all fear the Luddites who refuse to even try new tools, like AI. That personality type doesn't at all mesh with my idea of a "good programmer".

rsoto2 18 minutes ago||
Flat out wrong. The most impressive engineers I've met in my career did not care for fancy tools with bells and whistles.
bluefirebrand 1 hour ago||
> but I finally feel like I'm _good_ at programming, which is insane, because I literally haven't written a line of code myself in months

This is exactly the sort of mentality that makes me hate this technology

You finally feel good at programming despite admitting that you aren't actually doing it

Please explain why anyone should take this seriously?

pdntspa 1 hour ago|||
Because programming is and always was a means to an end. Obsessing over the specific mechanical act of programming is missing the forest for the trees.

I agree with gp that the speed in which I am able to execute my vision is exhilarating. It is making me love programming again. My side projects, which have been hanging on the wall for years, are actually getting done. And quickly!

The actual act of keying in code is drudgery for me. I've written so much code in so many languages that it is hard not to hate them all. Why the fuck is it a hash in ruby but a dict in python? How the hell do I get the current unixtime in this language again?!? Why the fuck do I need to learn yet another stupid vocabulary for what is essentially databinding? Who cares, let the AI handle it
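(The complaint above is about real but shallow surface differences between languages. As a concrete instance of the kind of trivia being ranted about - getting the current unix time - here's a minimal Python sketch, with the equivalents in other languages noted in comments for comparison:)

```python
import time

# Python: current unix time as an integer number of seconds
now = int(time.time())

# The same one-liner in the languages the comment mentions:
#   Ruby:       Time.now.to_i
#   JavaScript: Math.floor(Date.now() / 1000)
```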

mkehrt 1 hour ago|||
None of my side projects are things where I want the output. They're all things where I want to write the code myself so I understand it better. AI is antithetical to this.
dolebirchwood 37 minutes ago|||
I have three side projects that revolve around taking public access data from shitty, barely usable local government websites, and then using that data to build more intuitive and useful UIs around them. They're portfolio pieces, but also a public service. I already know how to build all of these systems manually, but I have better things to do. So, hell yeah I'm just going to prompt my way to output. If the code works, I don't care how it was written, and neither do the members of my community who use my free sites.
pdntspa 1 hour ago||||
All of my side projects scratch an itch, so I do want the output. There are not enough hours in the day for me to make all the things I want to make. Code is just the vessel, and one I am happy to outsource if I can maintain a high standard of work. It's a blessing to finally find a workflow that makes me feel like I have a shot at building most of the things I want to.
cess11 36 minutes ago||
Are these things that no one previously built and published, so you can go and take a look at their implementation?
pdntspa 11 minutes ago||
Possibly. Mostly?

I wanted a stackable desk tray shelf thing for my desk in literally any size for my clutter. Too lazy to go shopping for one so I had claude write me an openSCAD file over lunch break then we iterated on it after-hours. By end of work next day I had three of them sitting on my desk after about 3 hours of back-and-forth the night before (along with about half a dozen tiny prototypes), and thats including the 2hr print time for each shelf.

I want a music metadata tool that is essentially TheGodfather but brought into the modern day and incorporates workflows I wish I had for my DJing and music production. And not some stupid web app, a proper desktop app with a proper windowing toolkit. I'd estimate it would take me 12-18 months to get to a beta the old way, to the exclusion of most of my other hobbies and projects; instead, with first Gemini and then Claude, I managed to get a pretty nice alpha out in a few months over the summer while I was unemployed. There's still a lot left I want to add but it already replaced several apps in my music intake workflow. I've had a number of successful DJ gigs making use of the music that I run through this app. Funny enough, the skills I learned on that project landed me a pretty great gig that lets me do essentially the same thing, at the same pace, for more pay than I've ever made in my SWE career to date.

A bunch of features for my website, a hand-coded Rails app I wrote a few years ago, went from my TODO pile to deployment in just a couple of hours. Not to mention it handled upgrading Ruby and Rails and ported the whole deployment to docker in an afternoon, which made it easy to migrate to a $3 VPS fronted by cloudflare.

I have a ton of ideas for games and multimedia type apps that I would never be able to work on at an acceptable pace and also earn the living that lets me afford these tools in the first place. Most of those ideas are unlike any game I've ever seen or played. I'm not yet ready to start on these yet but when/if I do I expect development to proceed at a comfortably brisk pace. The possibilities for Claude + Unreal + the years and years of free assets I've collected from Epic's Unreal store are exciting! And I haven't even gotten into having AI generate game assets.

So idunno, does that count?

hk__2 1 hour ago|||
All my side projects exist to solve a problem.
hk__2 1 hour ago||||
> The actual act of keying in code is drudgery for me. I've written so much code in so many languages that it is hard not to hate them all. Why the fuck is it a hash in ruby but a dict in python? How the hell do I get the current unixtime in this language again?!? Why the fuck do I need to learn yet another stupid vocabulary for what is essentially databinding?

These are the downsides, but there are also upsides, like in human languages: “wow, I can express this complex idea with just these three words? I never thought about that!”. Trying a new programming paradigm opens your mind and changes your way of programming in _any_ language forever.

beepbooptheory 12 minutes ago||||
"I really really love cooking. In fact, I have optimized my cooking completely, I go out to restaurants every night!"

I believe gp and others just like food instead of cooking. Which is fine, but if that's the case, why go around telling everyone you're a cook?

bigstrat2003 1 hour ago|||
> Because the programming is and was always a means to an end.

No. Programming is a specific act (writing code), and that act is also a means to an end. But getting to the goal does not mean you did programming. Saying "I'm good at programming" when you are just using LLMs to generate code for you is like saying "I'm good at driving" when you only ever take an Uber and don't ever drive yourself. It's complete nonsense. If you aren't programming (as the OP clearly said he isn't), then you can't be good at programming because you aren't doing it.

NewsaHackO 57 minutes ago|||
I guess I agree with you, but I think the GP may have misspoken and meant he loves building software. It's sort of like the difference between knitting and making clothes. The GP likely loves making clothes on an abstract basis and realized that he won't have to knit anymore to do so. And he never really liked knitting in the first place, as it was just a means to an end.
munk-a 27 minutes ago||
Most people who are knitting do it purely for the experience of knitting. If you need clothes it's far more affordable to buy the cheap manufactured stuff. Some people certainly enjoy the creativity of expression and wish they could get to that easier - but most of those people have moved away from manual tasks like knitting and instead just draw or render their imagination. There's genuine value in making things by hand as the process allows us time to study our goal and shape our technique mid-approach. GP may legitimately like knitting more than making clothes.
NewsaHackO 6 minutes ago||
I think you misunderstood my post. Many people now knit for the joy of knitting, but people used to knit to create clothing to wear or to sell. Of course, automated knitting machines have largely replaced hand knitting, yet people still do it. If you are very good at hand knitting, you might see if you can sell some work. However, if you want to make knitted clothing at scale, you would be better served taking a high-level approach to the actual design of the clothing and learning how to prompt the automated knitting machine, instead of optimizing for how you yourself would hand knit it.
pdntspa 1 hour ago|||
I'm still reading the code, I'm still correcting the LLM's laughably awful architecture and abstractions, and I'm still spending large chunks of time in the design and planning phase with the LLM. The only thing it does is write the code.

But that's not programming because its a natural-language conversation?

wmeredith 51 minutes ago||||
I think this is a semantics thing. I feel the same way, but I wouldn't say that I feel like I'm good at programming. I'm most certainly not. What I am good at is product design and development, and LLM tech has made it so that I can concentrate on features, business models, and users.
orsorna 1 hour ago||||
Well for one, programming actually sucks. Punching cards sucks. Copywriting sucks. Why? Well, implementation for the sake of implementation is nothing more than self-gratifying, and sole focus on it is an academic pursuit. The classic debate of which programming language is better is an argument of the best way to translate human ideas of logic into something that works. Sure programming is fun but I don't want to do it. What I do want to do is transform data or information into other kinds of information, and computing is a very, very convenient platform to do so, and programming allows manipulation of a substrate to perform such transformations.

I agree with OP because the journey itself rarely helps you focus on system architecture, deliverable products and how your downstream consumers use your product. And not just product in the commercial sense, but FOSS stuff or shareware I slap together because I want to share a solution to a problem with other people.

The gambling fallacy is tiresome as someone who, at least I believe, can question the bullshit models try to do sometimes. It is very much gambling for CEOs, idea men who do not have a technical floor to question model outputs.

If LLMs were /slow/ at getting a working product together combined with my human judgement, I wouldn't use them.

So, when I encounter someone who doesn't pin value into building something that performs useful work, only the actual journey of it, regardless of usefulness of said work, I take them as seriously as an old man playing with hobby trains. Not to disparage hobby trains, because model trains are awesome, but they are hubris.

munk-a 31 minutes ago|||
> Well for one, programming actually sucks. Punching cards sucks. Copywriting sucks.

There's a significant difference between past software advancements and this one. When we previously reduced the manual work when developing software it was empowering the language we were defining our logic within so that each statement from a developer covered more conceptual ground and fewer statements were required to solve our problems. This meant that software was composed of fewer and more significant statements that individually carried more weight.

The LLM revolution has actually increased code bloat at the level humans are (probably - more on that in a moment) meant to interact with it. It is harder to comprehend code written today than code written in 2019, and that's an extremely dangerous direction to move in.

To that earlier marker: it may be that we're thinking about code wrong now, and that software, as we're meant to read it, exists at the prompt level. Maybe we shouldn't read or test the actual output but instead read and test the prompts used to generate that output - that'd be more in line with previous software advancements, and it would present an astounding leap forward in clarity.

My concern with that line of thinking is that LLMs (at least the ones we're using right now for software dev) are intentionally non-deterministic, so a prompt evaluated multiple times won't resolve to the same output. If we pushed in this direction for deterministic prompt evaluation then I think we could really achieve a new, safe level of programming - but that doesn't seem to be anyone's goal - and if we don't push in that direction, then prompts are a way to efficiently generate large amounts of unmaintained, mysterious and untested software that won't cause problems immediately... but absolutely does cause problems in a year or two when we need to revise the logic.
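(For what it's worth, the non-determinism the comment worries about is largely a decoding choice rather than something intrinsic: at temperature 0, greedy decoding maps the same input to the same output. A toy sketch of the idea, with a fixed logits table standing in for a real model - all names here are illustrative, not any vendor's API:)

```python
import math
import random

def sample(logits, temperature, rng):
    """Pick a token from a logits table. Temperature 0 means greedy
    decoding (argmax), which is fully deterministic; any positive
    temperature samples from the softmax and can vary run to run."""
    if temperature == 0:
        return max(logits, key=logits.get)
    weights = {tok: math.exp(score / temperature)
               for tok, score in logits.items()}
    r = rng.random() * sum(weights.values())
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # guard against float rounding

# A toy "model": fixed scores standing in for real logits.
logits = {"foo": 2.0, "bar": 1.5, "baz": 0.1}
greedy = [sample(logits, 0, random.Random()) for _ in range(5)]
# At temperature 0, every call returns the argmax token ("foo").
```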

bluefirebrand 46 minutes ago|||
> Well for one, programming actually sucks

Speak for yourself. Programming is awesome. I love it so much and I hate that AI is taking a huge steaming dump on it

> So, when I encounter someone who doesn't pin value into building something that performs useful work, only the actual journey of it, regardless of usefulness of said work, I take them as seriously as an old man playing with hobby trains

Growing and building rapidly at all costs is the behavior of a cancer cell, not a human

I love model trains

orsorna 2 minutes ago||
Your cancer cell analogy is moot unless you paint all AI generated applications to be unusable trash, which is not the case, and I wouldn't describe my own work with it. It's true that standards have dropped to the floor where anyone can "ship" something but doesn't mean it's good. I think I have a better handle on how to steer GenAI versus the average linkedinbro. But the divide between journey and destination is valid, I guess it's something that hasn't been explored until GenAI.
poszlem 35 minutes ago||||
Why do you feel good about programming despite not writing in machine code?
MattGaiser 48 minutes ago||||
Different definitions of programming.

OP defines it as getting the machine to do as he wants.

You define it as the actual act of writing the detailed instructions.

bluefirebrand 39 minutes ago||
It is very difficult to get the machine to do what you want without the detailed instructions

If you have an LLM generate the instructions, then the LLM is programming, you're just a "prompter" or something. Not a programmer

thendrill 1 hour ago|||
I see a lot of people get really confused between the act of writing code vs. programming...

Programming is willing the machine to do something... Writing code is just that: writing code. Yes, sometimes you write code to make the machine do something, and other times you write code just to write code (for example refactoring, or splitting logic from presentation, etc.)

Think about it like this... Everyone can write words. But writing words does not make you a book writer.

What always gets me is that the act of writing code by itself has no real value. Programming is what solves problems and brings value. Everyone can write code, not everyone can "program"....

bigstrat2003 1 hour ago||
Programming is writing code. There's nothing to confuse because that's what the word means.
simplyluke 46 minutes ago|||
Is it? I wouldn't consider punch cards writing code but they were certainly programming. Programming is a broader concept than code in a text file.
ModernMech 51 minutes ago|||
They're saying writing code is programming but not all programming is writing code. What is Scratch?
watzon 1 hour ago||
I think this article makes a valid point. However, if AI coding is considered gambling, then being a project manager overseeing multiple developers could also be seen as a form of gambling to a certain degree. In reality, there isn't much difference between the two. AI models are non-deterministic, and humans are also non-deterministic. You could assign the same task to two different developers and end up with entirely different results.
m00x 1 hour ago|||
AI coding is gambling on slot machines, managing developers is betting on race horses.
SkyPuncher 1 hour ago|||
Only if your AI coding approach is the slot machine approach.

I've ended up with a process that produces very, very high quality outputs, often needing little to no correction from me.

I think of it like an Age of Empires map. If you go into battle surrounded by undiscovered parts of the map, you're in for a rude surprise. Winning a battle means having clarity on both the battle itself and risks next to the battle.

Obscurity4340 32 minutes ago|||
Would you mind sharing some of your findings?
murkt 1 hour ago||||
Good analogy! Would be interesting to read more details about how you’re getting very high quality outputs
input_sh 23 minutes ago|||
Until it produces predictable output, it's gambling. But it can't produce predictable output because it's a non-deterministic tool.

What you're describing is increasing your odds while gambling, not that it's not gambling. Card counting also increases your odds while gambling, but it doesn't make it not gambling.

IanCal 17 minutes ago|||
This is a pretty wild comparison in my opinion, it counts almost everything as gambling which means it has almost no use as a definition.

The most obvious issue is it’d class working with humans as gambling. Fine if you want to make that as your definition but it seems unhelpful to the discussion.

darkhorse222 20 minutes ago|||
Similar to quantum computing, a probabilistic model when condensed to sufficiently narrow ranges can be treated as discrete.
munk-a 4 minutes ago||||
Would it make us uncomfortable to reword the above example to

> AI coding is gambling on slot machines, managing developers is gambling on the stock market.

Because I feel like that is a much more apt analogy.

bazmattaz 1 hour ago||||
Damn, this is so accurate. As a project manager turned product manager this is so true. You need to estimate a project based on the “pedigree” of your engineers.
cko 41 minutes ago||||
What is it with you guys and stallions?
deadbabe 33 minutes ago||
There is a long history of managers just wanting to work their developers like horses.
edu 53 minutes ago|||
Great analogy, I’m saving it!
yoyohello13 1 hour ago|||
I think the addiction angle seems to make AI coding more similar to gambling. Some people seem to be disturbingly addicted to agentic coding. Much more so than traditional programming. To the point of doing destructive things like waking up in the middle of the night to check agents. Or giving an agent access to their bank account.
shepherdjerred 21 minutes ago|||
I mean, it’s just so fun. Claude wrote a native macOS app for me today.

I don’t think I’d describe my behavior as destructive though

deadbabe 32 minutes ago|||
I know at least one case where the obsession with agents ruined a marriage.
ChiefTinkeer 16 minutes ago|||
I think this is a very good point. We have a natural bias toward human output because there is an illusion of full control - in reality, even from a solo dev perspective, you've still got a load of hidden, illogical persuasions influencing your code and how you approach a problem. AI has its own biases that come from the nature of its training on large, unknowable data sets, but I'd argue the 'black box' thinking that comes out of that isn't too different from the black box of the human mind. That's not at all to say that AI isn't worse (even if quicker) than top developer talent writing code by hand today - just that the barrier to getting that level of quality isn't as insurmountable as it might appear.
MeetingsBrowser 1 hour ago|||
You (in theory) have more control over the quality of the team you are managing, than the quality of the models you are using.

And the quality of code models puts out is, in general, well below the average output of a professional developer.

It is however much faster, which makes the gambling loop feel better. Buying and holding a stock for a few months doesn't feel the same as playing a slot machine.

tossandthrow 45 minutes ago|||
What theory is that?

My experience is the absolute opposite. I am much more in control of quality with Ai agents.

I am never letting junior to midlevels into my team again.

In fact, I am not sure I will allow any form of manual programming in a year or so.

MeetingsBrowser 21 minutes ago|||
> I am never letting junior to midlevels into my team again

Exactly. You control the quality of the people in your team. You can train, fire, hire, etc until you get the skill level you want.

You have effectively no control over the quality of the output from an LLM. You get what the frontier labs give you and must work with that.

DrJokepu 21 minutes ago|||
Eh. You want a good mix of experience levels, what really matters is everyone should be talented. Less experienced colleagues are unburdened by yesterday’s lessons that may no longer be relevant today, they don’t have the same blind spots.

Also, our profession is doomed if we won’t give less experienced colleagues a chance to shine.

PaulHoule 1 hour ago||||
One difference is those developers are moral subjects who feel bad if they screw up whereas a computer is not a moral subject and can never be held accountable.

https://simonwillison.net/2025/Feb/3/a-computer-can-never-be...

ponector 12 minutes ago||
Right, you need to hire a scapegoat. Usually tester has that role: little impact but huge responsibility for quality.
est31 1 hour ago|||
You have a lot of control over LLM quality. There are different models available, and even the different effort settings of those models yield different outcomes.

E.g. look at the "SWE-Bench Pro (public)" heading in this page: https://openai.com/index/introducing-gpt-5-4/ , showing reasoning efforts from none to high.

Of course, they don't learn like humans so you can't do the trick of hiring someone less senior but with great potential and then mentor them. Instead it's more of an up front price you have to pay. The top models at the highest settings obviously form a ceiling though.

kraemahz 25 minutes ago||
You also have control over the workflow they follow and the standards you expect them to stick through, through multiple layers of context. Expecting a model to understand your workflow and standards without doing the effort of writing them down is like expecting a new hire to know them without any onboarding. Allowing bad AI code into your production pipeline is a skill issue.
QuantumGood 1 hour ago|||
Framing anything with a blanket concept usually fails to apply the same framing to related areas. A lot of things include some gambling: you need to compare how much the workflow before was 'gambling', how 'not using AI' is also 'gambling', etc.

As @m00x points out, "AI coding is gambling on slot machines, managing developers is betting on race horses."

runarberg 1 hour ago|||
I don‘t think so. A project manager can give feedback, train their staff, etc. An AI coding model is all you get, and you have to wait until your provider trains a new model before you might see an improvement.
krupan 1 hour ago|||
I asked an AI to play hangman with me and looked at its reasoning. It didn't just pick a secret word and play a straightforward game of hangman. It continually adjusted the secret word based on the letters I guessed, providing me the "perfect" game of hangman. Not too many of my guesses were "right" and not too many "wrong", and after a little struggle and almost losing, I won in the end.

It wasn't a real game of hangman, it was flat out manipulation, engagement farming. Do you think it's possible that AI does that in any other situations?
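(What this comment describes is essentially the well-known "evil hangman" trick: the host never commits to a secret word, it just keeps the largest family of dictionary words consistent with the guesses so far. A minimal sketch - the tiny word list and function names are illustrative:)

```python
from collections import defaultdict

def evil_respond(candidates, guess):
    """Group the remaining candidate words by where `guess` appears,
    then keep the biggest group - so the "secret word" is whatever
    set of words is hardest for the player to pin down."""
    families = defaultdict(list)
    for word in candidates:
        pattern = tuple(i for i, c in enumerate(word) if c == guess)
        families[pattern].append(word)
    # An empty pattern means the guess is declared "wrong".
    pattern, family = max(families.items(), key=lambda kv: len(kv[1]))
    return pattern, family

words = ["cat", "car", "cow", "dog", "dig"]
pattern, remaining = evil_respond(words, "c")
# Keeping {cat, car, cow} (3 words) beats ruling "c" out ({dog, dig}).
```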

ModernMech 56 minutes ago|||
That says more about how you see developers than whether or not managers are in a sense gamblers.
ares623 1 hour ago|||
This must be it. So many of our colleagues have been burnt by bad coworkers that they would rather burn everything down than spend another day working with them.
rvz 1 hour ago|||
> AI models are non-deterministic, and humans are also non-deterministic. You could assign the same task to two different developers and end up with entirely different results.

Except, one can explain themselves (humans) and their actions can be held to account in the case of any legal issue whereas an AI cannot; making such an entity completely unsuitable for high risk situations.

This typical AI booster comparison has got to stop.

tossandthrow 41 minutes ago|||
Love that you needed to make it clear that it is humans who can explain themselves...

Employees can only be held accountable in cases of severe malice.

There is a good chance that the person actually responsible (eg. The ceo or someone delegated to be responsible) will soon prefer to have AIs do the work as their quality can be quantified.

thunky 20 minutes ago|||
> Except, one can explain themselves (humans) and their actions can be held to account in the case of any legal issue whereas an AI cannot

You "own" the software it creates which means you're responsible for it. If you use AI to commit crimes you'll go to jail, not the AI.

underlipton 1 hour ago||
As a human, you generally have the opportunity to make decent headway in understanding the other humans that you're working with and adjusting your instructions to better anticipate the outputs that they'll return to you. This is almost impossible with AI because of a combination of several factors:

>You are not an AI and do not know how an AI "thinks".

>Even if you come to be able to anticipate an AI's output, you will be undermined by the constant and uncontrollable update schedule imposed on you by AI platforms. Humans only make drastic changes like this under uncommon circumstances, like when they're going through large changes in their life, not as a matter of course.

>However, without this update schedule, problems that were once intractable will likely stay so forever. Humans, on the other hand, can grow without becoming completely unpredictable.

It's a Catch-22. AI is way closer to gambling.

darrinm 3 minutes ago||
I hear this a lot, but the gambling analogy breaks when you look at actual outcomes. If you went to Vegas and after a few pulls on a one-armed bandit could _reliably_ walk away with the jackpot, we wouldn’t even call it gambling anymore.
copypaper 1 hour ago||
You got to know when to Ship it,

Know when to Re-prompt,

Know when to Clear the Context,

And know when to RLHF.

You never trust the Output,

When you’re staring at the diff view,

There’ll (not) be time enough for Fixing,

When the Tokens are all spent.

koolba 1 hour ago||
> When you’re staring at the diff view,

Bold assumption that people are looking at the diffs at all. They leave that for their coworkers' agents.

zephen 21 minutes ago||
Will the diffs be small enough for people to even usefully wade through them?
sedawkgrep 1 hour ago|||
You're a gamblin' man, I see...
niccl 36 minutes ago||
thank you. I knew there was something I was missing
krupan 58 minutes ago||
I really hope that was your creativity and not AI
copypaper 19 minutes ago||
Indeed it was (I was listening to it when I stumbled across this post). Also, fun fact: "The Gambler" was written by Don Schlitz while he was working as a computer operator in '76, which makes it all the more relevant [1].

[1]: https://web.archive.org/web/20230130060050/https://www.rolli...

FL4TLiN3 38 minutes ago||
In my corner of the world, average software developers at Tokyo companies, not that many people are actually using Claude Code for their day-to-day work yet. Their employers have rolled it out and actively encourage adoption, but nobody wants to change how they work.

This probably won't surprise anyone familiar with Japanese corporate culture: external pressure to boost productivity just doesn't land the same way here. People nod, and then keep doing what they've always done.

It's a strange scene to witness, but honestly, I'm grateful for it. I've also been watching plenty of developers elsewhere get their spirits genuinely crushed by coding agents, burning out chasing the slot machine the author describes. So for now, I'm thankful I still get to see this pastoral little landscape where people just... write their own code.

vermilingua 6 minutes ago||
Not only is it gambling, it has the full force of the industry that built the attention market behind it. I find it extremely hard to believe that these tools have not been optimised to keep developers prompting the same way tiktok keeps people scrolling.
Terr_ 1 hour ago||
I'd emphasize that prompting LLMs to generate code isn't just metaphorical gambling in the sense of "taking a risk", the scary part is the more-literal gambling involving addictive behaviors and how those affect the way the user interacts with the machine and the world.

Heck, this technology also offers a parasocial relationship at the same time! Plopping tokens into a slot-machine which also projects a holographic "best friend" that gives you "encouragement" would fit fine in any cyberpunk dystopia.

interestpiqued 32 minutes ago|
I think AI literally makes even being wrong feel like getting something done. And that is the addictive part for people.
rsoto2 15 minutes ago|||
Look at all this text I have! It can't be worthless right?!
Terr_ 29 minutes ago|||
[dead]
wolandomny 8 minutes ago||
Obviously the following isn't a completely original take, but it's worth stating that AI coding is just a fundamentally different job than "traditional" or "manual" coding. The previous job was to spec something out to a comfortable degree without spending all of your time on a spec when there are so many unknowns that will come up during the engineering stage. Then, the job was to engineer at a snail's pace (compared to today) and adjust the spec.

Now, the job is to nail the spec and test HARD against that spec. Let the AI develop it and question it along the way to make sure it's not repeating itself all over the place (though even this I'm not sure is necessary anymore...). Find a process that helps you feel comfortable doing this and you can get the engineering part done at lightning speed.

Both jobs are scary in different ways. I find this way more fun, however.

dzink 1 hour ago||
It’s variable rewards, and even with large models the same question can lead to dramatically different answers. Possibly because they route your request through different models. Possibly because the model has more time to dig through the problem. Nonetheless we have some illusion of control over the output (otherwise we wouldn’t be playing), but it is just the quality of the model itself that leads to better outcomes, not your input. If you can’t let go of the feeling, though, it’s definitely addictive. And as I look back, it’s a fast iteration on the building cycle we had before AI. But the brain really likes low latency; it is addicted to the fast reward for its actions. So if AI gets fast enough (sub-400ms), it will likely become irreversibly addictive to humans in general, as the brain will see it as part of itself. Hope it has our interest at heart by then.
markhahn 10 minutes ago||
This (variable rewards -> gambling, illusion of control) is really important.

I'm not an expert in the psych/neuro literature on addiction, but I suspect latency isn't that critical. Or is that just because it's things like fruit machines that have been studied? Gambling on poker or racehorses is quite long-latency. OTOH, scrolling is closer to 400ms, and that's certainly the modern addiction...

krupan 54 minutes ago|||
Well said! My only qualm with this is saying you hope "it" has our interests at heart. "It" is a machine made by humans that work for corporations. I would correct your hope to, "I hope they have our interest at heart by then."
zephen 15 minutes ago||
This is being overlooked, downplayed, or simply not understood, by many commenters.

It is exactly like the proverbial monkey or rat pressing a bar for a food pellet to come out.

If the pellet unerringly drops, and is always tasty and nutritious, the rat stops when it's no longer hungry.

Otherwise, an inordinate amount of time is spent pressing the bar.
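The schedule described here is what behavioral psychology calls variable-ratio reinforcement, and a tiny simulation shows the basic arithmetic: a reliable dispenser stops mattering once the rat is full, while an unreliable one demands many more presses for the same payoff. All numbers below are illustrative stand-ins, not figures from any actual study:

```python
import random

def presses_until_fed(pellets_needed: int, reward_prob: float,
                      rng: random.Random) -> int:
    """Count bar presses until enough pellets have dropped.

    reward_prob = 1.0 models a reliable dispenser; lower values model
    the variable-ratio schedule that slot machines use.
    """
    presses = 0
    pellets = 0
    while pellets < pellets_needed:
        presses += 1
        if rng.random() < reward_prob:
            pellets += 1
    return presses

rng = random.Random(42)  # fixed seed so the comparison is reproducible
trials = 1000
# Reliable dispenser: one pellet per press, so exactly 10 presses per trial.
reliable = sum(presses_until_fed(10, 1.0, rng) for _ in range(trials)) / trials
# Variable schedule at 20%: ~50 presses on average for the same 10 pellets.
variable = sum(presses_until_fed(10, 0.2, rng) for _ in range(trials)) / trials
print(f"reliable dispenser: {reliable:.0f} presses on average")
print(f"variable schedule:  {variable:.0f} presses on average")
```

The expected press count is simply pellets_needed / reward_prob, so dropping the reward rate fivefold quintuples the time at the bar; the addictive part, which this sketch does not capture, is that intermittent rewards also sustain pressing long after the payoff stops being worth it.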

thisisbrians 1 hour ago|
It is and will always be about: 1) properly defining the spec 2) ensuring the implementation satisfies said spec
QuadrupleA 17 minutes ago||
Side note, everyone's talking about having AI agents "conform to the spec" these days. Am I in my own bubble, or: who the hell these days gets The Spec as a well-formed document? Let alone a good document, something that can be formally verified, thoroughly test-cased, can christen the software "complete" when all its boxes are ticked, etc.?

This seems like 1980s corporate waterfall thinking; it doesn't jibe with the messy reality I've seen: customers, unclear ideas, changing market and technical environments, the need for iteration and experimentation, mid-course correction, etc.

nickjj 1 hour ago|||
> properly defining the spec

Why do you often need to re-prompt things like "can you simplify this and make it more human readable without sacrificing performance?". No amount of specification addresses this on the first shot unless you already know the exact implementation details in which case you might as well write it yourself directly.

I often have to put in a prompt like this 5-10 times before the code resembles something I'd even consider using as a first-draft base to refactor into something worthy of a git commit.

I sometimes use AI for tiny standalone functions or scripts so we're not talking about a lot of deeply nested complexity here.

seanmcdirmid 1 hour ago|||
> I often have to put in a prompt like this 5-10 times before the code resembles something I'd even consider using as a first-draft base to refactor into something worthy of a git commit.

Are you stuck entering your prompts manually, or do you have it set up as a feedback loop like "beautify -> check beauty -> if not beautiful enough, beautify again"? I can't imagine why everyone thinks AIs can just one-shot everything like correctness, optimization, and readability; humans can't one-shot these either.

nickjj 1 hour ago||
I do everything manually. Prompt, look at the code, see if it works (copy / paste) and if it works but it's written poorly I'll re-prompt to make the code more readable, often ending with me making it more readable without extra prompts. Btw, this isn't about code formatting or linting. It's about how the logic is written.

> I can't imagine why everyone things AIs can just one shot everything like correctness, optimization, and readability, humans can't one shot these either.

If it knows how to make the code more readable and / or better for performance by me simply asking "can you make this more readable and performant?" then it should be able to provide this result from the beginning. If not, we're admitting it's providing an initial worse result for unknown reasons. Maybe it's to make you as the operator feel more important (yay I'm providing feedback), or maybe it's to extract the most amount of money it can since each prompt evaluates back to a dollar amount. With the amount of data they have I'm sure they can assess just how many times folks will pay for the "make it better" loop.

seanmcdirmid 1 hour ago||
Why do you orchestrate the AI manually? You could write a BUILD file that just does it in a loop a few times, or I guess if you lack build system interaction, write a python script?

> If it knows how to make the code more readable and / or better for performance by me simply asking "can you make this more readable and performant?" then it should be able to provide this result from the beginning.

This is the wrong way to think about AI (at least with our current tech). If you give AI a general task, it won't focus its attention at any of these aspects in particular. But, after you create the code, if you use separate readability and optimization feedback loops where you specifically ask it to work on those aspects of the code, it will do a much better job.

People who feel like AI should just do the right thing already without further prompting or attention focus are just going to be frustrated.

> Btw, this isn't about code formatting or linting. It's about how the logic is written.

Yes, but you still aren't focusing the AI's attention on the problem. You can also write a guide that it puts into context for things you notice that it consistently does wrong. But I would make it a separate pass, get the code to be correct first, and then go through readability refactors (while keeping the code still passing its tests).
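The separate-pass workflow described above (get the code correct first, then run focused readability or performance passes) can be sketched as a small loop. Everything here is a hypothetical stand-in: `run_model` is whatever model call you use, and `passes_review` is whatever check you trust, whether a linter, a reviewer model, or a human eyeball:

```python
def refine(code: str, critique_prompt: str, run_model, passes_review,
           max_rounds: int = 5) -> str:
    """Repeatedly feed the code back to the model with one focused
    critique prompt (readability, performance, ...) until the
    caller-supplied check passes or we give up."""
    for _ in range(max_rounds):
        if passes_review(code):
            break
        code = run_model(f"{critique_prompt}\n\n{code}")
    return code

# Toy stand-ins so the loop can be exercised without a real model:
# the "model" just appends a marker, the "review" wants two of them.
def fake_model(prompt: str) -> str:
    return prompt.rsplit("\n\n", 1)[-1] + "  # tidied"

result = refine("x=f(g(h(y)))", "Make this more readable.",
                fake_model, lambda c: c.count("# tidied") >= 2)
print(result)
```

The point of the structure is the one made above: each pass carries a single focused prompt, so the model's attention is on one quality axis at a time, and `max_rounds` caps the cost of the "make it better" loop.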

giancarlostoro 1 hour ago|||
There are two secret sauces to making Claude Code your b* (please forgive me, future AI overlords). One is to create a spec. The other is to prompt not merely WHAT you want, but HOW you want it done (you can get insanely detailed or stay just vague enough); in some cases the WHY is useful for it to know and understand, and sometimes WHO it's for as well. Give it the context you have. Don't know anything about the code? Ask it to read it, all of it; you've got 1 million tokens, go for it.

I have one-shot prompted projects from an empty folder to a full-featured web app with accounts, login, profiles, you name it. Insanely stable, maybe an oops here or there, but for a non-spec, single-prompt shot, that's impressive.

When I don't use a tool to handle the task management, I have Claude build up a markdown spec file for me and specify everything I can think of. Output is always better when you specify the technology and design patterns you want to use.

raizer88 1 hour ago|||
AI: "Yes, the specs are perfectly clear and architectural standards are fully respected."

[Imports the completely fabricated library docker_quantum_telepathy.js and calls the resolve_all_bugs_and_make_coffee() method, magically compiling the code on an unplugged Raspberry Pi]

AI: "Done! The production deployment was successful, zero errors in the logs, and the app works flawlessly on the first try!"

krupan 57 minutes ago|||
Good sir, have you heard the Good Word of the Waterfall development process? It sounds like that's what you are describing
bwestergard 1 hour ago|||
That can't be the whole story, right? Because there are an arbitrarily large number of (e.g.) Rust programs that will implement any given spec given in terms of unit tests, types, and perhaps some performance benchmarks.

But even accounting for all these "hard" constraints and metrics, there are clearly reasons to prefer some possible programs over others even when they all satisfy the same constraints and perform equally on all relevant metrics.

We do treat programs as efficient causes[1] of side effects in computing systems: a file is written, a block of memory is updated, etc. and the program is the cause of this.

But we also treat them as statements of a theory of the problem being solved[2]. And this latter treatment is often more important socially and economically. It is irrational to be indifferent to the theory of the problem the program expresses.

[1]: https://en.wikipedia.org/wiki/Four_causes#Efficient

[2]: https://pages.cs.wisc.edu/~remzi/Naur.pdf

MeetingsBrowser 1 hour ago||
> there are clearly reasons to prefer some possible programs over others even when they all satisfy the same constraints

Maintainability is a big one missing from the current LLM/agentic workflow.

When business needs change, you need to be able to add on to the existing program.

We create feedback loops via tests to ensure programs behave according to the spec, but little to nothing in the way of code quality or maintainability.

ambicapter 1 hour ago|||
Then pulling the lever until it works! You can also code up a little helper to continuously pull the lever until it works!
SV_BubbleTime 1 hour ago||
We have a monkeys and typewriters thing for this already.

Just instead of hitting keys, they’re hitting words, and the words have probability links to each other.

Who the hell thinks this is ready to make important decisions?

rawgabbit 1 hour ago|||
I had a CIO tell me 15 years ago with Agile I was wasting my time with specs and design documents.
vidarh 1 hour ago||
I was in a call just today where specs were presented as a new thing.
dgxyz 1 hour ago|||
Well it’s more how much we care about those.

Which with the advent of LLMs just lowered our standards so we can claim success.

BurningFrog 1 hour ago|||
That was always the easy part.

The endless next steps of "and add this feature" or "this part needs to work differently" or "this seems like a bug?" or "we must speed up this part!" is where 98% of the effort always was.

Is it different with AI coding?

CodingJeebus 1 hour ago||
Personally, I get a huge rush of dopamine seeing LLMs build out complex features very quickly to the point that it will keep me up all night wanting to push further and further.

That's where the gambling metaphor really resonates. It's not whether or not the output is correct; I've been building software for many years and I know how to direct LLMs pretty well at this point. But I'm also an alcoholic in recovery and I know that my brain is wired differently than most. And using LLMs has tested my ability to self-regulate in ways that I haven't dealt with since I deleted social media years ago.

natpalmer1776 1 hour ago|||
It also doesn’t help that producing features is also wired to a sense of monetary compensation. More so if you’re building a product to sell that might finally be your ticket to whatever your perception of socio-economic victory is.
CodingJeebus 1 hour ago||
That's definitely part of it, sure. I also just get a cosmic kick out thinking about the possibilities that this technology unlocks and that thinking can spiral in all sorts of unhealthy ways.
acedTrex 1 hour ago|||
> Personally, I get a huge rush of dopamine seeing LLMs build out complex features very quickly

I don't think I've read a sentence on this website I can relate to less.

I watch the LLM build things and I feel completely numb; I may as well be watching paint dry. It means nothing to me.

CodingJeebus 1 hour ago|||
Trust me, I have many days where I wish I had your relationship to this. I wish it were as boring as watching paint dry. But it triggers that part of my brain that wants more, and I have to be very careful about that.
zer00eyz 1 hour ago|||
I wonder if the difference here is age/experience or what you're working on/in.

When I was 20, writing code was interesting, by the time I was 28 it became "solving the problem" and then moved on to "I only really enjoy a good disaster to clean up".

All of my time has been spent solving other peoples problems, so I was never invested in the domain that much.

MrScruff 48 minutes ago||
Yeah, I used to enjoy writing code but after a while I realised I actually more enjoy creating tools that I (and other people) liked to use. Now I can do that really quickly even with my very limited free time, at a higher level of abstraction, but it's still me designing the tool.

And despite the amount of people telling me the code is probably awful, the tools work great and I'm happily using them without worrying about the code anymore than I worry about the assembly generated by a compiler.

More comments...