Posted by speckx 4 hours ago
The main difference is that you're exploiting your own weaknesses, rather than others'. Limitations in typing speed, information gathering, pattern recognition.
1/ Dependency -- Once I got used to agentic coding, I almost always reached for it, even for small changes (e.g. updating a YAML config)
2/ Addiction -- In the initial euphoria phase, many people can't stand to "waste" any agent-idle time, so they assign AI agents tasks before they go to sleep
3/ You trust your judgement less and less as the agent takes over your code
4/ "Slot machine" behavior -- running multiple AI agents in parallel on the same task in the hope that one of them produces a valuable insight
5/ Psychosis -- We have all met crypto traders who'd tell you how your 9-5 is stupid and you could be making so much trading NFTs. Social media is full of similar anecdotes these days regarding vibecoding, with people boasting about their Claude spend, LOC, and whatnot
It's not an inherent feature of slot machines; it's something we enforce because people got angry about the outcomes (i.e. fraud) when they didn't operate that way.
It doesn't matter because a dodgy slot-machine is still a slot machine, and the person using it would still be a gambler.
The important part of the not-really-a-metaphor is the relationship between user and machine, and how it affects the user's mind.
What the machine outputs on "wins" doesn't matter as much, addictive gambling can still happen even when the payouts are dumb.
You can get more consistent results from a slot machine with a bunch of magnets and some swift kicks. It's still gambling.
This is a subreddit about selfhosting things others built for free. Honestly, often for piracy purposes. It's insane how entitled people have become.
Also annoys me that all of the suggestions on how to handle filtering AI demonstrate a clear lack of understanding around how agentic coding works. Like if you can’t be bothered to understand why “ban any project that uses AI” is not possible, the entire subreddit is probably above your pay grade…
That isn't true, which is the exact reason why people have a binary mindset. More than once on Hacker News I've had people accuse me of being an AI booster just because I said I had success with agents and they did not.
Personally I use coding agents for the boring parts (I really don't enjoy pasting the same piece of string into 20 different classes just to register a new component), and they work quite well. I'm going to use them for the foreseeable future, because they make coding much more enjoyable for me. On the other hand, I don't have an OpenClaw box burning billions of tokens weekly for me, because I usually don't have ideas that can be clearly specified.
We love a good holy war for sure.
The nuance is lost, and the conversations we should be having never happen (requirements, hiring/skills, developer experience).
Applies here? :D
I’ve certainly been spending more time coding. But is it because it’s making me more efficient and smarter or is it because I’m just gambling on what I want to see?
Is this really a difficult question to answer for oneself? If you can't tell if you're learning anything, or getting more confident describing what you want, I would suggest that you cannot be thinking that deeply about the code you're producing. Am I just pulling the lever until I reach jackpot?
And even then, will you know you've won? At the very least, a gambler knows when they have hit the jackpot. Here, you start off assuming you've won the jackpot every time, and maybe there'll be an unpleasant surprise down the line. Maybe that's still gambling, but it's pretty backwards.
Overall I’m a fan, but yes there are things to watch for. It doesn’t replace skilled humans but it does help skilled humans work faster if used right.
The labor replacement story is bullshit mostly, but that doesn’t mean it’s all bad.
Fast & Cheap (but not Good?) - I wouldn't really say that AI coding is "cheap"
Cheap & Good (but not Fast) - Again, not really "cheap"
Fast & Good (but not Cheap) - This seems like maybe where we're at? Is this a bad place?
Eventually, it will be just Fast and Good. It won't be cheap, as companies start moving towards profitability.
Remember when Uber was super cheap? I do. They're fast and good though.
As for good. Well, how much software is really good? A lot of it is sewn together APIs and electron-like runtimes and 5,000 dependencies someone else wrote. Not exactly hand-crafted and artisanal.
I'm sure everyone's projects here are the exception, but engineering is always about meeting the design requirements. Either it does or it doesn't.
A big theme of software development for me has been finishing things other people couldn’t finish and the key to that is “control variance and the mean will take care of itself”
Alternatively, the junior dev thinks he has a mean of 5 minutes, but the variance is really 5 weeks. The senior dev has a mean of 5 hours and a variance of 5 hours.
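To make the mean-vs-variance point concrete, here's a toy simulation. The exact distributions are my own assumptions, not the commenter's data: the "junior" usually finishes in minutes but occasionally loses weeks, while the "senior" averages 5 hours with a tight spread.

```python
import random
import statistics

random.seed(42)

def junior_task_hours():
    # Assumed: 90% of tasks take ~6 minutes, 10% blow up to ~5 weeks (200h)
    return 0.1 if random.random() < 0.9 else 200.0

def senior_task_hours():
    # Assumed: uniform between 2h and 8h -- mean 5h, small spread
    return random.uniform(2.0, 8.0)

juniors = [junior_task_hours() for _ in range(10_000)]
seniors = [senior_task_hours() for _ in range(10_000)]

print(f"junior: mean {statistics.mean(juniors):.1f}h, "
      f"stdev {statistics.stdev(juniors):.1f}h")
print(f"senior: mean {statistics.mean(seniors):.1f}h, "
      f"stdev {statistics.stdev(seniors):.1f}h")
```

Under these assumptions the junior's typical task is far faster, yet the rare blow-ups dominate the mean, while the senior's tight variance makes delivery predictable -- which is the "control variance and the mean will take care of itself" point.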
2. Who here thinks that having interns write all/almost all of your code and moving all your mid level and senior developers to exclusively reviewing their work and managing them is a good idea?
Coding agents look at existing text in the codebase before they act. If they previously used a pattern you dislike and you tell them to do it differently, the next time they run they'll see the new pattern and be much more likely to follow that example.
There are fancier ways of having them "learn" - self-updating CLAUDE.md files, taking notes in a notes/ folder etc - but just the code that they write (and can later read in future sessions) feels close enough to "learning" to me that I don't think it makes sense to say they don't learn anymore.
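For illustration, the self-updating-notes convention might look something like this (the file contents, identifiers, and note entries below are all hypothetical, sketched only to show the shape of the pattern):

```markdown
# CLAUDE.md (illustrative sketch, not any real project's file)

## Conventions the agent should follow
- Register new components via the project's registry helper, not ad-hoc imports.
- Run the test suite before declaring a task done.

## Notes (agent appends lessons learned here after each session)
- Invalid keys in the YAML configs fail silently; update the schema
  whenever a config key is added.
```

The idea is simply that anything written into the repo -- conventions, appended notes, or the code itself -- is text a future session will read before acting, which is what makes it function as memory.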
I’ve never worked anywhere where the interns had net productivity on average.
To come up with an analogy that works at all for AI, it would have to be something like temporary workers who code fast, and read fast, but go home at the end of the day and never return.
You can make a lot of valuable software managing a team like that working on the subset of problems that the team is a good fit for. But I wouldn’t work there.
The reason I think this metaphor keeps popping up is because of how easy it is to hit a wall and just keep prompting "it's not working, please fix it" -- and sometimes that actually results in a positive outcome. So you can choose to gamble very easily and receive the gambling feedback very quickly, unlike with an intern, where the feedback loop is considerably delayed, and the delayed intern's output might simply be them screaming that they don't understand.
The first is equating human and LLM intelligence. Note that I am not saying that humans are smarter than LLMs. But I do believe that LLMs represent an alien intelligence with a linguistic layer that obscures the differences. The thought processes are very different. At top AI firms, they have the equivalent of Asimov's Susan Calvin trying to understand how these programs think, because it does not resemble human cognition despite the similar outputs.
The second and more important is the feedback loop. What makes gambling gambling is you can smash that lever over and over again and immediately learn if you lost or got a jackpot. The slowness and imprecision of human communication creates a totally different dynamic.
To reiterate, I am not saying interns are superior to LLMs. I'm just saying they are fundamentally different.
And, if we're being honest, the way people talk about interns is weirdly dehumanizing, and the fact that they are always trotted out in these AI debates is depressing.
Yeah, I agree with that.
That thought crossed my mind as I was posting this comment, but I decided to go with it anyway because I think this is one of those cases where the comparison is genuinely useful.
We delegate work to humans all the time without thinking "this is gambling, these collaborators are unreliable and non-deterministic".
Human collaboration has always been slow and messy. Large tech companies have always looked for ways to speed up the feedback loop, isolating small chunks of work to be delegated to contractors or offshore teams. LLMs have supercharged that. If you have a skilled prompter you can get to a solution of good enough quality by rapidly iterating, asking for output, correcting the prompt, etc.
That is good in that if you legitimately have good ideas and the block is execution speed. But if the real blocker is elsewhere, it might give you the illusion of progress.
I don't know. Everything is changing too fast to diagnose in real time. Let's check back in a year.
You should value assigning tasks to human interns more than to AI, because they are human.
But looks like the intern mafia is bombarding you with downvotes.