
Posted by MrGilbert 1 day ago

Task Paralysis and AI (g5t.de)
247 points | 127 comments
dgellow 1 day ago|
I do have an actual diagnosis, and I had the same experience over the past year: early coding harnesses at the beginning of the year, then Claude Code since its release. But after a year-plus going in that direction, I really don't want to continue. The novelty is gone, dealing with AI now feels frustrating and boring, and I miss engaging deeply with the actual lower-level technical challenges. I do not want to manage fleets of agents. I do not want to rediscover for the hundredth time that an agent has been taking shortcuts on acceptance tests I rely on, and that I didn't catch it. Or, once again, have to get the agent to understand why and what I want it to do after its context got bloated and it started to drift completely.

While I got artifacts I can use (libraries, tools, docs), including some things that I'm pretty confident are SoA, I don't feel satisfied anymore knowing that I used a model to generate them, even if I was the one designing every part. I feel like I'm lying any time I come to a colleague to share a cool new tool I've made. And I don't feel that relying on AI has actually helped me deal with my executive function issues.

YMMV but I’m personally feeling burnt out with AI coding agents and ready to go back to the old ways for my next personal project

KallDrexx 22 hours ago||
I have done a lot of introspection on this and realized that I'm very much driven by intrinsic rewards, much more so than extrinsic ones.

I got into coding over a decade before it was my career because of the exploration, learning, and puzzle/challenge aspect.

Every time I have tried to be extrinsically driven (career or OSS wise) it's never worked out anyway. I could have done more to make it successful but I never cared about getting validation or getting users for my stuff (and the stress that brings).

I've been lucky that up until this point, the intrinsic rewards I have gotten from my job have aligned with company goals.

LLMs take all the intrinsic wins and leave only the extrinsic ones. That makes me sad, but it is what it is, I guess.

I had been thinking about a tool for months but didn't have the time. I finally gave in and built it at work in a week with LLM tokens. It worked fantastically. But I felt no accomplishment. It felt the same as if I had downloaded the tool from someone else's repo (one with an overly eager maintainer who would implement my GitHub issue requests).

The hard part for me is ignoring LLMs in my free time to try and keep some of the intrinsic rewards to myself, without being annoyed that I could do it faster if I just "gave in".

fasterik 10 hours ago|||
Hard agree about the intrinsic motivation. The intrinsic/extrinsic distinction is an unspoken assumption in a lot of conversations about AI and work in general. Not everyone is motivated by money and status.

I do believe you can use LLMs while maintaining the intrinsic rewards of programming. For me, right now that means writing code by hand and using LLMs primarily for research, documentation, and brainstorming. Sometimes I ask it to write a piece of code just so that I can see what it comes up with and maybe learn something from it. I'm also planning on experimenting with coding agents, but I will probably have it work in its own parallel repo and hand-pick the changes I want to keep.
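
The parallel-repo idea can be sketched with plain Git. This is only an illustrative workflow, not something from the comment; the branch name `agent/experiment` and the `sandbox` directory are made up, and the agent's commit is simulated here:

```shell
# Sketch: give an agent a throwaway worktree on a disposable branch,
# then cherry-pick only the commits you actually want to keep.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git -c user.email=me@example.com -c user.name=me \
    commit -q --allow-empty -m "init"

# The agent gets its own checkout, isolated from your working tree:
git worktree add -q ../sandbox -b agent/experiment

# (simulate the agent committing a change in its sandbox)
( cd ../sandbox && echo "fix" > fix.txt && git add fix.txt &&
  git -c user.email=agent@example.com -c user.name=agent \
      commit -q -m "agent: add fix" )

# Review what it did, then hand-pick what you want onto your branch:
git log agent/experiment --oneline
git -c user.email=me@example.com -c user.name=me \
    cherry-pick agent/experiment

# Clean up the sandbox once you've taken what you need:
git worktree remove ../sandbox
git branch -q -D agent/experiment
```

The point of the worktree (rather than a plain branch) is that the agent's edits never touch the files you have open; you only pull in reviewed commits.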

I think a "late adopter" mindset is actually beneficial. It allows you to focus on fundamental skills that will never be outdated, and you get the benefits of new technologies once they mature.

MrGilbert 15 hours ago||||
> But I felt no accomplishment. It felt just the same as if I downloaded the tool from someone else's repo (and who had an overly eager maintainer that would implement my GitHub issue requests).

I get that. I recently watched a "talking head" style video by javidx9, where he said something along the lines of having to disconnect from the code emotionally [0], because he has to get into the code to understand it. I get the same feeling; for me, however, it still feeds my curiosity and my need for exploration. At least for now, I might add.

[0]: https://youtu.be/1qjn1QRxlng?si=_75-J51UnZ0eJyb7&t=705

ryandrake 15 hours ago||
That’s exactly it! There is no feeling of accomplishment whatsoever, because we aren’t really accomplishing anything. The LLM is doing all the work. Out pops an application, but it might as well have been written by someone else, because it was, but also it wasn’t!

It’s great that an application now exists where there wasn’t one before, but it’s hollow because I didn’t make it. Nobody made it! It just exists now with nothing actually accomplished by anyone. It’s a very spooky way to conjure things up.

realharo 2 hours ago||
If that's how you feel, maybe the applications you're making are too simple to need any of your unique contribution.

The answer could be to just push further, and try solving harder problems.

mettamage 3 hours ago||||
> LLMs take all the intrinsic wins and leaves only the extrinsic ones.

Not for me, but that's because I like playing around with software. So in web apps, the UX is done by me.

I'm not here to invalidate your experience. I get what you're saying and I feel it. I just also want to show the point of view where intrinsic motivation can still pop up (for some people at least).

But yea, it sucks if all of it is taken away from you. I'm sorry to hear that.

munksbeer 15 hours ago||||
> I have done a lot of introspection on this and realized that I'm very much driven by intrinsic rewards moreso than extrinsic.

> I got into coding over a decade before it was my career because of the exploration, learning, and puzzle/challenge aspect.

> [...]

> LLMs take all the intrinsic wins and leaves only the extrinsic ones.

I'm not sure I understand this. For me, programming was at first a tool to satisfy my curiosity. When I first started coding I knew nothing about software patterns, how I should be naming my variables, length of functions, DOD vs OOP, functional vs imperative, the single responsibility principle, and on and on.

I wrote a mess of a program and got it to do very cool things (for me). I loved it.

Then I got taught more, got my first jobs, learned why programming large systems needs standards, patterns, etc. I became good at that, and have had a long lucrative career out of it.

But I cannot wait for the day when I no longer need to earn money from programming and I can go back to using it just to do "cool shit". At that point, whether I am hacking and slashing myself, or working with an LLM to do something, I don't care. It is the intrinsic goal of solving a puzzle and programming just happens to be the tool I use.

Thinking more deeply about your words, is it that you enjoy figuring out the instructions to use to solve a problem? In other words, figuring out the algorithm and writing out the code to create something? Would you feel that, if you just told the LLM what you want to create and it did it, you'd have lost the enjoyment?

KallDrexx 12 hours ago|||
> Thinking more deeply about your words, is it that you enjoy figuring out the instructions to use to solve a problem? In other words, figuring out the algorithm and writing out the code to create something? Would you feel that, if you just told the LLM what you want to create and it did it, you'd have lost the enjoyment?

So there's a lot of nuance to the "is it that you enjoy figuring out the instructions to use to solve a problem".

On the surface, I don't enjoy typing. I don't enjoy fighting syntax checkers, Rust's borrow checker, or manual memory management in my personal C projects, or typing out the HDL for nand2tetris problems, etc.

However, there have been studies, done decades before this LLM boom, on the psychological concept called the Generation Effect. While everyone is different and it's not completely black and white, those studies found that people learn more through actual practice (the act of doing) than by just reading material. That's 100% the case for me.

I can read blogs and resources till the cows come home and I'll have only a surface understanding of a concept. Then I'll go to write the code to implement it, and it rarely works right away, because there are demonstrable gaps in my understanding. I'll debug it and iterate on it until it works, and that is what actually solidifies the mental model of what I was trying to learn. Not only do I remember it better, it seems to form connections in my brain that let me apply it in other use cases, or spin off fascinating technical tangents.

I not only get my high from that initial "Aha!" moment when I really feel like I understand a concept enough to actually apply it in other scenarios, but I also get my high from tangents that spawn off of that concept.

In many cases, I can draw a direct line from my personal projects back to the root projects that spawned them, through ideas I came up with while actually implementing things. Because I tried really hard to optimize a C# game engine for an embedded platform, I saw where the limitations were, and it solidified my knowledge of how old game consoles worked.

That led me to the idea of creating a GPU out of an embedded device that I could pair with I/O-constrained embedded devices. It taught me soooo much about the embedded space, and while it heavily improved my C writing abilities, it also made me wish I could write C# on embedded.

Since I had learned C for the embedded project (and I knew MSIL from previous deep dives), I realized I could just translate MSIL into C, which would let me run C# anywhere (I got C# working on an SNES, in the Linux kernel, and on an ESP32-S3).

By implementing that by hand and coming face to face with the many small decisions I had to make, I solidified a bunch of concepts around intermediate representations and why they are a massive benefit. Those aha moments (among others) then led me down the path of implementing a just-in-time compilation engine that brings NES games and the C64 OS into the .NET runtime.

The learnings from that have already spawned some other ideas in my mind, which is why I'm now learning Verilog and FPGA development.

None of these projects solved any useful problem (as in nothing was created that I or anyone else would use). The satisfaction and the high I got from them was having the curiosities of a problem, ideas of a solution, and persevering (partially due to being stubborn) through it and actually accomplishing it. The satisfaction that I actually understand the concepts at a foundational level, which actually ends up breeding excitement for a whole other tangent/problem.

These learnings have indirectly helped me in my day job as well. While I'm not working on anything that sophisticated or cool there, what I learned from these hands-on implementations has directly helped me create better software in other domains.

So it's not the actual typing I enjoy, but the whole picture of what comes out the other end through that typing. LLMs take most of that away. They let me ideate on a vague solution and then go ahead and implement it for me. Even if I'm specific about the details of the algorithm, they subtly fill in the blanks and the missing pieces that I haven't cemented in my brain yet, making me miss out on the opportunity to do so.

And it steals the accomplishment of the final thing existing. I don't feel any accomplishment from typing "I need a C# to C transpiler" into Google and just downloading one. That's what LLMs feel like, even when I'm trying to steer them at a lower architectural level. I don't have the aha moments, I don't have the learnings, and I'm disconnected from the code.

Thus it feels like it's stealing all the intrinsic rewards from me, only leaving the extrinsic ones. And those are not rewards I am particularly motivated by.

throw-the-towel 13 hours ago|||
[dead]
robotbikes 22 hours ago|||
I have found the opposite to be true. I really like getting stuff done for people, and for years I struggled with all the specific syntax and details of solving any particular problem. I have relatively in-depth knowledge of computers, how they work, algorithms and the like, but I always struggled with the exact details of how to do something, so it feels like a blessing to be able to spitball some conceptual understanding and get back real code. I always struggled to make my ideas real before the novelty of the inspiration wore off, unless I happened to get hyper-focused on solving a particular problem.

Now I can step through everything in a way that feels like a superpower. I have enough sense and knowledge to, I think, intuit whether the solution being provided is bloated or perhaps even unnecessary, and I can iterate over it. I've just been using Cursor for work, as I adopted a personal restriction to only use AI I can run on my own devices for personal use. But if I'm getting paid and the tools are provided, I'm going to do my best to solve the problems in front of me, and so far the LLM-connected IDE has been helpful.

In my experience it's best when I use it as a tool to augment troubleshooting and brainstorming. But when you're fixing one-liner bugs in other people's code, me typing the fix isn't very different from a machine auto-completing it.

It might feel like cheating on a crossword puzzle but that is also something I do if I get stuck and the fun of solving the problem has become a time sink.

I think the real risk is no longer understanding conceptually what you're committing. I've tried to make sure that I always understand what the code does and how it works, and to stay aware of the pitfall that the LLM's agreeability will go along with whatever bullshit hypothesis I propose.

I've yet to seriously use an LLM for a personal project. When I tried Devstral running on my Nvidia 4090, it hallucinated so much that it wasn't super helpful, but it still shot out boilerplate code that I could then spend time fixing, and it helped me overcome my own task paralysis around getting started.

KallDrexx 19 hours ago||
Yeah and that's totally fair!

We are all motivated by different things and being extrinsically motivated isn't a bad thing at all.

But being more interested in the problems rather than the solutions (and not wanting to "productize the solutions") is why LLMs are demotivating for me.

entrox 23 hours ago|||
Almost a decade ago, I moved my career into the management track. I am a director by now and have two more management levels between myself and individual contributors.

I can strongly relate to what you're writing, because I often share the same sentiment in my daily (non-AI) work. In fact, coming from that background, the switch from coding to working with agents feels eerily similar to moving into management. You encounter the same challenges, minus the "human people and emotions" part: having to explain things properly, the agents doing something different from what you intended, feeling detached from the actual work, focusing only on the bigger picture, and so on.

To me it feels very natural; it is what I do every day. But then again, I made that choice, and it wasn't forced on me. So I understand the frustration.

rubslopes 15 hours ago||
I feel lucky to have been promoted to a management position recently, just as I was starting to feel less excited about dev work because of AI. I still enjoy building systems, but I have to admit that the loss of challenge made the work much less enjoyable for me.

Now I have a team of interns to mentor. They're sharp and use AI constantly, so my guidance is less about code and more about UI/UX, understanding what the client actually wants, good work practices, well-documented tickets, thorough reviews, and so on. Thankfully, I like this work, it has been very rewarding.

m463 17 hours ago|||
I wonder if this is a new thing or if it is a repeat of the past.

Like ...

When I was young, I wrote this REALLY tight assembly code - loops that were measurably better than C or other high-level languages.

Then obviously assembly was minimized, then forgotten.

Then years later, I found I was happy using even interpreted languages, not even using a compiler.

When I first used Perl and a data structure turned out not to be useful for the final output, I swapped in a different data structure in a line of code and sorted the output exactly how I wanted. That would have been too much effort in C, and very much so in assembly language. But I got what I really wanted.

Is AI a repeat of this? Instead of assembly language, instead of C, instead of Python, do we become high-level English-language tech folks? Will AI just let us hand off our code and physical design to a fab, and will it make us happier?

I also wonder whether SoA, to you, is about how it behaves or how it is, and whether it matters once you stop looking at the code, just like I stopped comparing the C compiler's output to the assembly language I wrote. And what about years later, with -O3? Will AI have a -O3?

svachalek 14 hours ago||
I feel like this is where it's going. It's not where we are; the tools are not reliable enough yet for it to make sense to step back quite this far. But it feels like where we are going to arrive really soon.

If you look at agile processes, one of the biggest criticisms is that there's always a magic "customer" role that needs to prioritize existing work, do acceptance testing for completed tasks, and give requirements deep enough to create real specifications. This requires a lot of attention to detail and very fine-grained judgment, typically lacking in those who are eager to hold the job title of "customer".

And if you look at dark software factories, these pieces are also basically everything they're missing. The people responsible for this role were never seen as engineers/programmers in those processes, but I think that's where most SWEs will end up, because as these tools mature to the point where they manage the code all on their own, that's what will be left for the SWE in the chair.

The SWE of course won't be the actual customer/stakeholder, they'll be the proxy, the one that has to navigate meetings in meatspace and make soothing noises to the actual customers. Will they be happy doing this? That's a big group of "they" so some will, sure. But I think a lot of people who got into this career consider this the worst part of it, and it's now going to be the whole job.

seba_dos1 22 hours ago|||
Agentic harnesses go in the exact opposite direction from what I'd want to get from LLMs. I don't want another black box to (poorly) work on a black box for me; I want to be better at reaching into and understanding the boxes I already have in front of me. I don't want tools that auto-compact contexts and store generated memories to facilitate long runs I have barely any control over; I want tools that let me painlessly craft a more relevant context for short ones. I don't want agents to author commits; I want them to use Git (or other tools) to get the information I'm looking for when it's tedious to do it myself. I don't need them to do the fun and beneficial part of the job for me; I want them to do the boring parts that I already know how to do, the parts that block me from proceeding because my brain just isn't interested. Some of those things you can script yourself relatively easily, but the current tooling for LLM coding is absolutely atrocious and disconnected from programmers' needs.

The main output of my work is gaining a better mental model of systems I work with. That's what lets me grow and that's what makes people want to pay me rather than someone else to work on these things. Anything else, including the produced code itself, is secondary to that. In general I find it pretty hard, although not impossible, to use LLMs in a way that doesn't diminish my output, especially with this tooling that seems explicitly designed to make it hard. After all, reviewing things is so much harder than writing them yourself, and you can't feel accomplished by something you haven't done.

visarga 22 hours ago|||
> I do not want to rediscover for the hundredth time that in fact all this time an agent took shortcuts for acceptance tests I rely upon and didn’t catch. Or once again get the agent to understand why and what I want it to do after its context got bloated and it start to drift completely.

100% agree, neither do I, but I see this as an opportunity to think "how can we gain trust in the outputs AI produced for us?"

Is it about tests, reviews, some methodology? Better observability? Formal specification? It's really interesting to think how you can relieve this pain. I think the answer to this question will show the path ahead for agentic coding.

majormajor 18 hours ago|||
>The novelty is gone, dealing with AI now feels frustrating and boring, I miss engaging deeply with the actual lower level technical challenges.

Honestly I've had the opposite experience.

If I can leave the boring crap to the LLMs, I can focus more on the deep, important bits: the bits where LLM accuracy is spotty because there are a ton of moving pieces, and where the how and what of the code become crucial for auditability and debuggability. The code that I've written bugs in, that Opus has written bugs in, where the design around it that makes failures less catastrophic is often system-specific and unique.

If I can spend 5 minutes delegating all the tedious plumbing updates around it, then I have more time to put towards the core.

The system design challenge becomes making sure that they are well separated.

Managing fleets of agents hasn't entered into the picture because the needle-moving things there tend to be successive and cumulative, not easily parallelizable. (I believe this is true on the product side as well - 10 crappy MVP features in a week would be way less interesting to me as a user than 1 new feature released in a 3x-more-fleshed-out-way than it would've been three years ago.)

tim-projects 19 hours ago|||
I'm also diagnosed and I'm the complete opposite.

For the first time I can not only keep up with normal people's workloads; with AI I can now exceed them. I've never been more excited.

wrer 11 hours ago|||
People will blame this on "your org is shaped wrong", but the reality is: until and unless LLMs become shaped to match the human, it'll lead to pear-shaped outcomes.

Are people forgetting that we had to make PCs more powerful to enable better experiences, and build interfaces to make them intuitive and easy for humans to use? It's amazing how all these learnings get lost in the midst of disruption.

dspillett 13 hours ago|||
> I really don’t want to continue. The novelty is gone, dealing with AI now feels frustrating and boring, I miss engaging deeply with the actual lower level technical challenges. I do not want to manage fleets of agents.

I've tried to stay away for a variety of reasons (not approving of the way the tech was developed, hoovering up everyone's data for commercial gain, high amongst them), but the company I'm now part of (after they bought us) is drinking deep from the GenAI water fountain, so I will very soon have no choice but to engage or be pushed out¹. I get it, I see the benefits, but it feels like turning into a manager (of GenAI agents rather than people, but still…), which is something I've always avoided because I want to tinker. I got into programming and database work because I like playing with the nitty-gritty details, and I'm going to have to let that go.

To be frank, there is a sizable part of me that has wanted to be out of tech for a while² for various reasons³ and that part of me would prefer to go waiting tables if that is what it takes to escape! Maybe then I can reclaim tinkering as a hobby.

--------

[1] Redundancy would be nice; with 26 years of service the statutory minimums would be more than enough to tide me over for a while, but I expect they'd not do that. I'd instead be put on a PIP for underperforming (assuming they can make the case that not engaging with GenAI makes me less efficient), and if I still don't play ball, that'll be grounds for dismissal.

[2] Or at least take a fairly long sabbatical.

[3] Not liking remote teams being a significant one, and even though I go into the office⁴ I'm still remote because most of everyone else is.

[4] which grants me the home/work separation

larodi 3 hours ago|||
so Claude is an Adderall substitute in a way, right?

DANmode 16 hours ago|||
> ready to go back to the old ways for my next personal project

This stood out to me.

Because you shouldn’t, or can’t go back, in your professional projects?

righthand 21 hours ago||
I have never jumped on the train, but I am writing a project that uses v4l2 or libcamera. I have been experimenting with both, and I spent four hours reading Linux kernel docs and libcamera docs without writing any code. I'm okay with that, and the project has still moved ahead even though I have only written v4l2 sample code.
bah9 13 hours ago||
AI has basically killed my joy in programming. I've been working as a SWE in big tech for 8 years. I like learning stuff, actually coding with my hands, gathering information and understanding before implementing, polishing my code. That's all gone.

Now I am just a monkey that: 1) adds enough context, description, and harness to an agent, 2) reviews the output and repeats 1) if context is lacking.

It used to be bottom to top: from understanding to implementation. You were the owner. Now it is top to bottom: get the implementation first, try to gain understanding later. Thinking is also delegated. "Think" nowadays means "reformulate, answer questions, add context, try again". This doesn't feel like I'm doing the work; it feels like I'm the limiting factor here.

Another side effect is that any code now has zero value. No one evaluates how you guided an agent or what decisions you took. People see your work and think "yeah, I could vibe code that too with enough time", even if that's not true.

And my work isn't CSS and HTML (with all respect). It is mostly high-performance clusters, parallel computing, OS work, low-level code, SOTA online LLM inference, etc.

Now I am seriously considering a blue-collar job, as I get more joy building stuff with my hands than being a passenger/context generator for an AI. I am not a business-driven person; I don't really care how much money my company earns (sorry). I just like to solve technical puzzles and think hard.

P.S. Yes, there are corner cases AI can't do well: non-trivial, highly specific algorithms and implementations, and complex patches to gigantic multi-domain proprietary code bases, but that's like 5% of my work.

simgt 1 hour ago||
I'm in the same boat, weirdly I can't find colleagues or friends who share my point of view. It's always the same "I can do higher level thinking now" or "no it can't do X". You nailed it with the output having no value anymore.

I find some solace in electronics repair, sadly there isn't much money to make in that.

bah9 43 minutes ago||
Ahh, the famous "thinking", also known as "write a couple more prompts, answer the LLM's questions, and combine the results". Pretty exciting.

iugtmkbdfil834 2 hours ago||
<< No one evaluating how you guided an agent, what decisions you took.

Oh, don't worry. That part is coming. It might be a cynical read, but the matured version of the field will have a ton of after-the-fact reviews (especially in the more regulated parts), and you will hate it.

Weryj 1 day ago||
I could have written this article myself.

The addiction part, the ADHD part and the pending test part.

The fear of becoming addicted to AI is real, and I don't think I'll be capable of stopping it, considering we're asking people who struggle with avoiding quick dopamine to use it professionally in their daily work life.

My Pro plan went to Max (5x), then Max (20x), pretty quickly, and I was still burning through the weekly limit, without large agentic workflows that burn tokens. Just me and 4-5 terminals. Sometimes I was happy to hit the limit, because it forced me back to normal life.

I've gone back to Pro to stop what was happening.

Now I'm self-aware enough to notice the trend and put up safeguards, but that's because I've always had to adapt my environment to control my behaviour; I know direct behaviour control is abnormally challenging for me. I fear for those who won't see it coming until they're in deep.

MrGilbert 1 day ago||
> [...] considering we're asking people who struggle with avoiding quick dopamine to use it professionally in their daily work life.

It's so wild that it never dawned on me why some people around me were so quick with "Let AI do that!". I'm not saying that each and every one of them has ADHD, but I think I underestimated a) the flow of dopamine a successful prompt can set free, and b) the craving for it by folks I deemed more stable than myself.

Weryj 21 hours ago|||
I thought about it a little more deeply, and I think software development has always had an addictive tendency. The hunt for the solution to a problem delivers a rush when you complete it.

It's just that the rush is more frequent now, and addiction intensity scales with dose and frequency.

mettamage 3 hours ago|||
Don't know, man. I'm also neurodivergent, subclinical in many ways at least (I use science and self-development to keep myself remarkably stable for my neurotype). My issue with programming has always been that it feels so lonely, and you end up caring about things that no one else seems to really care about. So it removes one further from the general public.

I feel with AI agents, the pendulum shifted back a bit.

I do get what you're saying that software development has an addictive tendency as 20% of the time I am like that as well (and then I'm the "eat, sleep, code" kind). But at the same time, it's just not true for everyone.

I guess what it is: in order for software development to have an addictive tendency for one, certain conditions need to be met beforehand.

> has always had the addictive tendency

If you only meant your own experience, by the way, then I misread your comment; it reads to me as if you're trying to generalize it a bit.

Weryj 46 minutes ago||
I was generalising a bit, but in a way where only a subset would agree. It's all anecdotal and very personal.

hyperadvanced 15 hours ago|||
I think I might be going through withdrawal because I feel like I rarely get that fun feeling anymore with coding :(

It can be gratifying to get shit done but I love the feeling of coming up with a great reusable component and then making an entire app out of it

thoughtpeddler 7 hours ago||||
I struggle to see the difference between "Let AI do that" and what a founder/executive is instinctively led to do as well (i.e., delegate). Why does it have to be an ADHD thing? Yes, I see the risks of AI for someone with ADHD (described well in this article [0]), and for that reason I agree that ADHDers should be careful with these tools, as they present a lot of both promise and peril. But delegating functions to an "agent" (whether human or AI) is just what people end up doing in life. It's hard to tell these things apart.

[0] Rachel Thomas - Breaking the Spell of Vibe Coding: Sinister variations on the positive state of flow (https://www.fast.ai/posts/2026-01-28-dark-flow/)

Forgeties79 22 hours ago|||
As someone with ADHD, it's really a problem. I have so many random documents of outputs from prompts I didn't track. It's honestly accelerated some of my worst habits, because it feels like I actually completed a task. The reality is that I just have folders of half-finished projects, which anyone with ADHD can relate to.

I’ll finish modding that Dreamcast one day…

bluefirebrand 22 hours ago||
I feel kind of lucky in a way that I hate working with AI so much. I'd rather hammer nails through my fingers than spend my time prompting

So my ADHD isn't being satisfied by those little dopamine hits from LLMs. Any time I'm forced to use them I'm mad about it, and can't wait to be done with it

I still have that folder of half finished things just like you, though. It's just not AI generated

iugtmkbdfil834 2 hours ago|||
You got me curious, mostly because I did not evolve past one terminal for whatever reason. Can anyone tell me how that happens? Can you realistically keep track of much? Or is it really a move to management, as one of the other posters noted?
Weryj 39 minutes ago||
My monolith is large enough to work on multiple systems in parallel without overlapping. One prompt with Opus might take 30-40 minutes once past the planning phase.

So I plan the next work, while the current is still running and if that's a task that can't have parallel work, I have a bunch of time to keep planning the next steps for other systems.

And then there's time for reading through the changes and applying corrective changes to the code or the meta-skills.

I use CMUX and set up workspaces for each topic I'm working on; each workspace has a number of tabs. That helps keep track of everything I'm working on, but also means no topic gets left behind until I close the workspace. So they accumulate

ip26 22 hours ago|||
The counterweight has been, after using it for a bunch of projects, I have internalized that it will very, very quickly get me to maybe 60% and then I'll have to take it the rest of the way mostly by myself (or handholding it tightly for the remaining 40% at a much slower pace).

In other words, the initial implementation is practically already there, already done. So there's no rush left in generating it - it's only worth bothering if I'm prepared to see it through to 100%.

When it is worth pushing through to 100%, it's pretty great for getting the inertia going though.

mettamage 3 hours ago||
I feel AI agents are an amazing replacement for many Figma screens. Just create a crude version of your app and have users test it immediately.
willwade 1 day ago|||
I find that the new "drug" is constantly hunting down new cheaper models.. z.ai/glm, mistral, deepseek.. if you need to get your fix - find the cheaper path..
7777332215 19 hours ago||
Average drug connoisseur activities
tim-projects 19 hours ago|||
For the addiction part, I'm trying to squeeze as much quality code as possible out of the free tokens. I'm having a blast!
rufasterisco 1 day ago|||
Instead of jumping from project to project, I focus on one (maybe a few) and let myself free while agents spew out their output.

Something physical is excellent for me: minor wood carving, origami, drawing exercises, also light physical exercises.

My trick is to (try to) do something that requires high focus, on unrelated matters.

To give a practical example: the simple gesture of connecting two points on a sheet of paper with a direct, non-trembling line requires high focus. If you try to do it sloppily, the line comes out too long, too short, etc. I need to shadow the moment, gain focus, draw the line.

It keeps my brain focused, busy, and engaged. Videos, podcasts, and in general anything digital seem to distract me and/or overload me.

Also, I am back to using the Pomodoro technique more frequently.

Just some pointers, in case you want to try them out, or suggest some you find effective yourself.

moron4hire 23 hours ago||
Might call it the OnlyFans model of Software Development.
protocolture 6 hours ago||
I am in a similar position, but I have worked things out so that the AI enhances me and not the other way around. Depending on the project, it's about clearing my blockers and giving me training, rather than being a slot machine that successful results fall out of.

One of the big issues I had to overcome was realising that there's nearly zero value in a solution I don't understand. And my understanding is woefully incomplete.

For instance, I have started doing a lot of personal electronics work. It's easy enough to request a circuit diagram and a BOM, but the work still has to be done with my hands, and crucially, the parts are purchased with my money. So I see a circuit diagram and I go "Hey, why does this work?" or "Uh, shouldn't the added resistance between these components send the charge straight to ground?", and by the time I have asked 100 questions, I have either established dominance (proven that the original diagram was incorrect) or learned some valuable information. And I can generally be assured that the questions I ask are not worthy of being answered by skilled electrical people, who definitely aren't awake or wanting to answer my annoying questions at 2am or whatever.

I have done this for years, when I started I approached it as "Hey ChatGPT, teach me python" and its been really good.

albert_e 1 day ago||
Resonates with me.

In a paradoxical way, the amount of stuff you can get done in an hour now is like a firehose -- something we rarely experienced earlier in life -- and it can be overwhelming to my brain. So I subconsciously resist starting a session, because I never feel rested, calm, and focussed enough to take all that in and process it well.

There are also 10x more "active" projects now -- and prioritizing and choosing between them at every moment is still a struggle. The temptation to do the fun and novel thing and avoid important but familiar boring chores pops up every step of the way and can derail you for days.

I am still trying to create a system that works -- now using the very same tools. Long journey ahead.

EDIT: My experience --

I was paying for both Claude Code and ChatGPT Pro, but was almost exclusively using CC for coding work because it was so good. After CC started hammering the session and weekly quotas lately, I tentatively started using Codex and find that it seems equally good and almost indistinguishable for my work, and occasionally shines by one-shotting some tasks. This helped me stay afloat with just 2x$20 spend per month without feeling held up for ransom. I've also never hit Codex limits till now.

Leaving a 5-hour session quota unused towards the end, or worse, not even starting a 5-hour session clock, was a source of constant anxiety -- that I was wasting precious quota getting nothing done. I think I am getting over that now.

rsyring 21 hours ago|
I've been using Augment's agents (VS Code, CLI) for 8ish months. It lets me easily switch between GPT and Claude models.

I've found the best results from letting GPT 5.4 code and then asking Opus to write code reviews to a file. I do the review in a different agent session so it's "fresh". Then I review the file, edit until I agree with everything, and let the existing GPT agent session address the items in the review file. I've found Claude agents don't perform as well for me in coding, for whatever reason. They feel sloppier.

I've also been doing a very organic spec-driven development process where I have a md file for each non-trivial project update and use that to define the task and address questions or problems the agent has.

I've also found I can give agents conditional instructions which they will usually use like skills. This gives me a way to easily distribute my instructions to any agent/model on any machine with a single AGENTS.md as the entry point:

https://github.com/rsyring/agent-configs/blob/main/default.m...
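To illustrate the idea, a conditional instruction of the kind described might look like this hypothetical AGENTS.md fragment (the file names and rules below are made up for illustration, not taken from the linked repo):

```markdown
## Conditional instructions

- If the task involves writing tests, first read `testing-guidelines.md`
  and follow its conventions.
- If the task touches database migrations, stop and ask before altering
  any existing migration file.
- If asked for a code review, write your findings to `review.md`
  instead of editing the code directly.
```

The agent only pulls in the referenced file when its condition matches, which is what makes a single AGENTS.md workable as the sole entry point across machines.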

This has all been very effective, more than I would have predicted a year ago.

sajithdilshan 1 day ago||
I can relate to this. Last October, I had a real epiphany using Claude Code at work. Suddenly, that initial inertia of starting something, whether it's drafting a JIRA ticket, structuring a PR, or just brainstorming, completely vanished.

I started using Claude exclusively in plan mode, and within minutes, I’d have full clarity on exactly what I wanted to do and how to do it. With the release of the Opus model, I felt 100% more productive because I stopped spending time on menial tasks like manual coding or documentation. Instead, I shifted my focus to architecting, problem solving, and reviewing code to make it perfect. I even wrote two PyCharm plugins to unify my workflow (one to manage Claude Code sessions as a first class citizen and another to render Markdown in a less eye straining way) so I don't have to leave the IDE.

However, the novelty is starting to wear off. Six months ago, I would have truly admired how efficient and productive the current version of myself has become, but now I just take it for granted. It has become the new normal, and I’m finding myself bored and stuck in a vicious cycle of constantly needing to reach the next level.

canardadry 4 hours ago||
Ah, the famous https://en.wikipedia.org/wiki/Hedonic_treadmill
bah9 13 hours ago||
"shifted my focus to architecting, problem solving, and reviewing code to make it perfect", aka write a couple more prompts and combine the results. Pretty exciting
hyperific 23 hours ago||
Addressing the end of the article, I think that we are all very much still learning how to use AI responsibly. It's like we just discovered alcohol and we're going on a rager every night because we don't know any better yet.

It's too easy to buy €100 of Claude tokens and burn through them to make those dream projects appear as if by magic. There's a middle ground where, for example, instead of building a whole project it could produce a project template and provide guidance as you build. That should take the edge off the task paralysis and hopefully disrupt the addiction loop.

hirvi74 15 hours ago|
That's how I use LLMs for programming. I predominately use the chatbots instead of the CLI tools. Every so often, I'll ask for a one-shot of some MVP, but then I take that MVP and make all the changes myself. However, I must say that I rarely do the one-shot-and-edit style of development. I find that such a process can save time, but not always.
pllbnk 1 day ago||
So the end game for the current generation of AI companies won't be productivity improvements but gambling, just like everything else nowadays. That's why they want to get us all into these massive casinos they call data centers and don't want us to own the slot machines.

So what if you have ideas -- other people have them too. It's not ideas that build businesses but knowing the right people or the ability to sell products.

stavros 1 day ago|
The gambling trope is so tired. AI development doesn't involve luck to any appreciable degree, certainly not more than hiring people to do a job can be considered "gambling" (you never know what you're going to get!).

It's just paying to get stuff done, which is how it's always been, since the dawn of man.

Thanemate 1 day ago|||
>AI development doesn't involve luck to any appreciable degree

Reading this while I'm prompting for the third time to fix a 100+ line function is amusing, to say the least. I don't care about the definition of "appreciable", but I definitely have to repeat myself to get stuff done, sometimes even to undo things I never told it to touch.

pickleRick243 8 hours ago|||
"Insanity is doing the same thing over and over again and expecting different results."

There is certainly randomness in model output that the user has to work around, but sending the same prompt with the same context (or, even worse, with added entropy from leaving the previous failed prompt in the context) over and over again, akin to pulling a slot machine lever, is certainly user error and not the way to "hold it".

stavros 1 day ago|||
That sounds like a process problem. LLMs, like any tool, work better if you don't use them in the naive "do this" way. This works well for me:

https://news.ycombinator.com/item?id=48083267

daveguy 17 hours ago|||
> The gambling trope is so tired...

>>> That sounds like a process problem. LLMs, like any tool, work better if you don't use them in the naive "do this" way...

The "you're holding it wrong" trope is even more tired than the gambling trope.

stavros 15 hours ago||
If you can't get results with the thing I'm getting results with, what other explanation would you give?
wavemode 14 hours ago|||
That logic only makes sense if you and the other person are working on the exact same kinds of projects.
daveguy 13 hours ago||
Exact same kinds of projects with the exact same development environment, models, etc. Either he's never worked with a development team or he doesn't consider things outside his own perspective. shrug
wrer 11 hours ago|||
What results lmao? Literally everything you've shared has zero-value - yes I checked them out. Thinking wtf is this?
stavros 4 hours ago||
What a useless comment to leave. Of course it's a throwaway account.
xantronix 22 hours ago|||
What's your monthly token spend?
stavros 22 hours ago||
I have a $100 Claude sub and a $20 OpenAI sub.
js8 1 day ago||||
> certainly not more than hiring people to do a job can be considered "gambling"

Actually, it's quite possible that being a business manager/owner is itself addictive (having power over people); we just don't recognize it as such.

stavros 1 day ago||
All gambling addiction is addiction, not all addiction is gambling.
js8 1 day ago|||
Then you miss the point - AI use is being compared to gambling because it is addictive, partly due to same mechanism - the results (and rewards) are somewhat random, but it makes you feel as if you're completely in control of the outcome.
stavros 1 day ago||
Yeah, that hasn't been my experience. The outcome, for me, is extremely consistent. I ~never have to "reroll" by wiping work and doing it again.
js8 1 day ago||
Strange. I tell Claude Code to do things differently all the time.
stavros 1 day ago||
I'd recommend a different workflow, with extensive upfront planning. This works extremely well for me:

https://www.stavros.io/posts/how-i-write-software-with-llms/

It's to the point that I just push the output of that to production and know it'll be OK, except for very large changes where I'm unlikely to have specified everything at the required level of detail. Even then, things won't so much be wrong, as they'll just not be how I want them.

iugtmkbdfil834 1 hour ago||
Thank you for sharing this. It is weirdly encouraging to see someone take a path similar to mine.
cindyllm 1 day ago|||
[dead]
pllbnk 1 day ago||||
For most people who are not doing this as their day-to-day job, it's just a prompt with their idea roughly sketched out, and a miracle happens: the LLM fills in the blanks. Every time it's different, but it works, sometimes even better than initially expected. Hence the addiction and the gambling. Gambling is a lot of things, not only flashing lights and sounds. Some people claim prediction markets aren't gambling either, though that doesn't change the fact.
stavros 1 day ago||
How is this different from hiring a designer, telling them "make me a website" and then waiting to see if they resolve the uncertainty into something you like or not?

I tell LLMs what to do in pretty high detail, and they do it. With LLMs I have much less variance than with coworkers.

js8 1 day ago||
It is different because it takes humans time to produce a result, while AI does it almost instantly. So if you tell a programmer to do X, you have a week for your adrenaline to cool off. If you tell an AI, it will do it in minutes.
stavros 1 day ago||
I don't think the difference between a designer and a slot machine is that one gives you results more slowly, "therefore it's not gambling".

If you're making the argument that LLMs are gambling simply because they're faster than humans, I'd like to see some evidence.

js8 1 day ago||
> If you're making the argument that LLMs are gambling simply because they're faster than humans

No I am not. It's more addictive because of the timescale. The comparison of AIs to gambling is through addiction mechanism, as I explain elsewhere.

My aunt used to put in (the same) lottery numbers every week. It was gambling, but probably not an addiction in the clinical sense. If she had played slot machines, god forbid, it could have been more problematic. AI is a slot machine, a hire is a lottery ticket.

HumblyTossed 1 day ago||||
I don’t like the gambling comparison either. It’s more like smoking or drinking: an addiction you lean on to help you do something, even if that something is just getting through the day.
Schiendelman 1 day ago|||
Like the internet!
stavros 1 day ago|||
Yeah but those are classified as addictions because they have a harm component (lung cancer, liver disease, societal impact). LLMs aren't going to kill you. If anything, it might be like gaming addiction.

If you've gotten to the point where you'd rather talk to an LLM than socialise, go to work, etc, then yes, you definitely have a problem, same as with a gaming addiction.

Saying "LLMs are slot machines" is like saying "video games are slot machines", and nobody says that, even though it's more true of video games (some are actual slot machines/gacha) than of LLMs.

miyoji 23 hours ago||
> Saying "LLMs are slot machines" is like saying "video games are slot machines", and nobody says that, even though it's more true of video games (some are actual slot machines/gacha) than of LLMs.

People absolutely do say that video games are slot machines. [0][1]

0: https://lvl-42.com/2018/11/06/video-games-as-slot-machines/

1: https://www.psu.com/news/three-ways-casino-games-are-similar...

stavros 23 hours ago||
Hence the parenthesized section of the part of my comment you quoted.
rasur 1 day ago||||
I'd observe that there are professional gamblers, and there are amateur gamblers.

If you know what you're doing, know how to spec a problem space, and can manage the tool competently enough to churn out good results, then everything's fine, and you're maybe being productive or increasing your productivity by some degree. (Professional "Gambler")

If you DON'T know what you're doing, and you're just vibe-coding, then I would argue that it is at least a form of gambling (Amateur "Gambler")

Both of these conditions can also be applied to "hiring people to do a job" however there we can also observe things like reputation, credentials and so on.

"It's just paying to get stuff done..." is, with respect, superfluous.

stavros 1 day ago||
I don't know, I can understand "some people might overdo it and get addicted to LLMs". I can't understand "LLMs are slot machines and that's all they're good for" when I use LLMs every day to do tons of actual work.
mrbungie 1 day ago||||
The gambling part is because of the (hopefully emergent and not purposefully designed) intermittent reinforcement due to the limits. You don't get that with regular hires.
stavros 1 day ago||
Really? All the hires I've seen had an 8-hour/5-day limit, or you had to pay through the nose for extended usage outside that window.

Where do you get your 24/7 hires from?

mrbungie 1 day ago||
You usually don't get immediate responses from hires which means delayed gratification and avoiding much of the potential dopaminergic effects you get when engaging with LLMs.

You can play overextending the hire analogy all you want but it is simply not the same.

timacles 16 hours ago|||
Not in that sense, but social media companies already know the value of not giving a user exactly what they want. This keeps them on the platform longer and excites some lizard part of our brain that craves challenge.

Due to capitalism’s law of all businesses converging on maximizing profit, it’s just a matter of time until AI companies employ similar techniques with LLMs. We can all imagine what that will look like.

rufasterisco 1 day ago||
Some traits I recognized in many excellent coders I worked with (a drive toward optimization, intellectual thirst, critical and creative thinking) are attributes I consistently correlated with them being on some sort of neurodivergence spectrum.

Being able to remove the "first step" block is great, but what worries me is that this is coupled with LLMs' sycophantic behaviour. My gut feeling is that coupling the dopamine hits of feeling unblocked with constant praise of one's abilities is an intro to psychosis and paranoia for them.

Weebs 22 hours ago|
I'm wrestling with this right now. I only use LLMs for design and exploration, because I am not employed and can't pay for a subscription right now. They make the design phase feel like less of a fever dream: checking my ideas no longer involves hours of scanning search results online, trying to see how my ideas fit with what exists, or trying to evaluate whether they even make sense. So I feel more encouraged to get started on working, but I often wonder if the responses are just sycophancy

In one case recently, I explained a garbage collector design I had been toying with a while ago, but I couldn't find research related to my idea or really evaluate whether it would work. After enough arguing, the model finally "understood", started praising my "novelty", and when I later asked for related research I was given a paper that already implemented most of my idea

It was a funny moment: seeing how it was clearly trained on too many online forum comments (simply mentioning reference counting got it on a whole awkward line of false folklore about memory management), then switching to sycophancy, then finally showing me the paper

adamtaylor_13 1 day ago|
Nitpick: Stop the throat clearing and get to the point. The final paragraph is the whole point of the article.

It's a real turnoff when I have to scroll past a moral lecture on artistry and piracy when I just want to hear your thoughts on task paralysis.

---

To the author's point though, AI is incredible at building some initial momentum on a task. The initialization energy is basically zero.

MrGilbert 17 hours ago||
Appreciate your nitpick. Just as I dislike recipes that introduce you to the fine art of wheat milling before getting to the recipe itself, I tried to keep that section short(-ish). I felt the need to provide some context and thoughts, which is why I included it. Not sure what I'll do next time: either put the conclusion at the beginning and offer more context and thoughts at the end (so you can drop out if you don't want it), or just leave it out completely. I'll reflect on that.
renticulous 22 hours ago|||
When I don't have time, I just ask AI to summarize the main points and expand on the ones I like. I do this even with HN discussions: I copy the whole HN page, paste it into Claude, and ask it to summarize and deduplicate the talking points.
jplusequalt 22 hours ago|||
You didn't have to read the article.
furyofantares 18 hours ago|||
OP posted their article on the internet and then to HN presumably because they'd like people to read it.
pegasus 19 hours ago||||
And you don't have to read the critique.
pessimizer 19 hours ago|||
You can't know that it was a waste of time to read until after you've read it. Especially if the point is at the end, and is a good point.
kordlessagain 1 day ago|||
IP law is incompatible with AI. It's an important point, but not here.
dinfinity 21 hours ago||
Not a nitpick, but a justified criticism of the post. The technical term is "burying the lede", and it is incompetence at best and malice at worst.

It's absolutely awful. It's not a novel or entertainment. Don't "foreshadow" or "set the scene". Just get to the fucking point.

More comments...