YMMV but I’m personally feeling burnt out with AI coding agents and ready to go back to the old ways for my next personal project
I got into coding over a decade before it was my career because of the exploration, learning, and puzzle/challenge aspect.
Every time I have tried to be extrinsically driven (career- or OSS-wise), it has never worked out anyway. I could have done more to make it successful, but I never cared about getting validation or getting users for my stuff (and the stress that brings).
I've been lucky that up until this point, the intrinsic rewards I have gotten from my job have aligned with company goals.
LLMs take all the intrinsic wins and leave only the extrinsic ones. That makes me sad, but it is what it is, I guess.
I had been thinking about a tool for months but didn't have the time. I finally gave in and built it at work in a week with LLM tokens. It worked fantastically. But I felt no accomplishment. It felt just the same as if I had downloaded the tool from someone else's repo (one with an overly eager maintainer who would implement my GitHub issue requests).
The hard part for me is ignoring LLMs in my free time to try and keep some of the intrinsic rewards to myself, without being annoyed that I could do it faster if I just "gave in".
I do believe you can use LLMs while maintaining the intrinsic rewards of programming. For me, right now that means writing code by hand and using LLMs primarily for research, documentation, and brainstorming. Sometimes I ask it to write a piece of code just so that I can see what it comes up with and maybe learn something from it. I'm also planning on experimenting with coding agents, but I will probably have it work in its own parallel repo and hand-pick the changes I want to keep.
I think a "late adopter" mindset is actually beneficial. It allows you to focus on fundamental skills that will never be outdated, and you get the benefits of new technologies once they mature.
I get that. I recently watched a "talking head" style video by javidx9, where he said something along the lines of having to disconnect from the code emotionally [0]. He has to get into the code to understand it. I get the same feeling; for me, however, it feeds my curiosity and my need for exploration. At least for now, I might add.
It’s great that an application now exists where there wasn’t one before, but it’s hollow because I didn’t make it. Nobody made it! It just exists now with nothing actually accomplished by anyone. It’s a very spooky way to conjure things up.
The answer could be to just push further, and try solving harder problems.
Not for me, but that's because I like playing around with software. So in web apps, I do the UX myself.
I'm not here to invalidate your experience. I get what you're saying and I feel it. I just also want to show the point of view where intrinsic motivation can still pop up (for some people at least).
But yea, it sucks if all of it is taken away from you. I'm sorry to hear that.
I'm not sure I understand this. For me, programming was at first a tool for satisfying my curiosity. When I first started coding I knew nothing about software patterns, how I should name my variables, how long functions should be, DOD vs OOP, functional vs imperative, the single responsibility principle, and on and on.
I wrote a mess of a program and got it to do very cool things (for me). I loved it.
Then I got taught more, got my first jobs, learned why programming large systems needs standards, patterns, etc. I became good at that, and have had a long lucrative career out of it.
But I cannot wait for the day when I no longer need to earn money from programming and I can go back to using it just to do "cool shit". At that point, whether I am hacking and slashing myself, or working with an LLM to do something, I don't care. It is the intrinsic goal of solving a puzzle and programming just happens to be the tool I use.
Thinking more deeply about your words, is it that you enjoy figuring out the instructions to use to solve a problem? In other words, figuring out the algorithm and writing out the code to create something? Would you feel that, if you just told the LLM what you want to create and it did it, you'd lose the enjoyment?
So there's a lot of nuance to the "is it that you enjoy figuring out the instructions to use to solve a problem?" question.
On the surface, I don't enjoy typing. I don't enjoy fighting syntax checkers, Rust's borrow checker, or manual memory management in my personal C projects, or typing out the HDL for nand2tetris problems, etc.
However, there have been studies done decades before this LLM boom about the psychological concept called the Generation Effect. While everyone is different and it's not completely black and white, the studies have found that people learn more by actual practice (the act of doing) than just by reading material. That's 100% the case for me.
I can read blogs and resources till the cows come home and I'll have a very surface-level understanding of a concept. Then I'll go to write the code to implement it, and it rarely works right away, because there are demonstrable gaps in my understanding. I'll debug it and iterate on it until it works, and that is what actually solidifies the mental model of what I was trying to learn. Not only do I remember it better; it seems to form connections in my brain that allow me to apply it in other use cases, or to pursue fascinating technical tangents.
I not only get my high from that initial "Aha!" moment when I really feel like I understand a concept enough to actually apply it in other scenarios, but I also get my high from tangents that spawn off of that concept.
In many cases, I can draw a direct line from my personal projects back to the root projects that spawned them, via ideas I came up with while actually implementing things. When I tried really hard to optimize a C# game engine for an embedded platform, I saw where the limitations were, and it solidified my knowledge of how old game consoles worked.
That led me to the idea of building a GPU out of an embedded device that I could pair with I/O-constrained embedded devices. It taught me so much about the embedded space, and while it heavily improved my C writing abilities, it also made me wish I could write C# on embedded.
Since I had learned C for the embedded project (and I knew MSIL from previous deep dives), I realized I could just translate MSIL into C, which would let me run C# anywhere (I got C# working on an SNES, in the Linux kernel, and on an ESP32-S3).
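To make that concrete, here is a minimal sketch of what that kind of lowering can look like (a hypothetical illustration, not the commenter's actual implementation). MSIL is a stack machine, so each IL opcode can become a C statement against explicit stack-slot locals; for a method body of ldarg.0 / ldarg.1 / add / ret:

    #include <stdint.h>

    /* Hypothetical MSIL-to-C lowering of:  ldarg.0; ldarg.1; add; ret
       Each evaluation-stack slot becomes an explicit C local. */
    int32_t add_i32(int32_t arg0, int32_t arg1)
    {
        int32_t s0, s1;   /* stack slots */
        s0 = arg0;        /* ldarg.0 */
        s1 = arg1;        /* ldarg.1 */
        s0 = s0 + s1;     /* add     */
        return s0;        /* ret     */
    }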
Implementing that by hand, and coming face to face with the many small decisions I had to make, solidified a bunch of concepts in my head about intermediate representations and why they are such a massive benefit. Those aha moments (among others) then led me down the path of implementing a just-in-time compilation engine that runs NES games and the C64 OS in the .NET runtime.
The learnings from that have already spawned some other ideas in my mind, which is why I'm now learning Verilog and FPGA development.
None of these projects solved any useful problem (as in nothing was created that I or anyone else would use). The satisfaction and the high I got from them was having the curiosities of a problem, ideas of a solution, and persevering (partially due to being stubborn) through it and actually accomplishing it. The satisfaction that I actually understand the concepts at a foundational level, which actually ends up breeding excitement for a whole other tangent/problem.
These learnings have indirectly helped me in my day job as well. While I'm not working on anything that sophisticated or cool there, the lessons from all of these hands-on implementations have let me build better software in other domains.
So it's not the actual typing I enjoy, but the whole picture of what comes out the other end through that typing. LLMs take most of that away. They let me ideate on a vague solution and then go ahead and implement it for me. Even if I'm specific about the details of the algorithm, they subtly fill in the blanks and the missing pieces that I haven't cemented in my brain yet, making me miss out on the opportunity to do so.
And it steals the accomplishment of the final thing existing. I don't feel any accomplishment from typing "I need a C# to C transpiler" into Google and just downloading one. That's what LLMs feel like, even when I'm trying to steer them at a lower architectural level. I don't have the aha moments, I don't have the learnings, and I'm disconnected from the code.
Thus it feels like it's stealing all the intrinsic rewards from me, only leaving the extrinsic ones. And those are not rewards I am particularly motivated by.
Now I can step through everything in a way that feels like a superpower. I think I have enough sense and knowledge to intuit whether the solution being provided is bloated or perhaps even unnecessary, and I can iterate on it. I've been using Cursor for work; I adopted a personal restriction to only use AI I can run on my own devices for personal use, but if I'm getting paid and the tools are provided, I'm going to do my best to solve the problems in front of me, and so far the LLM-connected IDE has been helpful.
In my experience it's best when I use it as a tool to augment troubleshooting and brainstorming, but when you're fixing one-liner bugs in other people's code, me typing the fix isn't very different from a machine autocompleting it.
It might feel like cheating on a crossword puzzle but that is also something I do if I get stuck and the fun of solving the problem has become a time sink.
I think the real risk is no longer understanding conceptually what you're committing. I've tried to make sure I always understand what the code does and how it works, and to stay aware of the pitfall that an agreeable LLM will happily go along with whatever bullshit hypothesis you propose.
I've yet to seriously use an LLM for a personal project. When I tried Devstral running on my Nvidia 4090, it hallucinated so much that it wasn't super helpful, but it still shot out boilerplate code that I could then spend time fixing, and that helped me overcome my own task paralysis about getting started.
We are all motivated by different things and being extrinsically motivated isn't a bad thing at all.
But being more interested in the problems rather than the solutions (and not wanting to "productize the solutions") is why LLMs are demotivating for me.
I can strongly relate to what you're writing, because I often share that same sentiment in my daily (non-AI) work. In fact, coming from that background, the switch from coding to working with agents feels eerily similar to moving into management. You encounter the same challenges minus the "human people and emotions" part: having to explain things properly, the agents doing something different from what you intended, feeling detached from the actual work, focusing only on the bigger picture, and so on.
To me it feels very natural; it is what I do every day. But then again, I made that choice and it wasn't forced on me. So I understand the frustration.
Now I have a team of interns to mentor. They're sharp and use AI constantly, so my guidance is less about code and more about UI/UX, understanding what the client actually wants, good work practices, well-documented tickets, thorough reviews, and so on. Thankfully, I like this work, it has been very rewarding.
Like ...
When I was young, I wrote REALLY tight assembly code: loops that were measurably better than what C or other high-level languages produced.
Then obviously assembly was minimized, then forgotten.
Then years later, I found I was happy using even interpreted languages, not even using a compiler.
When I first used Perl and a data structure turned out not to be useful for the final output, I switched to a different data structure in one line of code and sorted the output exactly the way I wanted. That would have been too much effort in C, and very much so in assembly language. But I got what I really wanted.
Is AI a repeat of this? Instead of assembly language, instead of C, instead of Python, do we become high-level-English-language tech folks? Will AI just let us hand off our code and physical design to a fab, and will it make us happier?
I also wonder whether SoA to you is how it behaves or how it is, and whether it matters if you stop looking at the code, just like I stopped comparing the code the C compiler generated to the assembly language I wrote. And what about years later with -O3: will AI have an -O3?
If you look at agile processes, one of the biggest criticisms is that there's always a magic "customer" role that needs to prioritize existing work, do acceptance testing for completed tasks, and give requirements deep enough to create real specifications. That requires a lot of attention to detail and very fine-grained judgment, which is typically lacking in those who are eager to hold the job title of "customer".
And now if you look at dark software factories, these pieces are basically everything they're missing, too. The person or people responsible for this role were never seen as engineers/programmers in those processes, but I think that's where most SWEs will end up, because as these tools mature to the point where they manage the code all on their own, that's what will be left for the SWE in the chair.
The SWE of course won't be the actual customer/stakeholder, they'll be the proxy, the one that has to navigate meetings in meatspace and make soothing noises to the actual customers. Will they be happy doing this? That's a big group of "they" so some will, sure. But I think a lot of people who got into this career consider this the worst part of it, and it's now going to be the whole job.
The main output of my work is gaining a better mental model of systems I work with. That's what lets me grow and that's what makes people want to pay me rather than someone else to work on these things. Anything else, including the produced code itself, is secondary to that. In general I find it pretty hard, although not impossible, to use LLMs in a way that doesn't diminish my output, especially with this tooling that seems explicitly designed to make it hard. After all, reviewing things is so much harder than writing them yourself, and you can't feel accomplished by something you haven't done.
100% agree, neither do I, but I see this as an opportunity to think "how can we gain trust in the outputs AI produced for us?"
Is it about tests, reviews, some methodology? Better observability? Formal specification? It's really interesting to think how you can relieve this pain. I think the answer to this question will show the path ahead for agentic coding.
Honestly I've had the opposite experience.
If I can leave the boring crap to the LLMs, I can focus more on the deep, important bits: the bits where LLM accuracy is spotty because there are a ton of moving pieces and the "how/what" of the code becomes crucial for auditability and debuggability; the code that I've written bugs in, that Opus has written bugs in, where the design around it that makes those bugs less catastrophic when they happen is often system-specific and unique.
If I can spend 5 minutes delegating all the tedious plumbing updates around it, then I have more time to put towards the core.
The system design challenge becomes making sure that they are well separated.
Managing fleets of agents hasn't entered the picture, because the needle-moving things there tend to be successive and cumulative, not easily parallelizable. (I believe this is true on the product side as well: 10 crappy MVP features in a week would be way less interesting to me as a user than 1 new feature released in a 3x more fleshed-out way than it would've been three years ago.)
For the first time I can not only keep up with normal people's workloads, but with AI I can now surpass them. I've never been more excited.
Are people forgetting that we needed to make PCs more powerful to enable better experiences and interfaces that made them intuitive and easy for humans to use? It's amazing how all these lessons get lost in the midst of disruption.
I've tried to stay away for a variety of reasons (not approving of the way the tech was developed, hoovering up everyone's data for commercial gain, high amongst them), but the company I'm now part of (due to them buying us) is drinking deep from the GenAI water fountain, so I will very soon have no choice but to engage or be pushed out¹. I get it, I see the benefits, but it feels like turning into a manager (for GenAI agents rather than people, but still…), which is something I've always avoided because I want to tinker. I got into programming and database work because I like to play with the nitty-gritty details, and I'm going to have to let that go.
To be frank, there is a sizable part of me that has wanted to be out of tech for a while² for various reasons³ and that part of me would prefer to go waiting tables if that is what it takes to escape! Maybe then I can reclaim tinkering as a hobby.
--------
[1] Redundancy would be nice; with 26 years of service the statutory minimums would be more than enough to tide me over for a while, but I expect they'd not do that. I'd instead be put on a PIP for not performing (assuming they can make a case that not engaging with GenAI makes me less efficient), and if I still don't play ball, that'll be grounds for dismissal.
[2] Or at least take a fairly long sabbatical.
[3] Not liking remote teams being a significant one, and even though I go into the office⁴ I'm still remote because most of everyone else is.
[4] which grants me the home/work separation
This stood out to me.
Because you shouldn’t, or can’t go back, in your professional projects?
Now I am just a monkey that: 1) adds enough context, description, and harness to an agent, 2) reviews the output and repeats 1) if context is lacking.
It used to be bottom to top: from understanding to implementation. You were the owner. Now it is top to bottom: get the implementation first, try to get the understanding later. Thinking is also delegated. "Think" nowadays means "reformulate, answer questions, add context, try again". This doesn't feel like I am doing the work; it feels like I am the limiting factor here.
Another side effect is that any code now has zero value. No one evaluates how you guided the agent or what decisions you made. People see your work and think "yeah, I could vibe code that too with enough time", even when that's not true.
And my work isn't CSS and HTML (with all respect). It is mostly high-performance clusters, parallel computing, OS work, low-level code, SOTA online LLM inference, etc.
Now I am seriously considering a blue-collar job, as I get more joy from building stuff with my hands than from being a passenger/context generator for an AI. I am not a business-driven person; I don't really care how much money my company earns (sorry). I just like to solve technical puzzles and think hard.
P.S. Yes, there are corner cases AI can't do well: non-trivial, highly specific algorithms and implementations; complex patches to gigantic multi-domain proprietary code bases. But that's like 5% of my work.
I find some solace in electronics repair; sadly there isn't much money to be made in it.
Oh, don't worry. That part is coming. It might be a cynical read, but the matured version of the field will have a ton of after-the-fact reviews (especially in more regulated areas), and you will hate it.
The addiction part, the ADHD part and the pending test part.
The fear of becoming addicted to AI is real, and I don't think I'll be capable of stopping it, considering we're asking people who struggle with avoiding quick dopamine to use it professionally in their daily work life.
My plan went from Pro to Max (5x) to Max (20x) pretty quickly, and I was still burning through that weekly limit, without large agentic workflows that burn tokens. Just me and 4-5 terminals. Sometimes I was happy to hit the limit because it forced me back to normal life.
I've gone back to Pro to stop what was happening.
Now I'm self-aware enough to notice the trend and put up safeguards, but that's because I've always had to adapt my environment to control my behaviour, since I know direct behaviour control is abnormally challenging for me. I fear for those who won't see it coming until they're in deep.
It's so wild that it never dawned on me why some people around me were so quick with "Let AI do that!". I'm not saying that each and every one of them has ADHD, but I think I underestimated a) the flow of dopamine a successful prompt can set free and b) the craving for it by folks I deemed more stable than myself.
It’s just that the rush is more frequent, addiction intensity scales with dose and frequency.
I feel with AI agents, the pendulum shifted back a bit.
I do get what you're saying, that software development has an addictive tendency; 20% of the time I am like that as well (and then I'm the "eat, sleep, code" kind). But at the same time, it's just not true for everyone.
I guess what it is: for software development to have an addictive tendency for someone, certain conditions need to be met beforehand.
> has always had the addictive tendency
If you meant just your own experience, by the way, then I misread your comment, since it reads to me as if you're trying to generalize it a bit.
It can be gratifying to get shit done, but I love the feeling of coming up with a great reusable component and then making an entire app out of it.
[0] Rachel Thomas - Breaking the Spell of Vibe Coding: Sinister variations on the positive state of flow (https://www.fast.ai/posts/2026-01-28-dark-flow/)
I’ll finish modding that Dreamcast one day…
So my ADHD isn't being satisfied by those little dopamine hits from LLMs. Any time I'm forced to use them I'm mad about it and can't wait to be done with it.
I still have that folder of half-finished things just like you, though. It's just not AI-generated.
So I plan the next piece of work while the current one is still running, and if it's a task that can't be parallelized, I have a bunch of time to keep planning the next steps for other systems.
And then there's time for reading through the changes and applying corrective changes to the code or the meta-skills.
I use CMUX and set up workspaces for each topic I'm working on; each workspace has a number of tabs. That helps me keep track of everything I'm working on, but it also means no topic gets left behind until I close the workspace. So they accumulate.
In other words, the initial implementation is practically already there, already done. So there's no rush left in generating it - it's only worth bothering if I'm prepared to see it through to 100%.
When it is worth pushing through to 100%, it's pretty great for getting the inertia going though.
Something physical is excellent for me: minor wood carving, origami, drawing exercises, also light physical exercises.
My trick is to (try to) do something that requires high focus, on unrelated matters.
To give a practical example, the simple gesture of connecting two points on a sheet of paper with a direct, non-trembling line requires high focus: if you try to do it sloppily, the line is too long, too short, etc. I need to inhabit the moment, gain focus, draw the line.
It keeps my brain focused, busy, and engaged. Videos, podcasts, and in general anything digital seem to distract me and/or overload me.
Also, I am back at using pomodoro technique more frequently.
Just some pointers, in case you want to try them out, or suggest some you find effective yourself.
One of the big issues I had to overcome was realising that there's nearly zero value in a solution I don't understand. And my understanding is woefully incomplete.
For instance, I have started doing a lot of personal electronics work. It's easy enough to request a circuit diagram and a BOM, but the work still has to be done with my hands and, crucially, the parts are purchased with my money. So I see a circuit diagram and I go "Hey, why does this work?" or "Uh, shouldn't the added resistance between these components send the charge straight to ground?", and by the time I have asked 100 questions I have either established dominance (proven that the original diagram was incorrect) or learned some valuable information. And I can be fairly sure my questions aren't worth the time of skilled electronics people, who definitely aren't awake or wanting to answer my annoying questions at 2am anyway.
I have done this for years; when I started, I approached it as "Hey ChatGPT, teach me Python", and it's been really good.
Paradoxically, the amount of stuff you can get done in an hour now is like a firehose, something we rarely experienced earlier in life, and it can overwhelm my brain. So I subconsciously resist starting a session, because I never feel rested, calm, and focused enough to take all that in and process it well.
There are also 10x more "active" projects now, and prioritizing and choosing between them at every moment is still a struggle. The temptation to do the fun and novel thing and avoid important but familiar boring chores pops up at every step of the way and can derail you for days.
I am still trying to create a system that works -- now using the very tools. Long journey ahead.
EDIT: My experience --
I was paying for both Claude Code and ChatGPT Pro, but was heavily, almost exclusively, using CC for coding work because it was so good. After CC started hammering the session and weekly quotas lately, I tentatively started using Codex and find that it seems equally good and almost indistinguishable for my work, and it occasionally shines by one-shotting some tasks. This has helped me stay afloat on just 2x$20 per month without feeling held up for ransom. I've also never hit Codex limits so far.
Leaving a 5-hour session quota unused towards the end, or worse, not even starting a 5-hour session clock, was a source of constant anxiety: that I was wasting precious quota getting nothing done. I think I am getting over that now.
I've found the best results come from letting GPT 5.4 code and then asking Opus to write code reviews to a file. I do the review in a different agent session so it's "fresh". Then I review the file, edit it until I agree with everything, and let the existing GPT agent session address the items in the review file. I've found Claude agents don't perform as well for me in coding, for whatever reason. They feel sloppier.
I've also been doing a very organic spec-driven development process, where I have a Markdown file for each non-trivial project update and use it to define the task and address questions or problems the agent has.
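As a hypothetical sketch (not the commenter's actual template), such a per-update spec file might look like:

    # Update: <short title>

    ## Task
    One-paragraph description of the change and why it's needed.

    ## Constraints
    - Files/modules in scope
    - Things the agent must not touch

    ## Open questions
    The agent adds questions here; I answer inline and it continues.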
I've also found I can give agents conditional instructions, which they will usually use like skills. This gives me a way to easily distribute my instructions to any agent/model on any machine, with a single AGENTS.md as the entry point:
https://github.com/rsyring/agent-configs/blob/main/default.m...
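For illustration, a conditional instruction in such a file might look something like this (a hypothetical sketch in the spirit of that repo, not its actual contents):

    # AGENTS.md (entry point)

    ## Conditional instructions
    - If the task touches database migrations: read docs/migrations.md
      first and never edit already-applied migration files.
    - If the task is a bug fix: write a failing test that reproduces
      the bug before changing any code.
    - If you are unsure which convention applies: ask before coding.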
This has all been very effective, more than I would have predicted a year ago.
I started using Claude exclusively in plan mode, and within minutes I'd have full clarity on exactly what I wanted to do and how to do it. With the release of the Opus model, I felt 100% more productive because I stopped spending time on menial tasks like manual coding or documentation. Instead, I shifted my focus to architecting, problem solving, and reviewing code to make it perfect. I even wrote two PyCharm plugins to unify my workflow (one to manage Claude Code sessions as a first-class citizen and another to render Markdown in a less eye-straining way) so I don't have to leave the IDE.
However, the novelty is starting to wear off. Six months ago, I would have truly admired how efficient and productive the current version of myself has become, but now I just take it for granted. It has become the new normal, and I’m finding myself bored and stuck in a vicious cycle of constantly needing to reach the next level.
It's too easy to buy €100 of Claude tokens and burn through them to make those dream projects appear as if by magic. There's a middle ground where, for example, instead of building a whole project, it could produce a project template and provide guidance as you build. That should take the edge off the task paralysis and hopefully disrupt the addiction loop.
So what if you have ideas? Other people have them too. It's not ideas that build businesses, but knowing the right people or the ability to sell products.
It's just paying to get stuff done, which is how it's always been, since the dawn of man.
Reading this while I'm prompting for the third time to fix a 100+ line function is amusing, to say the least. I don't care about the definition of "appreciable", but I definitely have to repeat myself to get stuff done, sometimes even to undo things I never told it to touch.
There is certainly randomness in model output that the user has to work around, but sending the same prompt with the same context (or, even worse, with added entropy from leaving the previous failed attempt in the context) over and over again, akin to pulling a slot machine lever, is certainly user error and not the way to "hold it".
>>> That sounds like a process problem. LLMs, like any tool, work better if you don't use them in the naive "do this" way...
The "you're holding it wrong" trope is even more tired than the gambling trope.
It's quite possible that being a business manager/owner is itself addictive (having power over people); we just don't recognize it as such.
https://www.stavros.io/posts/how-i-write-software-with-llms/
It's to the point that I just push the output of that to production and know it'll be OK, except for very large changes where I'm unlikely to have specified everything at the required level of detail. Even then, things won't so much be wrong, as they'll just not be how I want them.
I tell LLMs what to do in pretty high detail, and they do it. With LLMs I have much less variance than with coworkers.
If you're making the argument that LLMs are gambling simply because they're faster than humans, I'd like to see some evidence.
No, I am not. It's more addictive because of the timescale. The comparison of AIs to gambling is through the addiction mechanism, as I explain elsewhere.
My aunt used to put in (the same) lottery numbers every week. It was gambling, but probably not an addiction in the clinical sense. If she had played slot machines, god forbid, it could have been more problematic. AI is a slot machine, a hire is a lottery ticket.
If you've gotten to the point where you'd rather talk to an LLM than socialise, go to work, etc, then yes, you definitely have a problem, same as with a gaming addiction.
Saying "LLMs are slot machines" is like saying "video games are slot machines", and nobody says that, even though it's more true of video games (some are actual slot machines/gacha) than of LLMs.
People absolutely do say that video games are slot machines. [0][1]
0: https://lvl-42.com/2018/11/06/video-games-as-slot-machines/
1: https://www.psu.com/news/three-ways-casino-games-are-similar...
If you know what you're doing, know how to spec a problem space, and can manage the tool competently enough to churn out good results, then everything's fine, and you're maybe being productive or increasing your productivity by some degree. (Professional "Gambler")
If you DON'T know what you're doing, and you're just vibe-coding, then I would argue that it is at least a form of gambling (Amateur "Gambler")
Both of these conditions can also be applied to "hiring people to do a job" however there we can also observe things like reputation, credentials and so on.
"It's just paying to get stuff done..." is, with respect, superflous.
Where do you get your 24/7 hires from?
You can overextend the hire analogy all you want, but it is simply not the same.
Due to capitalism's law of all businesses converging on maximizing profit, it's just a matter of time until AI companies employ similar techniques with LLMs. We can all imagine what that will look like.
Being able to remove the "first step" block is great, but what worries me is that this is coupled with LLMs' sycophantic behaviour. My gut feeling is that coupling the dopamine hit of feeling unblocked with constant praise of one's abilities is an on-ramp to psychosis and paranoia for some people.
In one case recently, I explained a garbage collector design I had been toying with a while ago, but I couldn't find research related to my idea or really evaluate whether it would work. After enough arguing with the prompt, it finally "understood", started praising my "novelty", and when I later asked for related research, I was given a paper that had already implemented most of my idea.
It was a funny moment: seeing how it was clearly trained on too many online forum comments (simply mentioning reference counting sent it down a whole awkward line of false folklore about memory management), then switched to sycophancy, then finally showed me the paper.
It's a real turnoff when I have to scroll past a moral lecture on artistry and piracy when I just want to hear your thoughts on task paralysis.
---
To the author's point though, AI is incredible at building some initial momentum on a task. The initialization energy is basically zero.
It's absolutely awful. It's not a novel or entertainment. Don't "foreshadow" or "set the scene". Just get to the fucking point.