
Posted by raphaelcosta 5 hours ago

What I'm Hearing About Cognitive Debt (So Far) (margaretstorey.com)
183 points | 102 comments
efitz 2 hours ago|
There are many projects that should not be built agentically.

For things that are appropriate to build with agents, I have come to hold the strong opinion that you need to go all-in. If you built it with an agent, then you fix it with an agent, you debug it with an agent, and you change it with an agent.

In that case you should not consider yourself the steward of the source code or worry about “cognitive debt”; it’s literally not your job anymore. Your job is keeper of the specification and the care and feeding of the agents.

If you adopt the mindset that “I’m not going to build the documentation for me, I’m going to build it for the agent”, and “I’m not going to try to use my development skills to debug something I didn’t write, I’m going to make specific interfaces for the agent to understand the state and activity of the running code”, etc., you’ll be a lot happier and more successful.
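
A minimal sketch of what such an agent-facing interface could look like, assuming a Python service; the /agent/status path and the fields returned are illustrative choices, not anything the comment prescribes:

    import json
    import time
    from http.server import BaseHTTPRequestHandler, HTTPServer

    START_TIME = time.time()
    RECENT_ERRORS = []  # the app's error handler would append messages here

    class AgentStatusHandler(BaseHTTPRequestHandler):
        # Read-only status endpoint an agent can poll to inspect the
        # running service, instead of a human stepping through a debugger.
        def do_GET(self):
            if self.path != "/agent/status":
                self.send_error(404)
                return
            body = json.dumps({
                "uptime_seconds": round(time.time() - START_TIME, 1),
                "recent_errors": RECENT_ERRORS[-10:],  # ten newest errors
            }).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), AgentStatusHandler).serve_forever()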

If you are using agents for autocomplete in your editor, or you open a separate chat window to ask a question about your code, that’s a very low level of agent usage, and all your existing dev skills and responsibilities still apply.

If you’re using a planning framework like superpowers (the skill) and just laying out the spec for the program, then keep your fingers out of the source code, and don’t waste your time reading it. Have the agent explain it, showing you in the IDE, and make the agent make any changes you want.

Aperocky 2 hours ago||
This is correct, but it misses an important dimension.

You can inject philosophy into the agent and ensure that it sticks to it. With sufficient drilling, the LLM will begrudgingly implement it. The most important principle is SIMPLE > COMPLEX at all levels, and you have to monitor this continuously, either manually or agentically.

Alternatively, the LLM will use its tiny context window to build true spaghetti that even it can no longer fix. This is the default path, and the one that way too many have taken.
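
One way to monitor SIMPLE > COMPLEX agentically is a script like this minimal sketch, which uses Python's stdlib ast module; the nesting-depth metric and the threshold are arbitrary illustrative choices, not anything the comment prescribes:

    import ast
    import sys

    MAX_DEPTH = 3  # arbitrary threshold; tune for your codebase

    def max_nesting(node, depth=0):
        # Deepest nesting of control-flow blocks beneath `node`.
        nested = (ast.If, ast.For, ast.While, ast.With, ast.Try)
        deepest = depth
        for child in ast.iter_child_nodes(node):
            bump = 1 if isinstance(child, nested) else 0
            deepest = max(deepest, max_nesting(child, depth + bump))
        return deepest

    for path in sys.argv[1:]:
        tree = ast.parse(open(path).read(), filename=path)
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                depth = max_nesting(node)
                if depth > MAX_DEPTH:
                    print(f"{path}:{node.lineno} {node.name} nests {depth} levels deep")

Pointed at a repo (e.g. python nesting_check.py src/*.py), it gives a human or a reviewing agent a cheap, continuous signal when complexity creeps in.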

deterministic 1 hour ago|||
Not my experience at all. I’ve been using Claude Code on a large "hand written" project and it’s genuinely excellent at finding bugs and generating new methods or classes.

That said, it still frequently introduces subtle bugs, so I have to review every change carefully.

The real trick is learning when to use it. Some tasks are much faster to do myself, while others are faster with Claude Code.

nicbou 1 hour ago||
Same experience here. It produces code that I would not accept if it were written by a human, but it also produces a lot of completely fine code. It let me clear tasks that I had been procrastinating on for a long time.
Yokohiii 2 hours ago||
tldr; agentic coding just works, you just have to put puny humans on a leash
hogehoge51 3 hours ago||
Unfortunately I think Cognitive Debt is the cry of the software craftsperson who thought they were an Engineer. Upon working with the agent subcontractor, the agent factory, the agent part vendor, they approached it as a craft; they found themselves wanting to walk through the offices of the subcontractor reviewing screens, inspect pieces at the factory, and get the internal design for the parts they ordered. It's natural to get overwhelmed: this is why Engineers have contracts, specifications, design drawings, datasheets, and characterization data, handed over at clearly defined boundaries of abstraction, accepting the other side may be a black box.

Of course, we have had compilers and tooling, but those are the pencil and drafting board of the draftsperson. An ecosystem of packages, dependencies and APIs has evolved, but those are often just spells the software magician invokes after reading the spellbook^H^H^H^H^H^H^H^H^H stackoverflow^H^H^H^H^H^H^H^H^H^H^H^H^H API documentation.

We are going to need to build a new set of boundaries and abstractions with new handover protocols to manage this mess.

praptak 1 hour ago||
There is no party capable of taking responsibility on the other side of the handover protocol.
platzhirsch 3 hours ago||
Because waterfall software engineering has been so successful, right? ;-)
onion2k 2 hours ago|||
Lots of very successful software was built using a waterfall approach. It's a methodology that works well if you know precisely what the end result needs to be. That doesn't make it appropriate for everything - if you don't know what the customer needs, or if you want to get an MVP out, then Agile works better, but you shouldn't dismiss an approach because it doesn't work everywhere.

Plus, 'agile' in quite a lot of companies is really waterfall that's been broken into sprints, without the planning of proper waterfall or the discovery and learning of real Agile. The software still gets built, though. Maybe software is actually quite easy to plan.

mikestaas 1 hour ago||
Agilefall, the worst of both worlds.
21asdffdsa12 14 minutes ago||||
As if anything in the world is ever pure. Waterfall can have small scrums running on side-shows while staying waterfall-y on the central project.
saulpw 3 hours ago|||
Because agile has been so successful, right? ;)
marcosdumay 2 hours ago|||
In fact, agile has been extremely successful.

It's the people who claim to "do agile" that invariably don't do it. But software development used to fail most of the time, and it doesn't anymore.

scorpioxy 2 hours ago|||
What makes you say that it has been extremely successful? And when you say it doesn't fail anymore, do you mean it doesn't go over budget and/or change scope?
Turskarama 2 hours ago||
Agile cannot go over budget or scope, because those are failures of planning, and Agile is the methodology that was developed specifically to counteract those planning problems. Projects that use Agile can go over budget and scope, but they never do so because they are using Agile; rather, they use Agile because they might.
sevenzero 1 hour ago||
Agile always felt like the lazy attempt, by people unwilling to learn what it takes to build software, to make it more predictable. Unfortunately, project planning methods that work for buildings are not really great for software. It's just corpo stuff project managers do to feel meaningful.
21asdffdsa12 13 minutes ago||||
They cloned the one true Scotsman in the end.

If an idea does not square with human nature, the idea is flawed: its supposed basis, the knowledge of human nature, is non-existent, and thus it had no place in reality after all.

s-lambert 1 hour ago||||
It used to fail once, after a long time; now software fails a lot, every 2 weeks.
nottorp 35 minutes ago||||
Hmm that reminds me of what some people say about communism :)
philipswood 1 hour ago|||
Waterfall was also extremely successful.

People who failed just did it wrong. /s

platzhirsch 2 hours ago|||
Agile fails when folks don't adjust and tailor the process to the specific needs of their team or organization but instead try to cargo cult it.
vrighter 2 hours ago||
so it's called agile when it works, but not when it doesn't, got it!
habinero 2 hours ago||||
You gotta understand how to use a tool for it to be effective, yes.
scorpioxy 1 hour ago||
And if a tool is that difficult to use, how can you tell whether the problem is the tool or the user? There's a large industry built around training and certification in agile methodologies now. If a tool is that difficult to get right, maybe it's just not a good tool to begin with.

To be fair, the manifesto and methodology are quite good in theory. But I have just never heard of (or experienced) it working properly, and the response is always that it wasn't implemented correctly.

Ozzie_osman 3 hours ago||
I think the antidote is ownership. Every part of a company or product needs some person or small group that owns it, understands it, and feels responsible for its long-term health. They review changes to it, or decide what level of review those changes need.

So much of what makes high-functioning teams work is a sense of ownership and stewardship, and what makes low-functioning teams break is the lack thereof: someone with pride, drive, and high standards feeling responsible for a particular area or thing.

In the past, that ownership could be individual or collective, but with AI and a lot more lane-crossing, ownership should tend toward smaller groups (or individuals).

A developer can design, but a designer needs to review it. A designer can code, but the owner of the code must review it.

This might feel like gatekeeping, but it's the only way.

zbentley 2 hours ago||
Fortunately, the widespread use of LLMs results in companies reducing the number of things each team owns, and/or leaving lots of engineers on payroll who suddenly have abundant free time with which to start self-learning and developing a sense of ownership of parts of the product.

Wait...

scorpioxy 2 hours ago|||
I agree but don't see it working out. Ownership implies that a person or a team becomes irreplaceable or at least more difficult to replace. People stop being resources and go back to being people. Management is not going to like that.

At one business I was a part of where that experiment was tried, it failed badly. In reality, people were being switched between projects and the "owner" was changing every few months. The end result was quite messy, both in terms of technical debt and politics (about who the final decision maker was).

xbmcuser 2 hours ago|||
What you need is ownership not just of the code but of the company. If your work is your own, you care a lot more about it than if it belongs to someone else.
yuye 2 hours ago||
>I think the antidote is ownership.

I've said this before, but people gloss over this fact.

>Someone with pride, drive, and high standards feeling responsible for a particular area or thing.

I've also said this before, but AI-glazers just respond with "I think we may just have to let go of pride & kudos and their connection to our identity."

Most people who vibecode don't give a shit about their work. Any solution is a solution as long as it works.

>This might feel like gatekeeping, but it's the only way.

Gatekeeping is not inherently bad. We want gatekeeping.

If I'm getting surgery, I want an actual doctor with proven credentials to do it.

And to anyone claiming that software doesn't kill, please look up "Therac-25" or the 65 people that died due to Tesla's "Full Self-Driving".

pizzly 3 hours ago||
Cognitive debt existed long before LLMs became mainstream. Technical people got good at their jobs and were then promoted to management. Over time they lost their technical abilities, but the good managers kept up to date with the technological landscape and used their engineering thinking to ensure that the people below them worked at optimum efficiency to achieve the company's goals.

Now, we all know horrible managers who didn't keep up to date or use their thinking. This will happen with AI usage too. What's more, we are expecting people who are engineers to have a manager's mindset (by managing AI agents, product requirements, etc.). Many engineers are horrible at this and have no desire or ability to become a manager. This is why they went into engineering in the first place.

yuye 3 hours ago||
>Many engineers are horrible at this and have no desire or ability to become a manager. This is why they went into engineering in the first place.

Bingo. If I wanted to spend my life managing incompetent sycophants, I would've studied for an MBA to try to rise the ranks at McKinsey.

casualscience 3 hours ago|||
While this isn't a unique perspective, I think it's wild that more people don't understand it. What's happening is that everyone is being "promoted" to a staff+ level engineer, and they're realizing the realities of that situation.

The funny part is that these are the same people who are upset that these folks up the food chain "do nothing".

JambalayaJimbo 58 minutes ago|||
If you’re a manager you have people under you that care about the code they write and the direction of the company, not typewriter monkeys.
bluefirebrand 2 hours ago|||
They're being "promoted" without any kind of extra income, only elevated expectations and responsibilities

So no wonder people aren't happy

yuedongze 3 hours ago|||
This, and I would even say we are promoting people to be kings and queens. I'm afraid AI will amplify our worst parts, because AIs are ultimately sycophants. I've heard so many claims about AI enabling a single person to run a billion-dollar business. But I believe that without the right mindset and discipline, a person cannot go very far with any technology.
postpriorx 2 hours ago||
I'm consistently surprised by how many "software engineers" I've worked with have never read Naur's paper (https://pages.cs.wisc.edu/~remzi/Naur.pdf) or weren't even familiar with the notion before agentic coding. This was always a reality in our discipline, whether folks realized it or not.
darth_avocado 3 hours ago||
> the question becomes how teams will manage cognitive debt

That’s the neat trick, kiddo: they won’t. Across the industry, the messaging is clear: use AI and be more productive. Management is salivating at the idea of getting rid of people and keeping a higher share of the profits for themselves. Most ICs I talk to increasingly express burnout, fear of losing their jobs, and resentment at the way AI is being pushed. In more than a few conversations, people have clearly said that they are mostly focused on keeping their jobs. They don’t care about cognitive debt, and some are looking forward to the time when the debt comes due.

It is depressing, but it is the reality.

Aperocky 2 hours ago||
You can use AI to manage cognitive debt, but it doesn't show up on the sprint board.

Across the board, I still see people who love to over-design things that could be much simpler. This hasn't changed much because of LLMs; LLMs just let them create the complicated implementations much faster.

Yokohiii 1 hour ago||
If the default mode is that LLMs generate crap code that you have to fix with even more LLMs, then something is fundamentally wrong.

In terms of over-engineering, I wouldn't be surprised if the human tendency toward skeuomorphism (combined with a loss of technical skill) creates even weirder code.

dawnerd 56 minutes ago|||
Exactly what I’m hearing and feeling too. Devs I know are just going with it so they still have a job. No one cares about quality anymore; it’s just a race to output as much code as possible. And when there are docs, it’s just whatever slop they can have an LLM output so that it appears, to managers who won’t read it, that they did work.
prox 2 hours ago||
And we learned nothing from previous hype cycles.

Enshittification in this area will be swift. And there will be grand articles here on HN: “nobody could possibly have seen this coming.” Yes, we did.

Aperocky 2 hours ago||
Meh, enshittification is the same; it just happens faster.

Which means the stuff that replaces it will also happen faster.

Overall, the quality of the software is likely similar, since AI does not have purpose, and software still largely reflects human will and thinking.

nsoonhui 16 minutes ago||
To me, the cognitive debt incurred by agentic AI as described here is not so different from the cognitive debt incurred by code written by someone else. Even when you are the reviewer of your colleague’s code, you can’t just grok everything as if the code were written by you, let alone when you are not even the reviewer.

And that’s okay! Much like it’s okay to let other people write the code.

What is important is that the code written by agentic AI is adequately covered by automated tests, and that you verify that the architectural plan is solid. But then, this is also what you do with your colleagues’ and juniors’ code.
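
A minimal sketch of what such intent-level coverage could look like, assuming pytest; the pricing module and apply_discount function are hypothetical, invented here for illustration:

    import pytest

    from pricing import apply_discount  # hypothetical agent-written module

    def test_discount_never_makes_price_negative():
        # Intent: no discount may push a price below zero.
        assert apply_discount(price=5.00, percent=150) >= 0

    def test_zero_discount_is_identity():
        assert apply_discount(price=19.99, percent=0) == 19.99

    def test_rejects_negative_discount():
        # Intent: invalid inputs are errors, not silent surcharges.
        with pytest.raises(ValueError):
            apply_discount(price=10.00, percent=-5)

Tests like these encode the specified behaviour, so they stay meaningful even when no human reads the implementation.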

CobrastanJorji 1 hour ago||
I worry about these remediation measures. The problem described is that increased reliance on AI has increased cognitive debt, and the solution is to write tests that better capture intent and to update design documents continuously. But the people who got into cognitive debt by using AI are also going to use AI to write the tests that "capture intent better" and to update the design documents. And while, sure, those are good things for your agents to be doing for the sake of those same agents' effectiveness, it's not going to help you get out of the cognitive debt spiral.
suzzer99 48 minutes ago||
I'm currently working on a large project that was started by a third-party vendor, then dumped into the in-house team's lap due to an unforeseen financial pinch.

The vendor was basically right at the end of the "fun" part of cranking out features, and just about to hit the "rubber meets the road" part where you start fixing bugs, finding new edge cases, discovering new hidden requirements, and realizing X% of your design assumptions were completely wrong. Oh yeah, and minor little mop-up tasks that don't wow the client, like integrating with a payment processor, integrating with our internal scheduling system, exporting invitee lists from our CRM into our app, etc.

It's possible we're in a similar cognitive debt situation to having to maintain a large, swiftly-AI-coded app. After about 6 months of stressful development, which started with what I call throwing dye in the water and eventually progressed to understanding one small feature or flow at a time, we have maybe 50% of the mental model we'd have if we'd built the app ourselves. Whole chunks of the app are still a black box to us.

It doesn't help that requirements have evolved so much since the original documentation that it's worse than useless because we can't trust it. So the code, which we don't understand, is the only documentation of the current requirements.

Of course, our internal clients are pissed because the final product is taking so much longer than expected, when they could see all these awesome shiny, happy-path, 80%-done features 6 months ago. We're in a constant fire drill. Everyone on the project is miserable. It's the least fun kind of development.

afro88 4 hours ago||
I find it disconcerting that an article about cognitive debt contains many "tells" of being written by AI.
chromacity 4 hours ago||
Independent of that, the article is just a summary of an HN thread...
unignorant 3 hours ago||
I had the same reaction, but the article is not AI-generated according to pangram, which I've generally found reliable. I wonder if LLM turns of phrase and even thought patterns are creeping into normal human thought.
Zetaphor 3 hours ago|||
It's worth mentioning that Pangram is more confident in its positive detections than its negative ones, as stated by the founder in an interview on the most recent ThursdAI episode.
erikerikson 3 hours ago||||
Or, stay with me here, the LLMs were trained on how we, statistically, write.
unignorant 3 hours ago|||
There are typical LLM voices and styles, just like human writers have differentiated voices and styles. And some common elements of the typical LLM style are distinct from those of any human writer I've previously read.
erikerikson 3 hours ago|||
I recognize this. It's also the case that I suspect I've read more complaints about how annoying suspected LLM output is than I've read actual LLM output. The slop is, to me, an incredibly unwelcome contribution from humans who don't enjoy the craft, but complaining about it is equally stuck in, and further exacerbating, the froth rather than distilling down to the substance. That is, it keeps the focus on the surface rather than on what the core content is and whether it has value.
grey-area 56 minutes ago||
LLM writing doesn’t have substance, it’s statistically likely text generated from some bullet points, without intention or style.
donaldjbiden 3 hours ago|||
[flagged]
shiveenp 3 hours ago|||
Anytime I see “this is not just x, it’s y”, I can almost guarantee, with a high degree of confidence, that slop was used.
jdw64 2 hours ago|||
As someone from outside the Anglophone cultural sphere, when I first learned to write in English, the kind of writing that AI now often produces was taught to me as “formal” writing.

But these days, when I write in that formal style, people sometimes say it sounds like AI. That has been a difficult and frustrating point for me.

I still find the subtle difference hard to understand.

zbentley 2 hours ago||||
I'm still pissed that I had to practice removing that from my writing habits. I liked that device, dammit!
yuye 1 hour ago|||
It's not just AI-generated, it's also slop!
pizzly 3 hours ago|||
I think it's bidirectional. We change our writing based on what we see (AI-generated content on the internet), and AI learns based on what we write.
Aperocky 2 hours ago|
Struck a chord, and I think I managed this to a certain degree prior to LLMs.

My primary editor is vim, and for a significant amount of time I was using it in almost puritan fashion; this was before LLMs were mainstream.

However, I could not use vim to edit Java, even with a language server. I tried, but each time I went back to IntelliJ. The rest of the code base, in Python, Ruby, and TypeScript, was typically fine.

The reason was twofold: everyone was using all of the features that IntelliJ had to offer, so the code was structured around IntelliJ and, obviously, the Java design patterns that were popular at the time. Everything went through factories and managers and interfaces, and tracking them in a pure editor was almost impossible. The IDE handled it for you.

But everything else? Things I or others had to build from the ground up were built with this cognitive limitation in mind, which means I can fit everything in my head nicely and edit with vim at high efficiency, even without a language server.

Those cognitive limitations are good for the software. It's easy to explain, easy to debug, easy to add to and subtract from. And I've come to disregard the IntelliJ way, and the vibe-code-till-it-works approach that is common everywhere now. The principle is KISS: keep it simple, stupid. If AI will not do that, then you have to. It is a simple philosophical question that is more important than ever. And sadly, most people still don't realize it; they will happily tack on the next "feature", with scaling they don't need yet and design patterns they don't need yet, and prematurely optimize themselves into cognitive and technical bankruptcy.

More comments...