
Posted by raphaelcosta 7 hours ago

What I'm Hearing About Cognitive Debt (So Far)(margaretstorey.com)
189 points | 111 comments | page 3
logickkk1 5 hours ago|
fwiw, understanding was never really the goal. calling it "debt" assumes all of it needs to be repaid, but most of the time this just feels like brain OOM. imo the skill is knowing what actually needs to stay in your head.
pineapple_opus 3 hours ago||
As you said, it's distributed across people, conversations, AI agents, tooling, etc. Can't an LLM knowledge base / wiki (a.k.a. the org's second brain) solve this? I think if a second brain exists, no one needs to pay down cognitive debt.
techpression 4 hours ago||
Good article, but for me it misses the core aspect of cognitive debt: by not doing the work, you don't learn anything. There is no substitute for writing the code yourself; no amount of reviewing or documentation will fix that. It's just not how our brains work or how we learn. We learn by doing and by repetition; failure makes us repeat things, which is why it's good to fail sometimes.

Carmack once wrote something I've been holding dear ever since, and I'm paraphrasing: "even if you copy-paste code, make sure you write it." And it actually works: the outcome of having your brain make your fingers type the code is easily distinguished from just pasting it.

melvinroest 6 hours ago||
Edit: yep, I really do type this much. I'm a bit of a "thinking out loud" person.

> Cognitive Debt, Like Technical Debt, Must Be Repaid

In quite a few circumstances, cognitive debt doesn't need to be repaid in full. Across multiple projects, I've personally found that certain directions aren't the ones I want to go in. But I only found that out after fully fleshing them out with Claude Code and then, by using my own app, realizing that certain things I thought would work don't.

For example, I created library.aliceindataland.com (a narrative-driven SQL course). After a while, I noticed that the grading scheme was off and needed to be rewritten. The same goes for how I wanted to implement the cheatsheet, and for lessons not following the standard format. Of course, I need to understand the new code, but I don't need to understand the old code.

With other small pieces of code, I just don't need to know how they work, because they're that simple. For example, every 5 minutes I track which wifi network I'm connected to. It's mostly useful for knowing whether I went to the office that day. A Python script retrieves the data, and when I look at it, I can recognize that it's correct. Doing it this way is certainly a lot faster than active recall.
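For reference, a minimal sketch of what such a script might look like (assuming Linux with iwgetid available; the log path and cron schedule are purely illustrative):

    # Log the current wifi SSID with a timestamp. Run from cron every
    # 5 minutes, e.g. */5 * * * *. Assumes Linux with iwgetid installed.
    import csv
    import datetime
    import pathlib
    import subprocess

    LOG = pathlib.Path.home() / "wifi_log.csv"

    def current_ssid() -> str:
        # "iwgetid -r" prints the SSID of the connected network, or nothing
        out = subprocess.run(["iwgetid", "-r"], capture_output=True, text=True)
        return out.stdout.strip() or "disconnected"

    with LOG.open("a", newline="") as f:
        csv.writer(f).writerow([datetime.datetime.now().isoformat(), current_ssid()])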

At work, I've had similar experiences. At my previous job I built SEO and SEA tools for marketing experts. I remember creating a whole app that gave experts insights into SEO matters that Ahrefs and similar sites don't cover, as it was tailored to the data of the company I worked at. The feedback I got was essentially: the data is great, the insights are necessary, but the way the app works is unusable for us. I was a bit perplexed, as I personally didn't find it that complicated, but I also know I'm not the one using it. Then I created a second version, and it was far more usable. The second version assumed a completely different front-end app and front-end architecture, though. All the cognitive debt of V1? No payback needed.

The reasons this is the case, as it seems to me, fall into a few categories:

1. Experimenting with technologies. If you have certain assumptions about how a technology works but it turns out you're wrong, or you learn along the way that an adjacent technology works far better, then you need to redo it. Back when coding by hand was the norm, I ran into this with a collaborative drawing project called Doodledocs (2019). I didn't know whether browsers supported pressure sensitivity, or how easy it would be to implement. It required a few programming experiments.

2. It's a small and simple script, not much more to it.

3. Experimenting with usability. A lot of the time, we don't know how usable our app is. In my experience, this is usually because either (1) it's a hobby project or (2) the UX people were let go years ago. In those cases, more often than not, UX becomes an afterthought. But with LLMs, delivering a 95%-working version of a greenfield project is usually done within a week, and that 95%-working version is an amazing high-fidelity interaction prototype. Once you do that for a few iterations, you understand what you really need. And once you understand what you really need, you can start repaying the cognitive debt.

I've found it's usually category 3, sometimes 2 and rarely 1.

kusokurae 4 hours ago||
Why must so much gumflapping involve spewing any words but those which encourage not using the obvious problem tool more.

"the question becomes how teams will manage cognitive debt": the question is why it is allowed to occur at all when it is avoidable. Farcical nonsense. Write the code yourself or be silent.

linsomniac 3 hours ago||
We keep talking about AI fatigue and burnout.

Am I the only one that is finding quite the opposite? I feel like a kid again, back when I had no responsibilities and infinite time to play around and build things. Being able to look at my existing tooling and say "there's a rough edge here" and then whip out the equivalent of a Milwaukee Bandfile [1] and smooth it out is making it fun to go to work again.

[1] https://www.milwaukeetool.com/products/details/m12-fuel-1-2-...

faangguyindia 3 hours ago||
I may get downvoted for this but

I don't agree that demand for software guys will drop.

What I think is that demand for software people will go up while wages are suppressed, and there will be more software in the market as a whole.

There are so many craftsmen in the market who hardly make a livable wage, while a select few make bank! The same pattern will repeat in software.

Mass-market software with large-scale adoption will decline, and specialised tools and services will take its place.

Which means thousands of calorie trackers, thousands of image editors, and so on. But as scale drops, companies' income and revenue will drop too.

Software wages are an anomaly in select countries; I've always believed software wages shouldn't be higher than a plumber's or a mechanic's.

yuye 2 hours ago|
>I always believe software wages shouldn't be more than a plumber or mechanic.

Wages in the trades have gone up a lot recently, at least where I'm from. Decades of parents telling their kids the trades are for losers have lowered the supply of capable craftsmen...

And not all software will work as specialized tooling.

Calorie tracker apps? Sure.

Operating system kernels? Each with their own schedulers and allocators and ABIs and syscalls? Definitely not.

0xbadcafebee 5 hours ago||
For my entire career, most of the people working on the systems I've worked on have not known how the systems work. Sometimes a few people learn a great deal about the system. They are spoken of in hushed tones, as one would a powerful mage, or witch. However, never in my entire career did anyone complain about the fact that nobody knew how the system worked. Most people never noticed it, much less cared once they found out. "This is just how things are. Business as usual."

AI is making people afraid as they run into these things. It's a little sad that they don't have the historical context or perspective to realize these are old problems. I imagine this is what Samurai felt like as flintlock guns came in and completely upended hundreds of years of martial tradition. How will they be able to defend themselves if they don't learn Kenjutsu? What will happen to our Bushido?

And I do think the fear is warranted. But I don't think people are going to act differently once they realize this unfortunate status quo hasn't yet led to the collapse of civilization, or their paychecks. Once the fear has passed, we will move on into the new normal, willfully ignorant and mildly disappointed.

carbonguy 4 hours ago||
> High-performing teams have always managed technical debt intentionally. As AI is adopted by startups and large companies, the question becomes how teams will manage cognitive debt.

"Technical" and "cognitive" debt aren't really distinct phenomena; the spirit of the original definition of "technical debt" was that it WAS the delta between the system-as-it-is, and the human understanding of how best to solve whatever problem the system was intended to solve [1].

If we accept collapsing them back down to one term, then "managing cognitive debt" is the same thing as "managing technical debt": work to match the system to the human understanding of the problem the system is meant to address. The article calls out "emerging" techniques to do just this:

- More rigorous review practices

- Writing tests that capture intent

- Updating design documents continuously

- Treating prototypes as disposable

To me these are not "emerging," but rather "well-known industry best practices." Though maybe they're not that well known in fact? [EDIT TO ADD] On the other hand, it would make sense that they ARE well known, and that teams therefore reach for these familiar techniques to try and solve this "new" problem.
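To make "tests that capture intent" concrete, here's a minimal sketch (apply_discount is a made-up stand-in for whatever code is actually under test):

    # Hypothetical example: the test name and comment record the business
    # rule, not the implementation, so the intent survives a rewrite.
    def apply_discount(price: float, coupon: float) -> float:
        return max(price - coupon, 0.0)

    def test_oversized_coupon_makes_item_free_never_negative():
        # Intent: a coupon larger than the price zeroes it out;
        # it must never produce a negative price or a refund.
        assert apply_discount(price=5.0, coupon=10.0) == 0.0

Run it with pytest; the point is that the test reads as a requirement, so a future rewrite can be checked against intent rather than against the old code.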

Putting in my 2c for the closing questions/thoughts in the article:

> How will they shape socio-technical practices and tools to externalize intent and sustain shared understanding?

Honestly? We'll probably end up doing these things more or less the same ways we always have. AI has not actually changed anything fundamental about how an individual encounters the world; there always was, and always will be, and always will have been, WAY more going on than we can fully get our heads around. But it's also always been the case that we can partially get our heads around most any problem space.

> How will they use Generative and Agentic AI not only to accelerate code production, but to maintain their collective theory?

I suspect the answer to this one might well be that high-performing teams will have to scrupulously AVOID "accelerating code production" using AI in order to make sure what they are creating actually composes into the system they think they're building. If human understanding is the bottleneck, then the humans will have to produce less crap they need to understand!

[1]: https://wiki.c2.com/?WardExplainsDebtMetaphor, particularly the "Burden" and "Agility" sections.

boesboes 4 hours ago|
It's all fun and games, but I've yet to see anything of value come out of agentic coding. As in, all the code it produces is such total garbage that I'm not worried about the cognitive debt; I'm worried about the technical debt.

More code is not better. More code, more quickly, is worse. Don't delude yourself into thinking you're more productive; you're just digging a deeper hole.
