Posted by raphaelcosta 8 hours ago

What I'm Hearing About Cognitive Debt (So Far)(margaretstorey.com)
192 points | 113 comments
boesboes 5 hours ago|
It's all fun and games, but I've yet to see anything of value come out of agentic coding. As in, all the code it produces is such total garbage that I'm not worried about the cognitive debt, I'm worried about the technical debt.

More code is not better. More code, more quickly, is worse. Don't delude yourself into thinking you are more productive; you are just digging a deeper hole.

jdkee 5 hours ago||
Was this article written by AI?
hansvm 5 hours ago||
I'd bet a lot of money on AI at least having a substantial influence on the phrasing.
thin_carapace 5 hours ago||
Bidirectional inverted commas, negative parallelism, short punchy prose, dot-point listing ... these language techniques are consistently present throughout. Whether this article was written by AI or not, its structure and style utterly scream of AI.
nottorp 3 hours ago||
But which came first, the chicken or the egg?

The LLMs have been trained on soulless corporate speak.

saltyoldman 6 hours ago||
It is a bit surprising sometimes when you vibe code an AI tool and it ends up doing a bunch of regular expressions to "detect the user's intention". Instead of the code using an LLM to decide which tool to run, or whether the user wants to see the SQL or the code, you end up seeing .*SQL or \i^build (or some crazy regex). It really likes to use a lot of regex when it's building AI tools.
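A minimal sketch of the pattern described above. The intent names and regexes here are hypothetical, invented for illustration; they are not from any real tool:

```python
import re

# Hypothetical regex-based "intent detection" of the kind described
# above: instead of asking an LLM which tool to run, the generated
# code hard-codes patterns against the user's message.
INTENT_PATTERNS = {
    "show_sql": re.compile(r"\bsql\b", re.IGNORECASE),
    "build":    re.compile(r"^build\b", re.IGNORECASE),
    "show_code": re.compile(r"\bcode\b", re.IGNORECASE),
}

def detect_intent(message: str) -> str:
    # First matching pattern wins; only on no match would an LLM
    # (hypothetically) be consulted as a fallback.
    for intent, pattern in INTENT_PATTERNS.items():
        if pattern.search(message):
            return intent
    return "fallback_llm"

print(detect_intent("show me the SQL"))    # show_sql
print(detect_intent("build the project"))  # build
print(detect_intent("what's the weather")) # fallback_llm
```

The brittleness is the point: each new phrasing of a request needs a new regex, which is exactly the kind of code a human reviewer ends up maintaining.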
whateveracct 5 hours ago||
mythical man month covered this
bugbuddy 7 hours ago||
The most important lesson from Gen AI is that it does not matter how much money you have, make, lose, or spend because in the long run everyone is…

So the logical next step is to focus on Biological Immortality and short of that Digital Immortality. God speed everyone.

jdw64 6 hours ago|
Personally, my observation is that "cognitive debt" feels closer to a tool for selling essays than a precise engineering concept.

Lack of documentation, failed onboarding, poor architectural understanding, missing tests, review fatigue: if all of these are simply grouped together as “cognitive debt,” isn’t that just a failure to build a proper workflow?

The scope is too broad. It reminds me of Stepanov, the creator of the STL, saying that if everything is an object, then nothing is.

When an abstraction tries to cover too many things, that abstraction inevitably fails.

The way AI specifically amplifies this problem is through the difference between direct work and indirect work. The core issue is that “it works” can easily create the illusion that “I understand it.”

Another thing I felt while reading this essay is that it almost seems to go against the direction of modern software engineering. Once software grows beyond a certain size, it is already impossible for anyone except perhaps the original designer to understand the entire system. The goal is not for everyone to understand everything.

The real goal is to make local changes safely, and to ensure that the system keeps running without major disruption when one replaceable part — including a person — leaves.

At this point, many things being described in the industry as “cognitive debt” look to me like rhetorical tools for selling essays.

Reading this, I even wondered: if I write about trendy terms like cognitive debt or spec-driven development on my own blog, will people pay more attention?

To be honest, spec-driven development has a similar issue. When you go from a specification down into implementation, information loss is inevitable. LLMs cannot fully solve that. In the end, a human supervisor still has to iterate several times and tune the result precisely. The real question should be: how far down should the specification go? In other words, at what local scope does it become faster for a human programmer to modify the code directly than to keep steering the AI-generated code?

But that discussion is often missing.

As people sometimes say, “when you start talking about Agile, it stops being agile.” In the same way, I think the “cognitive debt” frame may be a flawed abstraction of the current phenomenon.

The moment a living practice is nominalized, packaged, and turned into a consulting product, it loses its original dynamism and context-dependence, becoming a dead template.

It puts various discomforts that emerged after AI adoption — review burden, lack of understanding, fatigue — into a single box.

Then it attaches the economic metaphor of “debt” to emphasize the seriousness of the problem, and subtly injects the normative idea that “this must eventually be repaid.”

Thinking back to Parnas’s 1972 work on information hiding, software engineering was built on the principle that local understanding should be sufficient, and global understanding is not the goal.

The cognitive debt framing seems to implicitly reverse that principle by treating “shared understanding” as something that must be preserved as a global unit. I do not understand why the discussion keeps moving toward the idea that everything must be understood.

It reminds me of Bjarne’s onion metaphor for abstraction: if an abstraction works, you do not necessarily need to peel it apart without reason.

My main issue with the current cognitive debt framing is that the layer it tries to cover is too broad.