
Posted by raphaelcosta 6 hours ago

What I'm Hearing About Cognitive Debt (So Far) (margaretstorey.com)
183 points | 102 comments | page 2
01100011 5 hours ago|
These sorts of articles just seem silly to me. Use AI where it helps you and avoid it where it doesn't. That dividing line may change week to week.

I think it's great for writing tests and sanity-checking changes, but I wouldn't let it write core driver code (I'm a systems programmer, so YMMV). Maybe in a month I'll think differently.

Eufrat 4 hours ago||
Using a tool as a tool is hard when the market is telling you to use it in everything, as if it’s the best thing since sliced bread.
hansvm 3 hours ago||
It worked for bread. Why wouldn't it work for AI? I've been baking Allen wrenches and screws into my sliced bread for ages, and not a single living person has complained about it.
yuye 2 hours ago||
>Use AI where it helps you and avoid where it doesn't

When all you've got is a hammer...

...you'll eventually stop knowing how to use other tools.

skybrian 5 hours ago||
People sometimes assume that every codebase has a team (or at least a single person) devoted to maintaining it. Companies with large codebases may not be able to afford that, or may not think it's worthwhile. You could have dozens or hundreds of libraries and only a few maintainers. The libraries are effectively "done" until something comes up. Work on them is interrupt-driven.

In that situation, coming in cold to a library that you haven't worked on before to make a change is the normal case, not "cognitive debt."

If you have common coding standards that all your libraries abide by, then it's much easier to dive into a new one.

Also, being able to ask an AI questions about an unfamiliar library might actually help?

protocolture 5 hours ago||
It's more and more clear to me that AI is a force multiplier for small teams and hobby workflows, but it seems to have diminishing returns for larger teams.
melvinroest 5 hours ago|
How so? Could you give some specific examples?
hogehoge51 4 hours ago|||
My experience moving between startup/SME/corp:

Smaller teams have more agency to move and usually have team members with broader responsibility and understanding of the systems. They're also often closer to stakeholders, so they're already involved in specification creation and know where automation can add value. Add an AI agent and they can pick and choose where they can be most effective at a system level.

Bigger teams have hard boundaries that limit agency: blockers from cross-team dependencies, potentially no idea what stakeholders want, just piecemeal incremental change to a bigger system specified by someone else. If all they can do is automate that limited scope, it's really just faster typing.

hirsin 3 hours ago|||
That's the success case. In the failure case, you have emboldened, pressured teams jumping in to make a "quick fix" or "that feature we needed" in a codebase belonging to a team they've never heard of, and leaders cheering it on in the name of progress.

Not every company is going to see those boundaries and stakeholders as features, and they'll be under pressure to "mitigate those blockers to execution". That's where the cognitive debt skyrockets.

JohnBooty 3 hours ago||||
I agree with the agency aspect.

But... as team size grows, LLMs can be more valuable in other ways. Larger teams typically have larger codebases to comprehend, more users, more bug reports to triage, etc. It's SO much easier to get up to speed on a big existing codebase now.

melvinroest 4 hours ago|||
Ah, I feel what you're saying. Yeah, that makes total sense.
skylanh 4 hours ago|||
Small teams prioritize expertise and agency.

Large teams prioritize service resilience and depth of coverage.

gdulli 5 hours ago||
> High-performing teams have always managed technical debt intentionally.

The ability to generate code has seemingly transposed what people think of as a "high-performing team" from one that produces quality to one that produces quantity, with the short-term gains obviously increasing long-term technical debt.

alexmuresan 2 hours ago||
The post makes some good points. As a programmer I love writing code. I know coding is just a tool but I enjoy the act of thinking about a problem, finding a solution and implementing it. It gives me a little dopamine boost.

Ever since LLMs started writing decent code, I started feeling like a part of that joy of code-writing has been taken away.

Using LLMs literally leaves a developer to do (what I find is) the worst part about software development: debugging someone else’s code.

Besides this, everything feels rushed. I am under the impression that I can’t “take my time” to think about a problem anymore. It almost feels wasteful now. I have to “just do it”.

It makes me nostalgic and I feel like I’ve lost something about coding that made me enjoy it.

But it is the reality we live in and I’m adapting to it. What I’m wondering is whether I should adapt or, rather, push back.

That being said: this feels a little like it was written using AI.

adamddev1 2 hours ago||
> I am under the impression that I can’t “take my time” to think about a problem anymore. It almost feels wasteful now. I have to “just do it”.

This, exactly, has been the problem in cultures that have produced broken, lower-quality things in general. Don't think deeply about the problem, and don't think about the long-term consequences. Just grab whatever gives an immediate result the fastest. "Jugaad."

Many people are slipping into this culture now, with the new pressure for immediate production pushed by the AI crowd. It's "jugaad." It's trading short-term gain for long-term breakage, chaos, and pain. It's also social and economic pressure not to do things properly.

Those who want to take the time to really understand things, or to build things correctly, are mocked or punished for being slow and simple-minded. "Just do it this way; look, everyone is doing it and making more money faster!" This is also part of the culture that drags everyone into jugaad.

ido 2 hours ago|||
I empathize with all of the above, but I've honestly felt that way more or less from the moment I went from “programmer” to “founder” (I've run a small startup since 2021, so a couple of years before “agentic programming”). At some point I accepted this as the cost of working at a higher level of abstraction (and getting more done as a result, whether via a human employee or an LLM doing the work I used to do).

I know older devs who reminisce about the days of programming straight to the metal in assembly (e.g. on DOS or Amiga) and “knowing exactly what the computer is doing”, which feels somehow familiar!

Even more familiar are senior devs moving to management (I know this isn't an original metaphor).

yuye 2 hours ago|||
>Ever since LLMs started writing decent code, I started feeling like a part of that joy of code-writing has been taken away.

The SWEs who go all-in on AI will never understand this, because they have never known the joy of code-writing. I would even go as far as saying that many of them hate it.

Of this group, I think the majority are the same people who joined the industry not out of an innate love for engineering, but because they saw an opportunity to make big bucks in big tech.

flemhans 2 hours ago||
I see it the other way: I can do whatever creative and fun stuff I like, and the agent does the boring debugging or finishes up my loose ends.
BLKNSLVR 5 hours ago||
I'm only at the first paragraph and I've already started to experience this, in attempting to apply AI to Acceptance Criteria (ACs) that testers have to test against.

The software is necessarily complex due to legislative requirements, and the corpus of documentation the AI has access to just doesn't seem to capture the complexities and subtleties of the system and its related platforms.

I can churn out ACs quicker, but if I just move on to the next thing as if they're 'done', then quality is going to decline sharply. I'm currently entirely rewriting the first set of ACs it generated, because the base premise was off.

This is both a prompt-engineering problem and an availability-of-enough-context documentation problem, but both involve a fairly long learning curve. Not many places do knowledge management very well, so the requisite base information just may not be complete enough, and one missing 'patch' can very much change a lot of contexts.

nevdka 5 hours ago|
I work with Australian tax - lots of regulatory complexity, and the documentation often assumes the reader is a CPA. I've got decent results by telling the chat bot to ask questions instead of making assumptions, and then grilling it to find edge cases.

I did a live demo in front of the CPAs, using their documentation, and Claude asked clarification questions they hadn't thought of and exposed gaps in the old manual processes.
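
The instruction was along these lines (illustrative wording, not the exact prompt):

    Before drafting anything, list the assumptions you would otherwise
    make about thresholds, dates, or the taxpayer's circumstances, and
    ask me a clarifying question for each one. Don't proceed until I've
    answered. Then walk me through any edge cases the documentation
    doesn't cover.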

belZaah 3 hours ago||
It’s a general principle. Complexity breeds complexity via a multitude of mechanisms, and so it grows exponentially. Since complexity is also related to cost, two limits are approached exponentially: a financial one (we can no longer afford the complexity) and a cognitive one (we can no longer understand it). In an ideal world, the financial barrier arrives first, because you have to understand a system to simplify it constructively. If it doesn't, the only remaining option is destructive simplification: forcefully breaking the system into pieces. This is what Musk did to X and tried to do to the US Government.
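
To make the growth claim concrete (a toy model, obviously): if each existing unit of complexity spawns new complexity at some rate k, then dC/dt = k·C, which gives C(t) = C0·e^(k·t). Cost and required understanding both track C, so the financial and cognitive ceilings sit on the same exponential curve; which one you hit first depends on budget versus comprehension.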
thoughtpeddler 2 hours ago||
As the [newly popular] saying goes:

> you can outsource your thinking, but you cannot outsource your understanding
linsomniac 2 hours ago||
We keep talking about AI fatigue and burnout.

Am I the only one who's finding quite the opposite? I feel like a kid again, back when I had no responsibilities and infinite time to play around and build things. Being able to look at my existing tooling, say "there's a rough edge here", and then whip out the equivalent of a Milwaukee Bandfile [1] and smooth it out is making it fun to go to work again.

[1] https://www.milwaukeetool.com/products/details/m12-fuel-1-2-...

pineapple_opus 2 hours ago|
As you said, it's distributed across people, conversations, AI agents, tooling, etc. Can't an LLM knowledge base / wiki (a.k.a. the org's second brain) solve this? I think if a second brain exists, no one needs to pay cognitive debt.