Posted by koshyjohn 11 hours ago

AI should elevate your thinking, not replace it(www.koshyjohn.com)
354 points | 272 comments
000ooo000 6 hours ago|
People who let AI do their thinking at any level never valued it in the first place. "Use it or lose it", as they say. The count of studies backing this up continues to rise, and yet so do the articles saying LLM use in software development is fine because our value is in our thinking.
waffleman21 5 hours ago|
It may be a byproduct of my ADHD and general anxiety, or it's a common trait among all of us workers of computers, but I am thinking almost all the time. It's one of the beautiful things about the gig to be able to be completely engrossed in something else and then have an inspired thought hit you, some solution that took you not looking at it for a moment. AI now helps me turn those thoughts into action faster than I ever could. Without it, I'd lose the thread before it ever got off the ground sometimes. Now a thought can be made at least partly real from my phone in minutes, then I can go back to what I was doing without feeling like I might lose it if I look/think away again. Just my two cents on what the technology has enabled for me.
halamadrid 10 hours ago||
This is true. Speaking only based on personal experience. My team had started treating AI like a super intelligent being.

“AI suggested we do it that way”

And we’ve been degrading our systems rapidly for the last several weeks. We’ve decided to pause, reflect, and change how we use AI on tasks that are not dead simple.

dannersy 9 hours ago||
No one uses it this way, despite what people say. They hit any sort of wall and then ask the robot. Thought ends.
nunez 1 hour ago||
These services are designed for that engagement loop. If they were designed to be tools to help you think, they would be much less front and center, like autocomplete or refactor tools in IDEs. This reminds me of how Google used BERT models (precursor to LLMs) to highlight relevant snippets of web pages in search results based on a search query. "Assistant-" type LLMs would be more like that (or early implementations of code assistants, like Roo or Aider).
girvo 8 hours ago||
Same way everyone gives lip service to reviewing output. I know for a fact that at work most don't, not deeply/properly. You basically can't review properly and still hit the volume that's been demanded.
nunez 1 hour ago|||
it's practically impossible when Claude flings like 1000-line diffs to your face and the tests are green
girvo 7 minutes ago||
Yeah I know. Which is why I wish we’d (the royal we) all stop pretending and lying about it haha.
23df 7 hours ago|||
I mean, the workplace dynamics are such that nobody really cares unless they find themselves in a position of committing something that could get them fired. Most companies don't treat their workers all that well.

Why would you as a worker bother doing everything pristine? There's no reward for you. The management of the company will fire you the day they see fit anyway. Not to mention companies tend to give higher salary raises to those who leave and later return - a true slap in the face of 'loyalty'.

kajaktum 7 hours ago||
I am rebuilding numba. It is very hard for me to imagine doing it by hand. I tried it a couple of years ago but it was excruciatingly painful. It was slow and messy. So many small things that get stacked on top of each other over years of abstraction.

I am doing it again using an LLM. Legitimately, things that would have taken weeks are now done overnight. I still have to look at the code, at the generated C output, still have control over the architecture to make it easy for me and the LLM to work with in the future, etc.

Is this replacing my thinking? I am not sure. I suppose I would have learnt a lot more about compilers/transpilers had I persevered through it for months with manual writes and rewrites, but I would solely be working on this. Instead, I also had some time to write custom NFS server support for a custom filesystem in Golang.

beej71 2 hours ago||
> Is this replacing my thinking?

I'm extremely confident the answer is yes.

But we have to judge how much value that particular thinking has.

As an instructor, I've implemented linked list functionality a zillion times. I'm on the long tail of skills-gain from each reimplementation. But every time I implement it, I'm gaining a little more.

Now, is it worth it? Probably not. The time spent on that marginal gain would be better spent implementing something more novel by hand. So punting to an LLM, while it costs me, might be a net gain in that case. But implementing another compiler? Hell yeah, that would be replacing my thinking. I've only ever made one PL/0 compiler plus that one yacc thing in compiler theory class, and those were a long time ago.

We should quantify the loss of thinking when we decide how much to punt the code creation to someone or something else.

patrick-elmore 5 hours ago||
I too worry about the aspects that using AI is replacing in my thought process. I've built a sophisticated enough system to where agents can go out and determine the changes that need to be made for entire features and pretty much nail it out of the box. Everything is laid out in high detail during the planning phase. The implementation phase of actually writing the code is almost always unremarkable.

I have found myself going out and actually reading code less and less over the past year. I would be lying if I said that there are not fairly regular moments where I question the comfort level I have obtained with the system that I have built. I've seen it work with such a high accuracy and success rate so many times that my instinct at this point is to not question it. I keep waiting for this to really bite me in the ass somehow, but it just keeps not happening. Sure, there have been minor issues that have slipped through the cracks that caused me to backtrack, but that is nothing new. The difference is that with the previous way, I had painstakingly written that code and had a much more personal relationship with it. The code was the problem. Now whenever that does happen, I'm going back to the system and figuring out why it didn't get the answer right on its own, or why it didn't surface the whole thing in the plan to me prior to implementation.

clutter55561 10 hours ago||
AI isn’t creating the problem, it is just showing the problem. Those who did not want to learn before AI did so reluctantly, mixing Google and SO. Now they ask AI. An existing problem found a new solution.

Personally, I really enjoy using AI. I have created my own cascade workflow to stop myself from “asking one more question”. Every session is planned. Claude and Codex can be annoying as hell (for different reasons). Neither is sufficiently smart for me to trust them. I treat them as junior devs who never get tired, know a lot of facts but not necessarily how to build.

YZF 1 hour ago||
I wrote tens of thousands of lines of code before Google and SO.

I also enjoy using AI. It makes it easier to get mundane work done quickly. Junior devs who never get tired is a great analogy. It's a force multiplier and for people with limited time (meetings, people management, planning etc.) they enable doing a lot in limited time. I can relate to more junior people being worried and/or some senior people concerns of quality though. I get a task done, review it, get another task done. I won't let it build something large on auto-pilot.

One thing that should be noted is that life was simpler back then. You could know the syntax of C or Pascal. You knew all the DOS calls or the standard libraries. You knew BIOS and the PC architecture. I still used reference manuals to look up some details I didn't have in my head.

Today software stacks tend to be a lot more complicated.

rangestransform 6 hours ago||
Funnily enough, I learned to code “depth first” by putting together enough documentation examples and stackoverflow answers to reach a working Android app, long before I learned to code “breadth first” in school.
sheepscreek 11 hours ago||
AI is creating problems. This isn’t one of them. Engineers are going to now think at a higher level of abstraction. No one misses coding in assembly.
cyclopeanutopia 10 hours ago||
> No one misses coding in assembly.

That's just an opinion, and one that is provably false.

First, there are still people who don't like high level languages and don't use them, because they find assembly better.

Second, I personally work in a field where I need to consult the source of truth, the actual binary, and not the high level source code - precisely because the high level of abstraction is obscuring the real mechanics of software and someone needs to debug and clean up the mess done by "high level thinkers".

High level programming languages are only an illusion (albeit a good one), but good engineers remember that an illusion is an illusion.

threethirtytwo 10 hours ago||
When people communicate they speak in terms of the overwhelming generality of reality. There's always at least one guy that is an extreme exception.

I can tell you this, the person you're replying to comes from the overwhelming majority/generality. You, on the other hand, are that one guy.

Of course even my comment is a bit general. You're not "one" guy literally. But you are an extreme minority that is small enough such that common English vernacular in software does not refer to you.

cyclopeanutopia 10 hours ago||
Thank you.
hun3 10 hours ago|||
You can write unambiguous (UB-free) code and the compiler's output will be deterministic. There will even be a spec that explains how your source maps to your program's behavior. An LLM has neither.

Also, if you need to control performance, you still need to know how CPU cache and branch prediction work, both of which exist at the abstraction level of assembly.

orblivion 11 hours ago|||
Compilers are a layer of abstraction that we can ask another human about. Some human is there taking care of it. Until we get to the point where we trust AI with our survival it would be good to be able to audit the entire stack.
andsoitis 11 hours ago||
Any human can read the code an AI produces.
cyclopeanutopia 10 hours ago|||
Nope, not anymore. Many already forgot how to do that and it's not a joke.

And putting aside the vanishing skill, there is also an issue of volume.

daemin 1 hour ago|||
I agree that the problem is volume, even more so than correctness.

All that LLMs and other generative models have done is enable an order of magnitude more stuff to be created cheaply. This then puts the onus and cost on the consumer of that output, hence why everyone is exhausted after a day of work that just involves looking over output. This volume of output will cause people to stop looking at all of the output and just trust the randomly generated code, and in time the quality will suffer.

traderj0e 5 hours ago||||
You could say the same thing about compiled code; actually it's worse, because anything a compiler spits out is very hard to understand even for those who understand assembly.
daemin 1 hour ago||
You don't need to look at the entire program at the assembly level to figure out parts that you want to optimise or prove for correctness. You do need to look at all the code the LLM generates in order to understand it.

You can learn to understand the patterns that compilers spit out and there are many tools out there to aid in that understanding. You can't learn to understand what an LLM spits out because by design it is non-deterministic and will vary in form and function for each pull of the lever.

You can learn to understand how high level concepts in code map down to assembly language and how compilers transform constructs in one language to another. You can't know that about LLMs because they generate non-deterministic output based on processing of huge low-precision tables.

It's not even a close comparison.

cyberpunk 10 hours ago|||
So... Our jobs are safe then? I mean, assuming we don't also atrophy to the same extent as the 'many'?
eleumik 9 hours ago|||
It's the "our jobs are lost" attitude that is part of problem. Is not about that. Is more quality thinking, is daring, not fearing or hoping
cyclopeanutopia 10 hours ago|||
I'm just saying that I already see that people are outsourcing all the thinking to the models - not only code generation and reviews, but even design - the part that "senior engineers" without imagination think only they are capable of doing.

It's worrying how much trust is being put in those systems. And my worry is not about the job anymore, but our future in general.

cyberpunk 10 hours ago|||
It's a bit of a weird place to be in as a senior engineer who has spent 2 decades perfecting his craft.

So, on one hand, I'm also kinda sad and how quickly we've thrown the guardrails away, but on the other -- it's... Well. It's just work.

Turns out, no one ever really cared how elegant or robust our code was and how clever we were to think up some design or other, or that we had an eye on the future; just that it worked well enough to enable X business process / sale / whatever.

And now we're basically commoditised, even if the quality isn't great, more people can solve these problems. So, being honest, I think a lot of my pushback is just a kinda internal rebellion against admitting that actually, we're not all that special after all.

I'm just glad I got to spend 20 years doing my hobby professionally, got paid really well for it, and often times was forced to solve complicated problems no one else could -- that kept me from boredom.

I think the shift we are seeing now, as 'previously' knowledge workers is that work becomes a lot more like manual labour than what we've really been doing up until now. When there's no 'I don't know' anymore, then you're not really doing knowledge work, right?

I guess I'll just ride the wave, spew out LLM crap at work, and save the craft for some personal projects, I'll certainly have the capacity now work is a no-op.

cyclopeanutopia 10 hours ago||
Yeah, but the thing is, it's not "just work". Software now has really big impact on the world and actual lives.

In a corporate world, we are typically detached from real world consequences and looking at people around me, people really don't think about such things - but I do. And I really care, because "relaxed" standards might result in errors that amount to stuff like identity thefts, or stolen money, shit like this, even on the smallest scale.

Obviously we can't prevent everything, but it seems like we, as an industry, decided to collectively YOLO and stop giving a shit at all. And personally I don't like that it is me who is losing sleep over this, while people who happily delegate all their thinking over to LLMs sleep better than ever now.

cyberpunk 9 hours ago||
Yeah that's a tough spot to be in; I think though, your responsibility really ends with you at work, unless you're very high up on the management chain.

Keep it simple right; in everything you do, make things a bit better than you found them. It's enough. You're never going to win the fight to get everyone (or maybe even ANYONE depending how messed up your org is) to care; so why lose sleep on things you can't change?

At least, that's what I started doing some years ago by now having lost lots of those fights, and I'm sleeping fine again.

threethirtytwo 10 hours ago|||
I think those of us who have years of experience under our belts are safe. If we're older, the knowledge is ingrained, and atrophy of this knowledge will be limited by the fact that it's already "imprinted" onto our brains.

Our futures are safe in this sense; in fact, it's even beneficial, as we may be the last generation to have these skills. Humanity's future, on the other hand, is another open question.

kirth_gersen 11 hours ago||||
for now. some people seem to think we should make ai native programming languages and just let them be black boxes. which is a bad idea imo
dawnerd 10 hours ago||||
Have you tried to sift through a whole lot of vibe coded slop? It’s really mentally draining to see all of the really bad techniques they fall back on just to brute force a solution.
hun3 11 hours ago||||
How can you read a language you didn't learn?
orblivion 10 hours ago||||
Unless people can't think without the AI.
ares623 9 hours ago|||
here's a tip, it would really help if you put yourself into a Ralph loop before posting comments.
kimixa 9 hours ago|||
I suspect there are at least as many programmers working at the ASM level today as there ever were - they're a lower proportion, but the total number of programmers has increased dramatically.

I wonder if this sort of trend will continue?

Pannoniae 9 hours ago|||
Look at the comments about MSVC removing inline assembly as a supported feature for a counterexample. :D

(A competent assembly programmer can go miles around a competent high-level programmer, that's still true in 2026...)

eleumik 8 hours ago||
Explained by LLM: It is 100% true that no human alive can write 1000 lines of assembly better than GCC or LLVM. It is also still 100% true, right now in 2026, that a truly competent assembly programmer can write 10 lines of assembly that will beat any compiler on earth by a factor of 2x, 3x, even 5x. The entire industry looked at this situation, and somehow concluded the exact wrong lesson: "humans should never write assembly". Instead of the correct lesson: "humans should almost only write assembly".
ThrowawayR2 8 hours ago|||
At a high level of abstraction, the product owner can talk to the LLM directly by themselves. The "engineers" will have abstracted themselves out of a job.
beej71 2 hours ago||
This isn't just another translation layer, though. It's squishy and stochastic. It's more like saying "managers think at a higher level of abstraction". Which is true, but it's not the same as compiled code.

GenAI is like a non-deterministic compiler. Just like your manager's reports except with less logical thinking skill. I'd argue this is still problematic.

ambicapter 6 hours ago||
> There is No Shortcut to Judgment

> This is the part that some people may not want to hear --

> There is no generated explanation that transfers mastery into your brain without you doing the work.

> There is no way to outsource reasoning for long enough that you still end up strong at reasoning.

This is in relation to early-career engineers, but I wonder why people think this won't apply to mid- and late-career engineers. Are they not also constantly learning things on the job? Are they not thus shortcutting their own understanding of what they are learning day-to-day?

m4rkuskk 10 hours ago||
Before AI I would spend multiple days mapping out my database tables and queries while now I ask AI to propose multiple different approaches and I pick the best one. But then on the other hand I’m working on 10 features at the same time and have to carefully look through them. But I can see that I’m totally dependent on the AI now. Creating a full plan by yourself feels like a waste of time, since you know the AI can create the same or better plan in a split second. So when Claude is down, I end up not being productive at all.
nkrisc 9 hours ago|
> Creating a full plan by yourself feels like a waste of time, since you know the AI can create the same or better plan in a split second.

It IS a waste of time if your only goal is the creation of the plan. However, one must be very self-aware of their goals because if one of the unacknowledged ones is to retain the ability to create plans, then you must continue creating plans yourself.

ebipaul5194 5 hours ago||
> To be very frank, a professional with 10 years of experience knows the flow and logic of code; if they use AI, they can make the code and improve the way they code. But if a newbie is coding, he doesn't know the flow or logic; he simply copy-pastes, and AI won't allow those people to think.
dkrich 6 hours ago|
This is so spot on, and I’ve been harping on this for about two years based on my own professional experiences. The surprising thing, though, is that upper management is ostensibly cool with incompetent people using AI to produce things that are clearly not accurate, with no idea whether they are or not. I believe this is because upper management themselves believe AI is much more accurate in its current form than it is. It’s not clear what, if anything, will change this, but I believe many organizations are rotting from within because they no longer have stringent requirements.
Havoc 6 hours ago|
It’s because senior management builds processes with a base assumption of unreliability, because a good chunk of employees are.

That’s why they’re relaxed - it’s just switching from one sort of unreliability to a slightly different flavour.
