Top
Best
New

Posted by koshyjohn 9 hours ago

AI should elevate your thinking, not replace it(www.koshyjohn.com)
354 points | 272 comments
staticshock 8 hours ago|
The eloquence with which this point gets (repeatedly) made keeps improving each time I read it. However, I still feel like we haven't nailed it. That is, we are not yet at the "aphorism" stage of the discourse (e.g. "the medium is the message", "you ship your org chart", "9 mothers can't make a baby in a month"), in which the most pointed version of this critique packs a punch in just a few words that resonate with the majority of people. That kind of epistemological chiseling takes years, if not decades. And AI certainly won't do it for us, because we don't know how to RL meaning-making.

Edit: 9 babies → 9 mothers

bla3 7 hours ago||
> "can't make 9 babies in a month"

It's "9 women can't make a baby in one month".

staticshock 7 hours ago|||
Hah, right, I mixed it up!
bluefirebrand 6 hours ago|||
In fairness, 9 women can't make 9 babies in a month either
gerdesj 5 hours ago||
No idea why you were dv'd.

It still takes roughly nine months to make a human baby, regardless of how many women or babies are involved!

snickerbockers 1 hour ago|||
9 pregnant women produce one baby/month on average (assuming no miscarriages or late births, etc.).

On paper your CPU can execute at least one instruction per core per cycle, but that's an average too: if you actually only have one instruction to run, it takes several cycles.
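The latency-vs-throughput distinction above can be sketched with a toy in-order pipeline model (the 5-stage depth here is an illustrative assumption; real CPUs vary widely and have superscalar issue, stalls, etc.):

```python
def cycles(n_instructions, stages=5):
    """Classic in-order pipeline model: the first instruction pays the
    full pipeline latency (`stages` cycles); each subsequent instruction
    retires one cycle later."""
    if n_instructions == 0:
        return 0
    return stages + (n_instructions - 1)

print(cycles(1))            # a lone instruction pays full latency: 5 cycles
print(cycles(1000) / 1000)  # amortized cost per instruction: ~1.004 cycles
```

A single instruction costs the whole pipeline depth, but a long stream approaches one instruction per cycle, which is exactly the average-vs-worst-case gap the comment describes.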

fragmede 3 hours ago||||
You're assuming all the women in your cohort start out not pregnant. However, given a random sample of women across the entire human race, if you have approximately 14,000 women, statistics says you'll have a baby within a month. That is to say, the chance of one of those women being 8 months pregnant gets close enough to 1, given about 14,000 randomly selected women.
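The back-of-the-envelope behind that figure can be checked in a few lines (the global birth and population numbers below are rough assumptions, not sourced data):

```python
births_per_year = 140e6   # rough global births per year (assumption)
women_total     = 3.9e9   # rough global female population (assumption)

# Probability a randomly chosen woman gives birth within the next month
p = births_per_year / 12 / women_total

# With n randomly selected women, chance at least one delivers this month
def p_at_least_one(n):
    return 1 - (1 - p) ** n

print(round(p, 4))              # ~0.003
print(p_at_least_one(14_000))   # effectively 1.0
```

Under these assumptions even a few thousand women would suffice, so 14,000 comfortably pushes the probability to effectively 1.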

Also, you can get a baby tonight if you steal one from the maternity ward.

The real question is: how do LLMs turn the Mythical Man-Month on its head? If we accept AI-generated code, can an agentic AI swarm make software faster simply by parallelizing, in a way that 9 women can't make a baby in 1 month, because they're AI, not human, and communicate in a different way?

The pitfall of AI coding is that every shiny tangent that was previously a distraction is now a rabbit hole to be leapt into for an afternoon, if you feel like it. It's like that ancient Chinese curse: may you live in interesting times. Everybody can recreate an MVP of Twitter in a weekend now, when previously that was just a claim a certain type of person made.

majormajor 2 hours ago|||
> You're assuming all the women in your cohort start out not pregnant. However, given a random sample of women across the entire human race, if you have approximately 14,000 women, statistics says you'll have a baby within a month. That is to say, the chance of one of those women being 8 months pregnant gets close enough to 1, given about 14,000 randomly selected women.

There's a good point in here along the lines of "if you need X in a month, and someone else has something that's 90% of what you want X to be, can you buy it from them before starting any crazy internal death marches instead?"

> The real question is: how do LLMs turn the Mythical Man-Month on its head? If we accept AI-generated code, can an agentic AI swarm make software faster simply by parallelizing, in a way that 9 women can't make a baby in 1 month, because they're AI, not human, and communicate in a different way?

This is quite possibly only a one-time shift from a changed baseline, though. Give it a few years and "the fastest way an LLM tool can do it" will be what gets tossed out as an estimate, and stakeholders will still want you to do it in a tenth the time...

b00ty4breakfast 3 hours ago||||
that's still one woman per pregnancy, it's not 14k women collaborating on a single pregnancy.
bluefirebrand 3 hours ago|||
> You're assuming all women in your cohort start not pregnant

As far as I know, all women everywhere start not pregnant

fragmede 2 hours ago||
Tribbles, on the other hand...
bluefirebrand 4 hours ago|||
Sometimes HN doesn't like jokes, which is okay. I didn't really contribute much to discussion, so I probably deserve some downvotes. I'm ok with it.
Brajeshwar 3 hours ago||
Actually, I like quite a lot of the subtle jokes on HN. They're harder to notice, fewer to find, and many a time I don't get them. But when I do (or someone explains it to me, perhaps out of pity), I chuckle, laugh, and laugh again. And I remember those comments.
godelski 2 hours ago||
I think the occasional joke is fine but when you have too many then the comments get diluted. It's exactly that kind of thing that makes me hate Reddit and so many other places: spam.
ctvdev 7 hours ago|||
> That is, we are not yet at the "aphorism" stage of the discourse

we learn by doing

nkrisc 6 hours ago|||
Put differently: you get good at what you actually do, not what you think you're doing.

If you're not coding anymore, but using AI tools, you're developing skills in using those AI tools, and your coding abilities will atrophy unless exercised elsewhere.

ipython 7 hours ago||||
I’ve also seen along those lines “there is no compression algorithm for experience” - a nice summary of the hn posts from today.
skybrian 4 hours ago|||
It seems overly pessimistic about education. Book learning isn't everything, but a physics textbook could be seen as the compression of centuries of experience.
Ronsenshi 3 hours ago||
Book learning to me seems like a compression of knowledge that had to be acquired through many years of experimentation and observation. But knowledge is not an experience itself.

Take juggling for example - something that was on the HN homepage last week. You can learn everything you need to know about juggling through a post or a book or an educational video. But can you juggle after all that book learning? Not at all - to be able to juggle one has to spend time practicing, and no amount of reading can meaningfully compress that process.

Muscle memory required for juggling is not a 1:1 correlation to experience, but I feel like it's close enough to it.

ua709 1 hour ago||
Juggling is a nice example. Maybe one could phrase it as, you can learn how to learn to juggle from a book.
canjobear 4 hours ago|||
There clearly is though. You don’t remember every detail of every moment that constitutes the experience.
kristianc 5 hours ago|||
... or by textbooks, Stack Overflow, senior engineers, code review. How many engineers today got their start by building Minecraft mods or even MySpace?

I do think that these pieces sometimes smuggle in a nostalgic picture of how engineers "really" learn which has only ever been partly true.

raincole 52 minutes ago|||
"Bicycle of the Mind" has been cited to death.

The problem is that it was coined so early that we are way past the aphorism stage now.

nemomarx 2 hours ago|||
Isn't it the vehicle metaphor, bicycles for the mind? Not fully crystallized yet, but I feel like someone will get there.
embedding-shape 7 hours ago|||
How about "Intelligence amplification, not artificial intelligence"?

Also could be shortened to "IA, not AI", and gets even more fun when you translate it to Spanish: "AI, no IA".

viccis 6 hours ago|||
>the medium is the message

If you asked 100 Americans what this aphorism means, I strongly doubt a single one could capture McLuhan's original meaning.

apsurd 5 hours ago||
You're right. I've struggled to understand what exactly this means; in large part, perhaps, because it's so often misused?

I think it means something like: we're trapped in the constraints of the medium. Tweets say more about the environment of Twitter than whatever message happened to be sent.

But I think I'm off on that; I'll look this person up and find out!

rdevilla 3 hours ago|||
Some examples.

Firstly, Twitter has an upper bound on the complexity of thoughts it can carry due to its character limit (historically 140, now somewhat longer but still too short).

Secondly, a biased or partial platform constrains and filters the messages that are allowed to be carried on it. This was Chomsky's basic observation in Manufacturing Consent where he discussed his propaganda model and the four "filters" in front of the mass media.

Finally, social media has turned "show business [into] an ordinary daily way of survival. It's called role-playing." [0] The content and messages disseminated by online personas and influencers are not authentic; they do not even originate from a real person, but a "hyperreal" identity (to take language from Baudrillard) [0]:

    You are just an image on the air. When you don't have a physical body, you're a
    _discarnate being_ [...] and this has been one of the big effects of the electric age. It
    has deprived people of their public identity.
Emphasis mine. Influencers have been sepia-tinted by the profit orientation of the medium and their messages do not correspond to a position authentically held. You must now look and act a certain way to appease the algorithm, and by extension the audience.

If nothing else, one should at least recognize that people primarily identify through audiovisual media now, when historically due to lack of bandwidth, lack of computing and technology, etc. it was far more common for one to represent themselves through literate media - even as recently as IRC. You can come to your own conclusions on the relative merits and differences between textual vs. audiovisual media, I will not waffle on about this at length here.

The medium itself is reshaping the ways people represent, think about, and negotiate their own self-concept and identity. This is beyond whatever banal tweets (messages) about what McSandwich™ your favourite influencer ate for lunch, and it's this phenomenon that is important and worth examining - not the sandwich.

[0] Marshall McLuhan in Conversation with Mike McManus, 1977. https://www.tvo.org/transcript/155847

viccis 4 hours ago|||
It's confusing because "message" is not being used in its lay sense, and decades of meaning drift for "medium" and "media" mean they aren't either.

For "the medium is the message", "medium" refers to any tool that acts as an extension of yourself. TV is an extension of your community, even things like light bulbs (extends your vision) are included in his meaning.

McLuhan argued that all forms of media like that carry a message that's more than just their content. "The message" in that argument refers to the message the medium itself brings rather than its content. For example, the airplane is "used for" speeding up travel over long distance, but the message of its medium itself is to "dissolve the railway form of city, politics, and association, quite independently of what the airplane is used for."

You can see it happening via online media that extend ourselves across the internet. Think of how, once easy video creation via YouTube became uniform, web comics stopped being a popular medium for comedy online. It's not like the web comics faded because they got worse; it's that they faded into a niche format because people didn't want to communicate via static images anymore. Or how, once short-form videos on TikTok got big, you saw other platforms shift to copy the paradigm. McLuhan's point is that it's not just the content of those short-form videos that matters; it's the message of the format itself. People's attention spans grow shorter because of the format, and before too long, we saw the tastes and expectations of the masses change. Reddit's monosite-with-subcommunities format and dopamine-triggering voting feedback mechanism were its message more than any actual content posted there, and it's why traditional forums are niche and dwindling.

If you want to get a pretty good understanding of it, just read the first chapter of his book Understanding Media. It's short and relatively straightforward.

alphabeta3r56 3 hours ago|||
Taste/judgement cannot an AI beget
IceDane 6 hours ago|||
Outsource manual labor, not your brain.
thomastjeffery 4 hours ago|||
Meaning is abstract. We can't express meaning: we can only signify it. An expression (sign) may contain the latent structure of meaning (the writer's intention), but that structure can only be felt through a relevant interpretation.

To maintain relevance, we must find common ground. There is no true objectivity, because every sign must be built up from an arbitrary ground. At the very least, there will be a conflict of aesthetics.

The problem with LLMs is that they avoid the ground entirely, leaving them ignorant of meaning. The only intention an LLM has is to preserve the familiarity of expression.

So yes, this kind of AI will not accomplish any epistemology; unless of course, it is truly able to facilitate a functional system of logic, and to ground that system near the user. I'm not going to hold my breath.

I think the great mistake of "good ole fashioned AI" was to build it from a perspective of objectivity. This constrains every grammar to the "context-free" category, and situates every expression to a singular fixed ground. Nothing can be ambiguous: therefore nothing can express (or interpret) uncertainty or metaphor.

What we really need is to recreate software from a subjective perspective. That's what I've been working on for the last few years... So far, it's harder than I expected; but it feels so close.

Jarwain 3 hours ago|||
LLM's are a mediocre map, but they're a great compass, telescope, navigation tools and what have ye
staticshock 3 hours ago||||
> What we really need is to recreate software from a subjective perspective.

What does "subjective" mean here? Are you talking about just-in-time software? That is, software that users get to mold on the fly?

rdevilla 3 hours ago|||
> Meaning is abstract. We can't express meaning: we can only signify it. An expression (sign) may contain the latent structure of meaning (the writer's intention), but that structure can only be felt through a relevant interpretation.

I'm reminded immediately of the Enochian language which purportedly had the remarkable property of having a direct, unambiguous, 1-to-1 correspondence with the things being signified. To utter, and hear, any expression in Enochian is to directly transfer the author's intent into the listener's mind, wholly intact and unmodified:

    Every Letter signifieth the member of the substance whereof it speaketh.
    Every word signifieth the quiddity of the substance.

    - John Dee, "A true & faithful relation of what passed for many yeers between Dr. John Dee ... and some spirits," 1659 [0].
The Tower of Babel is an allegory for the weak correspondence between human natural language and the things it attempts to signify (as opposed to the supposedly strong 1-to-1 correspondence of Enochian). The tongues are confused, people use the same words to signify different referents entirely, or cannot agree on which term should be used to signify a single concept, and the society collapses. This is similar to what Orwell wrote about, and we have already implemented Orwell's vision, sociopolitically, in the early 21st century, through the culture war (nobody can define "man" or "woman" any more, sometimes the word "man" is used to refer to a "woman," etc).

LLMs just accelerate this process of severing any connection whatsoever between signified and signifier. In some ways they are maximally Babelian, in that they maximize confusion by increasing the quantity of signifiers produced while minimizing the amount of time spent ensuring that the things we want signified are being accurately represented.

Speaking more broadly, I think there is much confusion in the spheres of both psychology and religion/spirituality/mysticism in their mutual inability to "come to terms" and agree upon which words should be used to refer to particular phenomenological experiences, or come to a mutual understanding of what those words even mean (try, for instance, to faithfully recreate, in your own mind, someone's written recollection of a psychedelic experience on erowid).

[0] https://archive.org/details/truefaithfulrela00deej/page/92/m...

xnx 8 hours ago||
This concept won't reach that point because when you chisel too hard it crumbles. There are countless lower level tasks that typical programmers no longer learn how to do. Our capacity for knowledge is not unlimited so we offload everything we can to move to the next level of abstraction.
lsy 7 hours ago|||
AI coding isn’t an abstraction, though. You can’t treat a prompt like source code because it will give you a different output every time you use it. An abstraction lets you offload cognitive capacity while retaining knowledge of “what you are doing”. With AI coding either you need to carefully review outputs and you aren’t saving any cognitive capacity, or you aren’t looking at the outputs and don’t know what you’re doing, in a very literal sense.
Krssst 5 hours ago|||
Non-determinism is not as much of a problem as the lack of a spec. C++ has the C++ standard, Python has its reference manual. One can refer to it to predict reliably how a program will behave without thinking about the generated assembly. LLMs have no spec.
lukan 6 hours ago||||
"You can’t treat a prompt like source code because it will give you a different output every time you use it"

But it seems we are heading there. For simple stuff, if I write a very clear spec, I can be almost sure that every time I give that prompt to an AI, it will work without error, using the same algorithms. So the quality of the prompt is more valuable than the generated code.

So either way, this is what I focus my thinking on right now, something that was always important and now, with AI, even more so: crystal-clear language describing what the program should do and how.

That requires enough thinking effort.

lelanthran 6 hours ago|||
Didn't work for the prod data that the AI nuked in spite of prompts saying "DON'T FUCKING GUESS", just like that in all caps: https://news.ycombinator.com/item?id=47911524

What makes you think it will work for you?

lukan 5 hours ago||
That I don't let agents run wild in a production environment?
ubertaco 1 hour ago||
You let them write code that runs in prod, which is the same thing with extra steps.

Unless you review that code carefully, and then we're back to the point about it not saving you any cognitive overhead.

habinero 3 hours ago|||
> if I made a very clear spec - I can be almost sure

That "almost" is doing a lot of heavy lifting here. This is just "make no mistakes" "you're holding it wrong" magical thinking.

In every project, there is always a gap between what you think you want and what you actually need. Part of the build process is working that out. You can't write better specs to solve this, because you don't know what it is yet.

On top of that, you introduce a _second_ gap of pulling a lever and seeing if you get a sip of juice or an electric shock lol. You can't really spec your way out of that one, either, because you're using a non-deterministic process.

xnx 5 hours ago||||
> AI coding isn’t an abstraction

Isn't it an abstraction similar to how an engineering or product manager is? Tell the (human or AI coder) what you want, and the coder writes code to fulfill your request. If it's not what you want, have them modify what they've made or start over with a new approach.

habinero 3 hours ago|||
No, because software engineering is more than <insert coin, receive code>. I've never had a full spec dropped on my desk lol. There's no abstraction.

Software engineering is a lot more social and communication-heavy than people think. Part of my job is to _not_ take specs at face value. You learn real quick that what people say they need and what they actually need are often miles apart. That's not arrogance, that's just how humans work.

A good product manager understands the biz needs and the consumer market and I know how to build stuff and what's worked in the past. We figure out what to build together. AIs don't think and can't do this in any effective way.

Also, if you fuck up badly enough that you make your engineers throw out code, you're gonna get fired lol

skydhash 3 hours ago|||
With an abstraction, you literally move your thinking up a level. You move a floor up the tower and no longer have to think about what's happening below. The moment something leaves your floor, its course is set. If a result comes back, it's something familiar, not something from the lower floor.

A human coder can be seen as an abstraction level because they will talk to the PM in product terms, not in code. And the PM will be reviewing the product. What makes this work is the underlying contract that only a small number of iterations is necessary before the product is done, and each later one should require less of the PM's time.

We've already established that using an LLM tool that way does not work. You can spend a whole month doing back and forth, never looking at the code, and still not have something that can be made to work. And as soon as you look at the code, you've breached the abstraction layer yourself.

IceDane 6 hours ago|||
It's staggering to me how many times I've heard this argument that LLMs are just the next level of abstraction. Some people are even comparing them to compilers.
girvo 6 hours ago||
> Some people are even comparing them to compilers.

A lot of people are using them as such too: all the people talking about "my fleets of agents working on 4 different projects" aren't reviewing that output. They say they are, but they aren't, any more than I review the LLVM IR. It makes me feel like I'm in some fantasy land: I watch Opus 4.7 get things consistently backwards at the margins, mess up, make bugs: we wouldn't accept a compiler that did any of this at this scale or level lol

habinero 3 hours ago||
Right? People have put in decades of work to make them extremely reliable, they didn't magically start like that.
staticshock 7 hours ago||||
That's true, but I think it's beside the point. The flip side of that argument, which is equally true, goes something like, "not doing cognitive push-ups leads to cognitive atrophy."

There are skills we're losing that are probably ok to lose (e.g. spatial memory & reasoning vs GPS, mental arithmetic vs calculators), primarily because those are well-bounded domains, so we understand the nature of the codependency we're signing up for. AI is an amorphous and still-growing domain. It is not a specific rung in the abstraction hierarchy; it is every rung simultaneously, but at different fidelity levels.

kochikame 5 hours ago|||
> There are skills we're losing that are probably ok to lose (e.g. spatial memory & reasoning vs GPS, mental arithmetic vs calculators)

I'd argue these are not at all OK to lose. You live in an earthquake zone? You sure better know which way is north and where you have to walk to get back home when all the lines are down after a big one. You need to do a quick mental check if a number is roughly where it should be? You should be able to do that in your head.

There might be better examples that support your point more effectively e.g. cursive writing

staticshock 4 hours ago||
Yep, there are tons. Growing food, building shelter, etc. But, for pretty much all of the skills we've allowed to atrophy in response to the advances of capitalism, technological & scientific progress, and societal changes, one COULD make the same basic argument, which is that losing that skill is detrimental to the individual, and yet here we are, not growing our own food, not building our own shelter, etc.

The arguments you make ≤ the values you actually hold ≤ the actions you take in support of those values.

I'm only interested in any such argument to the extent to which you've personally put it into practice. Otherwise, you're living proof of the argument's weakness. (To be fair, it's extremely hard to be internally consistent on this stuff! We all want better for ourselves than we have time and energy for. But that's my point: your fully subconscious emotional calculus will often undercut at least some of your loftier aspirations. Skills that don't matter anymore invariably atrophy due to the opportunity cost of keeping them honed.)

koshyjohn 5 hours ago|||
> "not doing cognitive push-ups leads to cognitive atrophy"

This is one of the points being made in the post, at least in reference to people who already have some mastery of their craft. If they outsource their thinking without elevating it, they aren't exercising that metaphorical muscle between their ears.
ua709 8 hours ago||||
I get your point, I just wonder how accurate it is. We basically never look at the output of the compiler, so I agree that tool allows one to operate at a higher level than assembly. But I always have to wade through the output from AI so I’m not sure I got to move to the next level of abstraction. But maybe that’s just me.
willhslade 5 hours ago||
Are compilers deterministic?
ua709 5 hours ago|||
I'm sure someone, somewhere, once wrote one that wasn't but in general, yes they are.

The ones I use certainly are. And with a bit of training you can reason and predict how they will respond to a given input with a large degree of accuracy without being familiar with how the particular compiler under question was implemented.

Not so with the AI tools. At least with the ones I use anyway.
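For what it's worth, that determinism is easy to demonstrate, using Python's built-in `compile` as a stand-in for any deterministic compiler: identical source in, byte-identical code out.

```python
src = "def add(a, b):\n    return a + b\n"

# Compile the identical source twice; a deterministic compiler
# produces identical bytecode every time.
c1 = compile(src, "<module>", "exec")
c2 = compile(src, "<module>", "exec")
print(c1.co_code == c2.co_code)  # True
```

Run an LLM on the same prompt twice and there is no analogous guarantee, which is the asymmetry being pointed at here.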

dbalatero 4 hours ago||||
Given the same compiler, I believe they would be the same between runs given the same inputs. I suppose that could not be true at the margins, but I would expect correctness out of whatever path it chose.
23df 5 hours ago|||
For all intents and purposes, yes. It's really about the variance in actual outcomes vs the expected ones. The variance is not much, is it? With LLMs that absolutely isn't the case.
imiric 6 hours ago|||
The idea that a tool intended to replace all human cognitive work is the next level of abstraction is so fundamentally flawed, that I'm not sure it's made in good faith anymore. The most charitable interpretation I can think of is that it's a coping mechanism for being made redundant.

Never mind the fact that these tools are nowhere near as capable as their marketing suggests. Once companies and society start hitting the brick wall of inevitable consequences of the current hype cycle, there will be a great crash, followed by industry correction. Only then will actually useful applications of this technology surface, of which there are plenty. We've seen how this plays out a few times before already.

luckystarr 7 hours ago||
The way I use AI now feels more exhausting than the programming I did for the last 20 years. I pose a problem, then evaluate proposals, then pick the one I think is the "right one"(tm), then see the AI propose a bunch of weird shit, then call it out, refine the proposal until it feels just about right (this is the exhausting part), then let it code the proposal. The coding will then run for 1-5 hours and produce something that would have taken me at least 2 or 3 weeks (in that quality).

After 5 hours or so of doing this planning, I'm EXHAUSTED. I never was exhausted in this manner from programming alone. Am I learning something new? Feels like management. :)

dwaltrip 6 hours ago||
I feel this as well. I think it's something to do with having to be more "on" as you slowly work with the LLM to define the problem and find a reasonable solution. There's not much of a flow state. You have to process mountains of output and identify the critical points, over and over, endlessly. And it will always be off in this unsettling little way, even when it's mostly quite good. It's jarring.

The strange sorts of errors and reasoning issues LLMs have also require a vigilance that is very draining to maintain. Likewise with parsing the inhuman communication styles of these things…

squirrellous 8 minutes ago|||
To me it’s more like being a super micro-managing TL that would annoy the hell out of their human reports. It comes with all the pros and cons of micro-management.
m463 7 hours ago|||
I think one of the benefits of AI is that it will get started, and keep going.

But maybe pacing/procrastination might be relief valves?

whatspt_anyway 6 hours ago||
[dead]
jasonjmcghee 8 hours ago||
There are plenty of engineers that couldn't work without a modern IDE or in languages without memory management.

Or without the ability to use a library from GitHub / their package manager.

It doesn't feel THAT much different to me.

"Engineer" as a term might drift. There are "web developers" that can only use webflow / wordpress.

embedding-shape 7 hours ago||
> couldn't work

"Couldn't", or "wouldn't"? Early in my career I'd be happy doing anything basically, not much I "couldn't" do, given enough time. But nowadays, there is a long list of things I wouldn't do, even if I know I could, just because it's not fun.

themafia 7 hours ago||
It should probably be "would initially struggle to be as efficient without them."

This is not a binary.

Jcampuzano2 8 hours ago|||
Engineer as a term has already drifted vastly, since nobody in the field of "Software Engineering" is actually an Engineer if we go by a strict definition.

Engineers are accredited and in some countries even come with a title.

keeda 7 hours ago|||
> ... nobody in the field of "Software Engineering" is actually an Engineer if we go by a strict definitions.

This is a pet peeve of mine, so while I understand what you mean, I will challenge you to come up with a strict definition that excludes software engineering!

And since I've had this discussion before, I'll pre-emptively hazard a guess that the argument boils down to "rigor", and point out that a) economic feasibility is a key part of engineering, b) the level of rigor applied to any project is a function of economics, and c) the economics of software projects is a very wide range.

Put another way, statistically most devs work on projects where the blast radius of failure is some minor inconvenience to like, 5 users. We really don't need rigor there, so I can see where you're coming from. But on the other extreme like aviation software, an appropriately extreme level of rigor is applied.

Jcampuzano2 7 hours ago|||
I don't really disagree with you. I was just pointing out how the parent mentioned how "engineering" is changing when it already has changed many many times.

Of course I want the best of the best who are top notch and rigorously trained working on mission critical software.

coldtea 6 hours ago||||
>I will challenge you to come up with a strict definition that excludes software engineering!

"Structured, mature, legally enforced, physically grounded standards based approach to the construction of repeatable, reliable, verifiable, artifacts under stable (to the degree that matters) external constraints".

Some niche software development (e.g. NASA/JPL coding projects with special rules, practices, MISRA etc) can look like that.

99.9% of the time though, software "engineering" is an ad hoc, mix-and-match, semi-random, always-changing-requirements-and-environments, half-art half-guess process, by unlicensed practitioners, that is only regulated in some minor aspects of its operation (like GDPR, or accessibility requirements), if that.

fc417fc802 6 hours ago|||
By that definition the vast majority of historic engineers weren't "real" engineers. It's correct to claim that software engineering isn't currently an accredited profession and it's also quite reasonable to question the extent to which the vast majority of software development qualifies as the practice of engineering. But the latter is highly subjective and will likely also rule out a significant fraction of the grunt work that accredited engineers perform.

Which is to say, engineer the job title is distinct from engineering the activity is distinct from engineer the accreditation.

coldtea 5 hours ago||
>By that definition the vast majority of historic engineers weren't "real" engineers.

And they weren't. They were craftsmen and tradesmen, e.g. stonemasons.

keeda 2 hours ago||||
>* ... legally enforced ...*

Other than that part (most countries in the world do not have regulations or licensing requirements for most engineering disciplines) I would agree. But I would also point out the set of software projects that meet that definition is much larger than those you listed.

As mentioned, it's a matter of economics, so the rigor scales with the pain it can cause if something goes wrong. Hence any software that has a high blast radius is built that rigorously, probably even more so. There are entire categories (not just individual examples!) of such projects. An obvious category is platforms that run or build other applications: OS kernels, databases, compilers, frameworks, cloud platforms (yes, those 9's are an industry standard), and so on.

Then there are those regulated ones like automotive, aviation and medical software. There is even a case to be made for critical financial software.

Another less obvious category applies to any large software services company that has on-call engineers: the high cost of those engineers quickly climbs, quality processes quickly get installed, and those processes basically amount to the criteria you listed.

That internal LoB app with 5 users? That level of rigor simply does not make economic sense. Which is probably what you mean by:

> 99.9% of the time though, software "engineering" is an ad hoc, mix and match, semi-random, always changing requirements and environments, half-art half-guess, process, by unlicensed practitioners, that is only regulated in some minor aspects of its operation (like GDPR, or accessibility requirements), if that.

To that I'll say, as someone whose first site outage as an intern was at an actual industrial manufacturing factory (not an AbstractFactoryFactory!), a surprisingly large fraction of projects in other engineering disciplines match that description ;-)

arealaccount 4 hours ago|||
Yea we do standups every day and plan story points twice a month???
2OEH8eoCRo0 6 hours ago||||
It's a pet peeve because the truth hurts. We (most of us) aren't doing anything that resembles engineering.
keeda 2 hours ago||
I'd agree that applies to people, or more accurately specific projects, but not the discipline of software engineering as a whole.

Even most of the projects I personally have worked on simply did not need "engineering" as such, but on other projects where uptime was critical and the cost of failure was high, there was a much higher level of rigor.

skywhopper 4 hours ago||||
“Accredited”
keeda 2 hours ago||
Most countries do not require accreditation for engineers.
analog31 5 hours ago||||
Engineers are accredited in the US too. But there is an "industrial exemption" that allows you to work as an engineer without a license for certain kinds of employers. You just can't offer engineering services to the public without a license. This is more important in some fields than in others.

Where I work, there are plenty of non licensed engineers, but we pay a 3rd party agency for regulatory approval. The people who work for that agency are licensed engineers. Their expertise is knowing the regulations backwards and forwards.

Here's what I think is happening within industry. More and more work done by people with engineering job titles consists of organizing and arranging things, fitting things together, troubleshooting, dealing with vendors, etc. The reason is the complexity of products. As the number of "things" in a product grows as O(n), the number of relationships between them grows as O(n^2), so the majority of the work has to do with relationships. A small fraction of engineers engages in traditional quantitative engineering. In my observation, the average age of those people is around 60, with a few in their 70s.
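The combinatorics behind that claim are easy to sketch (a toy illustration, not from the comment itself): counting unordered pairs of components shows why relationship work swamps component work as products grow.

```python
# Components grow linearly, but the potential relationships
# (unordered pairs of components) grow quadratically: n*(n-1)/2.
def potential_relationships(n_components: int) -> int:
    return n_components * (n_components - 1) // 2

for n in (10, 100, 1000):
    print(n, potential_relationships(n))
# 10 -> 45, 100 -> 4950, 1000 -> 499500
```

At 10 components there are 45 possible interactions; at 1000 components, roughly half a million — hence the shift toward integration and troubleshooting work.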

lkmill 7 hours ago||||
as an actual engineer i just feel sad. i should probably feel happy but i like solving problems. fml i have become a luddite.
therealdrag0 2 hours ago||
I get it. But there’s plenty of engineering to do in any serious system. I am in a very AI forward company using AI for everything, but I still am solving engineering problems every day.
jjtheblunt 8 hours ago|||
i think you accidentally overlooked accredited engineers who happen to be writing software
Jcampuzano2 7 hours ago||
Of course there are engineers who write software, I'm just speaking about the majority of roles where thats not the case.
torben-friis 8 hours ago|||
The huge difference is that we don't know the cost we're going to end up with.

Will you have AI at the cost of a Slack subscription? At the cost of a teammate? Will it not be available at all, so you'll have to hire Anthropic workers with AI access?

heipei 8 hours ago||
Local AI models are already more than capable of writing code that surpasses the ability of any bad or even mediocre engineer. That is not something we need to worry about.

In a way, this is less of a cost issue than the fact that some/many engineers do not seem to be willing or able to host things themselves anymore and will happily outsource every part of their stack to managed services, be it CDN, hosting, databases, etc. I don't know why that's not more alarming than the LLMs.

girvo 6 hours ago|||
Qwen 3.6 27B is shockingly good, just to add to your point.
guelo 7 hours ago|||
Thank goodness for China or Silicon Valley capitalists would be locking us down into an unimaginably awful dystopia. Though they're not done trying.
bpye 8 hours ago|||
At least today, it isn't practical for most people to run these models locally. I think adding a dependency on a cloud service is different enough from some local (possibly open source) tool like an IDE.
StrauXX 7 hours ago|||
Self hosting at a reasonable scale is much cheaper than people think. I am running clusters of DGX Spark machines with BiFrost load balancers in our company and for client projects. They work flawlessly!

128 GB unified memory, Nvidia chip and ARM CPU for just around 3k€ net. They easily push ~400 input and ~100 output tokens per second per device on, say, gpt-oss-120b. With two devices in a cluster, that's enough performance for >20 concurrent RAG users or >3 "AI augmented" developers.

And they don't even pull that much power.
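A rough back-of-the-envelope consistent with those sizing claims (the per-user token rates below are my assumptions, not the commenter's benchmarks — RAG chat users are bursty, while coding agents stream tokens near-continuously):

```python
# Quoted figures from the comment above:
DEVICES = 2
OUTPUT_TPS_PER_DEVICE = 100  # ~100 output tokens/s per DGX Spark on gpt-oss-120b

# Assumed sustained per-user demand (hypothetical numbers):
RAG_USER_TPS = 10    # short, bursty chat answers
DEV_AGENT_TPS = 60   # "AI augmented" dev agents stream far more output

total_tps = DEVICES * OUTPUT_TPS_PER_DEVICE
print(total_tps // RAG_USER_TPS)   # concurrent RAG users supported
print(total_tps // DEV_AGENT_TPS)  # concurrent AI-augmented developers
```

With those assumed rates, 200 aggregate output tokens/s works out to about 20 RAG users or 3 developers — matching the ">20" and ">3" figures quoted.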

jasonjmcghee 8 hours ago|||
Slack, GitHub, Figma, AWS, etc

Lots of people use firebase, supabase etc.

Many people's jobs are centered around using Salesforce

It all makes me uncomfortable. I want to be able to work without internet, but it's getting more difficult to do so.

ares623 7 hours ago|||
"What kind of engineer are you" - Jesse Plemons wearing bright-red sunglasses
vict7 8 hours ago|||
IDEs are free. Libraries are free. Languages are free. This is becoming more like an internet subscription where you’re at the mercy of Anthropic the same way you may be at the mercy of Comcast.

I’m sure you can see the difference between a garbage collector and a nondeterministic slop generator

But it feels good to equivocate, so here we are.

thunky 6 hours ago|||
Not all IDEs are free. Not all LLMs are subscriptions.
vict7 5 hours ago||
> Not all

is doing a lot of work to avoid engaging with the actual argument.

yjftsjthsd-h 4 hours ago|||
> IDEs are free. Libraries are free. Languages are free. This is becoming more like an internet subscription where you’re at the mercy of Anthropic the same way you may be at the mercy of Comcast.

Ollama/llamafile/vllm/llama.cpp are free. Qwen/kimi/deepseek are free. Pi.dev/OpenCode are free. If you're using a SaaS AI subscription that's fine, but that's hardly the only option.

c-hendricks 2 hours ago||
How much does the hardware to run them on cost? Especially to get decently sized models running at decent speeds.
Waterluvian 8 hours ago||
I think AI can generally be utilized in two ways:

1) you use it to help write code that you still “own” and fully understand.

2) you use it as an abstraction layer to write and maintain the code for you. The code becomes a compile target in a sense. You would feel like it’s someone else’s code if you were asked to make changes without AI.

I think 2) is fine for things like prototypes, examples, references. Things that are short lived. Where the quality of the code or your understanding of it doesn’t matter.

I think people get into trouble when they fool themselves and others by using 2) for work that requires 1). Because it’s quicker and easier. But it’s a lie. They’re mortgaging the codebase. And I think the atrophy sets in when people do this.

p_stuart82 7 hours ago||
the thing is it doesn't even feel like mortgaging. shipping, features going out, everything looks fine. then something breaks and you realize you can't debug your own code without asking the model again.
pona-a 34 minutes ago||
It feels like an addiction. Normal coding requires sustained attention: you can sense how deep you are in the problem and when you're too tired to continue, but with LLMs the next feature always feels like it's just another prompt away, with sessions going well into the late night or early morning. You rationalize that you can quit, that you've been reading the source and each diff enough to "understand" the codebase. But the truth is that when the rate limit runs out, you'll be absolutely helpless, crawling back for extra usage, until you finally see the total bill at the end of the month.
kylebyte 7 hours ago|||
And any push to use 2 to build infra to make 1 easier is hard to sell when a lot of engineers think AI will be able to perfectly do 1 in some nebulous time in the near future.
tabwidth 2 hours ago||
[dead]
wasabinator 2 hours ago||
Is anyone tired of being told what AI is supposed to mean for the individual? As a software guy it's supposed to mean I am now a team lead of sorts. However all the people I see crowing about this never sought to become team leads in their career, nor did I.

Yet now suddenly everyone is supposed to want to become a team lead of sorts (i.e. the agents become your team). I don't want to do that; I treat an AI agent as a pair in a pair programming unit. Nothing more, nothing less. If someone wants to treat it differently, good on them, but they have no place telling me that what works for thee works for me.

linsomniac 1 hour ago|
I agree, nobody should be telling you, specifically, how you are going to use AI in programming.

I think a lot of people are getting caught up in the discussion about how we, generally as technologists, are going to use AI. And it looks like the industry is moving toward what used to be programmers now becoming team leads or project managers of AI teams.

So it's probably best for you to try to not get involved in those discussions, and when someone says "you" assume they mean "you (generally)"?

resident423 18 minutes ago||
I feel like these articles are just reassurance for people who don't want to accept that AI will automate their jobs. It becomes easier to focus on a lesser group of AI users and feel superior than to confront the reality of things.
CorbenDallas 8 hours ago||
There are plenty of engineers who simply can't think; AI will not change anything in this regard.
quantum_state 7 hours ago||
Not being able to think properly seems to be the real issue. That’s one of the reasons the SE domain is mostly in ruins. AI won’t help; it will only delay a bigger mess.
taurath 7 hours ago||
Ever since the standard office setup went from offices or cubicles to bullpens and hot desks, there has been less and less time to think, and all of that is a management decision to ship things as fast as possible.
jfreds 3 hours ago|||
I agree in part, but I think AI does meaningfully make it harder for leadership to detect their bullshit.
joe_mamba 8 hours ago||
How do you graduate with an engineering degree without being able to think?

Even my colleagues who cheated their way through uni still needed critical thinking to do that and get away with cheating without being caught.

People might hate this but being a good cheat requires a lot of critical thinking.

lispisok 8 hours ago|||
Grade inflation, and schools passing kids who should fail in order to game metrics and keep collecting student loans, is a problem. I wouldn't consider hiring anybody from my alma mater who didn't score a standard deviation or higher on the tests.
23df 4 hours ago||
Unis imo are irrelevant in the context of software production. I'd take someone who didn't finish or dropped out, provided they can answer the question below.

The only thing worth asking people is: what have you produced? Within this one question is so much detail that any other artifact is moot.

joe_mamba 4 hours ago||
>Unis imo are irrelevant in the context of software production. I'd take someone who didn't finish or dropped out, provided they can answer the question below.

What you'd take is irrelevant if the HR/recruiter doing the initial screening of resumes is looking at an oversupply of candidates with degrees.

Hiring is broken in many ways. Candidates without degrees are faring even worse now at the initial recruiter screening stage due to the poor market.

In my EU country, academic inflation is so bad, due to free education and everyone being psyopped onto the path of academia, that not having an MSc is basically a red flag to companies for an SW job. Most candidates have one, which means you're expected to have one too if you want to get a job.

ironman1478 8 hours ago||||
You don't need a 4.0 to graduate. And even if you got one, a lot of grades are composed of tests, not projects. You can just memorize your way through things if you were dedicated enough.

It's not really that hard to get a degree in engineering if your only goal is the degree itself.

sersi 1 hour ago|||
That does seem to depend on countries and universities.

I do have to say I was appalled by some of the tests I had as an exchange student in the US (I will not name the uni in question, but it ranked around 60 in the US rankings). I remember a computer graphics test where a lot of questions were of the type "Which companies created the consortium maintaining the OpenGL specification?"... it was fully possible to obtain a passing grade just by rote memorization of facts. So I have no trouble believing that in some US unis it's possible to get a software engineering degree without understanding or critical thinking.

johndough 8 hours ago|||
> a lot of grades are composed of tests, not projects

(Take home) projects are easier than ever thanks to AI. In the past, you at least had to track down some person to do the work for you.

vips7L 8 hours ago||||
Half of my graduating class could barely program.
whstl 7 hours ago|||
Yep. Way more than half of the people I interview can't even do a very basic FizzBuzz, even with guidance. Those are people with a degree, job experience and reference letters.
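For anyone who hasn't seen it, the "very basic FizzBuzz" in question is roughly this (one common Python phrasing — the classic screening exercise, not anything from the parent's interviews):

```python
# Classic FizzBuzz: multiples of 3 -> "Fizz", of 5 -> "Buzz",
# of both -> "FizzBuzz", anything else -> the number itself.
def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

print(" ".join(fizzbuzz(i) for i in range(1, 16)))
# 1 2 Fizz 4 Buzz Fizz 7 8 Fizz Buzz 11 Fizz 13 14 FizzBuzz
```

The only real trap is checking the "both" case first (or building the string incrementally); that's the whole exercise.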
spacechild1 7 hours ago|||
What did you study?
vips7L 6 hours ago||
Computer Science.
spacechild1 6 hours ago||
I see. Computer Science is not an engineering degree and it is not about programming. That's what Software Engineering degrees are for.
traderj0e 3 hours ago|||
Many of the top schools don't have software/computer engineering degrees, rather people who want to be SWEs get CS degrees.
LtWorf 5 hours ago|||
Software engineers graduates I've met are usually much worse at programming than computer science graduates.
traderj0e 3 hours ago||
That too
patrick-elmore 3 hours ago||||
I've seen it happen multiple times. Engineering degrees are no different from the vast majority of degrees: if you are good at the read-and-regurgitate cycle, you can make it through. Not only can you make it through, but you can do it with a very respectable GPA. Graduates come out with a large dictionary of keywords in their arsenal, but no idea how to put them into practice.

Some are able to put it into practice and tie it all together. As they see practical examples of those keywords in the real world, it starts falling like dominoes, and at an accelerating rate. For others, it never goes much beyond keywords. The dominoes fall, but slowly, and they stop falling for extended periods of time.

Not many mature engineering organizations can tolerate that sort of progression rate. Those people usually don't last very long at any one place, until they find a company where they can blend into the background due to a combination of company culture and low-complexity systems being worked on.
spacechild1 8 hours ago||||
OP should have put "engineers" in double quotes. Many software developers like to describe themselves as engineers although they don't have an actual engineering degree. A lot of software development resembles plumbing more than engineering, so most devs don't really need an engineering degree anyway, but they should be more honest about what they're actually doing and not try to elevate themselves with fancy titles.

You are, of course, right that the idea that someone could finish a serious engineering degree without being able to think is ridiculous.

dml2135 4 hours ago||
You can do engineering without an engineering degree. A degree is just a piece of paper.
what-the-grump 8 hours ago||||
I don't know, but I can point at more than half of the people I work with who can't think, and every time they try to, it takes a whole group of people who can think to undo their mess. They all have degrees and I don't.

So what does that tell me?

Better yet, for about 30% of them, having the LLM produce the slop directly would have yielded better outcomes; having them produce it nets terrible slop. But at least LLM output I can reshape, because even the LLM won't do something that stupid.

shagie 8 hours ago||||
A degree is passing the test. Not all degree programs get into more advanced topics, nor do they necessarily require that someone be able to work through how to solve a problem they haven't seen before.

--

A lot of students (and developers out there too) are able to follow instructions and pass the test.

A smaller portion of them are able to divide up a task into the "this is what I need to do to accomplish that task".

Even fewer of them are able to work through the process of identifying the cause of a problem they haven't seen before and work through to figure out what the solution for that problem is.

--

... There are also a lot of people out there that aren't even able to fall into the first group without copying and pasting from another source. I've seen the "stack sort" at work https://xkcd.com/1185/ https://gkoberger.github.io/stacksort/ professionally. People copying and pasting from Stack Overflow (back in the day) without understanding what they're writing.

Now, they do it with AI. Take the contents of the Jira description, paste it into some text box, submit the new code as a PR, take the feedback from the PR and paste it back into the box, and repeat that a few times. I've seen PRs with "you're absolutely correct, here are the updates you requested" sent back to me for review again.

This is not a new thing. AI didn't cause it, but AI is exacerbating the issue in professional programming by making the people who are not much more than some meat between one text box and another (yes, I'm being a bit harsh there), and the people who need instructions but don't understand design, more "productive" while overwhelming the more senior developers.

... And this also becomes a set of permanent training wheels for developers who might be able to learn more if they had to. That applies at all levels. One needs to practice without training wheels and learn from mistakes to get better.

awesome_dude 8 hours ago|||
Mate, have you never had to deal with over-confident graduates who think they've got the complete answers, but, in reality, they only have a sliver of the whole picture in their minds?
operatingthetan 8 hours ago||
That is different than the suggestion that one could graduate with a CS degree and "never think." Which is absurd.
synergy20 2 hours ago||
Easier said than done. Once you are given a lazy way to do things faster, easier, and mostly better, it's hard to go back. This is by design. There is no turning point. This addiction is as strong as drugs, I feel.
Unmotivator2677 8 hours ago||
That's why I don't use AI for any personal projects; I like to keep my mind sharp. Unless it's a project that incorporates AI in some way, but even then I don't use AI to code it. At work I don't care, I do what I am paid for. If my manager wants me to entirely vibe code using Claude, that's his choice; I will not be the one paying for the technical debt that creates.
beej71 32 minutes ago|
100% agree.

In the middle ground:

I'm putting together exercises for a C/Systems programming class I'm teaching in the fall.

Partway through this, for some reason [cough procrastination cough], I thought it would be fun to implement them in Scheme. My Scheme was already poor, and what meager skills I had were completely rusty. I used Claude to great effect as a tutor for that, but didn't have it code any of the solutions at all, of course. I could tell I was leveling up fast as I coded the things up.

Gotta use it in the right way if one wants to sharpen one's skills.

0xbadcafebee 8 hours ago|
No, AI is not creating that group of people. They already existed. They were the people who would google for StackOverflow snippets and copy+paste them without even reading the entire snippet, much less understand them. Same people, new tool.
koshyjohn 4 hours ago||
100% agree. The key difference now, though, is that it's no longer a 'sink or swim immediately' situation, which used to be a forcing function against intellectual laziness where laziness was a choice.
traderj0e 3 hours ago|||
Many people by now have probably seen a teammate who used to be a good SWE, now spamming slop code that puts all the real work on the reviewer. That's the "second group."
therealdrag0 2 hours ago||
Tell them no. That's what I do. I have rejected multiple PRs that were too large and lacked proper design or alignment upfront. With code being so cheap, rejecting it should be just as cheap. Set cultural standards that devs need to review their own code before asking for reviews. Etc etc
clutter55561 8 hours ago||
Exactly what I posted as well!
More comments...