Posted by spf13 11/3/2025
Sure, it’s good to be aware of future business needs so that we, as technical people, can be asking the right questions to prepare for what that future may look like, but that almost never means a decision about language x over language y. It’s much deeper than that.
That's an example. It is not the subject under discussion.
I also don't think it's necessarily bad that people do this. The input to any decision a person makes includes their entire life experience up to that point[1]. How could an executive encode all that in some kind of pat logical explanation, and how could the also-human engineers at the company possibly digest such an explanation, and what could make it more compelling to them than their own life experiences? People need to get through life, though, so they need to make decisions. They can't fully rationalize every single one, but they want to feel at least OK about the decisions they're making, so they tell themselves and each other these incomplete little stories and get on with it. That managers scaffold this process with their own stories is a little manipulative, but how else could people cooperate enough to have companies? The whole process just seems intrinsically human to me.
The most important part of being an executive is understanding all of this and choosing to hire people who will ultimately make good strategic decisions for you. Don't hire a well-known Perl contributor as your CTO unless you like the idea of rewriting your product in Perl. If your company is dying because this has happened, my condolences but at least you're not alone.
Edit: I hadn't read this far when I wrote the comment but the author also literally says, "The moment you hire a Rust developer to evaluate languages, you’ve already chosen Rust." I guess I just disagree that it could work differently. Each of us possesses bounded knowledge and bounded rationality, and "which language is best" is probably too complicated for an individual to figure out, especially when you don't even know what the roadmap will be in a year—you'd have to build the company several times in several languages and compare the results (and the best engineers I've met do write code multiple times, but rarely more than twice IME). Each of us can only really know how we would solve the company's problems. Executives' job is to try to guess, and make decisions that are ideally optimal but at least internally consistent.
[1] My favorite example of this, actually: even in the highly-rational field of scientific research, scientists have to decide whether a given body of evidence is dispositive of a particular theory, and the standards they apply likewise depend on who they are and what their life experience is. So, as Max Planck put it, science advances one funeral at a time.
However, we continue to write new code in C++ due to libraries and because people know it. These are all factors that the OP considers to be negligible.
The conventional wisdom was "Where will we get C++ programmers?", "We don't have experience with C++", "C++ is too bleeding edge", and so on. The same excuses people give today to not use Rust, or your favorite hyped language.
If we follow the logic of OP, we will almost never develop new languages, because there are already multiple established languages good enough for any task at hand.
- Look at people using identity-focused reasoning with politics
- It’s research!
- That’s the same as those <programming language tropes>
- Because I have some anecdotes about that
But the person making the argument couldn’t be falling for the same thing? Nope.[1]
Maybe it would be nice to have more objective metrics. So what would that look like?
> We need a framework that makes the invisible costs visible. One that lets us have the economic conversation instead of the identity conversation. One that works whether you’re choosing your first language or evaluating a migration.
> Our industry has never really had that framework… Until now.
I better Stay Tuned.
[1] Those irrational people: making arguments. Me, the rational one: also making arguments, but mine are correct.
Hardly, no.
The entire point of the article is pointing out an entirely different cognitive fallacy.
At the very least the title is clear Bulverism.
Admittedly, I did enjoy the article, even if the title initially predisposed me a bit against it. And I think the quote "if you hire a rust evangelist to choose your programming language, you've already chosen rust" is very insightful. But the author still goes into zero depth to assess to what extent irrational language choice is a widespread problem, as opposed to isolated cases they've seen during their career.
Which is ironic given that their thesis is that people go into topics emotionally without weighing them factually against the ground truth.
AFAICS you don't seem to have even noticed the initial example.
Engineers often aren't rational because engineers can still be stupid. Dogmatism/black-and-white thinking is often a sign of low emotional intelligence (and can also be a defense mechanism called "splitting").
The Dunning-Kruger effect also applies to "smart" people. There's no point at which you start estimating your ability correctly and stop: as you learn more, you gain more awareness of your ignorance and continue being conservative with your self-estimates.
I mean, it's largely a statistical artifact around which a pop science myth has accumulated, but on its own terms it applies smoothly and continuously across the entire range of ability in a domain, not in any special way on just one side of a binary knowledge dividing line (the finding was basically that people across the whole range of ability tend to rate their own relative ability closer to the 70th percentile than it actually is, but their self-estimates still increase monotonically with actual relative ability).
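To make that pattern concrete, here's a minimal toy simulation (my own illustrative model with made-up parameters, not the study's data or method): self-estimates are modeled as true percentile pulled toward a 70th-percentile anchor, plus noise. The bottom quartile overestimates, the top quartile underestimates, yet the estimates still rise monotonically with actual ability—exactly the "regression toward the 70th percentile" shape described above.

```python
import random

random.seed(0)

def self_estimate(true_percentile, pull=0.6):
    # Hypothetical model: weighted average of true ability and a
    # ~70th-percentile anchor, plus a little Gaussian noise.
    est = (1 - pull) * true_percentile + pull * 70
    return est + random.gauss(0, 5)

# Mean self-estimate within each quartile of true ability.
quartile_means = []
for lo in (0, 25, 50, 75):
    ests = [self_estimate(p) for p in range(lo, lo + 25)]
    quartile_means.append(sum(ests) / len(ests))

# Bottom quartile's mean estimate sits well above its true mean (~12),
# top quartile's sits below its true mean (~87), but the four means
# still increase monotonically.
print(quartile_means)
```

Under this toy model there's no "dividing line" where the effect switches on; the miscalibration shrinks smoothly as true ability approaches the anchor.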