Posted by milkglass 7 hours ago
With LLMs this is no longer true - the thing can vibe a great deal before anyone notices that they have 100,000 lines of code doing what a focused, human-reviewed and tested 10,000 lines could do. And as this goes on, it becomes increasingly difficult for anyone to actually dig into and fix things in the 100,000 without the help of LLMs (thus adding even more slop to the pile).
They did not properly prepare and as a result lost 20% of their territory in days.
Days after that I was back in Austria and could not stop thinking about some of the people I had spoken with being dead.
Since then I have also been in Dubai and Saudi Arabia as an entrepreneur and engineer. "What are you going to do when drones are used against your infrastructure?" If you followed the Russian war and the first Iranian strike, it was obvious that drones were going to be used against them. "Not going to happen", again.
They have lost tens of billions for lack of proper preparation. They could have protected themselves by spending just hundreds of millions of dollars over the years.
It is about humans, not AI.
Ukraine has been preparing since 2014. Without preparation there would be a Russian talking head right now in Kyiv.
> A small group of officers at HUR, Ukraine’s military intelligence agency, did begin quiet contingency planning in January, prompted by the US warnings and the agency’s own information, one HUR general recalled. Under the guise of a month-long training exercise, they rented several safe houses around Kyiv and took out large supplies of cash. After a month, in mid-February, the war had not yet started, so the “training” was prolonged for another month.
> The army commander-in-chief, Valerii Zaluzhnyi, was frustrated that Zelenskyy did not want to introduce martial law, which would have allowed him to reposition troops and prepare battle plans. “You’re about to fight Mike Tyson and the only fight you’ve had before is a pillow fight with your little brother. It’s a one-in-a-million chance and you need to be prepared,” he said.
> Without official sanction, Zaluzhnyi did what little planning he could. In mid-January, he and his wife moved from their ground-floor apartment into his official quarters inside the general staff compound, for security reasons and so he could work longer hours. In February, another general recalled, table-top exercises were held among the army’s top commanders to plan for various invasion scenarios. These included an attack on Kyiv and even one situation that was worse than what eventually transpired, in which the Russians seized a corridor along Ukraine’s western border to stop supplies coming in from allies. But without sanction from the top, these plans remained on paper only; any big movement of troops would be illegal and hard to disguise.
[0] https://www.theguardian.com/world/ng-interactive/2026/feb/20...
Take millions playing the lottery. To each of them, I can confidently say "you won't win, not gonna happen". For almost all of them I'll be right. There will be one who wins, where I was wrong, and they will say "see, told you so". That doesn't mean my prediction was wrong. It means you have a reporting bias.
That's not to say the country wasn't prepared, though. If the GP did talk to people on the ground days before it started, "it won't happen" would match the public line coming out of the Ukrainian government and its allies at the time. They knew it was coming and seem to have decided it was better to feign unreadiness and avoid public panic before it started.
They did, though. While nobody actually believed Putin would be dumb enough, the Ukrainian army was still, just in case, extremely busy preparing defences, organising stockpiles, and working out defensive tactics.
I'm not sure why you'd say nobody thought they would invade. To me it was clear in December the year before, when the Russian navy began sailing the long way around Europe and getting in the way of Irish fishermen, and it was confirmed days before the invasion when they stockpiled medical personnel and blood on the front lines.
Why would we listen to anything from you about right or wrong, then, if you don't care?
Automation is the exact opposite of tying knowledge to people. It's extracting knowledge from people and transferring it to a machine that can continue to produce the goods.
Yes, AI can lead to problems and some of these problems will be related to gaps in knowledge that was thought to be obsolete when it really wasn't. But that's a totally different problem on a totally different scale from what happened with defense production after the end of the cold war.
Nobody is shutting down or reducing software production. On the contrary, we're going to be making a lot more of it.
The opening paragraph is ridiculous. The FIM-92 Stinger is obsolete. It was replaced by the FGM-148 Javelin. DACH (Germany, Austria, Switzerland) didn't forget how to make things. They are still world class at manufacturing. (Northern Italy is also economically part of that manufacturing mega-hub.)
There are plenty of NLAWs (much cheaper than the Javelin, and only slightly less capable) in EU/NATO stocks to satisfy Ukraine's needs against heavily armoured Russian main battle tanks. For everything else, you can use one or two suicide drones to kill anything with a motor.
And now to give credit where credit is due:
Looking at his (assumed) LinkedIn profile: https://www.linkedin.com/in/denjkestetskov/
It looks like he was educated in Ukraine, so he is likely a Ukrainian national. If I were Ukrainian, then I too would be publishing rage bait like this in an attempt to pressure allies to provide more funding, weapons, and gear.
As a final suggestion, the writer can visually spice up his blog post with one of my all time favourite military photos from Wiki: https://commons.wikimedia.org/wiki/File%3AFIM-92_Stinger_USM...
Well then train them, instead of selecting 0.18% of applicants and calling it a day.
It's not some innate, immutable property - people can be taught even in adulthood.
Also it's not like they'll work for a year and switch jobs - not in the current market.
If you REALLY need something long-forgotten, then you have to lazy-load it back into being at significant cost. That's the price of constant progress.
COBOL is a bad example, but higher-level languages vs. assembly is not. If you write a lot of C you really don't need to know assembly... until you stumble across a weird gcc bug and have no clue where to look. If you write a lot of C# you don't really need to know anything about C... until your app is unusably slow because you were fuzzy on the whole stack/heap concept. Likewise with high-level SSGs and design frameworks when you don't know the HTML/CSS fundamentals.
As the author says maybe AI is different. But with manufacturing we were absolutely confusing "comfortable development" with "progress." In Ukraine the bill came due, and the EU was not actually able to manufacture weapons on schedule. So people really should have read to the end of "building a C compiler with a team of Claudes":
> The resulting compiler has nearly reached the limits of Opus’s abilities. I tried (hard!) to fix several of the above limitations but wasn’t fully successful. New features and bugfixes frequently broke existing functionality.
At least with Opus 4.6, a human cannot give up "the old ways" and embrace agentic development. The bill comes due. https://www.anthropic.com/engineering/building-c-compiler

Even in the Before Times, it was much cognitively cheaper to write code than to read someone else's code closely, manage lots of independent code across a team, or make a serious change to existing code. It's so much easier to just let everyone slap some slop on the pile and check off their user stories. I think it will take years to figure out exactly what the impact of LLMs on software is. But my hunch is that it'll do a lot of damage for incremental benefit.
With the sole exception of "LLMs are good at identifying C footguns," I have yet to see AI solve any real problem I've personally identified with the long-term development and maintenance of software. I only see them making things far worse in exchange for convenience. And I am not even slightly reassured by how often I've seen a GitHub project advertise thousands of test cases, only for a sample of those test cases to turn out 98% redundant or useless. Or by the studies which suggest software engineers consistently overestimate the productivity benefits of AI and are psychologically increasingly unable to handle manual programming. Or by the chardet maintainer seemingly vibe-benchmarking his vibe-coded 7.0 rewrite, which was in reality a lot slower than 6.0, while he's still digging through regression bugs. It feels like dozens of alarms are going off.
function add(a, b) // adds two numbers
test: add(1, 2) == 3
can be "implemented" as
function add(a, b) { return 3 }
So when you have enough tests (and we do), it will deliver quality. Having AI write the tests is mostly useless. But me writing the code is not necessarily better and certainly not faster for most cases our clients bring us.
This kind of forgetting is normal. It's how things work when time and resources are finite. The only problem here is the belief that you can keep capacity to do something without actively exercising it, and thus the expectation that you can "just" resume doing things after a long break, without paying up a cold-start cost.
But you can't, and there's no reason to be surprised. I bet the Pentagon and the EU weren't. They didn't need those Stingers and shells for decades, and didn't expect to need them soon - but they knew they could get them again if they really needed to, at a cost.
I don't get why people think this is unusual or surprising, or somehow outrageous and proof of something about society or the "mindsets of elites" - other than of positive aspects like adaptability and resilience.
This is true at all scales. Your body and brain optimize aggressively, too. An individual saying "I need to warm up", or "I need to hit the gym a few times and then I'll be able to", or "yes, I can, but I haven't done it for years so I need an hour with a book/documentation..." - all of that is exactly the same as the EU going "yes, we can make artillery shells... though we haven't in a while, so we need some time and some millions of EUR to get our supply chain sorted out first".
Just as shifts in power and the rise and fall of nations are normal.
The rational thing is to address a threat proportionally to its expected damage and probability of occurrence. When war is unlikely, you scale down your defense production; when it becomes more likely, you ramp it up - paying the cold-start cost is still much cheaper than paying for ongoing readiness. If scaling down your defense makes it more likely for you to be attacked - well, it's the job of your intelligence and defense departments to track that. Nobody said it's a static system - it's a highly dynamic one, and that's what makes geopolitics hard.
Anyway, when it comes to "this is normal" I think we should take care to distinguish between interpretations of:
1. "This specific case should not have taken certain people by surprise."
2. "This is a manifestation of a broader phenomenon."
3. "This is natural and therefore cannot or should not be solved." [Naturalistic fallacy.]
4a. "If a process is unlikely to be needed any time soon, shutting it down and then paying cold-start costs if and when it's needed again is better than keeping it going and wasting resources better used elsewhere", and
4b. "There's an infinitely long tail of low-probability problems, and you can't possibly afford to maintain advance readiness for any of them".
Also on the overall sentiment:
4c. "Paying a cold-start cost isn't a penalty or sign of bad planning. It's just a cost."