Posted by amarsahinovic 1/10/2026
I agree somewhat, but eventually these can be automated with AI as well.
Like sure, there is a bunch of stuff like monitoring and alerting that tells us a database is filling up its disk. This is already automated. Remediation could also have been automated with tech from the 2000s using some simple rule-based systems (so you can understand why they misbehaved, instead of entirely opaque systems that just do whatever).
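To make that concrete, here's a minimal sketch of the kind of transparent, rule-based remediation that has been feasible since the 2000s. The thresholds, rule table, and remediation actions (`purge_old_logs`, `page_oncall`) are hypothetical placeholders; only `shutil.disk_usage` is a real stdlib call:

```python
import logging
import shutil

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("remediation")

def purge_old_logs(mount: str) -> None:
    log.info("purging rotated logs on %s", mount)
    # ... delete old *.log.N files here (omitted) ...

def page_oncall(mount: str) -> None:
    log.info("paging on-call: %s above critical threshold", mount)

# Hypothetical rule table: (threshold fraction, rule name, action).
# Ordered from most to least severe; first match wins.
RULES = [
    (0.95, "page_oncall", page_oncall),        # critical: escalate to a human
    (0.85, "purge_old_logs", purge_old_logs),  # warning: run a safe cleanup
]

def check_disk(mount: str) -> None:
    usage = shutil.disk_usage(mount)
    used_frac = usage.used / usage.total
    for threshold, name, action in RULES:
        if used_frac >= threshold:
            # Every decision is logged, so you can see exactly why it fired.
            log.info("%s at %.0f%% >= %.0f%%: running %s",
                     mount, used_frac * 100, threshold * 100, name)
            action(mount)
            return
    log.info("%s at %.0f%%: no rule matched", mount, used_frac * 100)

if __name__ == "__main__":
    check_disk("/")
```

The point being: when a rule like this misfires, the log tells you which threshold tripped and which action ran, which is exactly the legibility an opaque system gives up.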
The thing is, though, very often the problem isn't the disk filling up or fixing that.
The problem is rather figuring out what silly misbehavior the devs introduced, whether a PM had a strange idea they didn't validate, whether this is backed by a business case and warrants more storage, whether your upstream software has a bug, or whatever else. And then more stuff happens and you need to open support cases with your cloud provider because they just broke their API to resize disks, ...
And don't even get me started on trying to organize access management with a minimally organized project consulting team. The ADFS config that results from that is the trivial part.
Or take how Meta downloaded 70TB+ of books and then got law enforcement to nuke libgen and z-lib to create a "moat". When all our tools start dying or disappearing because the developers were laid off since an AI "search engine" just regurgitates their work, THEN and only then will most people understand what a mistake this was.
Let's not even begin with what Grok just recently did to women on X; completely unacceptable. I really, really wish the EU would grow a spine and take a stand. It is clear that China is just as predatory as America, and both are willing to burn it all in order to get a non-existent lead in a non-existent "technology" that snake-oil salesmen have convinced 80-year-olds in government is the next "revolution".
Not to nitpick, but if we are going to discuss the impact of AI, then I'd argue "AI commoditizes anything you can specify" is not broad enough. My intuition is "AI commoditizes anything you can _evaluate/assess_." For software automation we need reasonably accurate specifications as input, and we can more or less predict the output. We spend a lot of time managing the ambiguity on the input side. With AI, that is flipped.
In AI engineering you can move the ambiguity from the input to the output. For problems where there is a clear and cheaper way of evaluating the output, the trade-off of moving the ambiguity is worth it. Sometimes we have to reframe the problem as an optimization problem to make it work, but it is the same trade-off.
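As a sketch of that trade-off: when the evaluator is cheap, you can tolerate ambiguous generation and simply keep the best-scoring candidate. Everything here is a placeholder; `generate_candidates` stands in for sampled LLM completions, and the toy scorer stands in for whatever you actually trust (tests, a linter, a simulator):

```python
from typing import Callable

def generate_candidates(prompt: str, n: int) -> list[str]:
    """Stand-in for n sampled LLM completions (hypothetical)."""
    return [f"candidate {i} for: {prompt}" for i in range(n)]

def evaluate(output: str) -> float:
    """Cheap, deterministic scorer; in practice this would be a test
    suite, a linter, or any metric you trust more than the spec."""
    return -len(output)  # toy objective: prefer shorter outputs

def best_of_n(prompt: str, n: int = 8,
              score: Callable[[str], float] = evaluate) -> str:
    # Ambiguity lives in the outputs; the evaluator resolves it.
    candidates = generate_candidates(prompt, n)
    return max(candidates, key=score)

print(best_of_n("summarize the incident report"))
```

The specification can stay loose, because selection pressure comes from the evaluation step rather than from the input.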
On the business model front: [I am not talking specifically about Tailwind here.] AI is simply amplifying systemic problems most businesses just didn't acknowledge for a while. SEO died the day Google decided to show answer snippets a decade ago. Google as a reliable channel died the day Google launched Local Services Ads. Businesses that relied on those channels were already bleeding slowly; AI just made it sudden.
On the efficiency front, most enterprises could have been so much more efficient if they could actually build internal products to manage their own organizational complexity. They just could not, because money was cheap, so the ROI wasn't quite there, and even when the ROI was there, most of them didn't know how to build a product for themselves. Just saying "AI first" makes the ROI work, for now, so everyone is touting AI efficiency. My litmus test is fairly naive: if you are growing and you found AI efficiency, then that's great (e.g. FB), but if you're not growing and the only thing AI can do for you is "efficiency", then there is a fundamental problem no AI can fix.
> if you are growing and you found AI efficiency, then that's great (e.g. FB), but if you're not growing and the only thing AI can do for you is "efficiency", then there is a fundamental problem no AI can fix.
exactly, "efficiency" nice to say in a vacuum but what you really need is quality (all-round) and understanding your customer/marketI want to be clear, it sucks for Tailwind for sure and the LLM providers essentially found a new loophole (training) where you can smash and grab public goods and capture the value without giving anything back. A lot of capitalists would say it’s a genius move.
This is completely wrong. Agents will not just be able to write code, as they do now; they will also be able to handle operations and security, continuously checking and improving the systems, tirelessly.
You can't prompt this today. Are you suggesting it might come literally tomorrow? In 10 years? 30? Will your comment only become relevant at that unknown time?
But these are MY agents. They are given access to MY domain knowledge in the way that I configured. They have rules defined by ME over the course of multi-week research and decision-making. And the interaction between my agents is also defined and enforced by me.
Can someone come up with a god-agent that will do all of this? Probably. Is it going to work in practice? Highly unlikely.
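For a rough idea of what "my agents, my rules" could look like in practice, here is a minimal sketch. The agent names, knowledge sources, action lists, and the config shape are all hypothetical; the point is that the rules and inter-agent permissions live in code I control, not in a prompt:

```python
from dataclasses import dataclass, field

@dataclass
class AgentConfig:
    """Hypothetical per-agent policy: what it knows and what it may do."""
    name: str
    knowledge_sources: list[str]   # MY domain knowledge, MY choice
    allowed_actions: list[str]     # rules decided over weeks of research
    may_call: list[str] = field(default_factory=list)  # enforced interactions

AGENTS = {
    "triage": AgentConfig(
        name="triage",
        knowledge_sources=["runbooks/", "incident-history.db"],
        allowed_actions=["read_logs", "open_ticket"],
        may_call=["remediator"],
    ),
    "remediator": AgentConfig(
        name="remediator",
        knowledge_sources=["runbooks/"],
        allowed_actions=["restart_service"],  # deliberately narrow
        may_call=[],                          # cannot delegate further
    ),
}

def authorize(caller: str, callee: str) -> None:
    """Agent-to-agent calls are defined and enforced here, not by the model."""
    if callee not in AGENTS[caller].may_call:
        raise PermissionError(f"{caller} may not invoke {callee}")

authorize("triage", "remediator")  # ok; the reverse would raise
```

A god-agent would have to subsume all of these hand-built constraints at once, which is exactly why it is unlikely to work in practice.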
Please wake me up when Shopify lets a bunch of agentic LLMs run their backends without human control and constant supervision.