Posted by petethomas 17 hours ago
AI-generated code still requires software engineers to build, test, debug, deploy, secure, monitor, be on-call, support, handle incidents, and so on. That's very expensive. It is much cheaper to pay a small monthly fee to a SaaS company.
So what happens is a corporation ends up spending a lot of money for a square tool that they have to hammer into a round hole. They do it because the alternative is worse.
AI coding does not yet let you build anything even mildly complex with no programmers. But it does reduce by an order of magnitude the amount of money you need to spend to program a solution that would work better.
Another thing AI enables is significantly lower switching costs. A friend of mine owned an in-person and online retail business that was early to the game, having come online in the late 90s. I remember asking him, sometime around 2010, when his store had become very difficult to use, why he didn't switch to a more modern selling platform, and the answer was that it would have taken him years to get his inventory moved from one system to another. Modern AI probably could've done almost all of that work for him.
I can't even imagine what would happen if somebody like Ford wanted to get off of their SAP or Oracle solution. A lot of these products don't withhold access to your data, but they also won't provide it to you in any format that could be used without a ton of work that, until recently, would've required a large number of man-hours.
Same story with data models, let's say you have the same data (customer contact details) in slightly different formats in 5 different data models. Which one is correct? Why are the others different?
Ultimately someone has to solve this mystery and that often means pulling people together from different parts of the business, so they can eventually reach consensus on how to move forward.
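As a sketch of what reconciling those slightly different formats can look like in practice (the field names, schemas, and normalization rules here are invented for illustration, not from any real system):

```python
# Hypothetical sketch: two "data models" store the same customer contact
# details under different field names and formats. A normalization step
# maps both onto one canonical shape so they can be compared.
def normalize(record: dict) -> dict:
    """Map a record from any of several source schemas onto one canonical shape."""
    email_keys = ("email", "e_mail", "contact_email")
    email = next((record[k] for k in email_keys if k in record), None)
    return {
        "name": (record.get("name") or record.get("full_name") or "").strip().title(),
        "email": email.strip().lower() if email else None,
    }

crm = {"full_name": "ada lovelace", "e_mail": "Ada@Example.com "}
billing = {"name": "Ada Lovelace", "contact_email": "ada@example.com"}

print(normalize(crm) == normalize(billing))  # True: the models agree once normalized
```

Of course, the hard part the comment describes is deciding which normalization rules are correct, and that consensus still has to come from people, not code.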
Could you share any data on this? Are there any case studies you could reference or at least personal experience? One order of magnitude is 10x improvement in cost, right?
We are currently sunsetting our use of Webflow for content management and hosting, and are replacing it with our own solution, which Cursor & Claude Opus helped us build in around 10 days.
So, basically you made a replacement for webflow for your use case in 10 days, right?
No way. We're not talking about a standalone AI-created program for a single end user, but an entire integrated e-commerce enterprise system that needs to work at scale and volume. Way harder.
Their initial answer/efforts seem to be a very qualified "Possibly" (hah).
They talked of pattern matching and recognition being a very strong point, but yeah, edge cases trip things up, whether corrupt data or something very obscure.
Somewhat like the study of MRIs and CTs of people who had no cancer diagnosis but would later go on to develop cancer (i.e. they were sick enough that imaging and testing was being ordered, but there were no or insufficient markers for a radiologist/oncologist to make the diagnosis; in short order they did develop those markers). AI was very good at analyzing the data set and saying, with high accuracy, "this person likely went on to have cancer", but couldn't tell you why or what it found.
Financial considerations aside, one advantage of having in-house engineers is that you can get custom features built on-demand without having to be blocked on the roadmap of a SaaS company juggling feature requests from multiple customers...
This is what a CEO is supposed to do. I wonder if CEOs are the ones OK with their data being used and sent to large corps like MS, Oracle, etc.
The code they write is highly domain-specific, implementation speed is not the bottleneck, and their payroll for developers is nothing compared to the rest of the business.
AI would just increase risk for no reward.
Many larger enterprises do both – buy multiple SaaS products, and then have an engineering team to integrate all those SaaS products together by calling their APIs, and build custom apps on the side for more bespoke requirements.
To give a real world example: the Australian government has all these complex APIs and file formats defined to integrate with enterprises for various purposes (educational institutions submitting statistics, medical records and billing, taxation, anti-money laundering for banks, etc). You can't just vibe code a client for them. The amount of testing and validation you have to do with your implementation is huge, and if you get it wrong, you are sending the government wrong data, which is a massive legal risk.

And then, for some of them, the government won't let you even talk to the API unless you get your product certified through a compliance process which costs $$$. Or, you could just buy some off-the-shelf product which has already implemented all of that, and focus your internal engineering efforts on other stuff.

And consider this is just one country, and dozens of other countries worldwide do the same thing in slightly different ways. But big SaaS vendors are used to doing all that; they'll have modules for dealing with umpteen different countries' specific regulations and associated government APIs/file formats, and they'll keep them updated since they are forever changing due to new regulations and government policies. And big vendors will often skip some of the smaller countries, but then you'll get local vendors who cover them instead.
And that's just Atlassian.
Start adding stuff that costs many, many yearly salaries (special software for managing inventories and warehouses) and it starts making sense to prototype alternatives internally.
I came to the conclusion that if it's not Teams/SharePoint or the moat is on the extreme legal complexity side (e.g. payrolls), you can at least think of building an alternative that is good enough without needing to be perfect.
Where would we be without them!?
The real benefit of these types of SaaS offerings was their ubiquity across multiple industries and verticals. If a company bought Salesforce, they could very readily find employees able to quickly onboard, since those employees would likely have used it at previous companies. AI software generation is changing this: more and more software being created is bespoke and increasingly one-of-a-kind, with these tools allowing companies to create software that fits their unique and specific needs.
My hot take here is that the moats previously enjoyed by SaaS companies will increasingly vanish as smaller and smaller teams can assemble "good enough" solutions that companies will adopt instead of spending giant chunks of their budget on pre-built SaaS tools that will increasingly demand more training to onboard.
why do people pay Red Hat/IBM for RHEL? they earn pretty good margins too. to parent's point that software != code
My buddy works for a company like these. He landed a $5M contract last year, which netted him almost $800k. There's a lot of fat to be trimmed from this stuff, and AI will help smaller entrants attack those margins.
AI-based startups like Vanta make it much easier for companies to meet the compliance bullshit the large companies require. Again, it will drive more competition == better values for customers.
No, they don't.
A domain expert armed with an Excel spreadsheet and the ability to write VBA macros will be enough for most businesses.
A prime example of this was the Reinhart/Rogoff paper advocating austerity that was widely quoted, and then it was discovered that the spreadsheet used had errors that invalidated the conclusions:
https://en.wikipedia.org/wiki/Growth_in_a_Time_of_Debt#Metho...
Just because technology is in use and "works" doesn't mean it's always correct.
The point is not that people will be using specifically Excel, but that most businesses only pay for software because it is the tool that gives them the most power to automate their processes. They don't need high availability, they don't need standards compliance, they don't need extensive automated tests, they won't need cloud engineers and SREs... all you need is some tool that can get the results you are looking for right now.
Academia already works like this. Software written for academic purposes is notoriously "bad" because it is not engineered, but that doesn't matter because it is good enough to deliver the results that researchers need. Corporate IT will also start looking like this, even at mid-sized companies.
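A toy example of the kind of "good enough" throwaway tool this describes, the spreadsheet-with-a-macro equivalent in script form (the invoice data and column names are made up for illustration):

```python
import csv
import io

# A stand-in for exported spreadsheet data; in practice this would be a
# real CSV file, not an inline string.
raw = """customer,amount
Acme,120.50
Acme,79.50
Globex,300.00
"""

# No tests, no error handling, no deployment: just the answer, right now.
totals = {}
for row in csv.DictReader(io.StringIO(raw)):
    totals[row["customer"]] = totals.get(row["customer"], 0.0) + float(row["amount"])

print(totals)  # {'Acme': 200.0, 'Globex': 300.0}
```

Nobody would ship this as a product, and nobody needs to: it delivers the result the domain expert wanted, which is the whole point.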
I've been in ops for a long time and have encountered far too many "our IP addressing plan is just a spreadsheet with manual reconciliation".
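To be fair, even the spreadsheet-based IP plan could be reconciled automatically for its classic failure mode, overlapping allocations. A minimal sketch using Python's standard ipaddress module, with made-up subnets standing in for the spreadsheet rows:

```python
import ipaddress
from itertools import combinations

# Hypothetical subnet allocations; in practice these would be read out
# of the spreadsheet rather than hard-coded.
plan = {
    "office-lan": "10.0.0.0/24",
    "guest-wifi": "10.0.1.0/24",
    "lab":        "10.0.0.128/25",  # deliberately overlaps office-lan
}

nets = {name: ipaddress.ip_network(cidr) for name, cidr in plan.items()}

# Check every pair of allocations for overlap.
for (a, na), (b, nb) in combinations(nets.items(), 2):
    if na.overlaps(nb):
        print(f"overlap: {a} ({na}) and {b} ({nb})")
```

This prints the one conflicting pair (office-lan and lab), exactly the kind of error that manual reconciliation tends to miss.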
I truly wonder if Excel and all its predecessors and direct clones (Google Sheets, etc.) are holding the industry back from making something truly better and more reliable.
But the reasons the business software sector grew far beyond Excel of the 1990s is because of the inherent limitations in scaling solutions built by business analysts inside of Excel. There's a vague cutoff somewhere in the middle of the SMB market where software architecture starts to matter and the consequences for fuckup are higher than the cost of paying for professionally made software with, importantly, a vendor on the hook for making sure it doesn't fuck up.
or for a more charitable comment, I think the issue people struggle with right now is how much of non-AI software will be replaced by AI-native versions. and it's not even a 1:1 mapping. we may see 5 different small companies replaced by a single AI interface. all TBD, but there's merit to avoiding that risk right now if you can just allocate to NVDA and GOOG instead
AI "generated" code requires a large base of training data to draw from. If we all stop writing code, then there will be no new code written, just rehashes of stolen ideas. There is no long tail to this industry or ideal.
> That's very expensive.
As long as you convince someone else to pay the bill who cares? The real problem is are you losing your competitive edge? If everyone else can crank out the same stolen crap you can then there is no reason for you to even exist.
The little one-off programs that we thought would keep developers busy forevermore don't require engineers. They often don't even require code. LLMs can natively do a lot of things that historically would have required software.
In the olden days it would have taken considerable engineering resources to produce a comparable tool. That is no longer the case.
For everything else, there’s open source.
However, if I was a wall street analyst and believed the AI dreams I would further be concerned that software companies aren't taking advantage of the last remnants of value before software (and maybe labor) values go to zero.
If you've got a gold mine and have recently built the most efficient shovels in the world, why not bring in mass amounts of workers to use those shovels before all the neighboring mines do? Once all that gold is on the market, the price crashes, so it's better to be one of the first mines in to dig out all possible value.
I think you either don't believe in the AI hype, which means a lot of silicon valley companies are tremendously overvalued. Or you do, in which case another huge part of silicon valley is overvalued especially when they are not looking to out-innovate their peers (as evidenced by downsizing), but just riding the wave of AI until what they are selling has no marginal value over some guy coding alone in his bedroom. SV is putting itself into a weird position, but still has some time for financial buffoonery before the party stops.
Because they are completely consumed by the need to increase margins, which they think they will be able to do with AI by laying off a lot of people. But the SaaS economy is connected and based on per-user pricing, so as layoffs continue, the SaaS economy is showing its biggest weakness. SaaS companies also seem to embrace AI so much that they would rather add another summarise button than actually make something which can't be copied easily by competitors.
I.e., before software automates a business process, it's typically done by hand, by a real person.
What if someone sells a "virtual person" that's capable of doing the job? What if that "virtual person" is harder to train than a real person, but orders of magnitude easier than writing custom software or custom business rules?
More importantly: What if the "virtual person" can explain the job they do much better than trying to read source code? That's very useful in ~30ish years when the "virtual person" understands the business process better than the people in the company, and someone is trying to update / streamline processes.
That being said, it still requires some engineering background to come up with interesting ideas and solutions with the help of LLMs but even that might be replaced.
SaaS companies need to start reading the writing on the wall: the massive valuations they enjoyed when software was harder to create will need to be justified.
The stuff you do in-house is probably still going to be tied deeply to your internal processes. Admin dashboards, special workflows integrating with different systems, etc.
I don't see how the economics of SaaS will remain the same when their value is a function of capital and labor expended, both of which are now required in smaller amounts. So please explain how this doesn't lead to an increase in supply and downward pressure on value?
There are more computers now than there ever have been. More people in more parts of the world have them than ever before. If you have this perspective you may just be locked in a first-world corporate nightmare that has stolen from you all vision and imagination.
And it gets "worse": billions and billions of chips, i.e. computers, are produced every year, and the number is increasing.
Billions of people will get access to the stuff that was around for us "since ever" for the first time in their whole life.
Market Cap over doubled between 2021 and 2025.
But since the start of 2025, it has lost all of that.
13% down in the last week. 20% in the last month. Six months looks definitely bleaker than those numbers: 37% down.
I know, you are saying they will adapt. Perhaps, while also cutting 40% (if not more) of personnel during the pivot, and perhaps also facing more challenges from faster-moving competition.
Like, look for a second: why didn't Google create what the Perplexity newsfeed is, given they actually did, like, 10 years ago, and then close to nobody was using it? The equilibrium seems super unstable. What happens if a smart kid devises a way to compress this information 10x? That immediately means neural chips stall.
This volatility is something, not a joke. The second-order effects may be unforeseeable in an unparalleled way. Besides, the Luddites organize much better in 2026, given Reddit etc.