Posted by jnord 1 day ago
For one thing, the threat model assumes customers can build their own tools. Our end users can't. Their current "system" is Excel. The big enterprises that employ them have thousands of devs, but two of them explicitly cloned our product and tried to poach their own users onto it. One gave up. The other's users tell us it's crap. We've lost zero paying subscribers to free internal alternatives.
I believe that agents are a multiplier on existing velocity, not an equalizer. We use agents heavily and ship faster than ever. We get a lot of feedback from users about what the internal tech teams are shipping, and based on this there's little evidence of any increase in velocity from them.
The bottleneck is still knowing what to build, not building. A lot of the value in our product is in decisions users don't even know we made for them. Domain expertise + tight feedback loop with users can't be replicated by an internal developer in an afternoon.
Many SaaS products I am interested in have very little “moat”. I am interested in them not because I can’t build them, but because my limited engineering time is better spent building business specific stuff.
Many products with product management teams spend a lot of their effort building functionality either to delight their highest paying customers, or features that are expected to be high-revenue.
I’m never going to be your highest paying customer, so I’m never going to get custom work from you (custom work being primarily orienting your workflows to the existing workflows inside your biggest customers).
What everyone wants when they buy SaaS is to get value from it immediately without having to change their internal processes, broken as they are. But your model of feature prioritization is antithetical to this: you don’t want to build or support the 5-10 integration points I want, because that would allow me to build my own customizations without paying for your upsells.
You aren’t at immediate risk of losing your big customers to agentic AI. But agentic AI is enabling me and thousands of others to build hobby projects that deliver part of your core value, with limitless integration. I expect you’ll see bleeding from the smallish customers way before you see hits from your whales.
However in a couple of years there will be OSS alternatives to what you do, and they will only become more appealing, rapidly.
As a side note it’s not just license pricing that will drive customers to agentically-coded solutions; it’s licensing terms. Nowadays whenever I evaluate SaaS or open source, if it’s not fully published on GitHub and Apache or MIT licensed, then I seriously consider just coding up an alternative - I’ve done this several times now. It’s never been easier.
Nobody is building open source software for [niche professional vertical] in their spare time. It's not mass market. It's not something a developer encounters in their daily work and thinks "I could do this better." The domain knowledge required to even understand the problem space takes months to acquire, and there's no personal payoff for doing so.
The "OSS will appear" prediction works for horizontal tools. For deep vertical SaaS, the threat model is different: it's other funded startups or internal enterprise clones (both of which we've already faced and won against).
Nowhere near the level of complexity that would enter your threat model. But this would be the first, minimal step towards customers building their own tools, and the fact that not even this workflow has entered the zeitgeist is... well, it's not the best news for some of the most bullish projections of AI adoption in businesses large and small.
It's just that the hassle of dealing with that platform tends to be similar to the hassle of setting up an app yourself, and now you're paying a per-user license cost.
That would be similar to your solution, so either one would work.
I think that there might be some similar alternatives (maybe Airtable? probably using Lovable or Firebase counts), but nothing that works for me right now.
APEX is probably just as widely used now as Access was. Access likely had higher market share but of a much smaller market. There are gazillions of APEX apps out there.
In my experience, it's actually quite hard to move a business from an Excel sheet to software, because an Excel sheet allows the end user to easily handle every edge case, and they likely don't even think in terms of "edge cases".
https://support.microsoft.com/en-us/office/copilot-function-...
Yes, certain parts of our product are indeed just lightweight wrappers around an LLM. What you're paying for is the other 99% of the stuff that's (1) extremely hard to do (and probably non-obvious), (2) an endless supply of "routine" work that still takes time, or (3) backed by an SLA/support that's more than "a random dev isn't on PTO".
Is that a bluff used to negotiate the price?
In fact, having sold stuff myself: if a lead says this, it's a huge red flag that I probably don't want to do business with them, because they're probably a "vampire customer".
Why do they have an internal engineering org at all if they can't manage the most basic maintenance of a software product?
This is the answer to a happy B2B SaaS implementation. It doesn't matter what tools you use as long as this can be achieved.
In the domain of banking front/back office LOB apps, if you aren't iterating with your customer at least once per business day, you are definitely falling behind your competition. I've got semi-retired bankers insisting that live edits in production need to be a fundamental product feature. They used to be terrified of this. Once they get a taste of proper speed it's like blood in the water. I'm getting pushback on live reloads taking more than a few seconds now.
Achieving this kind of outcome is usually more of a meat space problem than it is a technology problem. Most customers can't go as fast as you. But the customer should always be the bottleneck. We've done things like install our people as temporary employees to get jobs done faster.
Technically, only if it causes some kind of security, privacy, availability or accounting issue. The risk is high but it can be done.
Half of our customers do not have anything resembling a test environment. It is incredibly expensive to maintain a meaningful copy of production in this domain. Smaller local/regional banks don't bother at all.
That's not the threat model. The threat model is that they won't have to - at some point which may not be right now. End users want to get their work done, not learn UIs and new products. If they can get their analysis/reports based on excels which are already on SharePoint (or wherever), they'd want just that. You can already see this happening.
It's an ugly truth product owners never wanted to hear, and are now being forced to: nobody wants software products or services. No one really wants another Widgetify or DoodlyD.oo.io, another basic software tool packaged into a bespoke UI and trying to make itself the command center of work in their entire domain. All those products and services are just standing between the user and the thing the user actually wants. The promise of AI agents for end users is that of having a personal secretary that deals with all the product UI/UX bullshit so the user doesn't have to, ultimately turning these products into tool calls.
For purposes of this thread, if chat AI becomes the primary business interface, then every service behind that becomes much easier to replace.
And if you build an AI interface to your product, you can make it not work in subtly the right ways that direct more money towards you. You can take advertising money to make the AI recommend certain products. You can make it give completely wrong answers about your competitors.
I keep hearing this and seeing people buy more Widgetify or DoodlyD.oo.io. I think this is more of a defensive sales tactic and cope for SaaS losing market share.
We built an AI-powered chat interface as an alternative to a fully featured search UI for a product database and it has been one of the most popular features of 2025.
--
[0] - Or Claude, or Gemini.
Think about all the cycles this will save. The CEO codes his own dashboards. The OP has a point.
This sounds like a vibe coding side project. And I'm sorry, but whatever he builds will most likely become tech debt that has to be rewritten at some point.
Once the C-suite builds their own dashboards, they quickly decide what they actually need versus what is a nice-to-have.
If corporate decisions could be made purely from the data recorded then you don't need people to make those decisions. The reason you often do is that a lot of the critical information for decision making is brought in to the meeting out-of-band in people's heads.
Focus on the simple iteration loop of "why is it so hard to understand things about our product?" Maybe you can't fix it all today, but climb that hill instead of making your CEO spend sleepless nights on a thing you could probably build in a tenth of the time.
If you want to be a successful startup SaaS software engineer, then engaging with the current and common business cases, and being able to predict the standard set of problems they're going to want solved, turns you from "a guy" into "the guy".
I have also seen multiple similar use cases among our users, where non-technical users build internal tools and dashboards on top of existing data (I'm building UI Bakery). This approach might feel a bit risky to some developers, but it reduces the number of iterations non-technical users need with developers to get what they want.
"It" being "that it's harder than it looks"?
Honestly, I'm not sure what to expect. There are clearly things he can't do (e.g. to make it work in prod, it needs to be in our environment, etc. etc.) but I wouldn't be at all surprised if he makes great headway. When he first asked me about it, I started typing out all the reasons it was a bad idea - and then I paused and thought, you know, I'm not here to put barriers in his path.
As it stands today, just a bit of complexity is all that's required to make AI agents fail. I expect the gap to narrow over the years, of course. But capturing complex business logic and simplifying it will probably remain useful, and worth paying for, a long time into the future.
This means that any "manual" or existing workflow requiring access to several systems requires multiple IT permissions with defined scopes. Even something as simple as a sales rep sending a DocuSign might need:
- CRM access
- DocuSign access
- Possibly access to ERP (if CRM isn't configured to pass signed contract status and value across)
- Possibly access to SharePoint / Power Automate (if finance/legal/someone else has created internal policy or process, e.g. saving a DocuSign PDF to a folder, inputting details for handover to fulfilment or client success, or submitting ticket to finance so invoicing can be set up)
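The scope sprawl above is concrete enough to sketch. Here's a minimal illustration (all scope and step names are invented for this example) of declaring the scopes each workflow step needs and diffing them against what's actually been granted:

```python
# Hypothetical sketch: minimal scopes for the "sales rep sends a DocuSign"
# workflow described above. Every scope and step name is made up.
REQUIRED_SCOPES = {
    "lookup_deal":     {"crm:read"},
    "send_envelope":   {"docusign:send"},
    "update_contract": {"erp:write"},         # only if CRM doesn't sync status
    "archive_pdf":     {"sharepoint:write"},  # only if internal policy requires it
}

def missing_scopes(granted: set[str]) -> dict[str, set[str]]:
    """Return, per workflow step, the scopes still missing from `granted`."""
    gaps = {}
    for step, needed in REQUIRED_SCOPES.items():
        gap = needed - granted
        if gap:
            gaps[step] = gap
    return gaps

# A rep whose agent only has CRM and DocuSign access: the ERP and
# SharePoint steps show up as blockers needing separate IT approvals.
print(missing_scopes({"crm:read", "docusign:send"}))
```

Even this toy version shows why "just let the agent do it" runs into the permissions wall: each extra system is another approval cycle.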
I never understood the excitement around agents; initially they just looked like Python scripts to me (CrewAI, 2-3 years ago).
The question is can people see that agents will evolve? Similar to how software evolves to handle the right depth of granularity.
Being unrestrained by team protocols, communications, jira boards, product owners, grumpy seniors.
They can now deliver much more mature platforms, apps, consumer platforms without any form of funding. You can easily save months on the basics like multi tenant set up, tests, payment integration, mailing settings, etc.
It does seem likely that the software space is about to get even more crowded, but also much more feature-rich.
There is of course also a wide array of dreamers and visionaries who now jump into the developer role. Whether or not they are able to fully run their own platform, I'm not sure. I did see many posts asking for help at some point.
I've also eliminated some third party SaaS integrations by creating slimmer and better integrated services directly into my platform. Which is an example of using AI to bring some features in-house, not primarily to save money (generally not worth the effort if that's the goal), but because it's simply better integrated and less frustrating than dealing with crappy third-party APIs.
Even if they could, the vast majority of them will be more than happy to send $20-100 per month your way to solve a problem, rather than add it to their stack of problems to solve internally.
AI does it imperfectly today, but if you had to bet, would you bet that it gets better or worse? I would bet that it will improve, and as is often the case with tech, at an exponential rate. Then we would see any workflow described in plain language and, within minutes, great software churned out. It might be a question of when (not if) that happens. And are you prepared for that state of affairs?
> But my key takeaway would be that if your product is just a SQL wrapper on a billing system, you now have thousands of competitors: engineers at your customers with a spare Friday afternoon with an agent.
I think the issue is that the "two of them explicitly cloned" were trying to clone something that's more than "just a SQL wrapper on a billing system."
The cost of building is decreasing every year. The barriers to entry will come down year after year.
So what remains is knowing what to build (= product) and knowing how to get exposure (= marketing). Focus on these two, not on building things.
Development tooling improvements are usually a temporary advantage and end up being table stakes after a bit of time. I'm more worried that as agentic tooling gets better, it obsoletes a lot of SaaS tools whose vendors count on users driving conventional point-and-click apps (web, mobile, and otherwise). I'm encouraging the companies I'm involved with to look at moving to more communication-driven micro-experience UIs - email, Slack, SMS, etc. - instead of more conventional UI.
None of these people can apparently see beyond the tip of their nose. It doesn't matter if it takes a year, or three years, or five years, or ten years. Nothing can stop what's about to happen. If it takes ten years, so what, it's all going to get smashed and turned upside down. These agents will get a lot better over just the next three years. Ten years? Ha.
It's the personal interest bias that's tilting the time fog, it's desperation / wilful blindness. Millions of highly paid people with their livelihoods being disrupted rapidly, in full denial about what the world looks like just a few years out, so they shift the time thought markers to months or a year - which reveals just how fast this is all moving.
Buyer pressure will eventually force process updates, but it is a slow burn. The bottleneck is rarely the tech or the partner, it's the internal culture. The software moves fast, but the people deeply integrated into physical infrastructure move 10x slower than you'd expect.
> people deeply integrated into physical infrastructure move 10x slower than you'd expect.
My experience is yes, to move everyone. To do a pilot and prove the value? That's doable quickly, and if the pilot succeeds, the rest is fast.
This wasn't even an option for a lot of people before this.
For example, even for non-software-engineering tasks, I'm at an advantage. "Ah, you have to analyse these 50 Excel files from someone else? I can write something for it."
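That "50 files from someone else" case is the archetypal agent-era script. A minimal stdlib sketch of the shape such a script takes (file contents and column names are invented; real .xlsx files would need openpyxl or pandas, but the structure is the same):

```python
# Toy consolidation of many exported files into one total per category.
# Here the "files" are inline CSV strings; in practice you'd iterate over
# paths from glob("*.csv") or a folder of spreadsheet exports.
import csv
import io

exports = [
    "category,amount\nwidgets,10\ngadgets,5\n",
    "category,amount\nwidgets,3\n",
]

totals: dict[str, float] = {}
for text in exports:
    for row in csv.DictReader(io.StringIO(text)):
        totals[row["category"]] = totals.get(row["category"], 0) + float(row["amount"])

print(totals)  # {'widgets': 13.0, 'gadgets': 5.0}
```

Ten minutes of agent time for something a non-coder would otherwise do by hand across 50 files.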
I myself sometimes start creating a new small tool I wouldn't have tried before; now, instead of using some open source project, I can vibe-spec it and get something out.
The interesting thing is that if I keep the base of my specs, I can regenerate it later with a better code model.
And we still don't know what will happen when compute gets expanded and expanded. Next year a few more DCs will come online, and this will continue for now.
Also tools like google firebase will get 1000x more useful with vibe coding. They provide basic auth and stuff like this. So you can actually focus on writing your code.
I'm gonna go ahead and guess that if you have open source competitors, within two years your moat is going to become marketing/sales given how easy it'll be to have an agent deploy software and modify it.
Corporates are allergic to risk; not to spending money.
If anything, I feel that SaaS and application development for larger organisations stands to benefit from LLM assisted development.
There's a huge subset of SaaS that's feature-frozen and being milked for ARR.
My article here isn't really aimed at "good" SaaS companies that put a lot of thought into design, UX and features. I'm thinking of the tens/hundreds of thousands+ of SaaS platforms that have been bought by PE or virtually abandoned, that don't work very well and send a 20% cost renewal through every year.
A lot of them will try though, just means more work for engineers in the future to clean this shit up.
My hot take - LLMs are exposing a whole bunch of developers to this reality.
If your answer is "cost of developing code" (what TFA argues), please explain how previous waves of reducing cost of code (JVM, IDEs, post-Y2K Outsourcing) disrupted the ERP/b2b market. Oh wait, they didn't. The only real disruption in ERP in the last what 30 years, has been Cloud. Which is an economics disruption, not a technological one: cloud added complexity and points of failure and yet it still disrupted a ton of companies, because it enabled new business models (SaaS for one).
So far, the only disruption I can see coming from LLMs is middleware/integration where it could possibly simplify complexity and reduce overall costs, which if anything will help SaaS (reduction of cost of complements, classic Christensen).
> what do LLMs disrupt? If your answer is "cost of developing code" (what TFA argues), please explain how previous waves of reducing cost of code (JVM, IDEs, post-Y2K Outsourcing) disrupted the ERP/b2b market. Oh wait, they didn't. The only real disruption in ERP in the last what 30 years, has been Cloud.
"Cost of developing code" is a trivial and incomplete answer.
Coding LLMs disrupt (or will, in the immediate future)
(1) time to develop code (with cost as a second order effect)
(2) expertise to develop code
None of the analogs you provided are a correct match for these.
A closer match would be Excel.
It improved the speed and lowered the expertise required to do what people had previously been doing.
And most importantly, as a consequence of especially the latter more types of people could leverage computing to do more of their work faster.
The risk to B2B SaaS isn't that a neophyte business analyst is going to recreate your app overnight...
... the risk is that 500+ neophyte business analysts each have a chance of replacing your SaaS app, every day, every year.
Because they only really need to get lucky once, and then the organization shifts support to in-house LLM-augmented development.
The only reason most non-technology businesses didn't do in-house custom development thus far was that ROI on employing a software development team didn't make sense for them. Suddenly that's no longer a blocker.
To the point about cloud, what did it disrupt?
(1) time to deploy code (with cost as a second order effect)
(2) expertise to deploy code
B2B SaaS should be scared, unless they're continuously developing useful features, have a deep moat, and are operating at volumes that allow them to be priced competitively.
Coding agents and custom in-house development are absolutely going to kill the 'X-for-Y' simple SaaS clone business model (anything easily cloneable).
The problem with this tooling is that it cannot deploy code on its own. It needs a human to take the fall when it generates errors that lose people money, break laws, cause harm, etc. Humans are supposed to be reviewing all of the code before it goes out, but your assumption is that people without the skills to read code, let alone deploy and run it, are going to do it with agents without a human in the loop.
All those non-technical users have to do is approve that app, manage to deploy and run it themselves somehow, and wait for the security breach to lose their jobs.
The frequency of mind-bogglingly stupid 1+1=3 errors (where 1+1 is a specific well-known problem in a business domain and 3 is the known answer) cuts against your 'professional SaaS can do it better' argument.
And to be clear: I'm talking about 'outsourced dev to lowest-cost resources' B2B SaaS, not 'have a team of shit-hot developers' SaaS.
The former of which, sadly, comprises the bulk of the industry. Especially after PE acquisition of products.
Furthermore, I'm not convinced that coding LLMs + scanning aren't capable of surpassing the average developer in code security. Especially since it's a brute force problem: 'ensure there's no gap by meticulously checking each of 500 things.'
Auto code scanning for security hasn't been a significant area of investment because the benefits are nebulous. If you already must have human developers writing code, then why not have them also review it?
In contrast, scanning being a requirement to enabling fast-path citizen-developer LLM app creation changes the value proposition (and thus incentive to build good, quality products).
It's been mentioned in other threads, but Fire/Supabase-style 'bolt-on security-critical components' is the short term solution I'd expect to evolve. There's no reason from-scratch auth / object storage / RBAC needs to be built most of the time.
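A bolt-on RBAC component can be tiny; the point is that the citizen-developer app never rolls its own checks. A toy sketch of the deny-by-default shape such a component takes (role and permission names are invented):

```python
# Minimal role-based access control: one table, one check function.
# A generated app would call allowed() everywhere instead of embedding
# its own ad-hoc security logic.
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin":  {"read", "write", "delete"},
}

def allowed(role: str, action: str) -> bool:
    """Deny by default: unknown roles and unknown actions get nothing."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert allowed("editor", "write")
assert not allowed("viewer", "delete")
assert not allowed("intern", "read")  # unknown role -> denied
```

The real versions (Supabase row-level security, Firebase rules) are richer, but the design choice is the same: security-critical logic lives in one audited component, not scattered through LLM-generated app code.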
They already lock down everything enterprise wide and hate low-code apps and services.
But in this day and age, who knows. The cynical take is that it doesn’t matter and nobody cares. Have your remaining handful of employees generate the software they need from the magic box. If there’s a security breach and they expose customer data again… who cares?
Sometimes, the devil you know is preferable -- at least then you control the source.
Folks fail to realize the status quo is often the status quo because it's optimal for a historical set of conditions.
Previously... what would your average business user have been able to do productively with an IDE? Weighed against the security risks? And so that's where the point was set.
If suddenly that business user can add substantial amounts of value to the org, I'd be very surprised if that point doesn't shift.
It matters AND...
They liked buying SAP or M$ because it was fully integrated and turnkey. Every SaaS vendor they added had to be SOC2, authenticate with SAML, and each integration had to be audited… it was a lot of work for them.
And we were highly trained, certified developers. I had to sign documents and verify our stack with regulatory consultants.
I just don’t see that fear going away with agents and LLM prompts from frontline workers who have no training in IT security, management, etc. There’s a reason why AI tech needs humans in the loop: to take the blame when they thumbs up what it outputs.
This is wrong. Paradoxically, you need expertise to develop code with an LLM.
I'm expecting this to be a bubble, and that bubble to burst; when it does, whatever's the top model at that point can likely still be distilled relatively cheaply like all other models have been.
That, combined with my expectations that consumer RAM prices will return to their trend and decrease in price, means that if the bubble pops in the year 20XX, whatever performance was bleeding edge at the pop, runs on a high-end smartphone in the year 20XX+5.
The world might be holding AI to a standard where it needs to be a world-beater to succeed, but that's simply not the case: AI is software, and it can solve problems other software can't.
Dot-com was a bubble despite being applicable to valuable problems. So were railways when the US had a bubble on those.
Bubbles don't just mean tulips.
What we've got right now, I'm saying the money will run out and not all the current players will win any money from all their spending. It's even possible that *none* of the current players win, even when everyone uses it all the time, precisely due to the scenario you replied to:
Runs on a local device, no way to extract profit to repay the cost of training.
Observably, the biggest models we have right now have similar complexity to a rodent's brain, which runs on far less power. Limiting factors for chips in your phone is power, power efficiency is improving rapidly.
Key point. Once people realize that no money can be made from LLMs, they will stop training new ones. Eventually the old ones will become hopelessly out-of-date, and LLMs will fade into history.
Dot com is not super comparable to AI.
Dot com had very few users on the internet compared to today.
Dot com did not have ubiquitous e-commerce. The small group of users didn’t spend online.
Search engines didn’t have the amount of information online that there is today.
Dot com did not have usable high speed mobile data, or broadband available for the masses.
Dot com did not have social media to spread word of how things work as quickly.
LLMs were largely applicable to industry when GPT-4 came out. We didn't have the new terms of reference for non-deterministic software.
"Can they keep charging money for it?", that's the question that matters here.
There were not as many consumers buying online during dot com boom.
Except that currently more is being spent on AI than anything in the dot com boom.
Nor did companies run their businesses in the cloud, because there was no real broadband.
There’s no doubt there’s a hype train, there is also an adoption and disruption train, which is also happening.
I could go on, but I’m comfortable with seeing how well this comment ages.
My computer doesn't have enough RAM to run the state of the art in free LLMs, but such computers can be bought and are even affordable by any business and a lot of hobbyists.
Given this, the only way for model providers to stay ahead is to spend a lot on training ever-better models to beat the free ones being given away. And by "spend a lot" I mean they are making a loss.
This means that the similarity with the dot com bubble can be expressed with the phrase "losing money on every sale and making up for it in volume".
Hardware efficiency is also still improving; just as I can even run that image model locally on my phone, an LLM equivalent to SOTA today should run on a high-end smartphone in 2030.
Not much room to charge people for what runs on-device.
So, they are in a Red Queen's race, running as hard as they can just to stay where they are. And where they are today, is losing money.
The best performance per dollar and per watt for running LLMs locally is currently Apple gear.
I thought the same as you but I'm still able to run better and better models on a 3-4 year old Mac.
At the rate it's improving, even with the big models, people optimize their prompts so they run token-efficiently, and when they do... guess what can run locally.
The dot com bubble didn't have comparable online sales. There were barely any users online lol. Very few ecommerce websites.
Let alone ones with credit card processing.
Internet users by year: https://www.visualcapitalist.com/visualized-the-growth-of-gl...
The ecommerce stats by year will interest you.
Not being able to see this is a blind spot.
Domain expertise in an industry usually sits within the client, and is serviced to some degree by vendors.
Not all CEOs have deep domain expertise, nor do they often enough stick to one domain. Maybe that’s where a gap exists.
Shit, I'm stealing that quote! It's easier to seize an opportunity (i.e. build a tool that fixes problem X without causing annoying side effects Y and Z), but finding one is almost as hard as it has been since the beginning of the world wide web.
The worry is that customers who do not realize the full depth of the problem will implement their own app using AI. But that happens today, too: people use spreadsheets to manage their electronic parts (please don't) and BOMs (bills of materials). The spreadsheet is my biggest competitor.
I've been designing and building the software for 10 years now and most of the difficulty and complexity is not in the code. Coding is the last part, and the easiest one. The real value is in understanding the world (the processes involved) and modeling it in a way that cuts a good compromise between ease of use and complexity.
Sadly, as I found out, once you spend a lot of time thinking and come up with a model, copycats will clone that (as well as they can, but superficially it will look similar).
Which I don't think can be replaced by AI in a lot of cases. I think in the software world we are used to things being shared, open and easily knowable, but a great deal of industry and enterprise domain knowledge is locked up inside in companies and will not be in the training data.
That's why it's such a big deal for an enterprise to have on prem tools, to avoid leaking industry processes and "secrets" (the secrets are boring, but still secrets).
A little career advice in there too I guess. At least for now, you're a bit more secure as a developer in industries that aren't themselves software, is my guess.
Yes. I try to visit my customers as often as I can, to learn how they work and to see the production processes on site. I consider it to be one of the most valuable things I can do for the future of my business.
While rolling the whole solution with an AI agent is not practical, taking an open source starting point and using AI to overcome specific workflow pain points, as well as add features, lets me have a lower-cost solution specifically tailored to our needs.
This is actually a serious problem for me: my SaaS has a lot of very complex functionality under the hood, but it is not easily visible, and importantly it isn't necessarily appreciated when making a buying decision. Lot control is a good example: most people think it is only needed for coding batches of expiring products. In reality, it's an essential feature that pretty much everyone needs, because it lets you treat some inventory of the same part (e.g. a reel) differently from other inventory of this part (e.g. cut tape) and track those separately.
AI-coding will help people get the features they know they need, but it won't guide them to the features they don't know they could use.
That said, the act of doing this- using LLMs to dominate somebody's legitimately intelligent and unique work- feels not only discourteous, but worse, like it's a short-term solution.
I'm convinced that it's a short-term solution NOT because I don't think that LLMs can continuously maintain these projects, but because open-source itself is going to be clawed back. The raison d'être of open-source is personal pride, hiring, collaboration, enjoyment, trust, etc. These motivations make less sense in an LLM-fueled world.
My prediction is that useful and well-maintained open-source projects like the ones we're hijacking will become fewer and farther between.
Coding and modeling are interleaved. Prototyping is basically thinking through the models you are considering. If you split the two, you'll end up with a bad model, bad software or both.
I used to love CL and wrote quite a bit of code in it, but since Clojure came along I can't really see any reason to go back.
So, I kind of know what I'm talking about :-) And I don't miss anything from CL: I honestly can't find a single reason to switch back to CL.
You have a product, which sits between your users and what your users want. That product has a UI for users to operate. Many (most, I imagine) users would prefer to hire an assistant to operate that UI for them, since the UI is not the actual value your service provides. Now, s/assistant/AI agent/ and you can see how your product turns into a tool call.
So the simpler problem is that your product now becomes merely a tool call for AI agents. That's what users want. Many SaaS companies won't like that, because it removes their advertising channel and commoditizes their product.
It's the same reason why API access to SaaS is usually restricted or not available for the users except biggest customers. LLMs defeat that by turning the entire human experience into an API, without explicit coding.
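The "product as a tool call" idea above can be made concrete. Here's a minimal sketch, with an entirely hypothetical tool (`create_invoice`, its schema, and the handler are all invented for illustration), of the JSON-schema-style function-calling layer most agent frameworks use to expose an action to an LLM:

```python
import json

# Hypothetical tool schema in the JSON-schema style that agent
# frameworks use for function calling. Nothing here is a real API.
CREATE_INVOICE_TOOL = {
    "name": "create_invoice",
    "description": "Create an invoice for a customer in the billing system.",
    "input_schema": {
        "type": "object",
        "properties": {
            "customer_id": {"type": "string"},
            "amount_cents": {"type": "integer"},
        },
        "required": ["customer_id", "amount_cents"],
    },
}

def create_invoice(customer_id: str, amount_cents: int) -> dict:
    # Stand-in for the real SaaS call; in practice this would hit
    # the vendor's API (or drive its UI) on the user's behalf.
    return {"invoice_id": f"inv-{customer_id}", "amount_cents": amount_cents}

def dispatch(tool_call: dict) -> dict:
    # The agent emits a tool call as JSON; we route it to the handler.
    handlers = {"create_invoice": create_invoice}
    return handlers[tool_call["name"]](**tool_call["arguments"])

result = dispatch({"name": "create_invoice",
                   "arguments": {"customer_id": "c42", "amount_cents": 1999}})
print(json.dumps(result))
```

Once a layer like this exists, the vendor's UI stops being the interface, which is exactly why commoditization is the business question here, not the technical one.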
This is a big assumption, and not one I've seen in product testing. Open-ended human language is not a good interface for highly detailed technical work, at least not with the current state of LLMs.
> It's the same reason why API access to SaaS is usually restricted or not available for the users except biggest customers.
I don't... think this is true? Off the top of my head, aside from cloud providers like AWS/GCP/Azure which obviously provide APIs: Salesforce, Hubspot, Jira all provide APIs either alongside basic plans or as a small upsell. Certainly not just for the biggest customers. You're probably thinking of social media where Twitter/Reddit/FB/etc don't really give API access, but those aren't really B2B SaaS products.
That's ridiculous. A good UI will improve on an assistant in every way.
Do assistants have some use? Sure—querying.
True.
"Good" UI seems to be in short supply these days, even from trillion dollar corporations.
But even with that, it is still not "ridiculous" for many to prefer to "hire an assistant to operate that UI for them". A lot of the complexity in UI is the balance between keeping common tasks highly visible without hiding the occasional-use stuff, allowing users to explore and learn more about what can be done without overwhelming them.
If I want a spaceship in Blender and don't care which one I get — right now the spaceship models that any GenAI would give you are "pick your poison" between Diffusion models' weirdness and the 3D equivalent of the pelican-on-a-bike weirdness — the easiest UI is to say (or type) "give me a spaceship", not doing all the steps by hand.
If you have some unformatted time series data and want to use it to forecast the next quarter, you could manually enter it into a spreadsheet, or you could say/type "here's a JPG of some time series data, use it to forecast the next quarter".
Again, just to be clear, I agree with everyone saying current AI is only mediocre in performance, it does make mistakes and shouldn't be relied upon yet. But the error rates are going down, the task horizons they don't suck at are going up. I expect the money to run out before they get good enough to take on all SaaS, but at the same time they're already good enough to be interesting.
The fallacy here is believing we already had all the software we were going to use and that AI is now eliminating 90% of the work of creating that. The reality is inverted, we only had a fraction of the software that is now becoming possible and we'll be busy using our new AI tools to create absolutely massive amounts of it over the next years. The ambition level got raised quite a bit recently and that is starting to generate work that can only be done with the support of AI (or an absolutely massive old school development budget).
It's going to require different skills and probably involve a lot more domain experts picking up easy to use AI tools to do things themselves that they previously would have needed specialized programmers for. You get to skip that partially. But you still need to know what you are doing before you can ask for sensible things to get done. Especially when things are mission critical, you kind of want to know stuff works properly and that there's no million $ mistakes lurking anywhere.
Our typical customers would need help with all of that. The number of times I've had to deal with a customer who had vibe coded anything by themselves remains zero. Just not a thing in the industry. Most of them are still juggling spreadsheets and ERP systems.
> Especially when things are mission critical, you kind of want to know stuff works properly and that there's no million $ mistakes lurking anywhere.
This is what I'm wondering about; things don't change because the company doesn't like change, and the risks of change are very real. So changes either have to be super incremental, or offer such a compelling advantage that they can't be ignored. And AI just doesn't offer the sort of reproducible, reliable results that manufacturing absolutely depends on.
It's just that messing with a company's core manufacturing is something they don't do lightly. They work with multiple shifts of staff that are supposed to work in these environments. People generally don't have a lot of computer skills, so things need to be simple, repeatable, and easy to explain. Any issues with production means cost increases, delays happen, and money is lost.
That being said, these companies are always looking for better ways to do stuff, to eliminate work that is not needed, etc. That's your way in. If there's a demonstrable ROI, most companies get a lot less risk averse.
That used to involve bespoke software integrations. Those are developed at great cost and with some non trivial risk by expensive software agencies. Some of these projects fail and failure is expensive. AI potentially reduces cost and risk here. E.g. a generic SAP integration isn't rocket science to vibe code. We're talking well documented and widely used APIs here. You'd want some oversight and testing here obviously. But it's the type of low level plumbing that traditionally gets outsourced to low wages countries. Using AI here is probably already happening at a large scale.
If software gets cheaper, people will buy more of it, to a point.
AI-generated code still requires software engineers to build, test, debug, deploy, secure, monitor, be on-call, handle incidents, and so on. That's very expensive. It is much cheaper to pay a small monthly fee to a SaaS company.
Yeah it's a fundamental misunderstanding of economies of scale. If you build an in-house app that does X, you incur 100% of the maintenance costs. If you're subscribed to a SaaS product, you're paying for 1/N % of the maintenance costs, where N is the number of customers.
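The 1/N point is simple arithmetic, but it's worth making concrete. With purely illustrative numbers (one fully loaded in-house maintainer vs. a vendor spreading the same cost over 500 customers, even at a healthy markup):

```python
# Illustrative numbers only, to show the economies-of-scale gap.
maintainer_cost = 150_000     # annual cost of one in-house engineer, fully loaded
vendor_customers = 500        # N customers sharing the vendor's maintenance cost
vendor_margin = 3.0           # vendor charges a multiple of its per-customer cost

in_house_annual = maintainer_cost                           # you pay 100%
saas_annual = (maintainer_cost / vendor_customers) * vendor_margin

print(in_house_annual)        # prints 150000
print(round(saas_annual))     # prints 900
```

Even with the vendor taking a 3x margin, the subscriber pays two orders of magnitude less than the full maintenance cost, which is the whole economic argument for SaaS.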
I only see AI-generated code replacing things that never made sense as a SaaS anyway. It's telling the author's only concrete example of a replaced SaaS product is Retool, which is much less about SaaS and much more about a product that's been fundamentally deprecated.
Wake me up when we see swaths of companies AI-coding internal Jira ("just an issue tracker") and Github Enterprise ("just a browser-based wrapper over git") clones.
This shouldn't be the goal. The goal should be to build an AI that can tell you what is done and what needs to be done, i.e. replace Jira with natural interactions. An AI that can "see" and "understand" your project. An AI that can see it, understand it, build it and modify it. I know this is not happening for the next few decades or so.
The difference is that an AI-coded internal Jira clone is something that could realistically happen today. Vague notions of AI "understanding" anything are not currently realistic and won't be for an indeterminate amount of time, which could mean next year, 30 years from now, or never. I don't consider that worth discussing.
Are you as a dev still going to pay for analytics and dashboards that you could have spun up with Claude in 5 minutes instead?
Most SaaS products could be replaced by a form + spreadsheet + email workflow, and the reason they aren't is that people don't want to be dealing with a hacky solution. Devs can hack together a nice little webapp instead of a network of spreadsheets, but it's still a hack. Factoring in AI assistance, perhaps SaaS is now competing with "something I hacked together in a week" as opposed to "something I hacked together in a month," but it's a hack either way.
I am absolutely going to pay for analytics and dashboards, because I don't want the operational concerns of my Elasticsearch analytics cluster getting in the way of the alarm that goes off when my primary database catches fire. Ops visibility is too important to be a hack, regardless of how quickly I could implement that hack.
Generating code is one part of software engineering, and software engineering is a small part of SaaS.
Do you pay for OpenTelemetry? How is this related?
So, I ask again - how do you know that the service you're paying for is all of those things?
Not to mention the author appears to run a 1-2 person company, so ... yeah. AI thought leadership ahoy.
I’m pretty certain AI quadruples my output at least and facilitates fixing, improving and upgrading poor quality inherited software much better than in the past. Why pay for SaaS when you can build something “good enough” in a week or two? You also get exactly what you want rather than some £300k per year CRM that will double or treble in price and never quite be what you wanted.
About a decade ago we worked with a partner company who was building their own in-house software for everything. They used it as one of their selling points and as a differentiator over competitors.
They could move fast and add little features quickly. It seemed cool at first.
The problems showed up later. Everything was a little bit fragile in subtle ways. New projects always worked well on the happy path, but then they’d change one thing and it would trigger a cascade of little unintended consequences that broke something else. No problem, they’d just have their in-house team work on it and push out a new deploy. That also seemed cool at first, until they accumulated a backlog of hard to diagnose issues. Then we were spending a lot of time trying to write up bug reports to describe the problem in enough detail for them to replicate, along with constant battles over tickets being closed with “works in the dev environment” or “cannot reproduce”.
> You also get exactly what you want rather than some £300k per year CRM
What’s the fully loaded (including taxes and benefits) cost of hiring enough extra developers and ops people to run and maintain the in house software, complete with someone to manage the project and enough people to handle ops coverage with room for rotations and allowing holidays off? It turns out the cost of running in-house software at scale is always a lot higher than 300K, unless the company can tolerate low ops coverage and gaps when people go on vacation.
We often ended up discarding large chunks of these poorly tested features, instead of trying to get them to work, and wrote our own. This got to a point where only the core platform was used, and replacing that seemed to be totally feasible.
SaaS often doesn't solve issues but replaces them - you substitute general engineering knowledge and open-source knowhow with proprietary one, and end up with experts in configuring commercial software - a skill that has very little value on the market where said software is not used, and chains you to a given vendor.
But what you're describing is the narrow but deep vs wide but shallow problem. Most SaaS software is narrow but deep. Their solution is always going to be better than yours. But some SaaS software is wide but shallow, it's meant to fit a wide range of business processes. Its USP is that it does 95% of what you want.
It sounds like you were using a "wide-shallow" SaaS in a "narrow-deep" way, only using a specific part of the functionality. And that's where you hit the problems you saw.
It's full of features, half of which either do not work, or do not work as expected, or need some arcane domain knowledge to get them working. These features provide 'user-friendly' abstractions over raw stuff, like authing with various repos, downloading and publishing packages of different formats.
Underlying these tools are probably the same shell scripts and logic that we as devs are already familiar with. So often the exercise when forced to use these things is to get the underlying code to do what we want through this opaque intermediate layer.
Some people have resorted to fragile hacks, while others completely bypassed these proprietary mechanisms, and our build scripts are 'Run build.sh', with the logic being a shell or python script, which does all the requisite stuff.
And just like I mentioned in my previous post, SaaS software in this case might get tested more in general, but due to the sheer complexity it needs to support on the client side, testing every configuration at every client is not feasible.
At least the bugs we make, we can fix.
And while I'm sure some of this narrow-deep kinds of SaaS works well (I've had the pleasure to use Datadog, Tailscale, and some big cloud provider stuff tends to be great as well), that's not all there is that's out there and doesn't cover everything we need.
You have bought a shallow but wide SaaS product, one with tons of features that don't get much development or testing individually.
You're then trying to use it like a deep but narrow product and complaining that your complex use case doesn't fit their OK-ish feature.
MS do this in a lot of their products, which is why Slack is much better than Teams, but lots of companies feel Teams is "good enough" and then won't buy Slack.
I'm sure you have encountered the pattern where you write A that calls B that uses C as the underlying platform. You need something in A, and know C can do it, but you have to figure out how you can achieve it through B. For a highly skilled individual (or one armed with AI), B might have a very different value proposition than for one who has to learn stuff from scratch.
Js packages are perfect illustration of these issues - there are tons of browser APIs that are wrapped by easy-to-use 'wrapper' packages, that have unforeseen consequences down the road.
On top of that, SaaS takes your power away. A bug could be quite small, but if a vendor doesn't bother to fix it, it can still ruin your life for a long time. I've seen small bugs get sandbagged by vendors for months. If you have the source code you can fix problems like these in a day or two, rather than waiting for some nebulous backlog to work down.
My experience with SaaS is that products start out fine, when the people building them are hungry and responsive and the products are slim and well priced. Then they get bloated trying to grow market share, they lose focus and the builders become unresponsive, while increasing prices.
At this point you wish you had just used open source, but now it's even harder to switch because you have to jump through a byzantine data exfiltration process.
Maybe write some tests and have great software development practices and most importantly people who care about getting the details right. Honestly there’s no reason for software to be like this is there? I don’t know how much off the shelf ERP software you have used but I wouldn’t exactly describe that as flawless and bug free either!
Sooner or later the CTO will be dictating which projects can be vibe coded and which ones make sense to buy.
SaaS benefits from network effects - your internal tools don't. So overall SaaS is cheaper.
The reality is that software license costs are a tiny fraction of total business costs. Most of it is salaries. The situation you are describing is the kind of death spiral many companies will get into, and it will be their downfall, not their salvation.
Yes and no. If someone is controlling the SaaS selection, then this is true.
But I've seen startup phase companies with multiple slightly overlapping SaaS subscriptions (Linear + Trello + Asana for example), just because one PM prefers one over the other.
Then people have bought full-ass SaaS costing 50-100€/month for a single task it does.
I'd describe the "Use AI to make bespoke software" as the solution you use to round out the sharp edges in software (and licensing).
The survey SaaS wants extra money to connect to service Y, but their API is free? Fire up Claude and write the connector ourselves. We don't want to build and support a full survey tool, but API glue is fine.
Or someone is doing manual work because vendor A wants their data in format X and vendor B only accepts format Y. Out comes Claude and we create a tool that provides both outputs at the same time. (This was actually written by a copywriter on their spare time, just because they got annoyed with extra busywork. Now it's used by a half-dozen people)
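Glue like that is usually trivial to write. A hypothetical sketch (the vendors and formats are invented): vendor A wants pipe-delimited lines, vendor B wants JSON, and one small script emits both from the same records:

```python
import json

def to_vendor_a(records):
    # Hypothetical "format X": pipe-delimited text, header row first.
    lines = ["id|name|amount"]
    lines += [f"{r['id']}|{r['name']}|{r['amount']}" for r in records]
    return "\n".join(lines)

def to_vendor_b(records):
    # Hypothetical "format Y": a JSON array of the same records.
    return json.dumps(records)

records = [{"id": 1, "name": "widget", "amount": 9.5}]
print(to_vendor_a(records))
print(to_vendor_b(records))
```

This is the kind of ten-minute script that used to be a ticket in someone's backlog; the interesting shift is who writes it, not how hard it is.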
The reason software licenses are easier for the finance team to cut when things are not going well is that software does not have feelings, although we all know this isn't making a dent. Ultimately software scales much better than people, and if the software is "thinking" it will scale infinitely better.
Building it all in house will only happen for 2 reasons: 1. The problem is so specific that this is the only viable option and the quickest (fair enough). 2. Developers and management do not have a real understanding of software costs.
Developers not understanding the real costs should be forgiven, because most of them are never in a position to make these types of decisions - i.e. they are not trained to. However, a manager / executive not understanding this is a sign of a lack of experience. You really need to try to build a few medium-sized non-essential software systems in-house to get an idea of how bad this can get and what a waste of time and money it really is - resources you could have spent elsewhere to affect the real bottom line.
Also the lines of code that are written do not scale linearly with team sizes. The more code you produce the bigger the problem - even with AI.
Ultimately a company wants to write as few lines of code as possible that extract as much value as feasibly possible.
A lot of the SaaS target companies won't even have a CTO
Building is only one part. Maintaining and using/running is another.
Onboarding for both technical and functional teams takes longer, as the ERP is different from every other company's. Feature creep is an issue - after all, who can say no to more bespoke features? Maybe roll CRM, reporting and analytics into one. Maintenance costs and priorities now become more important.
We have also explored AI agents in this area. Person-specific tasks are great use cases. Creating mockups and wireframes? AI does that well, and you still have a human in the loop. Enterprise-level tasks, like book closing in a large company's ERP? AI makes a lot of mistakes.
To attempt to summarize the debate, there seem to be three prevailing schools of thought:
1. Status Quo + AI. SaaS companies will adopt AI and not lose share. Everyone keeps paying for the same SaaS plus a few bells and whistles. This seems unlikely given AI makes it dramatically cheaper to build and maintain SaaS. Incumbents will save on COGS, but have to cut their pricing (which is a hard sell to investors in the short term).
2. SaaS gets eaten by internal development (per OP). Unlikely in short/medium term (as most commenters highlight). See: complete cloud adoption will take 30+ years (shows that even obviously positive ROI development often does not happen). This view reminds me a bit of the (in)famous DropBox HN comment(1) - the average HN commenter is 100x more minded to hack and maintain their own tool than the market.
benzible (commenter) elsewhere said this well - "The bottleneck is still knowing what to build, not building. A lot of the value in our product is in decisions users don't even know we made for them. Domain expertise + tight feedback loop with users can't be replicated by an internal developer in an afternoon."
This same logic explains why external boutique beats internal builds --
3. AI helps boutique-software flourish because it changes vendor economics (not buyer economics). Whereas previously an ERP for a specific niche industry (e.g. wealth managers who only work with Canadian / US cross-border clients) would have had to make do with a non-specific ERP, there will now be a custom solution for them. Before AI, the $20MM TAM for this product would have made it a non-starter for VC backed startups. But now, a two person team can build and maintain a product that previously took ten devs. Distribution becomes the bottleneck.
This trend has been ongoing for a while -- Toast, Procore, Veeva -- AI just accelerates it.
If I had to guess, I expect some combination of all three - some incumbents will adapt well, cut pricing, and expand their offering. Some customers will move development in house (e.g. I have already seen several large private equity firms creating their own internal AI tooling teams rather than pay for expensive external vendors). And there will be a major flourishing of boutique tools.
What _has_ surprised me though is just how many companies are (or are considering) building 'internal' tooling to replace SaaS they are not happy with. These are not the classic HN types whatsoever. I think when non technical people get to play with AI software dev they go 'wow so why can't we do everything like this'.
I think your point 3 is really interesting too.
But yes the point of my article (hopefully) wasn't that SaaS is overnight dead, but some thin/lower "quality" products are potentially in real trouble.
People will still buy and use expertly designed products that are really nice to use. But a lot of B2B SaaS is not that; it's a slow, clunky mess that makes you want to scream!
This means that if I sell it to your business for less than your salary, you will get fired and the business will use my version.
Why? Because mine will always be better, as 10 people work on it vs. you alone.
Internal versions will never be better or cheaper than SaaS (unless you are doing some tiny and very specific automation).
They can be better than your current solution - but it's only a matter of time before someone makes a SaaS equal or superior to what you do internally.
Sure, almost anything will be better and cheaper than HubSpot.
But with AI smaller CRMs that are hyper focused on businesses like yours will start popping up and eating its market.
Anything bigger than a toy project will always be cheaper/better to buy.
Also, maintaining software is a pain.
Also, for perpetually small companies, it's now easy to build simple scripts to achieve some productivity gains.
- anything that requires very high uptime
- very high volume systems and data lakes
- software with significant network effects
- companies that have proprietary datasets
- regulation and compliance is still very important
Then it dawned on me how many companies are deeply integrating Copilot into their everyday workflows. It's the perfect Trojan Horse.
For example, in RL, you have a train set, and a test set, which the model never sees, but is used to validate it - why not put proprietary data in the test set?
I'm pretty sure 99% of ML engineers would say this would constitute training on your data, but this is an argument you could drag out in courts forever.
Or alternatively - it's easier to ask for forgiveness than permission.
I've recently had an apocalyptic vision, that one day we'll wake up and find that AI companies have produced an AI copy of every piece of software in existence - AI Windows, AI Office, AI Photoshop etc.
if they can get away with it (say by claiming it's "fair use"), they'll ignore corporate ones too
it's an incentive to pretend as if you're following the contract, which is not the same thing
despite all 3 branches of the government disagreeing with them over and over again
There may very well be clever techniques that don't require directly training on the users' data. Perhaps generating a parallel paraphrased corpus as they serve user queries - one which they CAN train on legally.
The amount of value unlocked by stealing practically ~everyone's lunch makes me not want to put that past anyone who's capable of implementing such a technology.
Also, I wonder if the ToS covers "queries & interaction" vs "uploaded data" - I could imagine some tricky language in there that says we won't use your Word document, but we may at some time use the queries you put against it, not as raw corpus but as a second layer examining which tools/workflows to expand/exploit.
There’s a range of ways to lie by omission, here, and the major players have established a reputation for being willing to take an expansive view of their legal rights.
There are claims all through this thread that “AI companies” are probably doing bad things with enterprise customer data but nobody has provided a single source for the claim.
This has been a theme on HN. There was a thread a few weeks back where someone confidently claimed up and down the thread that Gemini’s terms of service allowed them to train on your company’s customer data, even though 30 seconds of searching leads to the exact docs that say otherwise. There is a lot of hearsay being spread as fact, but nobody actually linking to ToS or citing sections they’re talking about.
[1] - https://www.microsoft.com/en-us/trust-center/privacy/data-ma...
Many businesses simply couldn't afford to operate without such an edge.
None of the mainstream paid services ingest operating data into their training sets. You will find a lot of conspiracy theories claiming that companies are saying one thing but secretly stealing your data, of course.
“How can I control whether my data is used for model training?
If you are logged into Copilot with a Microsoft Account or other third-party authentication, you can control whether your conversations are used for training the generative AI models used in Copilot. Opting out will exclude your past, present, and future conversations from being used for training these AI models, unless you choose to opt back in. If you opt out, that change will be reflected throughout our systems within 30 days.” https://support.microsoft.com/en-us/topic/privacy-faq-for-mi...
At this point, suggesting it has never happened and never will is wildly optimistic.
While this isn't used specifically for LLM training, it can involve aggregating insights from customer behaviour.
Merely using an LLM for inference does not train it on the prompts and data, as many incorrectly assume. There is a surprising lack of understanding of this separation even on technical forums like HN.
However, let's say I record human interactions with my app; for example, when a user accepts or rejects an AI-synthesised answer.
This data can be used by me, to influence the behaviour of an LLM via RAG or by altering application behaviour.
It's not going to change the weighting of the model, but it would influence its behaviour.
What? That’s literally my point: Enterprise agreements aren’t training on the data of their enterprise customers like the parent commenter claimed.
"We will train new models using data from Free, Pro, and Max accounts when this setting is on (including when you use Claude Code from these accounts)."
“You can use an LLM to paraphrase the incoming requests and save that. Never save the verbatim request. If they ask for all the request data we have, we tell them the truth, we don’t have it. If they ask for paraphrased data, we’d have no way of correlating it to their requests.”
“And what would you say, is this a 3 or a 5 or…”
Everything obvious happens. Look closely at the PII management agreements. Btw OpenAI won’t even sign them because they’re not sure if paraphrasing “counts.” Google will.
Many of the top AI services use human feedback to continuously apply "reinforcement learning" after the initial deployment of a pre-trained model.
https://en.wikipedia.org/wiki/Reinforcement_learning_from_hu...
Inference (what happens when you use an LLM as a customer) is separate from training.
Inference and training are separate processes. Using an LLM doesn’t train it. That’s not what RLHF means.
The big companies - take Midjourney, or OpenAI, for example - take the feedback that is generated by users, and then apply it as part of the RLHF pass on the next model release, which happens every few months. That's why they have the terms in their TOS that allow them to do that.
Nothing is really preventing this though. AI companies have already proven they will ignore copyright and any other legal nuisance so they can train models.
The enterprise user agreement is preventing this.
Suggesting that AI companies will uniquely ignore the law or contracts is conspiracy theory thinking.
"Meta Secretly Trained Its AI on a Notorious Piracy Database, Newly Unredacted Court Docs Reveal"
https://www.wired.com/story/new-documents-unredacted-meta-co...
They even admitted to using copyrighted material.
"‘Impossible’ to create AI tools like ChatGPT without copyrighted material, OpenAI says"
https://www.theguardian.com/technology/2024/jan/08/ai-tools-...
https://www.vice.com/en/article/meta-says-the-2400-adult-mov...
It's not really a conspiracy when we have multiple examples of high-profile companies doing exactly this. And it keeps happening. Granted, I'm unaware of cases of this occurring currently with professional AI services, but it's basic security 101 that you should never let anything have even the remote opportunity to ingest data unless you don't care about the data.
This is objectively untrue? Giant swaths of enterprise software are based on establishing trust with approved vendors and systems.
Do you have any citations or sources for this at all?
Stealing implies the thing is gone, no longer accessible to the owner.
People aren't protected from copying in the same way. There are lots of valid exclusions, and building new non competing tools is a very common exclusion.
The big issue with the OpenAI case, is that they didn't pay for the books. Scanning them and using them for training is very much likely to be protected. Similar case with the old Nintendo bootloader.
The "Corpo Fascists" are buoyed by your support for the IP laws that have thus far supported them. If anything, to be less "Corpo Fascist" we would want more people to have more access to more data. Mankind collectively owns the creative output of Humanity, and should be able to use it to make derivative works.
Isn't this a little simplistic?
If the value of something lies in its scarcity, then making it widely available has robbed the owner of a scarcity value which cannot be retrieved.
A win for consumers, perhaps, but a loss for the owner nonetheless.
Trying to group (thing I don't like) with (thing everyone doesn't like) is an old semantic trick that needs to be abolished. Taxonomy is good; if your arguments are good, you don't need emotively charged, imprecise language.
You know a position is indefensible when you equivocation fallacy this hard.
> The "Corpo Fascists" are buoyed by your support for the IP laws
You know a position is indefensible when you strawman this hard.
> If anything, to be less "Corpo Fascist" we would want more people to have more access to more data. Mankind collectively owns the creative output of Humanity, and should be able to use it to make derivative works.
Sounds about right to me, but why you would state that when defending slop slingers is enough to give me whiplash.
> Scanning them and using them for training is very much likely to be protected.
Where can I find these totally legal, free, and open datasets all of these slop slingers are trained on?
No, it's quite defensible. And if that was equivocation, you can simply point out that you didn't mean to invoke the specific definition of stealing, but were just using it for its emotive value.
>You know a position is indefensible when you strawman this hard.
It's accurate. No one wants these LLM guys stopped more than other big fascistic corporations; there's plenty of oppositional noise out there for you to educate yourself with.
>Sounds about right to me, but why you would state that when defending slop slingers is enough to give me whiplash.
Cool, so if you agree all data should be usable to create derivative works, then I don't see what your complaint is.
>Where can I find these totally legal, free, and open datasets all of these slop slingers are trained on?
You invoked "strawman" and then hit me with this combo strawman/non sequitur? Cool move <1 day old account, really adds to your 0 credibility.
I literally pointed out they should have to pay the same access fee as anyone else for the data, but once obtained, should be able to use it any way. Reading the comment explains the comment.
Unless, charitably, you are suggesting that if a company is legally able to purchase content, and use it as training data, that somehow compels them to release that data for free themselves?
Weird take if true.
Looks like we're headed back to the internal IT days of building customized LoB apps.