Posted by jnord 1 day ago

AI agents are starting to eat SaaS (martinalderson.com)
353 points | 357 comments
benzible 18 hours ago|
I'm CTO at a vertical SaaS company, paired with a product-focused CEO with deep domain expertise. The thesis doesn't match my experience.

For one thing, the threat model assumes customers can build their own tools. Our end users can't. Their current "system" is Excel. The big enterprises that employ them have thousands of devs, but two of them explicitly cloned our product and tried to poach their own users onto it. One gave up. The other's users tell us it's crap. We've lost zero paying subscribers to free internal alternatives.

I believe that agents are a multiplier on existing velocity, not an equalizer. We use agents heavily and ship faster than ever. We get a lot of feedback from users about what the internal tech teams are shipping, and based on that there's little evidence of any increase in their velocity.

The bottleneck is still knowing what to build, not building. A lot of the value in our product is in decisions users don't even know we made for them. Domain expertise + tight feedback loop with users can't be replicated by an internal developer in an afternoon.

efitz 3 hours ago||
I don’t know what you build, but I’ll share some thoughts from the other side (customer):

Many SaaS products I am interested in have very little “moat”. I am interested in them not because I can’t build them, but because my limited engineering time is better spent building business specific stuff.

Many products with product management teams spend a lot of their effort building either functionality to delight their highest-paying customers or features that are expected to be high-revenue.

I’m never going to be your highest-paying customer, so I’m never going to get custom work from you (the kind of work that mostly goes into adapting your product to existing workflows inside your biggest customers).

What everyone wants when they buy SaaS is to get value from it immediately, without having to change their internal processes, broken as they are. But your model of feature prioritization is antithetical to this: you don’t want to build or support the 5-10 integration points I want, because that would allow me to build my own customizations without paying for your upsells.

You aren’t at immediate risk of losing your big customers to agentic AI. But agentic AI is enabling me and thousands of others to build hobby projects that deliver part of your core value with limitless integration. I expect you’ll see bleeding from the smallish customers way before you see hits from your whales.

However, in a couple of years there will be OSS alternatives to what you do, and they will rapidly become more appealing.

As a side note, it’s not just license pricing that will drive customers to agentically-coded solutions; it’s licensing terms. Nowadays whenever I evaluate SaaS or open source, if it’s not fully published on GitHub and Apache- or MIT-licensed, I seriously consider just coding up an alternative. I’ve done this several times now. It’s never been easier.

benzible 46 minutes ago||
The OSS point doesn't apply to every vertical. Open source applications come about when developers scratch their own itch. Developer tools, infrastructure, general-purpose CRMs, and project management tools get OSS alternatives because developers use them and want to build them.

Nobody is building open source software for [niche professional vertical] in their spare time. It's not mass market. It's not something a developer encounters in their daily work and thinks "I could do this better." The domain knowledge required to even understand the problem space takes months to acquire, and there's no personal payoff for doing so.

The "OSS will appear" prediction works for horizontal tools. For deep vertical SaaS, the threat model is different: it's other funded startups or internal enterprise clones (both of which we've already faced and won against).

btown 8 hours ago|||
It's a bit surprising to me that Microsoft hasn't created a product that's "you have an Excel file in one of our cloud storage systems, here's a way for you to vibe code and host a web app whose storage is backed entirely by that file, where access control is synced to that file's access, and real-time updates propagate in both directions as if someone were editing it in native Excel on another computer. And you can eject a codebase that you, as the domain expert, can hand to a tech team to build something more broadly applicable for your organization."

Nowhere near the level of complexity that would enter your threat model. But this would be the first, minimal step towards customers building their own tools, and the fact that not even this workflow has entered the zeitgeist is... well, it's not the best news for some of the most bullish projections of AI adoption in businesses large and small.
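
As a rough illustration of the storage pattern described above, here is a minimal sketch; it uses CSV as a stand-in for the Excel file and omits the genuinely hard parts (access-control sync and real-time two-way updates), both of which are assumptions simplifying the idea:

```python
import csv
from pathlib import Path

class SheetStore:
    """Toy storage layer where a spreadsheet-style file is the app's only
    database, so edits stay visible to anyone opening the file directly.
    CSV stands in for .xlsx; a real version would use openpyxl plus the
    cloud provider's change-notification API for two-way sync."""

    def __init__(self, path):
        self.path = Path(path)

    def rows(self):
        # Read the whole sheet on every request: the file is the source of truth.
        with self.path.open(newline="") as f:
            return list(csv.DictReader(f))

    def append(self, row):
        # Rewrite the file so a spreadsheet user sees the new row on next open.
        rows = self.rows()
        fields = list(rows[0]) if rows else list(row)
        with self.path.open("w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=fields)
            writer.writeheader()
            writer.writerows(rows + [row])
```

A vibe-coded web app would call `rows()` to render and `append()` on form submit, leaving the file readable in the spreadsheet the whole time.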

nitwit005 5 hours ago|||
You can use something like Salesforce as an app platform if you want. It lets you create "Custom Objects", which are basically tables, write queries, and so on.

It's just that the hassle of dealing with that platform tends to be similar to the hassle of setting up an app yourself, and now you're paying a per-user license cost.

btown 5 hours ago||
Even Salesforce doesn't have a good way to quickly port an Excel-based workflow, with file handoffs and backwards compatibility, into Salesforce. In theory, you could have an LLM generate all the metadata files that would execute a relevant schema migration, generate the interface XML, and build the right kinds of API calls and webhooks... but understanding what it's doing requires a Ph.D. in Salesforce, and many don't have time for that.
JaumeGreen 8 hours ago||||
I miss MS Access, or rather a version of it for the modern age. It has been replaced by basic CRUD apps on your platform of choice, but that's not as easy.

That would be similar to your solution, so either one would work.

I think that there might be some similar alternatives (maybe Airtable? probably using Lovable or Firebase counts) but nothing that is available for me for now.

xnx 6 hours ago|||
Agree. Lack of something as accessible and useful as Access is a major hole. AppSheet comes close: https://about.appsheet.com/home/
mike_hearn 4 hours ago|||
The modern version of Access is something like APEX.

https://www.oracle.com/apex/

APEX is probably just as widely used now as Access was. Access likely had higher market share but of a much smaller market. There are gazillions of APEX apps out there.

ImPleadThe5th 3 hours ago|||
Probably because Microsoft knows vibe coding is _not_ an actually viable way to build production-ready code and does not want the liability of prompting customers to move from a working Excel sheet to a broken piece of software that merely looks like it works.

In my experience, it's actually quite hard to move a business from an Excel sheet to software, because an Excel sheet allows the end user to easily handle every edge case, and they likely don't even think in terms of "edge cases".

Figs 2 hours ago||
You say that, but the crazy people at Microsoft put a COPILOT function into Excel already...

https://support.microsoft.com/en-us/office/copilot-function-...

SkyPuncher 11 hours ago|||
Our sales team hears "we'll just build it internally" or "we can just throw it into an LLM" all the time.

Yes, certain parts of our product are indeed just lightweight wrappers around an LLM. What you're paying for is the 99% of other stuff that's (1) extremely hard to do (and probably non-obvious), (2) an endless supply of "routine" work that still takes time, and (3) an SLA/support that's more than "random dev isn't on PTO".

robofanatic 10 hours ago|||
> "we'll just build it internally" or "we can just throw it into an LLM" all of the time.

Is that a bluff used to negotiate the price?

pseudosavant 9 hours ago||
If it is a credible bluff, does it work?
citizenpaul 7 hours ago|||
No because it is never a credible bluff. You would not be having the conversation if it was.

In fact, having sold stuff: if a lead says this, it's a huge red flag that I probably don't want to do business with them, because they're probably a "vampire customer".

UltraSane 5 hours ago|||
LLMs can write surprisingly decent code a few hundred lines at a time but they absolutely can't write coherent hundred thousand line or bigger programs.
woah 8 hours ago|||
> (3) an SLA/support that's more than "random dev isn't on PTO"

Why do they have an internal engineering org at all if they can't manage the most basic maintenance of a software product?

bob1029 15 hours ago|||
> Domain expertise + tight feedback loop

This is the answer to a happy B2B SaaS implementation. It doesn't matter what tools you use as long as this can be achieved.

In the domain of banking front/back office LOB apps, if you aren't iterating with your customer at least once per business day, you are definitely falling behind your competition. I've got semi-retired bankers insisting that live edits in production need to be a fundamental product feature. They used to be terrified of this. Once they get a taste of proper speed it's like blood in the water. I'm getting pushback on live reloads taking more than a few seconds now.

Achieving this kind of outcome is usually more of a meatspace problem than a technology problem. Most customers can't go as fast as you, but the customer should always be the bottleneck. We've done things like installing our people as temporary employees to get jobs done faster.

hvb2 6 hours ago||
I might be missing something but live edits in production and banking? Doesn't that violate all kinds of compliance controls?
bob1029 5 hours ago||
> Doesn't that violate all kinds of compliance controls?

Technically, only if it causes some kind of security, privacy, availability or accounting issue. The risk is high but it can be done.

Half of our customers do not have anything resembling a test environment. It is incredibly expensive to maintain a meaningful copy of production in this domain. Smaller local/regional banks don't bother at all.

jeswin 16 hours ago|||
> For one thing, the threat model assumes customers can build their own tools.

That's not the threat model. The threat model is that they won't have to, at some point which may not be right now. End users want to get their work done, not learn UIs and new products. If they can get their analysis/reports based on Excel files which are already on SharePoint (or wherever), they'd want just that. You can already see this happening.

TeMPOraL 15 hours ago|||
Yes. This is also why trying to add an AI agent chat into one's product is a fool's errand - the whole point of having general-purpose conversational AI is to turn the product into just another feature.

It's an ugly truth product owners never wanted to hear, and are now being forced to: nobody wants software products or services. No one really wants another Widgetify of DoodlyD.oo.io, another basic software tool packaged into a bespoke UI and trying to make itself the command center of work in its entire domain. All those products and services just stand between the user and the thing the user actually wants. The promise of AI agents for end users is that of a personal secretary that deals with all the product UI/UX bullshit so the user doesn't have to, ultimately turning these products into tool calls.

skywhopper 15 hours ago|||
Assuming this ever works, this is no threat to the SaaS industry. If anything it increases its importance.
TeMPOraL 15 hours ago||
SaaS products rely on resisting commoditization. AI agents defeat that.
cpursley 12 hours ago|||
You're probably thinking "SaaS for other tech end users". Most SaaS is not that.
phkahler 9 hours ago||
Isn't the majority of SaaS in ERP systems?
nprateem 14 hours ago|||
Yes, except for the fact that any non-trivial SaaS does non-trivial stuff that an agent will be able to call (as the "secretary") while the user still has to pay the subscription to use it.
TeMPOraL 6 hours ago|||
Yes, but now it's easier for other SaaS to compete on that, because vendors no longer get to bundle individual features under a common webshit UI and restrict users to whatever flows they support. There will be pressure to provide more focused features, because the combining and the UI chrome will be done by, or on the other side of, the AI agent.
ethbr1 13 hours ago||||
I don't think history opines favorably on companies that lose the last-mile connection with their customers.

For purposes of this thread, if chat AI becomes the primary business interface, then every service behind that becomes much easier to replace.

testbjjl 12 hours ago||||
Will the SaaS also use LLMs? If so, it opens the questions the article points out: why not do it ourselves, and do we really need the SaaS at all?
immibis 12 hours ago|||
That's the brilliance of AI - it doesn't matter if the product actually works or not. As long as it looks like it works and flatters the user enough, you get paid.

And if you build an AI interface to your product, you can make it not work in subtly the right ways to direct more money toward you. You can take advertising money to make the AI recommend certain products. You can make it give completely wrong answers about your competitors.

indymike 11 hours ago||||
> No one really wants another Widgetify of DoodlyD.oo.io

I keep hearing this, and I keep seeing people buy more Widgetify of DoodlyD.oo.io. I think this is more of a defensive sales tactic and cope for SaaS losing market share.

enraged_camel 11 hours ago|||
>> This is also why trying to add an AI agent chat into one's product is a fool's errand - the whole point of having general-purpose conversational AI is to turn the product into just another feature

We built an AI-powered chat interface as an alternative to a fully featured search UI for a product database and it has been one of the most popular features of 2025.

TeMPOraL 6 hours ago|||
Sure, but it would be even better if it was accessible by ChatGPT[0] and not some bespoke chat interface you created - because with ChatGPT, the AI has all the other tools and can actually use yours in intelligent ways as part of doing something for the user.

--

[0] - Or Claude, or Gemini.

btbuildem 8 hours ago|||
Right -- and that's likely because search was completely broken, people always complained about it, and nothing was ever done to improve it.
adriand 13 hours ago||||
The president of a company I work with is a youngish guy who has no technical skills, but is resourceful. He wanted updated analytic dashboards, but there’s no dev capacity for that right now. So he decided he was going to try his hand at building his own dashboard using Lovable, which is one of these AI app making outfits. I sent him a copy of the dev database and a few markdown files with explanations regarding certain trickier elements of the data structure and told him to give them to the AI, it will know what they mean. No updates yet, but I have every confidence he’ll figure it out.

Think about all the cycles this will save. The CEO codes his own dashboards. The OP has a point.

William_BB 12 hours ago|||
I'd argue it's not the CEO's job to code his own dashboards...

This sounds like a vibe coding side project. And I'm sorry, but whatever he builds will most likely become tech debt that has to be rewritten at some point.

nlake906 11 hours ago||
Or to steel-man it, it could also end up as a prototype that forced the end user to deal with decision points, and can serve as a framework for a much more specific requirements discussion.
btbuildem 8 hours ago||
Exactly -- vibe coded PoC becomes a living spec for prod
threetonesun 9 hours ago||||
We perpetually find worse and more expensive ways to reinvent Microsoft Access.
ogogmad 6 hours ago||
Interesting comment. Which ways have people been doing this?
htrp 8 hours ago||||
All tech problems are actually people problems.

Once the C-suite builds their own dashboards, they quickly decide what they actually need versus what is a nice-to-have.

DrScientist 7 hours ago||
And I wonder if they will discover that in order to interpret those numbers in a lot of cases they will need to bring in their direct reports to contextualise them.

If corporate decisions could be made purely from the data recorded then you don't need people to make those decisions. The reason you often do is that a lot of the critical information for decision making is brought in to the meeting out-of-band in people's heads.

hrimfaxi 12 hours ago||||
At a certain scale the CEO's time is likely better spent dictating the dashboard they want rather than implementing it themselves. But I guess to your point, the future may allow for the dictation to be the creation.
hobs 11 hours ago||
Agree, as engineers we should be making the car easier to operate instead of making everyone a mechanic.

Focus on the simple iteration loop of "why is it so hard to understand things about our product?" Maybe you can't fix it all today, but climb that hill instead of making your CEO spend sleepless nights on a thing you could probably build in a tenth of the time.

If you want to be a successful startup SaaS software engineer, then engaging with the current and common business cases, and being able to predict the standard set of problems they're going to want solved, turns you from "a guy" into "the guy".

SoftTalker 7 hours ago||
Most engineers like being mechanics though.
vlugovsky 12 hours ago||||
Totally!

I have also seen multiple similar use cases where non-technical users build internal tools and dashboards on top of existing data for our users (I'm building UI Bakery). This approach might feel a bit risky for some developers, but it reduces the number of iterations non-technical users need with developers to achieve what they want.

jimbokun 3 hours ago||||
Update us when you have an actual success story.
ceejayoz 7 hours ago|||
> No updates yet, but I have every confidence he’ll figure it out.

"It" being "that it's harder than it looks"?

adriand 7 hours ago||
> "It" being "that it's harder than it looks"?

Honestly, I'm not sure what to expect. There are clearly things he can't do (e.g. to make it work in prod, it needs to be in our environment, etc. etc.) but I wouldn't be at all surprised if he makes great headway. When he first asked me about it, I started typing out all the reasons it was a bad idea - and then I paused and thought, you know, I'm not here to put barriers in his path.

testbjjl 12 hours ago|||
The Excel holy grail. Dashboards are an abstraction; SaaS is an abstraction of an abstraction from the POV of customers suffering from one-size-fits-all. Shell scripts generated by LLMs that send automated, customized reports via email will make a lot of corporate heroes. In many instances, decision makers will have no need to log in to, learn, and use the SaaS at all.
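
As a concrete illustration of the "shell scripts that email reports" idea, here is a minimal sketch; the export path, column layout, recipient address, and the `mail` command are all assumptions:

```shell
#!/bin/sh
# Sketch of the kind of LLM-generated report script imagined above.

# Summarize a CSV export (id,amount) into a plain-text email body.
build_report() {
  awk -F, 'NR > 1 { total += $2; n++ }
           END    { printf "Orders: %d\nRevenue: %.2f\n", n, total }' "$1"
}

# Cron would run something like (commented out so the sketch is inert):
# build_report /exports/orders.csv | mail -s "Daily orders" boss@example.com
```

Everything here is standard POSIX tooling, which is part of the appeal: no login, no dashboard, just a report in the inbox.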
cdurth 11 hours ago||
I feel that large corps have guardrails that will limit this from happening. For SMBs, this is not a new problem; gritty IT guys have been doing this for decades. I inherit these bootstrapped reporting systems all the time. The issue is that when that person leaves, the system is no longer maintainable. I've yet to come across a customer who has any sort of usable documentation. The process then repeats itself when I take over, and presumably when I'm finished. With a SaaS product, you are at least paying for some support and visibility into the processes. I'm not really trying to make a point other than that this is not a new, but still intriguing, problem, and I'm not sure that LLMs will be some miracle answer, as organizations have trouble determining what they even need.
SoftTalker 7 hours ago||
Yes, back in the heyday of Visual Basic (mid-1990s) we had one business analyst who learned enough to build dashboard-like apps with charts and graphs and parameters and filters. He was quick at it and because it was outside of IT there was little in the way of process or guardrails to slow him down. Users loved what he did, but when he left there was nobody else who knew anything about it.
Crowberry 17 hours ago|||
I second this. Most of our customers' IT departments struggle to look at the responses from their failed API calls. Their systems and organisations are just too big.

As it stands today, just a bit of complexity is all that is required to make AI agents fail. I expect the gap to narrow over the years, of course. But capturing complex business logic and simplifying it will probably be useful, and worth paying for, a long time into the future.

agwp 13 hours ago|||
Also, for many larger companies, access to internal data and systems is only granted to authorized human users and approved applications/agents. Each approval is a separate request.

This means any "manual" or existing workflow touching several systems requires multiple IT permission requests with defined scopes. Even something as simple as a sales rep sending a DocuSign might need:

- CRM access

- DocuSign access

- Possibly access to ERP (if CRM isn't configured to pass signed contract status and value across)

- Possibly access to SharePoint / Power Automate (if finance/legal/someone else has created internal policy or process, e.g. saving a DocuSign PDF to a folder, inputting details for handover to fulfilment or client success, or submitting ticket to finance so invoicing can be set up)

thorawaytrav 13 hours ago||
It is much easier to use an AI API in my bank than to use any other tool. Since the AI is from MS, it's ready to go, whereas other tools require a few months of budgeting, licenses, certs, and so on. Since AI/Azure/AWS is already there and "certified to use," it is easier for me to patch something together using this stack than to even ask for open-source software.
jakeydus 9 hours ago||
God I hope my bank isn't using agents to build things. "Sorry, Grok misplaced your retirement funds."
j45 14 hours ago|||
Skills seem to be promising.

I never understood the excitement around agents; initially they just looked like Python scripts to me (CrewAI, 2-3 years ago).

The question is can people see that agents will evolve? Similar to how software evolves to handle the right depth of granularity.

igortg 13 hours ago|||
The beauty of HN: frequently comments are way more valuable than the article being shared
pseudosavant 9 hours ago|||
There are plenty of HN posts where I only read the comments because the discussion around that topic is the most interesting part.
chrisweekly 11 hours ago|||
Agreed. I've been on HN for 15 years, and IME maybe 90% of the value has come directly from comments (another 5% from links in commenters' profiles, and 5% from TFAs).
wouldbecouldbe 11 hours ago|||
Yeah, I think the real value is for the solo developers, indie hackers, and side projects.

Being unrestrained by team protocols, communications, Jira boards, product owners, grumpy seniors.

They can now deliver much more mature platforms, apps, and consumer products without any form of funding. You can easily save months on the basics like multi-tenant setup, tests, payment integration, mailing settings, etc.

It does seem likely that the software space is about to get even more crowded, but also much more feature-rich.

There is of course also a wide array of dreamers and visionaries who now jump into the developer role. Whether or not they are able to fully run their own platforms, I'm not sure. I did see many posts asking for help at some point.

rapind 10 hours ago||
As a solo grumpy senior, I've been pumping out features over the past 6 months and am now expanding into new markets.

I've also eliminated some third party SaaS integrations by creating slimmer and better integrated services directly into my platform. Which is an example of using AI to bring some features in-house, not primarily to save money (generally not worth the effort if that's the goal), but because it's simply better integrated and less frustrating than dealing with crappy third-party APIs.

lm28469 12 hours ago|||
> For one thing, the threat model assumes customers can build their own tools. Our end users can't.

Even if they could, the vast majority of them will be more than happy to send $20-100 per month your way to solve a problem than adding it to their stack of problems to solve internally.

popcorncowboy 12 hours ago|||
You'd hear this all the time back in the day: "Oh, you could build Twitter in a weekend." Yes. Also, very much no. This mentality is now on agent steroids. But the lesson is the same.
riantogo 4 hours ago|||
While most here are aligned with your perspective, and for good reasons, let me offer an alternate one. Today AI can take a goal and create a workflow for it, something orgs pay for in SaaS solutions.

AI does it imperfectly today, but if you had to bet, would you bet that it gets better or worse? I would bet that it improves, and, as is often the case with tech, at an exponential rate. Then we would see any workflow described in plain language and, within minutes, great software churned out. It might be a question of when, not if, that happens. And are you prepared for that state of affairs?

gwbas1c 7 hours ago|||
That's basically the conclusion of the article:

> But my key takeaway would be that if your product is just a SQL wrapper on a billing system, you now have thousands of competitors: engineers at your customers with a spare Friday afternoon with an agent.

I think the issue is that the "two of them explicitly cloned" were trying to clone something that's more than "just a SQL wrapper on a billing system."

baxtr 9 hours ago|||
> The bottleneck is still knowing what to build, not building. A lot of the value in our product is in decisions users don't even know we made for them. Domain expertise + tight feedback loop with users can't be replicated by an internal developer in an afternoon.

The cost of building is decreasing every year. The barriers to entry will come down year after year.

So what remains is knowing what to build (= product), as you write, and knowing how to get exposure (= marketing). Focus on these two, not on building things.

indymike 11 hours ago|||
> I believe that agents are a multiplier on existing velocity, not an equalizer.

Development tooling improvements are usually a temporary advantage and end up being table stakes after a bit of time. I'm more worried that as agentic tooling gets better, it obsoletes a lot of SaaS tools whose vendors count on users driving conventional point-and-click apps (web, mobile, and otherwise). I'm encouraging the companies I'm involved with to move toward more communication-driven microexperience UIs, such as email, Slack, and SMS, instead of more conventional UI.

adventured 11 hours ago||
What I'm seeing ad infinitum on HN in every thread on agentic development: yeah, but it really doesn't work perfectly today.

None of these people can apparently see beyond the tip of their nose. It doesn't matter if it takes a year, or three years, or five years, or ten years. Nothing can stop what's about to happen. If it takes ten years, so what, it's all going to get smashed and turned upside down. These agents will get a lot better over just the next three years. Ten years? Ha.

It's the personal interest bias that's tilting the time fog, it's desperation / wilful blindness. Millions of highly paid people with their livelihoods being disrupted rapidly, in full denial about what the world looks like just a few years out, so they shift the time thought markers to months or a year - which reveals just how fast this is all moving.

osn9363739 3 hours ago|||
I don't think you can guarantee it will get better. I'm sure it will improve from here but by how much? Have the exponential gains topped out? Maybe it's a slow slog over many years that isn't that disruptive. Has there been any technology that hasn't hit some kind of wall?
therealwhytry 10 hours ago||||
You aren't wrong, but you’re underestimating the inertia of $10M+/year B2B distributors. There are thousands of these in traditional sectors (pipe manufacturing, HVAC, etc.) that rely on hyper-localized logistics and century-old workflows.

Buyer pressure will eventually force process updates, but it is a slow burn. The bottleneck is rarely the tech or the partner, it's the internal culture. The software moves fast, but the people deeply integrated into physical infrastructure move 10x slower than you'd expect.

indymike 9 hours ago||
Internal culture changes on budget cycles, and right now, most companies are being pushed by investors to adopt AI. Have your sales team ask about AI budgeting vs. SaaS budgeting. I think you'll find that AI budget is available and conventional SaaS/IT budget isn't. Most managers are looking for a way to "adopt ai" so I think we're in a unique time.

> people deeply integrated into physical infrastructure move 10x slower than you'd expect.

My experience is yes, to move everyone. To do a pilot and prove the value? That's doable quickly, and if the pilot succeeds, the rest is fast.

MLgulabio 17 hours ago|||
The basic assumption is that we already see an LLM doing a basic level of software engineering.

This wasn't even an option for a lot of people before.

For example, even for non-software-engineering tasks, I'm at an advantage. "Ah, you have to analyse these 50 Excel files from someone else? I can write something for it."
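
The "50 Excel files" chore above is exactly the kind of one-off aggregation an LLM can generate on demand; a minimal sketch, with CSV standing in for .xlsx and the column names assumed:

```python
import csv
import glob
from collections import defaultdict

def combine_totals(pattern, key_col, value_col):
    """Fold a directory of per-team exports into one total per key.

    CSV is used as a stand-in for .xlsx here; for real Excel files you
    would swap the reader for pandas.read_excel or openpyxl.
    """
    totals = defaultdict(float)
    for path in sorted(glob.glob(pattern)):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                totals[row[key_col]] += float(row[value_col])
    return dict(totals)
```

The same loop structure handles 5 files or 500; only the glob pattern and the two column names change.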

I myself sometimes start creating a new small tool I wouldn't have tried before; now, instead of using some open source project, I can vibe-spec it and get something out.

The interesting thing is that if I keep the base of my specs, I might regenerate the tool later with a better code model.

And we still don't know what will happen as compute keeps expanding. Next year a few more DCs will come online, and this will continue for now.

Also, tools like Google Firebase will get 1000x more useful with vibe coding. They provide basic auth and stuff like that, so you can actually focus on writing your code.

cpursley 14 hours ago||
God, please no more firebase and mongo. AI coding is really really good at sql/relational data and there are services like supabase and neon that make it dead simple.
CuriouslyC 12 hours ago|||
Not sure why the argument is SaaS or build from the ground up. Agents can deploy open source projects and build new features on top of them pretty effectively.

I'm gonna go ahead and guess that if you have open source competitors, within two years your moat is going to become marketing/sales given how easy it'll be to have an agent deploy software and modify it.

reactordev 10 hours ago|||
Damn, reading this it's clear you two know your market well. Congratulations. This is the right way to do it. Domain expertise + tight feedback loop probably makes customers feel like they are part of the process and that you're there for them. Are you hiring?
ChicagoBoy11 11 hours ago|||
I'll add another obvious one: there's no rule that the SaaS vendor, with its obviously much deeper technical expertise, can't itself leverage these tools to achieve even greater velocity, thereby exacerbating the problem for "internal teams".
lwhi 13 hours ago|||
I completely agree.

Corporates are allergic to risk; not to spending money.

If anything, I feel that SaaS and application development for larger organisations stands to benefit from LLM assisted development.

Havoc 14 hours ago|||
That may well be an exception though. I'd imagine most SaaS builders are very much figuring things out as they go rather than starting with deep domain expertise
ethbr1 13 hours ago|||
Additional hot take: deep domain expertise is only value-add while actively adding new features

There's a huge subset of SaaS that's feature-frozen and being milked for ARR.

martinald 10 hours ago||
Agreed. This is why PE buys so many SaaS companies!

My article here isn't really aimed at "good" SaaS companies that put a lot of thought into design, UX and features. I'm thinking of the tens/hundreds of thousands+ of SaaS platforms that have been bought by PE or virtually abandoned, that don't work very well and send a 20% cost renewal through every year.

j45 13 hours ago|||
Deep domain expertise is not common enough.
mlinhares 10 hours ago|||
I'm going to predict there will be a movement to "build it in house with LLMs". These things are going to be expensive, they are going to fail to deliver or be updated, and there will be a huge bounce back. The cost of writing software is very small; the cost of running and scaling it is where the money is, and these people can't have their own IT teams rebuilding and maintaining all this stuff from scratch.

A lot of them will try though, just means more work for engineers in the future to clean this shit up.

SoftTalker 7 hours ago||
I think there's a good chance. These things happen in cycles. A few decades ago it was common for companies to have in-house software development using something like COBOL or maybe BASIC (and at that time, software development was a cost-center job; it paid OK but nothing like what it does today). Then there was a push for COTS (commercial off-the-shelf) software. Then the internet made SaaS possible and that got hot. Developer salaries exploded. Now LLMs have people saying "just do it in house" again. Lessons are forgotten and have to be re-learned.
mbesto 10 hours ago|||
> The bottleneck is still knowing what to build, not building.

My hot take - LLMs are exposing a whole bunch of developers to this reality.

CyanLite2 9 hours ago|||
I'm seeing that in the GRC industry where SaaS companies are getting churned out by an internal IT guy who automated their "Excel" as a database.
cm277 14 hours ago|||
Same background as you and I fully agree. Again and again you see market/economic takes from technologists. This is not a technology question (yes, LLMs work), it's an economics question: what do LLMs disrupt?

If your answer is "cost of developing code" (what TFA argues), please explain how previous waves of reducing cost of code (JVM, IDEs, post-Y2K Outsourcing) disrupted the ERP/b2b market. Oh wait, they didn't. The only real disruption in ERP in the last what 30 years, has been Cloud. Which is an economics disruption, not a technological one: cloud added complexity and points of failure and yet it still disrupted a ton of companies, because it enabled new business models (SaaS for one).

So far, the only disruption I can see coming from LLMs is middleware/integration where it could possibly simplify complexity and reduce overall costs, which if anything will help SaaS (reduction of cost of complements, classic Christensen).

ethbr1 13 hours ago||
I'll take a crack.

> what do LLMs disrupt? If your answer is "cost of developing code" (what TFA argues), please explain how previous waves of reducing cost of code (JVM, IDEs, post-Y2K Outsourcing) disrupted the ERP/b2b market. Oh wait, they didn't. The only real disruption in ERP in the last what 30 years, has been Cloud.

"Cost of developing code" is a trivial and incomplete answer.

Coding LLMs disrupt (or will, in the immediate future)

(1) time to develop code (with cost as a second order effect)

(2) expertise to develop code

None of the analogs you provided are a correct match for these.

A closer match would be Excel.

It improved the speed and lowered the expertise required to do what people had previously been doing.

And most importantly, as a consequence of the latter especially, more types of people could leverage computing to do more of their work faster.

The risk to B2B SaaS isn't that a neophyte business analyst is going to recreate your app overnight...

... the risk is that 500+ neophyte business analysts each have a chance of replacing your SaaS app, every day, every year.

Because they only really need to get lucky once, and then the organization shifts support to in-house LLM-augmented development.

The only reason most non-technology businesses didn't do in-house custom development thus far was that ROI on employing a software development team didn't make sense for them. Suddenly that's no longer a blocker.

To the point about cloud, what did it disrupt?

(1) time to deploy code (with cost as a second order effect)

(2) expertise to deploy code

B2B SaaS should be scared, unless they're continuously developing useful features, have a deep moat, and are operating at volumes that allow them to be priced competitively.

Coding agents and custom in-house development are absolutely going to kill the 'X-for-Y' simple SaaS clone business model (anything easily cloneable).

agentultra 11 hours ago|||
This seems to assume that these non-technical people have the expertise to evaluate LLM/agent generated solutions.

The problem with this tooling is that it cannot deploy code on its own. It needs a human to take the fall when it generates errors that lose people money, break laws, cause harm, etc. Humans are supposed to be reviewing all of the code before it goes out, but your assumption is that people without the skills to read code, let alone deploy and run it, are going to do it with agents without a human in the loop.

All those non-technical users have to do is approve the app, manage to deploy and run it themselves somehow, and wait for the security breach that loses them their jobs.

ethbr1 11 hours ago||
I think you're underestimating (1) how bad most B2B is (from a bug and security vulnerability perspective) & (2) how little B2B companies' engineers understand about how their customers are using their products.

The frequency of mind-bogglingly stupid 1+1=3 errors (where 1+1 is a specific well-known problem in a business domain and 3 is the known answer) cuts against your 'professional SaaS can do it better' argument.

And to be clear: I'm talking about 'outsourced dev to lowest-cost resources' B2B SaaS, not 'have a team of shit-hot developers' SaaS.

The former of which, sadly, comprises the bulk of the industry. Especially after PE acquisition of products.

Furthermore, I'm not convinced that coding LLMs + scanning aren't capable of surpassing the average developer in code security. Especially since it's a brute force problem: 'ensure there's no gap by meticulously checking each of 500 things.'

Auto code scanning for security hasn't been a significant area of investment because the benefits are nebulous. If you already must have human developers writing code, then why not have them also review it?

In contrast, scanning being a requirement to enabling fast-path citizen-developer LLM app creation changes the value proposition (and thus incentive to build good, quality products).

It's been mentioned in other threads, but Fire/Supabase-style 'bolt-on security-critical components' is the short term solution I'd expect to evolve. There's no reason from-scratch auth / object storage / RBAC needs to be built most of the time.

agentultra 10 hours ago||
I’m just imagining the sweat on the poor IT managers’ brow.

They already lock down everything enterprise wide and hate low-code apps and services.

But in this day and age, who knows. The cynical take is that it doesn’t matter and nobody cares. Have your remaining handful of employees generate the software they need from the magic box. If there’s a security breach and they expose customer data again… who cares?

ethbr1 5 hours ago||
That sweat doesn't lessen dealing with nightmare fly-by-night vendors for whatever business application a department wants.

Sometimes, the devil you know is preferable -- at least then you control the source.

Folks fail to realize the status quo is often the status quo because it's optimal for a historical set of conditions.

Previously... what would your average business user be able to do productively with an IDE? Weighed against security risks? And so that's the point that was established.

If suddenly that business user can add substantial amounts of value to the org, I'd be very surprised if that point doesn't shift.

It matters AND...

agentultra 1 hour ago||
Yeah. I used to manage a team that built a kind of low-code SaaS solution to several big enterprise clients. I sat in on several calls with our sales people and the customer’s IT department.

They liked buying SAP or M$ because it was fully integrated and turnkey. Every SaaS vendor they added had to be SOC2, authenticate with SAML, and each integration had to be audited… it was a lot of work for them.

And we were highly trained, certified developers. I had to sign documents and verify our stack with regulatory consultants.

I just don’t see that fear going away with agents and LLM prompts from frontline workers who have no training in IT security, management, etc. There’s a reason why AI tech needs humans in the loop: to take the blame when they thumbs up what it outputs.

sifar 6 hours ago|||
>> 2) expertise to develop code

This is wrong. Paradoxically, you need expertise to develop code with an LLM.

ethbr1 5 hours ago||
For LOB CRUD apps? We blew past that capability point months ago.
testbjjl 12 hours ago|||
Maybe expect more of what paragraph 2 describes, for flat fees or cheaper, as many people who know how to code find themselves without employment?
Bombthecat 17 hours ago|||
Yep. AI and agents help to centralise, not decentralise
ben_w 14 hours ago||
For now.

I'm expecting this to be a bubble, and that bubble to burst; when it does, whatever's the top model at that point can likely still be distilled relatively cheaply like all other models have been.

That, combined with my expectations that consumer RAM prices will return to their trend and decrease in price, means that if the bubble pops in the year 20XX, whatever performance was bleeding edge at the pop, runs on a high-end smartphone in the year 20XX+5.

j45 14 hours ago||
The technology of LLMs is already applicable to valuable enough problems, therefore it won’t be a bubble.

The world might be holding AI to a standard of needing to be a world-beater to succeed, but that's simply not the case. AI is software, and it can solve problems other software can't.

ben_w 13 hours ago||
> The technology of LLMs is already applicable to valuable enough problems, therefore it won’t be a bubble.

Dot-com was a bubble despite being applicable to valuable problems. So were railways when the US had a bubble on those.

Bubbles don't just mean tulips.

What we've got right now, I'm saying the money will run out and not all the current players will win any money from all their spending. It's even possible that *none* of the current players win, even when everyone uses it all the time, precisely due to the scenario you replied to:

Runs on a local device, no way to extract profit to repay the cost of training.

Bombthecat 9 hours ago|||
The big models will never run locally, and I doubt that titan and co will run locally; they just need way too many resources
ben_w 7 hours ago||
"never" vs https://en.wikipedia.org/wiki/Koomey%27s_law

Observably, the biggest models we have right now have similar complexity to a rodent's brain, which runs on far less power. The limiting factor for chips in your phone is power, and power efficiency is improving rapidly.

somewhereoutth 6 hours ago||||
> repay the cost of training

Key point. Once people realize that no money can be made from LLMs, they will stop training new ones. Eventually the old ones will become hopelessly out-of-date, and LLMs will fade into history.

j45 13 hours ago|||
AI is much more developed at its entrance to the economy than most anything was during the dot com boom.

Dot com is not super comparable to AI.

Dot com had very few users on the internet compared to today.

Dot com did not have ubiquitous e-commerce. The small group of users didn’t spend online.

Search engines didn’t have the amount of information online that there is today.

Dot com did not have usable high speed mobile data, or broadband available for the masses.

Dot com did not have social media to share and show how things work as quickly.

LLMs were largely applicable to industry when GPT-4 came out. We didn't have the new terms of reference for non-deterministic software.

ben_w 13 hours ago||
None of that matters to this point, though I'd dispute some of it if I thought it did.

"Can they keep charging money for it?", that's the question that matters here.

j45 9 hours ago||
It matters to the comparison being made between the dot com boom and an ai boom, they have completely different fundamentals outside of the hype train.

There were not as many consumers buying online during dot com boom.

To the extent that currently more is being spent on AI than on anything in the dot com boom.

Nor did companies run their businesses in the cloud, because there was no real broadband.

There’s no doubt there’s a hype train, there is also an adoption and disruption train, which is also happening.

I could go on, but I’m comfortable with seeing how well this comment ages.

ben_w 7 hours ago||
I don't pay anyone for an image generator AI, because I can run an adequate image generator locally on my own personal computer.

My computer doesn't have enough RAM to run the state of the art in free LLMs, but such computers can be bought and are even affordable by any business and a lot of hobbyists.

Given this, the only way for model providers to stay ahead is to spend a lot on training ever better models to beat the free ones that are being given away. And by "spend a lot" I mean they are making a loss.

This means that the similarity with the dot com bubble can be expressed with the phrase "losing money on every sale and making up for it in volume".

Hardware efficiency is also still improving; just as I can even run that image model locally on my phone, an LLM equivalent to SOTA today should run on a high-end smartphone in 2030.

Not much room to charge people for what runs on-device.

So, they are in a Red Queen's race, running as hard as they can just to stay where they are. And where they are today, is losing money.

j45 3 hours ago||
You don't need system RAM to run LLMs; your graphics card's memory is what matters.

The best price for dollar/watt of electricity to run LLMs locally is currently apple gear.

I thought the same as you but I'm still able to run better and better models on a 3-4 year old Mac.

At the rate it's improving, even with the big models, people optimize their prompts so they run efficiently with tokens, and when they do.. guess what can run locally.

The dot com bubble didn't have comparable online sales. There were barely any users online lol. Very few ecommerce websites.

Let alone ones with credit card processing.

Internet users by year: https://www.visualcapitalist.com/visualized-the-growth-of-gl...

The ecommerce stats by year will interest you.

mikert89 13 hours ago|||
A year or two from now it will be trivial to copy your product
broast 8 hours ago|||
But what about their two year head start? If everyone is executing at the speed of light, there will still be winners and losers
f311a 13 hours ago||||
It was always relatively easy to copy many SaaS services, especially bootstrapped ones. Unless everybody wants to make and run their service locally, very little changes.
William_BB 12 hours ago||||
Did you claim the same a year or two ago? Why or why not?
Bridged7756 11 hours ago||||
Yes friend. That's what people said 2 years ago. Next?
latentsea 11 hours ago||
There's been an obvious step change on the coding front from two years ago, and it feels obvious to me there's going to be another. The difference now is that the people working on systems to clone SaaS at scale are likely starting to put real, sustained effort in, now that agents are good enough to accomplish subsets of it, can be improved much further with the right techniques and orchestration, and will themselves get better over the next two years along with all the improvements and build-up of tooling. Right now feels like one of those "skate to where the puck is going to be" moments in time.
raw_anon_1111 13 hours ago|||
And? How easy will it be to iterate the product and build features and a user experience as it is used in the wild? How easy will it be to find customers willing to pay you money for it?
xtiansimon 13 hours ago||
Doesn’t have to be a commercial solution to change the game. There’s a lot of room between the commercial product and ‘Our end users… current "system" is Excel.’ Especially if the market moves towards making useful APIs at the ERP and vendors endpoints.
raw_anon_1111 12 hours ago||
And how would the outcome be different after a couple of years than the internally built Excel file with VBScript, Access and VB6 apps built by non developers back in the day?
j45 14 hours ago|||
Customers will be able to build skills.

Not being able to see this is a blind spot.

Domain expertise in an industry usually sits within the client, and is serviced to some degree by vendors.

Not all CEOs have deep domain expertise, nor do they often enough stick to one domain. Maybe that’s where a gap exists.

bpavuk 13 hours ago||
> The bottleneck is still knowing what to build, not building.

shit, I'm stealing that quote! it's easier than ever to seize an opportunity (i.e. build a tool that fixes problem X without causing annoying Y and Z side effects), but finding one is almost as hard as it has been since the beginning of the world wide web.

jwr 18 hours ago||
I am the founder of a niche SaaS (https://partsbox.com/ — software for managing electronic parts inventory and production). While I am somewhat worried about AI capabilities, I'm not losing too much sleep over it.

The worry is that customers who do not realize the full depth of the problem will implement their own app using AI. But that happens today, too: people use spreadsheets to manage their electronic parts (please don't) and BOMs (bills of materials). The spreadsheet is my biggest competitor.

I've been designing and building the software for 10 years now and most of the difficulty and complexity is not in the code. Coding is the last part, and the easiest one. The real value is in understanding the world (the processes involved) and modeling it in a way that cuts a good compromise between ease of use and complexity.

Sadly, as I found out, once you spend a lot of time thinking and come up with a model, copycats will clone that (as well as they can, but superficially it will look similar).

ehnto 16 hours ago||
> The real value is in understanding the world (the processes involved) and modeling it in a way that cuts a good compromise between ease of use and complexity.

Which I don't think can be replaced by AI in a lot of cases. I think in the software world we are used to things being shared, open and easily knowable, but a great deal of industry and enterprise domain knowledge is locked up inside companies and will not be in the training data.

That's why it's such a big deal for an enterprise to have on prem tools, to avoid leaking industry processes and "secrets" (the secrets are boring, but still secrets).

A little career advice in there too I guess. At least for now, you're a bit more secure as a developer in industries that aren't themselves software, is my guess.

jwr 16 hours ago||
> Which I don't think can be replaced by AI in a lot of cases

Yes. I try to visit my customers as often as I can, to learn how they work and to see the production processes on site. I consider it to be one of the most valuable things I can do for the future of my business.

dismalpedigree 14 hours ago|||
Not specific to PartsBox, but we use Inventree (open source similar to PartsBox) and self host it. Over the past few months we noticed certain pain points in our workflow. Rather than looking for a new tool, we used Claude Code to write some backend services and some frontend modifications. Took 2 days of tinkering. Has easily saved that much time since we implemented it.

While rolling the whole solution with an AI agent is not practical, taking an open source starting point and using AI to overcome specific workflow pain points, as well as add features, allows me to have a lower-cost solution specifically tailored to our needs.
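Not speaking for this setup specifically, but this kind of bolt-on often takes the shape of a small script against the self-hosted tool's REST API. A hedged sketch, where every endpoint path, field name, and threshold is a hypothetical illustration rather than Inventree's real schema:

```python
import json

API_BASE = "https://inventory.example.com/api"  # hypothetical instance

def low_stock_report(parts: list, threshold: int = 10) -> list:
    """Flag parts below a reorder threshold -- the kind of small,
    workflow-specific feature worth generating rather than buying."""
    return [p["name"] for p in parts if p["in_stock"] < threshold]

def reorder_request(part_name: str, qty: int):
    """Build (url, body) for a hypothetical purchase-order endpoint."""
    body = json.dumps({"part": part_name, "quantity": qty}).encode()
    return f"{API_BASE}/purchase-orders/", body

# Canned data in place of a live API response, for illustration.
parts = [{"name": "10k resistor", "in_stock": 3},
         {"name": "0.1uF cap", "in_stock": 250}]
print(low_stock_report(parts))  # ['10k resistor']
```

The value is exactly what the parent describes: the logic is trivial, but it is tailored to one team's workflow, which no vendor roadmap will prioritize.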

jwr 14 hours ago|||
I would include this in the "spreadsheet" metaphor. I do not know your use case, so please don't take this as addressed to you specifically, but I found that there is a learning/complexity problem: many people do not realize there is much more to inventory and production management than it seems. It might seem easy to AI-code something, only to find out later that things could have been done much better.

This is actually a serious problem for me: my SaaS has a lot of very complex functionality under the hood, but it is not easily visible, and importantly it isn't necessarily appreciated when making a buying decision. Lot control is a good example: most people think it is only needed for coding batches of expiring products. In reality, it's an essential feature that pretty much everyone needs, because it lets you treat some inventory of the same part (e.g. a reel) differently from other inventory of this part (e.g. cut tape) and track those separately.

AI-coding will help people get the features they know they need, but it won't guide them to the features they don't know they could use.

solaire_oa 9 hours ago||||
First, I'll second that I've applied agentic LLMs to an open source project to fix bugs and forcibly coerce it to act in ways that the maintainer may or may not approve of. It has been remarkably effective, so long as I'm willing to apply patches or maintain a fork of the project (trivial, since this particular open-source project is abandoned anyway).

That said, the act of doing this (using LLMs to dominate somebody's legitimately intelligent and unique work) feels not only discourteous but, worse, like a short-term solution.

I'm convinced that it's a short-term solution NOT because I don't think that LLMs can continuously maintain these projects, but because open-source itself is going to be clawed back. The raison d'être of open-source is personal pride, hiring, collaboration, enjoyment, trust, etc. These motivations make less sense in an LLM-fueled world.

My prediction is that useful and well maintained open-source projects like the ones we're hijacking will become fewer and farther between.

dismalpedigree 7 hours ago||
I support Inventree. Even have raised PRs. I'm specifically referring to things that are custom to our workflow.
solaire_oa 6 hours ago||
That's great! I'm not accusing you or anything, or myself for that matter, I'm only lamenting that open source will likely decay due to LLMs. Or at the very least, open source will no longer be seen as the height of virtue in our industry.
LunaSea 10 hours ago|||
Won't this breakdown if you need to pull new changes from the original project?
dismalpedigree 7 hours ago||
No. Written against the documented APIs and extension points.
nitwit005 5 hours ago||
"in ways that the maintainer may or may not approve of" does not sound like using the documented extension points.
eikenberry 6 hours ago|||
> Coding is the last part, and the easiest one. The real value is in understanding the world (the processes involved) and modeling it in a way that cuts a good compromise between ease of use and complexity.

Coding and modeling are interleaved. Prototyping is basically thinking through the models you are considering. If you split the two, you'll end up with a bad model, bad software or both.

lonelyasacloud 14 hours ago|||
Coding agents like Claude are just one line of AI making inroads. There are a lot of tasks that can be almost, but not quite, implemented effectively with existing tools like Excel and Word. As they seek a return on their investments, are MS likely to target those nearly-there cases with AI in their Access, Excel, Word etc. product lines?
a2code 15 hours ago|||
I have two tech questions about PartsBox. Why Clojure? Why not CL (lack of SaaS-related features)?
jwr 15 hours ago||
Clojure is just better than CL in pretty much every respect. Excellent and well designed standard library, great concurrency primitives, core.async, built-in transducers (CL has SERIES which does a kind-of similar thing, but isn't as well designed and integrated) and the dominant immutability all let me write more maintainable code. Also, I can re-use model code on the client side (ClojureScript), so there is lots of code sharing, and I don't have to serialize to a crippled format (JSON), my data can pass from server to client and back intact (with sets, keywords, and other rich data types).

I used to love CL and wrote quite a bit of code in it, but since Clojure came along I can't really see any reason to go back.

a2code 14 hours ago||
I did not try Clojure so I cannot comment on how well implemented the features you quote are, when compared to CL. All I can say is that CL also provides much of the same functionality with its standard, cl-async, lparallel, parenscript, while (im)mutability is a matter of preference (IMO correct decision by CL) rather than dominance. The way I see it, is that CL is superior (opinion) due to reader macros and native compilation, rather than bytecode JVM.
jwr 7 hours ago||
I used CL for many years before I switched to Clojure. And by "used" I mean really used it, not just for applications, but I also dove into CL implementations, for example I added return-from-frame (also known as debug-return) to CMUCL.

So, I kind of know what I'm talking about :-) And I don't miss anything from CL: I honestly can't find a single reason to switch back to CL.

TeMPOraL 15 hours ago||
The problem IMO is simpler.

You have a product, which sits between your users and what your users want. That product has a UI for users to operate. Many (most, I imagine) users would prefer to hire an assistant to operate that UI for them, since the UI is not the actual value your service provides. Now, s/assistant/AI agent/ and you can see that your product turns into a tool call.

So the simpler problem is that your product now becomes merely a tool call for AI agents. That's what users want. Many SaaS companies won't like that, because it removes their advertising channel and commoditizes their product.

It's the same reason why API access to SaaS is usually restricted or not available for the users except biggest customers. LLMs defeat that by turning the entire human experience into an API, without explicit coding.
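Concretely, "your product becomes a tool call" means the agent sees a function schema instead of your screens. A minimal sketch of the pattern; the function name and parameters here are hypothetical, and the schema follows the common OpenAI-style shape only as an illustration:

```python
import json

def search_inventory(part_number: str, min_qty: int = 1) -> dict:
    """Stand-in for the SaaS feature the user actually wants."""
    return {"part_number": part_number, "in_stock": max(min_qty, 3)}

# The description the agent sees in place of your UI.
tool_schema = {
    "name": "search_inventory",
    "description": "Look up stock for a part number.",
    "parameters": {
        "type": "object",
        "properties": {
            "part_number": {"type": "string"},
            "min_qty": {"type": "integer", "default": 1},
        },
        "required": ["part_number"],
    },
}

# Agent-side dispatch: the whole "experience" reduces to a name
# plus JSON arguments.
call = {"name": "search_inventory",
        "arguments": json.dumps({"part_number": "R-1001"})}
result = search_inventory(**json.loads(call["arguments"]))
print(result["in_stock"])  # 3
```

Once the interaction is this thin, the branding, upsells, and navigation that SaaS vendors rely on never reach the user.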

mjr00 10 hours ago|||
> So the simpler problem is that your product now becomes merely a tool call for AI agents. That's what users want.

This is a big assumption, and not one I've seen in product testing. Open-ended human language is not a good interface for highly detailed technical work, at least not with the current state of LLMs.

> It's the same reason why API access to SaaS is usually restricted or not available for the users except biggest customers.

I don't... think this is true? Off the top of my head, aside from cloud providers like AWS/GCP/Azure which obviously provide APIs: Salesforce, Hubspot, Jira all provide APIs either alongside basic plans or as a small upsell. Certainly not just for the biggest customers. You're probably thinking of social media where Twitter/Reddit/FB/etc don't really give API access, but those aren't really B2B SaaS products.

jwr 14 hours ago||||
Hmm. I think none of what you wrote is applicable to my specific SaaS.
MangoToupe 15 hours ago|||
> Many (most, I imagine) users would prefer to hire an assistant to operate that UI for them, since UI is not the actual value your service provides

That's ridiculous. A good UI will improve on an assistant in every way.

Do assistants have some use? Sure—querying.

ben_w 14 hours ago||
> A good ui will improve on assistant in every way.

True.

"Good" UI seems to be in short supply these days, even from trillion dollar corporations.

But even with that, it is still not "ridiculous" for many to prefer to "hire an assistant to operate that UI for them". A lot of the complexity in UI is the balance between keeping common tasks highly visible without hiding the occasional-use stuff, allowing users to explore and learn more about what can be done without overwhelming them.

If I want a spaceship in Blender and don't care which one you get — right now the spaceship models that any GenAI would give you are "pick your poison" between Diffusion models' weirdness and the 3D equivalent of the pelican-on-a-bike weirdness — the easiest UI is to say (or type) "give me a spaceship", not doing all the steps by hand.

If you have some unformatted time series data and want to use it to forecast the next quarter, you could manually enter it into a spreadsheet, or you could say/type "here's a JPG of some time series data, use it to forecast the next quarter".

Again, just to be clear, I agree with everyone saying current AI is only mediocre in performance, it does make mistakes and shouldn't be relied upon yet. But the error rates are going down, the task horizons they don't suck at are going up. I expect the money to run out before they get good enough to take on all SaaS, but at the same time they're already good enough to be interesting.

jillesvangurp 16 hours ago||
I'm seeing the opposite. AI is actually increasing demand for what would previously have been too expensive: bespoke integrations and solutions. Those are now becoming more feasible and doable. There is also the notion that a lot of companies are actually very behind on embracing software or SaaS. Especially in manufacturing it's common to see operations that haven't materially changed anything in decades.

The fallacy here is believing we already had all the software we were going to use and that AI is now eliminating 90% of the work of creating that. The reality is inverted, we only had a fraction of the software that is now becoming possible and we'll be busy using our new AI tools to create absolutely massive amounts of it over the next years. The ambition level got raised quite a bit recently and that is starting to generate work that can only be done with the support of AI (or an absolutely massive old school development budget).

It's going to require different skills and probably involve a lot more domain experts picking up easy to use AI tools to do things themselves that they previously would have needed specialized programmers for. You get to skip that partially. But you still need to know what you are doing before you can ask for sensible things to get done. Especially when things are mission critical, you kind of want to know stuff works properly and that there's no million $ mistakes lurking anywhere.

Our typical customers would need help with all of that. The amount of times I've had to deal with a customer that had vibe coded anything by themselves remains zero. Just not a thing in the industry. Most of them are still juggling spreadsheets and ERP systems.

pjc50 15 hours ago||
> Especially in manufacturing it's common to see operations that haven't materially changed anything in decades.

> Especially when things are mission critical, you kind of want to know stuff works properly and that there's no million $ mistakes lurking anywhere.

This is what I'm wondering about; things don't change because the company doesn't like change, and the risks of change are very real. So changes either have to be super incremental, or offer such a compelling advantage that they can't be ignored. And AI just doesn't offer the sort of reproducible, reliable results that manufacturing absolutely depends on.

jillesvangurp 13 hours ago||
I don't think that's entirely correct. You can do TDD style development with AI and it leads to better results.

It's just that messing with a company's core manufacturing is something they don't do lightly. They work with multiple shifts of staff that are supposed to work in these environments. People generally don't have a lot of computer skills, so things need to be simple, repeatable, and easy to explain. Any issues with production means cost increases, delays happen, and money is lost.

That being said, these companies are always looking for better ways to do stuff, to eliminate work that is not needed, etc. That's your way in. If there's a demonstrable ROI, most companies get a lot less risk averse.

That used to involve bespoke software integrations, developed at great cost and with some non-trivial risk by expensive software agencies. Some of these projects fail, and failure is expensive. AI potentially reduces cost and risk here. E.g. a generic SAP integration isn't rocket science to vibe code. We're talking well-documented and widely used APIs here. You'd obviously want some oversight and testing. But it's the type of low-level plumbing that traditionally gets outsourced to low-wage countries. Using AI here is probably already happening at a large scale.
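As a sketch of the kind of plumbing meant here (the host, service, and entity names below are hypothetical, not any specific SAP system), much of the agent-written code amounts to building OData queries and mapping the returned JSON:

```python
import urllib.parse

# Hypothetical SAP OData service root; a real integration would use the
# customer's gateway URL plus authentication.
BASE = "https://sap.example.com/sap/opu/odata/sap/API_BUSINESS_PARTNER"

def odata_query(entity: str, filters: dict[str, str], top: int = 50) -> str:
    """Build an OData query URL with $filter, $top, and JSON output."""
    clauses = [f"{field} eq '{value}'" for field, value in filters.items()]
    params = {"$filter": " and ".join(clauses), "$top": str(top), "$format": "json"}
    return f"{BASE}/{entity}?" + urllib.parse.urlencode(params)

url = odata_query("A_BusinessPartner", {"BusinessPartnerCategory": "2"})
# A real integration would now fetch `url` with an authenticated HTTP client
# and map the JSON payload into the internal system.
```

This is exactly the sort of repetitive, well-specified glue where oversight and tests matter more than the typing.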

nitwit005 4 hours ago||
If you look at past automation efforts, the automation initially made workers more valuable. They could produce more, at lower cost, and there was plenty of demand. Eventually, you hit the realistic limits of demand though, and the number of workers started to drop.

If software gets cheaper, people will buy more of it, to a point.

lateforwork 1 day ago||
This article made no sense to me. It is talking about AI-generated code eating SaaS. That's not what is going to replace SaaS. When AI is able to do the job itself — without generating code — that's what is going to replace SaaS.

AI-generated code still requires software engineers to build, test, debug, deploy, secure, monitor, be on-call, handle incidents, and so on. That's very expensive. It is much cheaper to pay a small monthly fee to a SaaS company.

coffeebeqn 8 hours ago||
Note that there is zero actual sales/renewal data quoted in the article, so this is all the author's vibes, based on how he has been able to vibe code a few things for a one-person team to use.
mjr00 1 day ago|||
> AI-generated code still requires software engineers to build, test, debug, deploy, ensure security, monitor, be on-call, handle incidents, and so on. That's very expensive. It is much cheaper to pay a small monthly fee to a SaaS company.

Yeah, it's a fundamental misunderstanding of economies of scale. If you build an in-house app that does X, you incur 100% of the maintenance costs. If you're subscribed to a SaaS product, you're paying roughly 1/N of the maintenance costs, where N is the number of customers.
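Back of the envelope, with all figures invented for illustration, the gap looks like this:

```python
# Hypothetical figures: annual cost of keeping comparable software running,
# in-house vs. shared across a SaaS vendor's customer base.
annual_maintenance = 300_000   # dev/ops cost to keep the product alive
customers = 500                # N customers sharing that cost via the vendor
vendor_markup = 4              # vendor charges a multiple of its unit cost

in_house_cost = annual_maintenance               # you carry 100% of it
saas_price = annual_maintenance / customers * vendor_markup

print(in_house_cost, saas_price)  # 300000 vs 2400.0 per year
```

Even with a healthy vendor markup, the shared-cost side wins by two orders of magnitude in this toy example.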

I only see AI-generated code replacing things that never made sense as a SaaS anyway. It's telling that the author's only concrete example of a replaced SaaS product is Retool, which is much less about SaaS and much more a product that's been fundamentally deprecated.

Wake me up when we see swaths of companies AI-coding internal Jira ("just an issue tracker") and Github Enterprise ("just a browser-based wrapper over git") clones.

sayamqazi 17 hours ago||
"Wake me up when we see swaths of companies AI-coding internal Jira".

This shouldn't be the goal. The goal should be to build an AI that can tell you what is done and what needs to be done, i.e. replace Jira with natural interactions. An AI that can "see" and "understand" your project. An AI that can see it, understand it, build it, and modify it. I know this is not happening for the next few decades or so.

mjr00 11 hours ago||
> This shouldn't be the goal. The goal should be to build an AI that can tell you what is done and what needs to be done, i.e. replace Jira with natural interactions. An AI that can "see" and "understand" your project. An AI that can see it, understand it, build it, and modify it.

The difference is that an AI-coded internal Jira clone is something that could realistically happen today. Vague notions of AI "understanding" anything are not currently realistic and won't be for an indeterminate amount of time, which could mean next year, 30 years from now, or never. I don't consider that worth discussing.

jdthedisciple 18 hours ago|||
Perhaps OP's argument still applies to dev-oriented SaaS.

Are you, as a dev, still going to pay for analytics and dashboards that you could have Claude stand up in 5 minutes instead?

siva7 3 hours ago|||
Analytics like what?! Sentry? See, I'm the first one to waste 15+ hours of my own time Claude-vibing some barely working analytics in order to save 15 dollars by not paying for a proven solution from professionals who really understand that problem domain, but we all agree how dumb this is. And if I really can vibe code those analytics in 5 minutes, then that thing was never a proven SaaS business in the first place, and my use case is almost certainly a toy app with zero users.
bccdee 7 hours ago||||
The value proposition of SaaS is ultimately just that it's not a hack.

Most SaaS products could be replaced by a form + spreadsheet + email workflow, and the reason they aren't is that people don't want to be dealing with a hacky solution. Devs can hack together a nice little webapp instead of a network of spreadsheets, but it's still a hack. Factoring in AI assistance, perhaps SaaS is now competing with "something I hacked together in a week" as opposed to "something I hacked together in a month," but it's a hack either way.

I am absolutely going to pay for analytics and dashboards, because I don't want the operational concerns of my Elasticsearch analytics cluster getting in the way of the alarm that goes off when my primary database catches fire. Ops visibility is too important to be a hack, regardless of how quickly I could implement that hack.

rhubarbtree 18 hours ago|||
Yes, because then I know the code is properly engineered, tested, maintained and supported.

Generating code is one part of software engineering, and software engineering is a small part of SaaS.

InvertedRhodium 16 hours ago|||
How do you/I know that? I implemented OpenTelemetry in a project of mine recently and was shocked to see the number of AI authored commits in the git repository.
re-thc 15 hours ago||
> How do you/I know that? I implemented OpenTelemetry in a project of mine recently and was shocked to see the number of AI authored commits

Do you pay for OpenTelemetry? How is this related?

InvertedRhodium 14 hours ago||
If I did, I likely wouldn't have access to the source code and wouldn't be able to verify the degree of AI input.

So, I ask again - how do you know that the service you're paying for is all of those things?

jdthedisciple 17 hours ago|||
I'd love for SaaS to keep thriving, but the flip side is the harsh reality that my own second thought these days is immediately "how easily will an agent replace my idea? yeah, probably quite easily..."
bccdee 7 hours ago|||
Ideas were never worth much. Implementing a quick prototype was always pretty simple. How easy is it, with modern tooling, to build a collaborative web editor? Just slap together prosemirror and automerge and you're already there. Still, nobody has displaced Google Docs.
vdfs 12 hours ago|||
Before AI, people would search for a free or open-source alternative before buying a SaaS.
x0x0 7 hours ago||
The bit about building an internal app for e.g. marketing or sales is super fun. Getting calls starting at 8am EST because they then (reasonably!) expect it to work is less so. Software still has an enormous KTLO (keep-the-lights-on) tax, and until that changes I'm skeptical about the entire thesis.

Not to mention the author appears to run a 1-2 person company, so ... yeah. AI thought leadership ahoy.

andy_ppp 1 day ago||
I’m currently working on an in-house ERP and inventory system for a specific kind of business. With very few people, instead of paying loads of money for some off-the-shelf solution to your software needs, you can now get something completely bespoke to your business. I think AI enables the age of boutique software that works fantastically for businesses; agencies will need to dramatically reduce their prices to compete with in-house teams.

I’m pretty certain AI quadruples my output at least and facilitates fixing, improving and upgrading poor quality inherited software much better than in the past. Why pay for SaaS when you can build something “good enough” in a week or two? You also get exactly what you want rather than some £300k per year CRM that will double or treble in price and never quite be what you wanted.

Aurornis 1 day ago||
> Why pay for SaaS when you can build something “good enough” in a week or two?

About a decade ago we worked with a partner company who was building their own in-house software for everything. They used it as one of their selling points and as a differentiator over competitors.

They could move fast and add little features quickly. It seemed cool at first.

The problems showed up later. Everything was a little bit fragile in subtle ways. New projects always worked well on the happy path, but then they’d change one thing and it would trigger a cascade of little unintended consequences that broke something else. No problem, they’d just have their in-house team work on it and push out a new deploy. That also seemed cool at first, until they accumulated a backlog of hard to diagnose issues. Then we were spending a lot of time trying to write up bug reports to describe the problem in enough detail for them to replicate, along with constant battles over tickets being closed with “works in the dev environment” or “cannot reproduce”.

> You also get exactly what you want rather than some £300k per year CRM

What’s the fully loaded (including taxes and benefits) cost of hiring enough extra developers and ops people to run and maintain the in house software, complete with someone to manage the project and enough people to handle ops coverage with room for rotations and allowing holidays off? It turns out the cost of running in-house software at scale is always a lot higher than 300K, unless the company can tolerate low ops coverage and gaps when people go on vacation.

torginus 16 hours ago|||
In my experience, SaaS is also fragile. It's real software, with real bugs. Most complex solutions offer extensible API/scripting support with tons of switchable/pluggable modules to integrate with your company's infra. This complexity most often means that your particular combination of features is almost wholly unique, and chances are your SaaS has much less mindshare/open-source support than any free solution.

We often ended up discarding large chunks of these poorly tested features, instead of trying to get them to work, and wrote our own. This got to a point where only the core platform was used, and replacing that seemed to be totally feasible.

SaaS often doesn't solve issues but replaces them - you substitute general engineering knowledge and open-source knowhow with proprietary one, and end up with experts in configuring commercial software - a skill that has very little value on the market where said software is not used, and chains you to a given vendor.

mattmanser 15 hours ago||
SaaS software, by its very nature, tends to get tested far more than your in-house software. It also has more devs working on it. It is almost certainly more stable and can handle more edge cases than anything developed in-house. It's always a question of scale.

But what you're describing is the narrow but deep vs wide but shallow problem. Most SaaS software is narrow but deep. Their solution is always going to be better than yours. But some SaaS software is wide but shallow, it's meant to fit a wide range of business processes. Its USP is that it does 95% of what you want.

It sounds like you were using a "wide-shallow" SaaS in a "narrow-deep" way, only using a specific part of the functionality. And that's where you hit the problems you saw.

torginus 14 hours ago|||
I am speaking from experience. We have a SaaS tool, for example, for CI/CD. It's super expensive and has a number of questionable design choices.

It's full of features, half of which either do not work, do not work as expected, or need some arcane domain knowledge to get working. These features provide 'user-friendly' abstractions over raw operations, like authing with various repos or downloading and publishing packages of different formats.

Underlying these tools are probably the same shell scripts and logic that we as devs are already familiar with. So often the exercise when forced to use these things is to get the underlying code to do what we want through this opaque intermediate layer.

Some people have resorted to fragile hacks, while others completely bypassed these proprietary mechanisms, and our build scripts are 'Run build.sh', with the logic being a shell or python script, which does all the requisite stuff.

And just like I mentioned in my previous post, SaaS software in this case might get tested more in general, but due to the sheer complexity it needs to support on the client side, testing every configuration at every client is not feasible.

At least the bugs we make, we can fix.

And while I'm sure some of this narrow-deep kinds of SaaS works well (I've had the pleasure to use Datadog, Tailscale, and some big cloud provider stuff tends to be great as well), that's not all there is that's out there and doesn't cover everything we need.

mattmanser 13 hours ago||
That's my point, "It's full of features". You said it yourself.

You have bought a shallow but wide SaaS product, one with tons of features that don't get much development or testing individually.

You're then trying to use it like a deep but narrow product and complaining that your complex use case doesn't fit their OK-ish feature.

MS do this in a lot of their products, which is why Slack is much better than Teams, but lots of companies feel Teams is "good enough" and then won't buy Slack.

torginus 13 hours ago||
I'm arguing this is an entirely flawed product category (which might have elements of fraud as well): things that are easy to get started with, but as your skill level or the requirements' complexity increases, you start to see the limitations and get entangled in the 'ecosystem', so given a sufficiently knowledgeable workforce, you are at a net negative by year 2 or 3 compared to experts having built something bespoke, or going the open-source route.

I'm sure you have encountered the pattern where you write A, which calls B, which uses C as the underlying platform. You need something in A, and know C can do it, but you have to figure out how to achieve it through B. For a highly skilled individual (or one armed with AI), B might have a very different value proposition than for someone who has to learn the stuff from scratch.

JS packages are a perfect illustration of these issues: there are tons of browser APIs wrapped by easy-to-use 'wrapper' packages that have unforeseen consequences down the road.

CuriouslyC 12 hours ago|||
This is true when SaaS is a simple widget for everyone. The problem is that when SaaS becomes a hydra designed to do a million things for a million people, the extra eyeballs aren't helping you, they're creating more error surface.

On top of that, SaaS takes your power away. A bug could be quite small, but if a vendor doesn't bother to fix it, it can still ruin your life for a long time. I've seen small bugs get sandbagged by vendors for months. If you have the source code you can fix problems like these in a day or two, rather than waiting for some nebulous backlog to work down.

My experience with SaaS is that products start out fine, when the people building them are hungry and responsive and the products are slim and well priced. Then they get bloated trying to grow market share, they lose focus and the builders become unresponsive, while increasing prices.

At this point you wish you had just used open source, but now it's even harder to switch because you have to jump through a byzantine data exfiltration process.

andy_ppp 17 hours ago|||
> Everything was a little bit fragile in subtle ways.

Maybe write some tests and have great software development practices, and most importantly people who care about getting the details right. Honestly, there’s no reason for software to be like this, is there? I don’t know how much off-the-shelf ERP software you have used, but I wouldn’t exactly describe it as flawless and bug-free either!

_pdp_ 1 day ago|||
This is only true if you assume that you are producing the same amount of code as today. But AI will ultimately produce more code, which will require more maintenance. Your internal team will need to scale up due to the amount of code they need to maintain. Your security team will have more work to do as well, because they will need to review more code, which will require scaling that team too. Your infrastructure costs will start adding up, and if you have any DevOps they will need scaling too.

Sooner or later the CTO will be dictating which projects can be vibe coded and which ones make sense to buy.

SaaS benefits from network effects - your internal tools don't. So overall SaaS is cheaper.

The reality is that software license costs are a tiny fraction of total business costs. Most of it is salaries. The situation you are describing is the kind of death spiral many companies will get into, and that will be their downfall, not their salvation.

theshrike79 18 hours ago|||
> The reality is that software license costs is a tiny fraction of total business costs

Yes and no. If someone is controlling the SaaS selection, then this is true.

But I've seen startup phase companies with multiple slightly overlapping SaaS subscriptions (Linear + Trello + Asana for example), just because one PM prefers one over the other.

Then people have bought full-ass SaaS costing 50-100€/month for a single task it does.

I'd describe the "Use AI to make bespoke software" as the solution you use to round out the sharp edges in software (and licensing).

The survey SaaS wants extra money to connect to service Y, but their API is free? Fire up Claude and write the connector ourselves. We don't want to build and support a full survey tool, but API glue is fine.

Or someone is doing manual work because vendor A wants their data in format X and vendor B only accepts format Y. Out comes Claude and we create a tool that provides both outputs at the same time. (This was actually written by a copywriter on their spare time, just because they got annoyed with extra busywork. Now it's used by a half-dozen people)
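That kind of glue tool really is small. A sketch of the "one input, two vendor formats" shape (the field names and formats below are invented for illustration, not from the actual tool mentioned):

```python
import csv
import io
import json

def to_vendor_formats(rows: list[dict]) -> tuple[str, str]:
    """Emit the same records as CSV for vendor A and JSON Lines for vendor B."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["sku", "qty"])
    writer.writeheader()
    writer.writerows(rows)
    jsonl = "\n".join(json.dumps(r) for r in rows)
    return buf.getvalue(), jsonl

csv_out, jsonl_out = to_vendor_formats([{"sku": "A-1", "qty": 3}])
```

One input list, both vendor uploads generated in lockstep, so nobody re-keys data by hand.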

_pdp_ 16 hours ago||
There is no yes and no. This is a fact. Even a small startup of 3-5 people will pay more in salaries than the total license costs they consume. A larger enterprise will spend 50 to 100 times more on salaries than on software license fees.

The reason software licenses are easier for the finance team to cut when things are not going well is that software does not have feelings, although we all know this isn't making a dent. Ultimately software scales much better than people, and if the software is "thinking" it will scale infinitely better.

Building it all in house will only happen for 2 reasons: 1. The problem is so specific that this is the only viable option and the quickest (fair enough). 2. Developers and management do not have a real understanding of software costs.

Developers not understanding the real costs should be forgiven, because most of them are never in a position to make these types of decisions - i.e. they are not trained for it. However, a manager/executive not understanding this is a sign of a lack of experience. You really need to try to build a few medium-sized non-essential software systems in-house to get an idea of how bad this can get and what a waste of time and money it really is - resources you could have spent elsewhere to affect the real bottom line.

Also the lines of code that are written do not scale linearly with team sizes. The more code you produce the bigger the problem - even with AI.

Ultimately a company wants to write as few lines of code as possible that extract as much value as feasibly possible.

physicsguy 17 hours ago|||
> Soon or later the CTO will be dictating which projects can be vibe coded which ones make sense to buy.

A lot of the SaaS target companies won't even have a CTO

tarsinge 18 hours ago|||
To me, AI might have tilted the economics of building in house a bit, but for at least a decade I've found that most enterprise SaaS, in the way it is used 80% of the time, could be recreated by a few developers in house. Instead of 10-20 developers maybe you only need 2-5 with AI, so for most big companies that doesn’t change much. A company that wants to build in house still has to hire a team. And in most non-tech industries, even if more expensive, a service is usually preferred. SaaS was never (only) about costs; developers were already wondering 10 years ago why people would pay for an expensive CRM when it was only basic CRUD.
thisisit 21 hours ago|||
I work with both enterprise software and in-house teams. Each path has its pros and cons. As you put it, a costly CRM might not be fulfilling its purpose. The two biggest points in favour of in house are cost and the bespoke nature of the solution.

Building is only one part. Maintaining and using/running is another.

Onboarding for both technical and functional teams takes longer, as the ERP is different from every other company's. Feature creep is an issue; after all, who can say no to more bespoke features? Maybe roll CRM, reporting and analytics into one. Maintenance costs and priorities now become more important.

We have also explored AI agents in this area. Person-specific tasks are great use cases. Creating mockups and wireframes? AI does that well and you still have a human in the loop. Enterprise-level tasks like, say, book closing in a large company's ERP? AI makes a lot of mistakes.

technotony 1 day ago|||
Interesting application. Can you share more about your stack and how you are approaching that build?
mikert89 1 day ago|||
Its not that people will build their own saas, its that competitors will pop up at a rapid pace
mattas 23 hours ago|||
You've just described the magic of spreadsheets.
OxfordOutlander 16 hours ago|||
I agree about boutique software, but see the development still being external -

To attempt to summarize the debate, there seems to be three prevailing schools of thought:

1. Status Quo + AI. SaaS companies will adopt AI and not lose share. Everyone keeps paying for the same SaaS plus a few bells and whistles. This seems unlikely given AI makes it dramatically cheaper to build and maintain SaaS. Incumbents will save on COGS, but have to cut their pricing (which is a hard sell to investors in the short term).

2. SaaS gets eaten by internal development (per OP). Unlikely in short/medium term (as most commenters highlight). See: complete cloud adoption will take 30+ years (shows that even obviously positive ROI development often does not happen). This view reminds me a bit of the (in)famous DropBox HN comment(1) - the average HN commenter is 100x more minded to hack and maintain their own tool than the market.

benzible (commenter) elsewhere said this well - "The bottleneck is still knowing what to build, not building. A lot of the value in our product is in decisions users don't even know we made for them. Domain expertise + tight feedback loop with users can't be replicated by an internal developer in an afternoon."

This same logic explains why external boutique beats internal builds --

3. AI helps boutique-software flourish because it changes vendor economics (not buyer economics). Whereas previously an ERP for a specific niche industry (e.g. wealth managers who only work with Canadian / US cross-border clients) would have had to make do with a non-specific ERP, there will now be a custom solution for them. Before AI, the $20MM TAM for this product would have made it a non-starter for VC backed startups. But now, a two person team can build and maintain a product that previously took ten devs. Distribution becomes the bottleneck.

This trend has been ongoing for a while -- Toast, Procore, Veeva -- AI just accelerates it.

If I had to guess, I expect some combination of all three - some incumbents will adapt well, cut pricing, and expand their offering. Some customers will move development in house (e.g. I have already seen several large private equity firms creating their own internal AI tooling teams rather than pay for expensive external vendors). And there will be a major flourishing of boutique tools.

(1) https://news.ycombinator.com/item?id=9224

martinald 10 hours ago|||
Author here, really good comment and I agree with you.

What _has_ surprised me though is just how many companies are (or are considering) building 'internal' tooling to replace SaaS they are not happy with. These are not the classic HN types whatsoever. I think when non technical people get to play with AI software dev they go 'wow so why can't we do everything like this'.

I think your point 3 is really interesting too.

But yes, the point of my article (hopefully) wasn't that SaaS is dead overnight, but that some thin/lower-"quality" products are potentially in real trouble.

People will still buy and use expertly designed products that are really nice to use. But a lot of B2B SaaS is not that; it's a slow, clunky mess that makes you want to scream!

andy_ppp 14 hours ago|||
I like this thoughtful and nuanced response, I think you could be right. Makes me wonder if choosing an extremely boring niche and just making several million dollars could be a good move right now.
risyachka 15 hours ago||
>> I’m currently working on an in house ERP and inventory system for a specific kind of business

this means that if I sell it to your business for a price < your salary, you will get fired and the business will use my version.

Why? Because mine will always be better, as 10 people work on it vs. you alone.

Internal versions will never be better or cheaper than saas (unless you are doing some tiny and very specific automation).

They can be better than the current solution - but it's only a matter of time before someone makes a SaaS equal to or better than what you do internally.

Sure, almost anything will be better and cheaper than HubSpot.

But with AI, smaller CRMs that are hyper-focused on businesses like yours will start popping up and eating its market.

Anything bigger than a toy project will always be cheaper/better to buy.

redwood 1 day ago||
Jamin Ball had a better take on Clouded Judgement https://cloudedjudgement.substack.com/p/clouded-judgement-12... "Long Live Systems of Record"
returnInfinity 20 hours ago||
This is the right take.

Also, maintaining software is a pain.

And for perpetually small companies, it's now easy to build simple scripts to achieve some productivity gains.

bigtones 1 day ago|||
Yeah I think that is a much more accurate take on the same subject.
lwhi 1 day ago||
This was a good read.
ares623 1 day ago||
Maybe someday we'll see job postings for maintaining these in-house SaaS tools. And someday someday, we'll see these in-house SaaS tools being consolidated as its own separate product. Wait what.
Imustaskforhelp 1 day ago||
Hey lets hope maybe people will open source the product too :D
sleazebreeze 1 day ago||
and around and around we'll go again!
arealaccount 1 day ago||
The "where this doesn’t work" section is chef's kiss:

- anything that requires very high uptime

- very high volume systems and data lakes

- software with significant network effects

- companies that have proprietary datasets

- regulation and compliance is still very important

Oarch 1 day ago||
Earlier this year I thought that rare proprietary knowledge and IP was a safe haven from AI, since LLMs can only scrub public data.

Then it dawned on me how many companies are deeply integrating Copilot into their everyday workflows. It's the perfect Trojan Horse.

findjashua 1 day ago||
Providers' ToS explicitly state whether or not any data provided is used for training purposes. The usual pattern I've seen is that while they retain the right to use data on free tiers, it's almost never the case for paid tiers.
torginus 16 hours ago|||
I bet companies are circumventing this in a way that allows them to derive almost all the benefit from your data, yet makes it very hard to build a case against them.

For example, in RL, you have a train set, and a test set, which the model never sees, but is used to validate it - why not put proprietary data in the test set?

I'm pretty sure 99% of ML engineers would say this would constitute training on your data, but this is an argument you could drag out in courts forever.

Or alternatively - it's easier to ask for forgiveness than permission.

I've recently had an apocalyptic vision: that one day we'll wake up and find that AI companies have produced an AI copy of every piece of software in existence - AI Windows, AI Office, AI Photoshop, etc.

sotrusting 1 day ago||||
Right, so totally cool to ignore the law but our TOS is a binding contract.
mc32 1 day ago|||
Yes, they can be sued for breach of contract. And it’s not a regular ToS but a signed MSA and other legally binding documents.
blibble 1 day ago||
the license on my open source code is a contract, and they ignored that

if they can get away with it (say by claiming it's "fair use"), they'll ignore corporate ones too

LPisGood 19 hours ago||
If I were to go out on a limb: those companies spend more with tech vendors than you do, and they have larger legal teams than you. That is both a carrot and a stick for AI companies to follow the contract.
blibble 10 hours ago||
no, it's not an incentive to follow the contract

it's an incentive to pretend as if you're following the contract, which is not the same thing

protocolture 1 day ago|||
Where are they ignoring the law?
sotrusting 1 day ago|||
https://www.reuters.com/business/environment/musks-xai-opera...
protocolture 22 hours ago||
That's an allegation. Doesn't an allegation need to be tested?
yieldcrv 1 day ago|||
people that say this tend to have a misinterpretation of copyright, and use all the court cases brought by large rights holders as validation

despite all 3 branches of the government disagreeing with them over and over again

sotrusting 1 day ago||
[flagged]
Oarch 1 day ago||||
Given the conduct we've seen to date, I'd trust them to follow the letter - but not the spirit - of IP law.

There may very well be clever techniques that don't require directly training on the users' data. Perhaps generating a parallel paraphrased corpus as they serve user queries - one which they CAN train on legally.

The amount of value unlocked by stealing practically ~everyone's lunch makes me not want to put that past anyone who's capable of implementing such a technology.

bdangubic 1 day ago||||
it is amazing in almost 2026 there is anyone believing this… amazing
GCUMstlyHarmls 1 day ago|||
I wonder how much wiggle room there is to collect now (to provide the service, context history, etc.), then later anonymise (somehow, to some level) and then train on it?

Also I wonder if the ToS covers "queries & interaction" vs "uploaded data" - I could imagine some tricky language in there that says we won't use your Word document, but we may at some point use the queries you put against it, not as raw corpus but as a second layer examining what tools/workflows to expand/exploit.

danielheath 23 hours ago||
“We don’t train on your data” doesn’t exclude metadata, training on derived datasets via some anonymisation process, etc.

There’s a range of ways to lie by omission, here, and the major players have established a reputation for being willing to take an expansive view of their legal rights.

phendrenad2 1 day ago|||
Ironically (for you), Copilot is the one provider that is doing a good job of provably NOT training on user data. The rest are not up to speed on that compliance angle, so many companies ban them (of course, people still use them).
Aurornis 1 day ago||
Do you have a source for this?

There are claims all through this thread that “AI companies” are probably doing bad things with enterprise customer data but nobody has provided a single source for the claim.

This has been a theme on HN. There was a thread a few weeks back where someone confidently claimed up and down the thread that Gemini’s terms of service allowed them to train on your company’s customer data, even though 30 seconds of searching leads to the exact docs that say otherwise. There is a lot of hearsay being spread as fact, but nobody actually linking to ToS or citing sections they’re talking about.

phendrenad2 10 hours ago||
Sources aren't hard to find[1]. But getting software developers to look outside their idiot-savant caves and not dismiss the entire legal system as "unrealistic", is much harder to accomplish.

[1] - https://www.microsoft.com/en-us/trust-center/privacy/data-ma...

matt-p 1 day ago|||
Even if they were doing this (I highly doubt it), so much would be lost to distillation that I'm not convinced much would actually get in, apart from perhaps internal codenames or whatever, which will be obvious.
kankerlijer 1 day ago||
Well, perhaps this is naive of me from the perspective of not fully understanding the training process. However, at some point, with all available training data exhausted, gains from synthetic data exhausted, and a large pool of publicly available AI-generated code, at what point is it 'smart' to scrape what you identify as high-quality codebases, clean them up to remove identifiers, and use that for training a smaller model?
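To make the "clean it up to remove identifiers" step concrete, here is a minimal, purely illustrative sketch. The internal names and the regex approach are assumptions for the example; a real pipeline would use a proper parser rather than regexes.

```python
import re

# Hypothetical project-specific identifiers to scrub before any
# code snippet could enter a training corpus.
SECRET_NAMES = {"AcmeBillingClient", "acme_api_key"}

def scrub(source: str) -> str:
    """Replace known internal identifiers with generic placeholders."""
    for i, name in enumerate(sorted(SECRET_NAMES)):
        source = re.sub(rf"\b{re.escape(name)}\b", f"ident_{i}", source)
    return source

print(scrub("client = AcmeBillingClient(acme_api_key)"))
# → client = ident_0(ident_1)
```

Even a toy like this shows the limits: scrubbing names removes the most obvious provenance, but structure and comments can still identify a codebase.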
gaigalas 1 day ago|||
What kind of rare proprietary knowledge?
Oarch 1 day ago||
It could be a wide range of things depending on your field: highly particular materials, knowledge or processes that give your products or services a particular edge, and which a company has often incurred high R&D costs to discover.

Many businesses simply couldn't afford to operate without such an edge.

Aurornis 1 day ago||
Using an LLM on data does not ingest that data into the training corpus. LLMs don’t “learn” from the information they operate on, contrary to what a lot of people assume.

None of the mainstream paid services ingest operating data into their training sets. You will find a lot of conspiracy theories claiming that companies are saying one thing but secretly stealing your data, of course.

Retric 1 day ago|||
Companies have already shifted from not using customer data to giving them an option to opt out, ex:

“How can I control whether my data is used for model training?

If you are logged into Copilot with a Microsoft Account or other third-party authentication, you can control whether your conversations are used for training the generative AI models used in Copilot. Opting out will exclude your past, present, and future conversations from being used for training these AI models, unless you choose to opt back in. If you opt out, that change will be reflected throughout our systems within 30 days.” https://support.microsoft.com/en-us/topic/privacy-faq-for-mi...

At this point, suggesting it has never happened and never will is wildly optimistic.

Aurornis 1 day ago|||
An enterprise Copilot contract will have already decided this for the organization.
Retric 23 hours ago||
That possibility in no way addresses the underlying concern here.
olyjohn 23 hours ago|||
30 days to opt out? That's skeezy as fuck.
lwhi 1 day ago||||
Information about the way we interact with the data (RLHF) can be used to refine agent behaviour.

While this isn't used specifically for LLM training, it can involve aggregating insights from customer behaviour.

Aurornis 1 day ago||
That’s a training step. It requires explicitly collecting the data and using it in the training process.

Merely using an LLM for inference does not train it on the prompts and data, as many incorrectly assume. There is a surprising lack of understanding of this separation even on technical forums like HN.

lwhi 15 hours ago||
That's definitely a fair point.

However, let's say I record human interactions with my app; for example, when a user accepts or rejects an AI-synthesised answer.

This data can be used by me, to influence the behaviour of an LLM via RAG or by altering application behaviour.

It's not going to change the weighting of the model, but it would influence its behaviour.
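A minimal sketch of what this could look like: accept/reject feedback steers which context is retrieved for future answers, without ever touching model weights. The class and snippet IDs below are hypothetical, for illustration only.

```python
from collections import defaultdict

class FeedbackAwareRetriever:
    """Re-ranks candidate context snippets using accept/reject feedback.

    The LLM's weights never change; only what we feed it does.
    """
    def __init__(self):
        self.score = defaultdict(int)  # snippet id -> net feedback

    def record_feedback(self, snippet_id: str, accepted: bool) -> None:
        self.score[snippet_id] += 1 if accepted else -1

    def rank(self, candidate_ids):
        # Snippets users accepted more often are retrieved first.
        return sorted(candidate_ids, key=lambda s: self.score[s], reverse=True)

r = FeedbackAwareRetriever()
r.record_feedback("faq-2", accepted=True)
r.record_feedback("faq-1", accepted=False)
print(r.rank(["faq-1", "faq-2", "faq-3"]))  # → ['faq-2', 'faq-3', 'faq-1']
```

So the behaviour shift is real, but it lives in the application layer (retrieval), not in the model itself.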

AuthAuth 1 day ago||||
They are not directly ingesting the data into their training sets, but in most cases they are collecting it and will use it to train future models.
Aurornis 1 day ago||
Do you have any source for this at all?
AuthAuth 8 hours ago||
It's stated in the privacy policy.
nerdponx 1 day ago||||
If they weren't, then why would enterprise level subscriptions include specific terms stating that they don't train on user provided data? There's no reason to believe that they don't, and if they don't now then there's no reason to believe that they won't later whenever it suits them.
Aurornis 1 day ago|||
> then why would enterprise level subscriptions include specific terms stating that they don't train on user provided data?

What? That’s literally my point: Enterprise agreements aren’t training on the data of their enterprise customers like the parent commenter claimed.

TheRoque 1 day ago||||
Just read the ToS of the LLM products please
Aurornis 1 day ago|||
I have. Have you? Can you quote the sections you’re talking about?
TheRoque 19 hours ago||
https://www.anthropic.com/news/updates-to-our-consumer-terms

"We will train new models using data from Free, Pro, and Max accounts when this setting is on (including when you use Claude Code from these accounts)."

doctorpangloss 1 day ago|||
This is so naive. The ToS permits paraphrasing of user conversations, by not excluding it, and then training on THAT. You'd never be able to definitively connect paraphrased data to yours, especially if they only train on paraphrased data that covers frequent, as opposed to rare, topics.
Aurornis 1 day ago||
Do you have a citation for this?
doctorpangloss 22 hours ago||
“Hey DoctorPangloss, how can we train on user data without training on user data?”

“You can use an LLM to paraphrase the incoming requests and save that. Never save the verbatim request. If they ask for all the request data we have, we tell them the truth, we don’t have it. If they ask for paraphrased data, we’d have no way of correlating it to their requests.”

“And what would you say, is this a 3 or a 5 or…”

Everything obvious happens. Look closely at the PII management agreements. Btw OpenAI won’t even sign them because they’re not sure if paraphrasing “counts.” Google will.
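A toy sketch of the paraphrase-and-store scheme described above. Everything here is hypothetical: the `paraphrase` stub stands in for an LLM call, and the point is only that the verbatim request never hits storage, so a data-access request for "your requests" would come back empty.

```python
import hashlib

def paraphrase(text: str) -> str:
    # Trivial stand-in for an LLM paraphrase call: keep only the
    # longer content words, sorted and lowercased.
    words = [w.strip("?.,!") for w in text.split()]
    return "request about: " + ", ".join(sorted({w.lower() for w in words if len(w) > 5}))

class ParaphraseStore:
    """Stores only paraphrases; the verbatim request is discarded."""
    def __init__(self):
        self.records = []

    def log_request(self, request: str) -> None:
        self.records.append({
            "paraphrase": paraphrase(request),
            # One-way digest only; the original text is not recoverable.
            "digest": hashlib.sha256(request.encode()).hexdigest(),
        })

store = ParaphraseStore()
store.log_request("How do I renegotiate our Q3 supplier contract with Acme?")
print(store.records[0]["paraphrase"])
# → request about: contract, renegotiate, supplier
```

Whether storing this kind of derived record "counts" as storing user data is exactly the contractual grey area being argued about in this thread.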

popalchemist 1 day ago||||
Wrong, buddy.

Many of the top AI services use human feedback to continuously apply "reinforcement learning" after the initial deployment of a pre-trained model.

https://en.wikipedia.org/wiki/Reinforcement_learning_from_hu...

Aurornis 1 day ago||
RLHF is a training step.

Inference (what happens when you use an LLM as a customer) is separate from training.

Inference and training are separate processes. Using an LLM doesn’t train it. That’s not what RLHF means.

popalchemist 1 day ago||
I am aware, I've trained my own models. You're being obtuse.

The big companies - take Midjourney, or OpenAI, for example - take the feedback that is generated by users, and then apply it as part of the RLHF pass on the next model release, which happens every few months. That's why they have the terms in their TOS that allow them to do that.

agumonkey 1 day ago||||
maybe prompts are enough to infer the rest ?
leptons 1 day ago||||
> LLMs don’t “learn” from the information they operate on, contrary to what a lot of people assume.

Nothing is really preventing this though. AI companies have already proven they will ignore copyright and any other legal nuisance so they can train models.

lioeters 1 day ago|||
They're already using synthetic data generated by LLMs to further train LLMs. Of course they will not hesitate to feed "anonymized" data generated by user interactions. Who's going to stop them? Or even prove that it's happening. These companies have already been allowed to violate copyright and privacy on a historic global scale.
Archelaos 1 day ago||||
How would they distinguish between real and fake data? It would be far too easy to pollute their models with nonsense.
leptons 20 hours ago||
I have no doubt that Microsoft has already classified the nature of my work and quality of my code. Of course it's probably "anonymized". But there's no doubt in my mind that they are watching everything you give them access to, make no mistake.
tick_tock_tick 1 day ago||||
I mean, is it really ignoring copyright when copyright doesn't limit them in any way on training?
leptons 20 hours ago||
Tell that to all the people suing them for using their copyrighted work. In some cases the data was even pirated.
Aurornis 1 day ago|||
> Nothing is really preventing this though

The enterprise user agreement is preventing this.

Suggesting that AI companies will uniquely ignore the law or contracts is conspiracy theory thinking.

leptons 20 hours ago||
It already happened.

"Meta Secretly Trained Its AI on a Notorious Piracy Database, Newly Unredacted Court Docs Reveal"

https://www.wired.com/story/new-documents-unredacted-meta-co...

They even admitted to using copyrighted material.

"‘Impossible’ to create AI tools like ChatGPT without copyrighted material, OpenAI says"

https://www.theguardian.com/technology/2024/jan/08/ai-tools-...

cess11 15 hours ago||
Though the porn they copied was just for personal use, because clearly that's an important perk of being employed there:

https://www.vice.com/en/article/meta-says-the-2400-adult-mov...

fzeroracer 1 day ago||||
> You will find a lot of conspiracy theories claiming that companies are saying one thing but secretly stealing your data, of course.

It's not really a conspiracy when we have multiple examples of high-profile companies doing exactly this. And it keeps happening. Granted, I'm unaware of cases of this occurring currently with professional AI services, but it's basic security 101 that you should never let anything even have the remote opportunity to ingest data unless you don't care about the data.

james_marks 1 day ago|||
> never let anything even have the remote opportunity to ingest data unless you don't care about the data

This is objectively untrue? Giant swaths of enterprise software are based on establishing trust with approved vendors and systems.

Aurornis 1 day ago||||
> It's not really a conspiracy when we have multiple examples of high profile companies doing exactly this.

Do you have any citations or sources for this at all?

mulquin 1 day ago|||
To be pedantic, it is still a conspiracy, just no longer a theory.
rightbyte 6 hours ago||
To be pedantic, a theory that has been proven correct is still a theory, right?
sotrusting 1 day ago|||
[flagged]
protocolture 1 day ago||
>Ah yes, blindly trusting the corpo fascists that stole the entire creative output of humanity to stop now.

Stealing implies the thing is gone, no longer accessible to the owner.

People aren't protected from copying in the same way. There are lots of valid exclusions, and building new non competing tools is a very common exclusion.

The big issue with the OpenAI case, is that they didn't pay for the books. Scanning them and using them for training is very much likely to be protected. Similar case with the old Nintendo bootloader.

The "Corpo Fascists" are buoyed by your support for the IP laws that have thus far supported them. If anything, to be less "Corpo Fascist" we would want more people to have more access to more data. Mankind collectively owns the creative output of Humanity, and should be able to use it to make derivative works.

Oarch 1 day ago|||
> Stealing implies the thing is gone, no longer accessible to the owner.

Isn't this a little simplistic?

If the value of something lies in its scarcity, then making it widely available has robbed the owner of a scarcity value which cannot be retrieved.

A win for consumers, perhaps, but a loss for the owner nonetheless.

protocolture 22 hours ago||
No, calling every possible loss due to another person's actions "stealing" is simplistic. We have terms for all these things, like "intellectual property infringement".

Trying to group (thing I don't like) with (thing everyone doesn't like) is an old semantic trick that needs to be abolished. Taxonomy is good; if your arguments are good, you don't need emotively charged, imprecise language.

Oarch 20 hours ago||
I literally reused the definition of stealing you gave in the post above.
sotrusting 1 day ago|||
> Stealing implies the thing is gone, no longer accessible to the owner.

You know a position is indefensible when you equivocation fallacy this hard.

> The "Corpo Fascists" are buoyed by your support for the IP laws

You know a position is indefensible when you strawman this hard.

> If anything, to be less "Corpo Fascist" we would want more people to have more access to more data. Mankind collectively owns the creative output of Humanity, and should be able to use it to make derivative works.

Sounds about right to me, but why you would state that when defending slop slingers is enough to give me whiplash.

> Scanning them and using them for training is very much likely to be protected.

Where can I find these totally legal, free, and open datasets all of these slop slingers are trained on?

protocolture 19 hours ago||
>You know a position is indefensible when you equivocation fallacy this hard.

No its quite defensible. And if that was equivocation, you can simply outline that you didn't mean to invoke the specific definition of stealing, but were just using it for its emotive value.

>You know a position is indefensible when you strawman this hard.

It's accurate. No one wants these LLM guys stopped more than other big fascistic corporations; plenty of oppositional noise out there for you to educate yourself with.

>Sounds about right to me, but why you would state that when defending slop slingers is enough to give me whiplash.

Cool, so if you agree all data should be usable to create derivative works, then I don't see what your complaint is.

>Where can I find these totally legal, free, and open datasets all of these slop slingers are trained on?

You invoked "strawman" and then hit me with this combo strawman/non sequitur? Cool move <1 day old account, really adds to your 0 credibility.

I literally pointed out they should have to pay the same access fee as anyone else for the data, but once obtained, should be able to use it any way. Reading the comment explains the comment.

Unless, charitably, you are suggesting that if a company is legally able to purchase content, and use it as training data, that somehow compels them to release that data for free themselves?

Weird take if true.

CyanLite2 12 hours ago|
It was common in the early 2000s for big companies to have large internal IT teams to build "line of business" apps. Then SaaS came along and delivered LoB apps for a fraction of the price and with a monthly subscription.

Looks like we're headed back to the internal IT days of building customized LoB apps.

dboreham 11 hours ago|
Or perhaps there will arise a new kind of external service provider that delivers customized SaaS services to those same users, using AI. There's no reason the work has to go back to the internal IT people who were fired long ago.
More comments...