Posted by jnord 12/14/2025
- modest incremental gains in productivity
- society will remain mostly the same
- very few people will take advantage of the opportunities unlocked by AI
I think this sort of ignores the fact that sales-and-marketing (S&M) agentic tools already exist and that the cost of those services is also dropping dramatically. So does it all net out and just become a more efficient model in general?
I'm not a consultant anymore, but a friend who owns a dental clinic asked me if I could build her a personalized system that checks in with the staff every week: something that analyzes how they feel week to week, helps her update her management strategy, and coaches her on how to talk with her staff, what their communication styles are, and what work they prefer to do. She'd also like me to run and host it so she can't see the raw data from her staff; they'll trust it more if it's run by a third party.
She could probably figure out how to do this herself, but she'd still rather pay me around $5k than spend 100+ hours on it. Even with AI it'd probably take me at least a couple of weeks to get it working 100% as intended, and I don't have a dental business to run.
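For concreteness, the core of a tool like that could be quite small. Here's a minimal sketch of the weekly digest step (the Anthropic SDK call is real, but the model ID, the `CheckIn` shape, and the prompt are my own assumptions, not anything from her actual spec):

```typescript
// Hypothetical sketch: weekly staff check-in digest.
// Collects free-text responses, strips identities, and asks a model for an
// anonymized summary the owner can act on without ever seeing raw data.
import Anthropic from "@anthropic-ai/sdk";

interface CheckIn {
  staffId: string;   // stays on the third-party host, never shown to the owner
  response: string;  // free-text answer to "How did this week go?"
}

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

async function weeklyDigest(checkIns: CheckIn[]): Promise<string> {
  // Anonymize: drop IDs and crudely shuffle so ordering can't identify anyone.
  const anonymized = checkIns
    .map((c) => c.response)
    .sort(() => Math.random() - 0.5);

  const msg = await client.messages.create({
    model: "claude-sonnet-4-20250514", // assumed model choice
    max_tokens: 1024,
    messages: [{
      role: "user",
      content:
        "Summarize these anonymous weekly staff check-ins for a clinic owner. " +
        "Report overall morale, recurring themes, and suggested talking points. " +
        "Do not quote anyone verbatim.\n\n" +
        anonymized.map((r, i) => `Response ${i + 1}: ${r}`).join("\n"),
    }],
  });

  const block = msg.content[0];
  return block.type === "text" ? block.text : "";
}
```

The point isn't that this is hard code; it's that the $5k buys the trust boundary (third-party hosting, anonymization policy) plus the weeks of wiring, scheduling, and iteration around it.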
I think we'll see more back-office SaaS, because the problems to solve are near infinite and no one has time to build all of these themselves.
Agents offer a very attractive level of abstraction, and their customers aren't necessarily human: they could be other agents. A lot of the SaaS we have today would simply be unnecessary in that future.
If 90% of the actual work is waiting for an agent to get the work done, why would those companies keep paying SaaS vendors a per-seat license fee? It doesn't make sense.
On a tangent: I feel, again, unfortunately, that AI is going to divide society into people who can use its most powerful tools and people who will only be using ChatGPT at most (if at all).
I don't know why I keep worrying about these things. Is it pointless?
For software engineering, it is useless unless you're writing snippets that already exist in the LLM's corpus.
If I give something like Sonnet the docs for my JS framework, it can write code "in it" just fine. It makes the occasional mistake, but if I provide proper context and planning up front, it can knock out some fairly impressive stuff (e.g., helping me to wire up a shipping/logistics dashboard for a new ecom business).
That said, this requires me policing the chat (preferred) vs. letting an agent loose. I think the latter is just opening your wallet to model providers but shrug.
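To make that concrete, the "proper context up front" workflow is basically just loading the framework docs into the system prompt before asking for code. A rough sketch of it, not the commenter's actual setup (the file path, prompt wording, and model ID are all hypothetical):

```typescript
// Sketch of the docs-as-context workflow: feed the framework's docs to the
// model as system context so it writes code "in" the framework.
import { readFile } from "node:fs/promises";
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

async function generateComponent(task: string): Promise<string> {
  // Up-front context: the framework docs, loaded from an assumed local path.
  const docs = await readFile("./docs/framework-api.md", "utf8");

  const msg = await client.messages.create({
    model: "claude-sonnet-4-20250514", // assumed model choice
    max_tokens: 2048,
    system:
      "You write code for the framework documented below. " +
      "Use only APIs that appear in the docs.\n\n" + docs,
    messages: [{ role: "user", content: task }],
  });

  const block = msg.content[0];
  return block.type === "text" ? block.text : "";
}

// e.g. generateComponent("Wire up a shipping status widget for the dashboard")
```

Policing the chat then just means reviewing each generated chunk before it lands, rather than letting an agent loop on its own tokens.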
What I'm saying is that whenever you need to actually do some software design, i.e. tackle a novel problem, they are useless.
Secondly, the way this person describes "agents" is not rooted in reality:
>Agents don't leave. And with a well thought through AGENTS.md file, they can explain the codebase to anyone in the future.
>What's going to be difficult to predict is how quickly agents can move up the value chain. I'm assuming that agents can't manage complex database clusters - but I'm not sure that's going to be the case for much longer.
What in the world. And of course he's selling a course. This is the same business as those people sitting in Dubai selling $6000 options trading courses to anyone who believes their bullshit. The grifter economy around AI is in full swing like it was around blockchain/crypto in 2017-2020.
Automation is not new. What's new is the capability of models that can be assigned some of the workflow steps. If SaaS companies have served these steps so far, they'll keep serving them; maybe they'll make them much cheaper by using a model themselves.
1) helping to saturate traditional SaaS because code is being commoditized / the effort to build is dropping significantly.
2) defining an adjacent sub-category of SaaS: "Service-as-a-Software", where the SaaS provides _outcomes_ instead of _tools_; this couldn't really exist at scale until recently.