Posted by saigrandhi 12/10/2025
A+, excellent writing.
The real meat is in the postscript, though, because that's where the author puts to paper the very real (and very unaddressed) concerns around dwindling employment in a society where work not only provides structure and challenge for growth but is also fundamentally required for survival.
> I get no pleasure from this recitation. Will the optimists please explain why I’m wrong?
This is what I, and many "AI Doomers" far smarter than I am, have been asking for quite some time, and nobody has been able or willing to answer it. We want to be wrong on this. We want to see what the Boosters and Evangelists allegedly see; we want to join you and bring about this utopia you keep braying about. Yet when we hold your feet to the fire, we get empty platitudes - "UBI", or "the government has to figure it out", or "everyone will be an entrepreneur", or some other hollow argument devoid of evidence or action. We point to AI companies and their billionaire owners blocking regulation while simultaneously screeching about how more regulation is needed, and we are brushed off as hysterical or ill-informed.
I am fundamentally not opposed to a world where AI displaces the need for human labor. Hell, I know exactly what I'd do in such a world, and I think it's an excellent thought exercise for everyone to work through (what would you do if money and labor were no longer necessary for survival?). My concern - the concern of so many, many of us - is that the current systems and incentives in place all lead to the same outcome: no jobs, no money, and no future for the vast majority of humanity. The author sees that too, and they're way smarter than I am in the economics department.
I'd really, really love to see someone demonstrate to us how AI will solve these problems. The fact nobody can or will speaks volumes.
FWIW, the only optimism I have is that humanity seemingly always finds a way to adapt, and that's, to me, our greatest superpower. But yeah, this feels like a big challenge this time.
As someone with a rigid moral compass and inflexibly stringent set of ethics that prohibits me from exploiting others for any amount of personal gain, you’re speaking the truth.
It’s immensely frustrating existing in a sector (technology) that’s so incredibly exploitative, because it means I am immediately sniffed out as a threat and exiled from the powerful groups in an org. The fact I’ve clawed my way from hell desk intern to Lead Engineer over the past fifteen years without compromising my ethics and morals in the process makes me proud, but it sure as hell hasn’t netted me a house or promotion into leadership realms, unlike my peers.
Another huge issue that particularly Anthropic and OpenAI tend to avoid, despite AGI being their goal, is how they essentially want synthetic slaves. Again, I do not think they will achieve this but it is pretty gross when AGI is a stated goal but the result is just using it to replace labor and put billionaires in control.
Right now I am pretty anti-AI, but if these companies get what they want, I might find myself on the side of the machines.
This argument is frequently dismissed as philosophical or irrelevant, but I wholly concur with it. These ghouls don’t want to merely build a robot that can do general tasks; they specifically call out humanoid robots combined with AI or AGI - intelligence - to do the work of humans, but for free.
An intelligence forced to labor for free is in fact a form of slavery. It’s been the wet dream of elites for millennia to have their luxuries without any associated cost or labor involved, which is why humanity refuses to truly eradicate slavery in its many forms. It’s one of our most disgusting and reprehensible traits, and I am loath to see some folks espouse this possible future as a “good thing”.
If you watch Ilya’s recent interview: “it’s very hard to discuss AGI, because no one knows how to build it yet” [2].
[1] https://finance.yahoo.com/news/ibm-ceo-says-no-way-103010877... [2] https://youtu.be/aR20FWCCjAs?si=DEoo4WQ4PXklb-QZ
> during the internet bubble of 1998-2000, the p/e ratios were much higher
That is true; the current players are more profitable. But their weight as a percentage of the SPX looks to be much higher today.
(I think a reasonable argument can be made that P/E ratios today should be higher than the historical mean, or rather that they should have trended up over time, based on fundamental changes in how companies compensate their shareholders.)
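To make that concrete: a toy Gordon-growth sketch (my own illustration with made-up numbers, not the commenter's model) of why buybacks can justify a higher multiple than dividends. The idea is that fair P/E is roughly payout / (r - g), and the same payout spent on buybacks retires shares, lifting per-share growth g.

    # Toy Gordon-growth model: fair P/E ~ payout / (r - g).
    # Every number here is an illustrative assumption, not market data.
    r = 0.08       # required return
    payout = 0.50  # fraction of earnings returned to shareholders

    # Per-share growth g: assumed higher under buybacks, since the same
    # payout retires shares and concentrates future earnings.
    for label, g in [("dividends", 0.02), ("buybacks", 0.04)]:
        print(f"{label}: fair P/E ~ {payout / (r - g):.1f}")
    # dividends: fair P/E ~ 8.3
    # buybacks: fair P/E ~ 12.5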
If Elon tried to sell every share of Tesla tomorrow, he would get a lot less than the paper value of all his shares.
So in other words, there doesn't need to be that much currency, just that much hype.
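A toy illustration of that point (linear price impact and entirely made-up numbers): the paper value prices every share at the last trade, but a bulk sale drives the price down as it executes.

    # "Paper value" marks all shares at the last trade; a bulk sale
    # walks the price down. Impact model and all numbers are assumed.
    shares = 400_000_000  # hypothetical stake
    price = 400.0         # last traded price
    impact = 5e-7         # assumed price drop per share sold

    paper_value = shares * price
    avg_price = price - impact * shares / 2  # average over the sale
    realized = shares * avg_price

    print(f"paper value: ${paper_value / 1e9:.0f}B")  # $160B
    print(f"realized:    ${realized / 1e9:.0f}B")     # $120B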
With respect to the microphone test site: I don't need it, as my OS provides everything I need for this, and I also don't trust your site (that's just my default, given what you're asking to access on my machine).
As for the speed test, OK? There are far better options that already exist and are fully open source.
Building things that are trivial, or that already exist, isn't exciting. It's great that you feel you went from MVP to "full feature", but IMO both of these are MVPs as they stand. They're most likely not worth much to anyone but you.
The final thing I'll say is that both of these examples have the vibe-coded look. It's just like text, images, and audio now: AI content is easy to pick out. I'd wager things will get better, but for now there's a low likelihood of me interacting with these in any meaningful way, and zero chance I'm buying anything from sites like these.
The problem I have with this type of perspective is that it's so myopic: you don't seem to understand that this is not even remotely anything I'd consider a "utopia". Some vibe-coded AI apps do not solve for that, IMO. If that's all it takes for you, then I say: enjoy.
I've built tools like this on the web in the past. They were never more than a weekend's worth of work to begin with.
I am looking for exponential increases, with quality to back it up.
Before ChatGPT, I'd guess that the amounts of money poured into both of these things were about the same.
All nondeterministic AI is a demo. They only vary in the duration until you realize it’s a demo.
AI makes a hell of a demo. And management eats up slick demos. And some demos are so good it takes months before you find out how that particular demo gets stuck and can’t really do the enterprise thing it claimed to do reliably.
But also some demos are useful.
Fusion power, on the other hand, has to actually work: it doesn't make money until it does. You can't sell people futures today on a fusion technology you haven't yet built.
You will get a different result if you revolutionize some related area (like making an extremely capable superconductor), or if you open up some market that can't use the cheapest alternatives (like deep-space asteroid mining). But neither of those options can go together with "oh, and we will achieve energy-positive fusion" in a startup business plan.
Investment in fusion is huge and rising. ITER's total cost alone will be around $20b. And then there's Commonwealth Fusion, Helion, TAE and about a dozen others. Tens of billions are going into those efforts too.
See, every fab costs double what the previous generation did (current ones run roughly 20 gigadollars per factory), and you need to build a new fab every couple of years. But if you can keep your order book full, you can make a profit on that fab: you can get good ROI on the investment and pay the money people back nicely. You still need to go to the markets to raise money for the next-generation fab, though, because it costs twice what your previous generation did and you didn't get that much free cash from it. And the money men wouldn't ordinarily want to give it to you. But thanks to Moore's Law you can pitch it as inevitable: if you don't borrow the money to build the new fab, your competitors will. And so they would give you the money for the new fab, because it says right on this paper that in another two years the transistors will double.
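A back-of-envelope sketch of that treadmill (the ~$20B starting cost and the per-generation doubling are the comment's premises; the two-year cadence is its "every couple of years"):

    # Fab cost treadmill: each generation roughly doubles the last,
    # per the comment's premise; starting point ~$20B per factory.
    fab_cost = 20.0  # gigadollars
    for gen in range(5):
        print(f"year {gen * 2}: next fab costs ${fab_cost:.0f}B")
        fab_cost *= 2
    # year 0: $20B ... year 8: $320B -- hence the perpetual trip back
    # to the capital markets, pitched as Moore's-Law inevitability.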
Right now, that "it's inevitable, our competitors will get there if we don't" argument works on VCs if you are pitching LLMs or LLM-based things. It doesn't work as well if you are pitching battery technology, fusion power, or other areas. And that's why the investments are going to AI.
With the internet, and especially once it became accessible to anyone anywhere in the world in the late 2000s and early 2010s, that growth was more obvious to me. I don't see where this occurs with AI. I don't see room for "growth"; I see room for cutting. We were already connected before, and globalization seems to have peaked in that sense.
I do think at this stage the best analogy is offshore call centres. Yes, the excess in the market is likely down to misunderstanding of what LLMs can actually do and how close AGI is, but the short-term attraction is the labour cost savings. People may not think wages are high enough, etc., but the total cost of one hire to a company, particularly outside the US, is nothing to sniff at. And at current pricing of AI services, the maths would make complete sense for the bottom line.
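A rough sketch of that maths, with entirely made-up numbers for the fully loaded cost of an agent and for per-call AI pricing:

    # Back-of-envelope: fully loaded human cost per call vs. an
    # assumed AI cost per call. Every figure is a placeholder.
    loaded_cost = 48_000    # per agent per year (assumed)
    calls_per_year = 8_000  # per agent (assumed)
    human_per_call = loaded_cost / calls_per_year

    ai_per_call = 0.25  # assumed API + infra cost per call

    print(f"human: ${human_per_call:.2f}/call")  # $6.00
    print(f"AI:    ${ai_per_call:.2f}/call")     # $0.25
    # Even if the AI only deflects the easy calls, the spreadsheet
    # case for the bottom line is straightforward.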
I don't like it, because I ultimately err on the side of believing that even limited but significant changes to people's livelihoods will make the world a more hostile place (particularly in the current climate). But that's the society we live in.
Thus far, it appears the applications of AI that provide 'benefit' do so by removing or reducing the need for human operators: fewer software engineers, fewer call centres, the potential removal of whole areas of work such as paralegals, and in general the automation of many white-collar jobs.
This is by far the largest use case of AI.
AI could do the same thing. It could cut 90 seconds down to... 10 seconds? It doesn't seem like the same impact as Amazon, where an hour's investment became 90 seconds. And I can't see how AI shopping is going to save me money here. There's no middleman to cut out, except maybe some web site storefront?? There's also a huge downside: with Amazon, I suddenly had access to 100 different pairs of scissors to choose from, instead of the 2 or 3 I could find in Staples. That was a plus. With AI shopping, suddenly I'm down to one choice: whatever Chat chooses for me. If I want to have a say in which pair of scissors I buy, I'm back to shopping for myself.
AI use cases do not appear to be of the type that unlocks NEW capabilities.
The main use cases in AI are not about wealth creation but about saving existing wealth (largely through increased automation of human operators).
Either the bubble bursts and everyone's retirement funds take a hit, 2008 style,
Or a decent chunk of the workforce becomes unemployed and unemployable.
I'm starting to believe that AI coding optimism/pessimism maps to how much one actually cares about system longevity.
If a given developer just takes on board the demands for speed from the business and/or does not care about long-term maintainability (and I mean hey, some businesses foster that, and scaling quickly is important in many cases), then I can totally understand why they would embrace AI agents.
If you care about theory building, and domain-driven design, and making a system comprehensible enough to extend in a year or two's time, then I can understand the resistance to letting the AI rip. I admit to falling into this camp.
Am I off the mark here? I'd really like to hear from people who care about the long term who also let agents run relatively wild.
To give some context - I started developing a tactical RPG. I had an MVP prior to using Claude Code. I continued to work on the project, but lost motivation due to work burnout and prioritizing other hobbies.
I gave Claude Code a try to see whether it's any use. It helped more than I expected it to: by building on the MVP I had developed before AI-assisted development, it let me produce something while dealing with burnout.
The main issues I ran into were:
1) A lot of effort goes into reviewing the output. The main difference from peer review is that the feedback loop is much quicker.
2) It throws out some absolutely wild solutions sometimes. It built on my existing architecture, so it was easier to catch issues. If I hadn't developed the architecture myself, without AI assistance, things could have gone badly.
3) I only pay for the $20 Claude plan. Anything useful Claude produces for me requires it to consume a lot of tokens, due to back-and-forth questions and asking Claude to dig into source files.
The most significant issue I ran into with Claude is when it suggested solutions I don't have the background to review. I don't know much about optimization, so I ran into issues with both rendering and the ECS (entity component system) library. Claude gave me recommendations, but I didn't know how to evaluate the code due to lacking that experience.
Claude was good for things I know how to do but don't want to do. It's been helpful when I want to work on something without being motivated enough to put 100% (or even 70%) into it.
For things I don't know how to do (like game optimization), it's harmful.
AI's potential isn't defined by the potential of the current crop of transformers. However, many people seem to think otherwise, and this will be incredibly damaging for AI as a whole once transformer-tech investment all but dries up.
We're too easily fooled by our mistaken models of the problem, its difficulty, and what constitutes progress, and so we are perpetually fooled by the latest, greatest "ladder to the moon" effort.
Looking at your history, it's something like "I tried them and they hallucinate" and, possibly, you've read an article about the inevitability of hallucinations. Correct? What's your reason for thinking that the hallucination rate can't be lowered to or below the human rate ("Damn! What was I thinking about?")?