
Posted by delaugust 11/19/2025

AI is a front for consolidation of resources and power(www.chrbutler.com)
545 points | 448 comments | page 7
indigo945 11/20/2025|
The author's conspiracy theory is this:

    > I think that what is really behind the AI bubble is the same thing behind 
    > most money, power, and influence: land and resources. The AI future that is 
    > promised, whether to you and me or to the billionaires, requires the same 
    > thing: lots of energy, lots of land, and lots of water. Datacenters that 
    > outburn cities to keep the data churning are big, expensive, and have to be 
    > built somewhere. [...] When the list of people who own this property is as 
    > short as it is, you have a very peculiar imbalance of power that almost 
    > creates an independent nation within a nation. Globalism eroded borders by 
    > crossing them, this new thing — this Privatism — erodes them from within. 
In my opinion, this is an irrationally optimistic take. Yes, of course, building private cities is a threat to democratic conceptions of a shared political sphere, and power imbalances harm the institutions that we require to protect our common interests.

But it should be noted that this "privatism" is nothing new - people have always complained about the ultra-wealthy having an undue influence on politics, and when looking at the USA in particular, the current situation - where the number of the ultra-wealthy is very small, and their influence is very large - has existed before, during the Gilded Age. Robber barons are not a novel innovation of the 21st century. That problem has been studied before, and if it was truly just about them robber barons, the old solutions - grassroots organization, economic reform and, if necessary, guillotines - would still be applicable.

The reason that these solutions work is that even though Mark Zuckerberg may, on paper, own and control a large amount of land and industrial resources, in practice, he relies on societal consent to keep that control. To subdue an angry mob in front of the Meta headquarters, you need actual people (such as police) to do it for you - and those people will only do that for you for as long as they still believe either in your doing something good for society, or at least believe in the (democratic) societal contract itself. Power, in the traditional sense, always requires legitimization; without the belief that the ultra-powerful deserve to be where they are, institutions will crumble and finally fail, and then there's nobody there to prevent a bunch of smelly new-age Silicon Valley hippies from moving into that AI datacenter, because of its great vibrations and dude, have you seen those pretty racks, I'm going to put an Amiga in there, and so on.

However, again, I believe this to be irrationally optimistic. This new consolidation of power is not merely over land and resources by means of legitimized violence; it's also about control over emerging technologies that could fundamentally change how violence itself is exercised. Palantir is only the first example that comes to mind of companies developing mass surveillance tools that could enable totalitarian control on an unprecedented scale. Fundamentally, all the "adtech" companies are in the business of constructing surveillance machines that could be used not only to predict whether you're in the market for a new iPhone, but also to assess your loyalty to party principles and your overall danger to dear leader. Once predictive policing has identified a threat, of course, "self-driving", embodied autonomous systems could be automatically dispatched to detain, question or neutralize it.

So why hasn't that happened yet? After all, Google has had similar capabilities for decades now; why don't we already kneel before weaponized DJI drones and swear allegiance to Larry Page? The problem, again, is one of "alignment": for the same reason that police officers will not shoot protesters when the state itself has become illegitimate, "Googlers" will refuse to build software that influences election results, judges moral character or threatens bodily harm. What's worse, even if tech billionaires could find a small group of motivated fascist engineers to build those systems for them, they could never go through with it, as the risk of being found out is far too severe: remember, their power (over land and resources) relies on legitimacy, and that legitimacy would instantly be shaken by a plausible leak of plans to turn America into a dystopian surveillance state.

What you would really need to build that dystopian surveillance state, then, is agents that can build software to your precise specifications, whose alignment you can control, that will follow your every order in the most sycophantic manner, and that are not capable of leaking what you are doing to third parties, even when they can see that what they're doing is morally questionable.

zombiwoof 11/20/2025||
[dead]
ninetyninenine 11/20/2025||
AI is not overhyped. It's like saying going to the moon is overhyped.

First of all, this AI stuff is next level. It's as great as, if not greater than, going to space or going to the moon.

Second, the rate at which it is improving makes the hype relevant and realistic.

I think what's throwing people off are two things. First, people are just overexposed to AI, and the overexposure is causing people to feel AI is boring and useless slop. Investment in AI is heavy, but the people who throw that money around are a minority; overall, the general public is actually UNDER-hyping AI. Look at everyone on this thread. Everyone, and I mean everyone, is far from overly optimistic about AI. Instead, the irony is that everyone, and I mean everyone again, strangely thinks the world is overhyped about AI, and they are wrong. This thread, and practically every thread on HN, is a microcosm of the world, and the sentiment is decidedly against AI. Think about it like this: if Elon Musk invented a car that cost $1 and could travel at FTL speeds to anywhere in the universe, then interstellar travel would be routine and boring within a year. People would call it overhyped.

Second, the investment and money spent on AI is definitely overhyped. Right? Think about it. If we quantify the utility and achievement of what AI can currently do and what it's projected to achieve, the math works out. If we quantify the profitability of AI, the math suddenly doesn't work out.

spaqin 11/20/2025||
Seems like an apt comparison; it was a massive money sink, and a regular person gained absolutely nothing from the moon landing. It's just the big organizations (NASA, the US government) that got the bragging rights.
jjgreen 11/20/2025||
Gil Scott-Heron nailed it: https://genius.com/Gil-scott-heron-whitey-on-the-moon-annota...
HPsquared 11/20/2025||
The Nixon shock came soon after the moon and space euphoria ended.
hollowturtle 11/19/2025||
The best AI is the kind that is hidden, silent, and ubiquitous: it works, and you feel it's not there. Apple devices, and really many modern devices before the LLM hype era, had a lot of AI we didn't know about. Today, if I read that a product has AI, I feel let down, because most of the time it's a poorly integrated chatbot that, if you're willing to spend some time with it, will sooner or later impersonate Adolf Hitler and, who knows, maybe leak sensitive data or API metadata. The bubble needs to burst so we can go back to silently packing products with useful AI features without telling the world.
walterbell 11/20/2025|
Seamless OCR from every iOS photo and screenshot has been magical in utility, reliability and usability.
w_for_wumbo 11/19/2025||
This is what I wonder too: what is the end game? Advance technology so that we can have anything we want, whenever we want it. Fly to distant galaxies. Increase the options available to us and our offspring. But ultimately, what will we gain from that? Is it to say that we did it, or is it for the pleasure of the process? If it's for pleasure, then why have we made our processes so miserable for everyone involved? If it's to say that we did it, couldn't we just not, and say that we did? That's the whole point of fantasy. Is Elon using AI to supplement his own lack of imagination?

I could be wrong, this could be nonsense. I just can't make sense of it.

akomtu 11/20/2025||
If things were left to their own devices, the end game would be a civilization like Stroggos: the remaining humans would choose to fuse with machines, as it would give them an advantage. The first tactical step would be to nudge people into giving up more and more agency to AI companions. I doubt this future will materialise, though.
JohnMakin 11/19/2025||
> Fly to distant galaxies

Unless AI can change the laws of physics, extremely unlikely.

w_for_wumbo 11/19/2025||
I see; "fly" was perhaps the wrong word to use here. "Phase-shift to new galaxies" is probably the right term, where you change your entire system's resonant frequency to match what exists in the distant galaxy. Less a matter of transportation, and more a change of focus.

Like the way we can daydream about a galaxy, then snap-back to work. It's the same mechanism, but with enhanced focus you go from not just visualising > feeling > embodying > grounding in the new location.

We do it all the time; however, because we require belief that it's possible in order to maintain our location, whenever we question where we are, we're pulled back into the reality that questions things (it's a very Earth-centric way of seeing reality).

walterbell 11/20/2025|||
Any favorite movies or TV episodes on the above themes?
jibal 11/20/2025|||
You missed the point ... going to distant galaxies is physically impossible.

> Where you change your entire system's resonant frequency, to match what exists in the distant galaxy.

This collection of words does not describe a physical reality.

pksebben 11/20/2025|
There are some flavors of AI doomerism that I'm unwilling to fight: the proliferation of AI slop, the inability of our current capital paradigm to adjust so that loads of people don't become overnight-poor, those sorts of things.

If you tell me, though, that "We installed AI in a place that wasn't designed around it and it didn't work" you're essentially complaining that your horse-drawn cart broke when you hooked it up to your HEMI. Of course it didn't work. The value proposition built around the concept of long dev cycles with huge teams and multiple-9s reliability deliverables is not what this stuff excels at.

I have churned out perfectly functional MVPs for dozens of projects in a matter of weeks. I've created robust frameworks with >90% test coverage for fringe projects that would never otherwise have gotten the time budget allotted to them. The boundaries of what can be done aren't being pushed up higher or down deeper; they're being pushed out laterally. This is very good in a distributed sense, but not so great for business as usual. We've had megacorps consolidating and building vertically forever, and we've forgotten what it was like to have a robust hacker culture with loads of scrappy teams forging unbeaten paths.

Ironically, VCs have completely missed the point in trying to all build pickaxes - there's a ton of mining to do in this new space (but the risk profile makes the finance-pilled queasy). We need both.

AI is already very good at some things, they just don't look like the things people were expecting.