Posted by delaugust 11/19/2025

AI is a front for consolidation of resources and power (www.chrbutler.com)
545 points | 448 comments
jaketoronto 11/19/2025|
> It can take enormous amounts of time to replicate existing imagery with prompt engineering, only to have your tool of choice hiccup every now and again or just not get some specific aspect of what a person had created previously.

Yes... I don't think the current process of using a diffusion model to generate an image is the way to go. We need AI that integrates fully within existing image and design tools, so it can do things like rendering SVG, generating layers and manipulating them, the same as we would with the tool, rather than one-shot generating the full image via diffusion.
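
A minimal sketch of the idea, assuming a hypothetical edit format (nothing here is a real product's API): the model emits structured layer operations, and the host tool applies them, here to build an SVG:

    import xml.etree.ElementTree as ET

    # Hypothetical structured edits an LLM might emit instead of raw pixels
    ops = [
        {"op": "add_layer", "id": "bg"},
        {"op": "draw_rect", "layer": "bg", "x": 0, "y": 0, "w": 200, "h": 100, "fill": "#eef"},
        {"op": "add_layer", "id": "label"},
        {"op": "draw_text", "layer": "label", "x": 20, "y": 55, "text": "Hello", "fill": "#123"},
    ]

    def apply_ops(ops):
        # Each layer becomes an SVG group the tool (or the model) can revisit later
        svg = ET.Element("svg", xmlns="http://www.w3.org/2000/svg",
                         width="200", height="100")
        layers = {}
        for o in ops:
            if o["op"] == "add_layer":
                layers[o["id"]] = ET.SubElement(svg, "g", id=o["id"])
            elif o["op"] == "draw_rect":
                ET.SubElement(layers[o["layer"]], "rect",
                              x=str(o["x"]), y=str(o["y"]),
                              width=str(o["w"]), height=str(o["h"]),
                              fill=o["fill"])
            elif o["op"] == "draw_text":
                t = ET.SubElement(layers[o["layer"]], "text",
                                  x=str(o["x"]), y=str(o["y"]), fill=o["fill"])
                t.text = o["text"]
        return ET.tostring(svg, encoding="unicode")

    print(apply_ops(ops))

The point is that every edit stays addressable -- the model can come back and move the "label" layer -- instead of regenerating the whole image.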

Same with code -- right now, so much AI code generation and modification, as well as code understanding, is done via the raw LLM. But we have great static analysis tools available (i.e., what IDEs do to understand code). LLMs that have access to those tools will be more precise and efficient.
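
For example, here's a rough sketch (hypothetical, not any particular assistant's tooling) of exposing a static-analysis query as a tool the model can call, using Python's ast module, so it gets exact signatures instead of guessing from raw text:

    import ast

    def list_functions(source: str) -> list[str]:
        # A "tool" an LLM could call: return the actual function names and
        # parameters found in Python source, rather than having the model infer them
        tree = ast.parse(source)
        out = []
        for node in ast.walk(tree):
            if isinstance(node, ast.FunctionDef):
                args = ", ".join(a.arg for a in node.args.args)
                out.append(f"{node.name}({args}) at line {node.lineno}")
        return out

    example = '''
    def scale(x, factor):
        return x * factor

    def shift(x, offset=0):
        return x + offset
    '''
    print(list_functions(example))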

It's going to take time to integrate LLMs properly with tools, and to train LLMs to use those tools well. Until we get there, the potential remains limited. But I think the potential is there.

shevy-java 11/20/2025||
It is also a tool to cut costs for corporations.

The word itself has become a buzzword, and one that thinly disguises some underlying strategies. It is also a bubble, part of which is currently bursting - you can see this on the stock market.

I really think society overall has to change. I know this is wishful thinking, but we cannot afford to hand that extra money to a few super-rich while inflation skyrockets. This is organised theft. AI is not the only troublemaker, of course; a lot of this is a systemic problem with how markets work, or rather don't work. But when politicians are de facto lobbyists and/or corrupt, the whole model of a "free" market breaks down in various ways. On top of that, finding jobs is becoming harder and harder in various areas.

insane_dreamer 11/20/2025||
> Meanwhile, generative AI presents a few other broader challenges to the integrity of our society. First is to truth. We’ve already seen how internet technologies can be used to manipulate a population’s understanding of reality.

Bubble aside, this could be the most destructive effect of AI. I would add to this that it is also destroying creativity, because when you don't know whether that "amazing video clip" was actually created by a human or an AI, then it's no longer that amazing. (To use a trivial example: a video of a cat and dog interacting in a way that would be truly funny if real goes viral, but it means nothing if it was AI-generated.)

block_dagger 11/19/2025||
> To think that with enough compute we can code consciousness is like thinking that with enough rainbows one of them will have a pot of gold at its end.

What does consciousness have to do with AGI or the point(s) the article is trying to make? This is a distraction imo.

kmnc 11/19/2025||
It’s a funny analogy, because what’s missing for the rainbows with pots of gold is magic and fairytales…so what’s missing for consciousness is also magic and fairytales? I’ve yet to see any compelling argument for believing that enough compute wouldn’t allow us to code consciousness.
apsurd 11/19/2025||
Yes, that's just it though, it's a logic argument. "Tell me why we aren't just stochastic parrots!" is more logically sound than "God made us", but that doesn't de facto make it "the correct model of reality".

I'm skeptical that the world can be modeled linearly. That physical reality is non-linear is also more logically sound, so why is there such a clear straight line from compute to consciousness?

jibal 11/20/2025||
Consciousness is a physical phenomenon; rainbows, their ends, and pots of gold at them are not.
drkleiner 11/20/2025|||
> Consciousness is a physical phenomenon

This can mean one of 50 different physicalist frameworks. And only 55% of philosophers of mind accept or lean towards physicalism:

https://survey2020.philpeople.org/survey/results/4874?aos=16

> rainbows, their ends, and pots of gold at them are not

It's an analogy. Someone sees a rainbow and assumes there might be a pot of gold at the end of it, so they think if there were more rainbows, there would be a greater likelihood of a pot of gold (or more pots of gold).

Someone sees computing and assumes consciousness is at the end of it, so they think if there were more computing, there would be more likelihood of consciousness.

But just like the pot of gold, that might be a false assumption. After all, even under physicalism, there is a variety of ideas, some of which would say more computing will not yield consciousness.

Personally, I think even if computing as we know it can't yield consciousness, that would just result in changing "computing as we know it" and end up with attempts to build computers from wetware, literal neurons (which I think is already being attempted).

jibal 11/20/2025|||
I'm well aware that many people are wrong about consciousness and have been misled by Searle, Chalmers, Nagel, et al. Numbers like 55% are argumentum ad populum and are completely irrelevant. The sample space matters ... I've been to the "[Towards a] Science of Consciousness" conferences and they are full of cranks and loony tunes, and even among respectable, intelligent philosophers of mind there is little knowledge or understanding of neuroscience, often proudly so. These philosophers should read Arthur Danto's introduction to C.L. Hardin's "Color for Philosophers". I've partied with David Chalmers--fun guy, very bright, but he has done huge damage to the field. Roger Penrose likewise--a Nobel Prize-winning physicist, but his knowledge of the brain comes from that imbecile Stuart Hameroff.

The fact remains that consciousness is a physical function of physical brains--collections of molecules--and can definitely be the result of computation. This isn't an "assumption"; it's the result of decades of study and analysis. E.g., people who think that Searle's Chinese Room argument is valid have not read Larry Hauser's PhD thesis ("Searle's Chinese Box: The Chinese Room Argument and Artificial Intelligence") along with a raft of other criticism utterly debunking it (including arguments from Chalmers).

> It's an analogy.

And I pointed out why it's an invalid one -- that was the whole point of my comment.

> But just like the pot of gold, that might be a false assumption.

But it's not at all "just like the pot of gold". Rainbows are perceptual phenomena, their perceived location changes when the observer moves, they don't have "ends", and there certainly aren't any pots of gold associated with them--we know for a fact that these are "false assumptions"--assumptions that no one makes except perhaps young children. This is radically different from consciousness and computation, even if it were the case that somehow one could not get consciousness from computation. Equating or analogizing them this way is grossly intellectually dishonest.

> Someone sees computing and assumes consciousness is at the end of it, so they think if there were more computing, there would be more likelihood of consciousness.

Utter nonsense.

antonvs 11/28/2025||
> The fact remains that consciousness is a physical function of physical brains--collections of molecules--and can definitely be the result of computation

Ok, so are LLMs conscious? And if not, what’s the difference between them and a human brain - what distinguishes a non-conscious machine from a conscious entity? And if consciousness is a consequence of computation, what causes the qualitative change from blind, machine-like execution of instructions? How would such a shift in the fundamental nature of mechanical computation even be possible?

No neuroscientist currently knows the answer to this, and neither do you. That’s a direct manifestation of the hard problem of consciousness.

> I've partied with David Chalmers

Sadly, not at an intellectual level.

eightman 11/19/2025||
The use case for AI is spam.
topaz0 11/20/2025||
It's the reverse printing press, drowning all purposeful human communication in noise.
bdw5204 11/20/2025||
Another major use case for it is enabling students to more easily cheat on their homework. Which is why it is probably going to end up putting Chegg out of business.
octoberfranklin 11/20/2025|||
I am shocked when I talk to college kids about AI these days.

I try to explain things to them like training-data regurgitation, context-window limits, and confabulation.

They stick their fingers in their ears and say "LA LA LA LA it does my homework for me nothing else matters LA LA LA LA i can't hear you"

They really do not care about the Turing Test. Today's LLMs pass the "snowed my teaching assistant" test, and nothing else matters.

Academic fraud really is the killer app for this technology. At least if you're a 19-year-old.

BeFlatXIII 11/20/2025|||
These kids are only in school to get a meal ticket to white-collar job interviews. AI frees them to be honest about their intentions, rather than pretend for long enough to stumble their way into learning something.
SoftTalker 11/20/2025||||
Maybe AI will finally skewer the myth that an undergraduate degree means anything.
BeFlatXIII 11/20/2025|||
AI brought “let someone else do it for you” cheating to the middle class, no longer the domain of wealthy flunkies.
IAmGraydon 11/20/2025||
I think the author is right that AI companies know it’s a scam, but it’s in the interest of stealing investors’ money, not consolidating land and resources through energy infrastructure buildout. Who does the author think owns that? It’s not the AI companies. It’s the same power companies that already own it.
keepamovin 11/20/2025||
This recentralization trend was always going to happen, given the dynamics and economics of content delivery, medium enhancements, and market expectations. Even before the AI revolution, I foresaw recentralization: fat servers, thin clients. It was inevitable.
insane_dreamer 11/20/2025||
It's not just consolidation of physical resources like land and water.

It's also the vision that we will reach a point where _any task_ can be fully automated (the ultimate promise of AGI). That gives _any business with enough capital_ a way to increase profits significantly by replacing humans with AI-driven machines.

If that were to happen, the impact on society would be absolutely devastating. It would _not_ be a matter of "humans will just find other jobs to do, just like they used to be farmers and then worked in factories". Because if the promise is true, then whatever "new job" emerges could also be performed better by an AI. And the idea that "humans will be free to engage in the pursuits they love and enjoy" is bonkers fantasy, as it is predicated on us evolving into a scarcity-free utopia where the state (or, more likely, pseudo-states like BigCorp) provides the resources we need to live without requiring any exchange of labor. We can't even give people SNAP.

tim333 11/19/2025||
The coming of AI seems like one of those things, like the agricultural or industrial revolution, that is kind of inevitable once it starts. All the business of who pays how much for which stock, what price is sensible, and which algorithm, seems kind of secondary.
jdkee 11/20/2025|
"A datacenter takes years to construct."

Not for Elon, apparently.

See https://en.wikipedia.org/wiki/Colossus_(supercomputer)

ksynwa 11/20/2025|
A small catch:

> Using an existing space rather than building one from the ground up allowed the company to begin working on the computer immediately.
