Posted by delaugust 11/19/2025
Yes... I don't think the current process of using a diffusion model to generate an image is the way to go. We need AI that integrates fully with existing image and design tools, so it can do things like render SVG, generate layers, and manipulate them the same way we would with the tool, rather than one-shot generating the full image via diffusion.
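To make that concrete, here's a toy sketch of what "the model emits edit operations against a structured document, not pixels" could look like. The operation names and the layers-as-`<g>`-groups convention are my own illustration, not any real editor's API:

```python
# Toy sketch only: the AI would emit edit operations like these instead
# of raw pixels. Names and conventions are hypothetical.
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)

def new_canvas(width: int, height: int) -> ET.Element:
    return ET.Element(f"{{{SVG_NS}}}svg",
                      {"width": str(width), "height": str(height)})

def add_layer(canvas: ET.Element, name: str) -> ET.Element:
    # Model layers as <g> groups, the way many editors do.
    return ET.SubElement(canvas, f"{{{SVG_NS}}}g", {"id": name})

def draw_rect(layer: ET.Element, x: int, y: int,
              w: int, h: int, fill: str) -> None:
    ET.SubElement(layer, f"{{{SVG_NS}}}rect",
                  {"x": str(x), "y": str(y), "width": str(w),
                   "height": str(h), "fill": fill})

# An AI's "edit plan" would be a sequence of calls like these, each one
# inspectable, revisable, and undoable -- unlike a one-shot raster.
canvas = new_canvas(200, 100)
background = add_layer(canvas, "background")
draw_rect(background, 0, 0, 200, 100, "#e0e0ff")
print(ET.tostring(canvas, encoding="unicode"))
```

Each step there is something you can inspect and undo, which is exactly what you lose when the whole image comes out of the sampler in one go.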
Same with code -- right now, so much AI code generation and modification, as well as code understanding, is done via the raw LLM. But we have great static analysis tools available (i.e., what IDEs do to understand code). LLMs that have access to those tools will be more precise and efficient.
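A minimal sketch of the division of labor I mean, using Python's stdlib `ast` module as a stand-in for a real IDE or language-server backend (the tool name and JSON schema are hypothetical, just in the common function-calling shape):

```python
# Sketch: expose a static-analysis query to an LLM as a callable tool,
# instead of asking the model to eyeball raw source text. Python's
# stdlib `ast` stands in for a real IDE backend here.
import ast

def find_references(source: str, name: str) -> list[int]:
    """Line numbers where `name` is referenced, found via the AST --
    the kind of precision string matching (or an unaided LLM) can't
    guarantee."""
    tree = ast.parse(source)
    return sorted(
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, ast.Name) and node.id == name
    )

# The schema the LLM would see (hypothetical names, common
# function-calling shape):
FIND_REFERENCES_TOOL = {
    "name": "find_references",
    "description": "Return all line numbers where a symbol is used.",
    "parameters": {
        "type": "object",
        "properties": {
            "source": {"type": "string"},
            "name": {"type": "string"},
        },
        "required": ["source", "name"],
    },
}

if __name__ == "__main__":
    code = "x = 1\ny = x + 1\nprint(x, y)\n"
    print(find_references(code, "x"))  # -> [1, 2, 3]
```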
It's going to take time to integrate LLMs properly with tools, and to train LLMs to use those tools well. Until we get there, what they can actually do is more limited. But I think the potential is there.
The word itself is by now not only a buzzword but one that thinly tries to disguise some underlying strategies. And it is also a bubble, part of which is currently bursting - you can see this in the stock market.
I really think society overall has to change. I know this is wishful thinking, but we cannot afford to hand that much extra money to a few super-rich people while inflation skyrockets. This is organised theft. AI is not the only troublemaker, of course; a lot of this is a systemic problem with how markets work, or rather don't work. But when politicians are de facto lobbyists and/or corrupt, the whole model of a "free" market breaks down in various ways. On top of that, finding a job is becoming harder and harder in various areas.
Bubble aside, this could be the most destructive effect of AI. I would add that it is also destroying creativity, because when you don't know whether that "amazing video clip" was actually created by a human or an AI, it's no longer that amazing. (To use a trivial example: a video of a cat and a dog interacting in a way that would be truly funny if it were real goes viral, but it means nothing if it was AI-generated.)
What does consciousness have to do with AGI or the point(s) the article is trying to make? This is a distraction imo.
I'm suspicious of the assumption that the world can be modeled linearly. That physical reality is non-linear also seems more logically sound, so why is there such a clear straight line from compute to consciousness?
This can mean any one of 50 different physicalist frameworks. And only 55% of philosophers of mind accept or lean towards physicalism:
https://survey2020.philpeople.org/survey/results/4874?aos=16
> rainbows, their ends, and pots of gold at them are not
It's an analogy. Someone sees a rainbow and assumes there might be a pot of gold at the end of it, so they think that if there were more rainbows, there would be a higher likelihood of a pot of gold (or more pots of gold).
Someone sees computing and assumes consciousness is at the end of it, so they think that if there were more computing, there would be a higher likelihood of consciousness.
But just like the pot of gold, that might be a false assumption. After all, even under physicalism, there is a variety of ideas, some of which would say more computing will not yield consciousness.
Personally, I think that even if computing as we know it can't yield consciousness, that would just result in changing "computing as we know it": we'd end up with attempts to make computers out of wetware, literal neurons (which I think is already being attempted).
> It's an analogy.
And I pointed out why it's an invalid one -- that was the whole point of my comment.
> But just like the pot of gold, that might be a false assumption.
But it's not at all "just like the pot of gold". Rainbows are perceptual phenomena, their perceived location changes when the observer moves, they don't have "ends", and there certainly aren't any pots of gold associated with them--we know for a fact that these are "false assumptions"--assumptions that no one makes except perhaps young children. This is radically different from consciousness and computation, even if it were the case that somehow one could not get consciousness from computation. Equating or analogizing them this way is grossly intellectually dishonest.
> Someone sees computing, assuming consciousness is at the end of it, so they think fi there were more computing, there would be more likelihood of consciousness.
Utter nonsense.
Ok, so are LLMs conscious? And if not, what's the difference between them and a human brain - what distinguishes a non-conscious machine from a conscious entity? And if consciousness is a consequence of computation, what causes the qualitative change from blind, machine-like execution of instructions? How would such a shift in the fundamental nature of mechanical computation even be possible?
No neuroscientist currently knows the answer to this, and neither do you. That’s a direct manifestation of the hard problem of consciousness.
> I've partied with David Chalmers
Sadly, not at an intellectual level.
I try to explain things to them like training-data regurgitation, context-window limits, and confabulation.
They stick their fingers in their ears and say "LA LA LA LA it does my homework for me nothing else matters LA LA LA LA i can't hear you"
They really do not care about the Turing Test. Today's LLMs pass the "snowed my teaching assistant test" and nothing else matters.
Academic fraud really is the killer app for this technology. At least if you're a 19-year-old.
It's also the vision that we will reach a point where _any task_ can be fully automated (the ultimate promise of AGI). That would allow _any business with enough capital_ to increase profits significantly by replacing humans with AI-driven machines.
If that were to happen, the impact on society would be absolutely devastating. It will _not_ be a matter of "humans will just find other jobs to do, just like they used to be farmers and then worked in factories". Because if the promise is true, then whatever "new job" that emerges could also be performed better by an AI. And the idea that "humans will be free to engage in the pursuits they love and enjoy" is bonkers fantasy as it is predicated on us evolving into a scarcity-free utopia where the state (or more likely pseudo-states like BigCorp) provide the resources we need to live without requiring any exchange of labor. We can't even give people SNAP.
Not for Elon, apparently.
> Using an existing space rather than building one from the ground up allowed the company to begin working on the computer immediately.