E.g., today there are billions of dollars being spent just to create and label more data, which is a global act of recruiting, training, organization, etc.
When we imagine these models self improving, are we imagining them “just” inventing better math, or conducting global-scale multi-company coordination operations? I can believe AI is capable of the latter, but that’s an awful lot of extra friction.
I don't understand how anyone takes this seriously. Speculation like this is not only useless, but disingenuous. Especially when it's sold as "informed by trend extrapolations, wargames, expert feedback, experience at OpenAI, and previous forecasting successes". This is complete fiction which, at best, is "inspired by" the real world. I question the motives of the authors.
How will it come up with the theoretical breakthroughs needed to beat the scaling problem that GPT-4.5 revealed, when it hasn't been shown that LLMs can produce novel research in any field at all?
Maybe the company that just tells an AI to generate hundreds of random scaling ideas and tries them all is the one that will win. That company should probably be 100 percent committed to this approach too: no FLOPs spent on Ghibli inference.
Your daily vibe coding challenge: Get GPT-4o to output functional code which uses Google Vertex AI to generate a text embedding. If they can solve that one by July, then maybe we're on track for "curing all disease and aging, brain uploading, and colonizing the solar system" by 2030.
You may consider using search to be cheating, but we do it, so why shouldn't LLMs?
Search is totally reasonable, but in this case even Google's own documentation on these libraries is exceedingly bad. Nearly all of the examples they give are for accessing the generative language models, not the text embedding models, so GPT will sometimes generate code that is perfectly correct for one of the generative models, but with the "model: gemini-2.0" parameter swapped for "model: text-embedding-005", which also does not work.
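For reference, here's roughly what I believe the working library-based call is supposed to look like, pieced together from Google's Node samples. I'm assuming the @google-cloud/aiplatform client is the library in question; the region, project ID, and task_type are placeholders, and I haven't run this:

    // Untested sketch based on Google's Node samples for @google-cloud/aiplatform.
    // Region, project ID, and task_type are placeholders.
    import { PredictionServiceClient, helpers } from "@google-cloud/aiplatform";

    const client = new PredictionServiceClient({
      apiEndpoint: "us-central1-aiplatform.googleapis.com",
    });

    async function embed(text: string, project: string) {
      // Embedding models are served through the `predict` endpoint under
      // publishers/google/models/..., not through the generative-model helpers,
      // which is why swapping model names into Gemini example code fails.
      const endpoint = `projects/${project}/locations/us-central1/publishers/google/models/text-embedding-005`;
      const [response] = await client.predict({
        endpoint,
        instances: [helpers.toValue({ content: text, task_type: "RETRIEVAL_DOCUMENT" })!],
      });
      // The vector is nested under predictions[0] -> embeddings -> values
      // in the returned protobuf Value structure.
      return response.predictions;
    }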
o3-mini-high's output might work, but it isn't ideal: it immediately jumps to recommending avoiding the Google Cloud libraries entirely and issuing the request directly against their REST API with fetch.
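The REST route is at least simple, though. Going from memory of the Vertex predict endpoint (project ID, region, and the access token are placeholders; treat this as an untested sketch, not the actual o3-mini-high output):

    // Untested sketch of the direct-REST approach.
    const PROJECT = "my-project"; // placeholder GCP project ID
    const LOCATION = "us-central1";
    const MODEL = "text-embedding-005";
    const token = "<paste output of `gcloud auth print-access-token`>"; // placeholder

    const resp = await fetch(
      `https://${LOCATION}-aiplatform.googleapis.com/v1/projects/${PROJECT}` +
        `/locations/${LOCATION}/publishers/google/models/${MODEL}:predict`,
      {
        method: "POST",
        headers: {
          Authorization: `Bearer ${token}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({ instances: [{ content: "Hello, embeddings" }] }),
      },
    );
    const data = await resp.json();
    // The vector comes back under predictions[0].embeddings.values.
    const embedding: number[] = data.predictions?.[0]?.embeddings?.values ?? [];
    console.log(embedding.length);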
They're going to need to rewrite this from scratch in a quarter unless the GOP suddenly collapses and Congress reasserts control over tariffs.
The only response, in my view, is to ban technology (like in Dune) or engage in acts of terror, Unabomber-style.
Not far off from the conclusion of others who believe the same wild assumptions. Yudkowsky has suggested using terrorism to stop a hypothetical AGI -- that is, nuclear attacks on datacenters that get too powerful.
Banning will not automatically erase the existence and possibility of things. We banned the use of nuclear weapons, yet we all know they exist.
For example, human motivation often involves juggling several goals simultaneously. I might care about both my own happiness and my family's happiness. The way I navigate this isn't by picking one goal and maximizing it at the expense of the other; instead, I try to balance my efforts and find acceptable trade-offs.
I think this 'balancing act' between potentially competing objectives may be a really crucial aspect of complex agency, but I haven't seen it discussed as much in alignment circles. Maybe someone could point me to some discussions about this :)
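To make the contrast concrete, here's a toy framing of my own (not anything from the alignment literature): over the same set of options, a single-objective maximizer and a weighted trade-off pick different plans.

    // Toy sketch: single-objective maximization vs a weighted trade-off.
    type Plan = { myHappiness: number; familyHappiness: number };

    const plans: Plan[] = [
      { myHappiness: 10, familyHappiness: 0 },  // maximizes my happiness only
      { myHappiness: 7,  familyHappiness: 6 },  // balanced trade-off
      { myHappiness: 0,  familyHappiness: 10 }, // maximizes family happiness only
    ];

    // A pure maximizer of one goal picks the first plan...
    const maximizer = plans.reduce((a, b) => (b.myHappiness > a.myHappiness ? b : a));

    // ...while a weighted (scalarized) objective prefers the balanced plan
    // for any reasonably even weights.
    const weights = { mine: 0.5, family: 0.5 };
    const score = (p: Plan) => weights.mine * p.myHappiness + weights.family * p.familyHappiness;
    const balanced = plans.reduce((a, b) => (score(b) > score(a) ? b : a));

    console.log(maximizer, balanced); // { myHappiness: 10, ... } vs { myHappiness: 7, ... }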
There are obviously big risks with AI, as listed in the article, but the genie is out of the bottle anyway. Even if all countries agreed to stop AI development, how long would that agreement last? 10 years? 20? 50? Eventually powerful AIs will be developed, if that is possible (which I believe it is; I didn't think I'd see the current stunning development in my lifetime, and while I may not see AGI, I'm sure we'll get there eventually).
So it’s not that “an AI” becomes superintelligent; what we actually seem to have is an ecosystem of blended human and artificial intelligences (including corporations!), which constitutes a distributed cognitive ecology of superintelligence. This is very different from what they discuss.
This has implications for alignment, too. It isn’t so much about the alignment of AI to people, but that both humans and AI need to find alignment with nature. There is a kind of natural harmony in the cosmos; that’s what superintelligence will likely align to, naturally.
I do agree they don't fully explore the implications. But they do consider things like coordination amongst many agents.
And, each chat is not autonomous but integrated with other intelligent systems.
So, with more multiplicity, I think things work differently. More ecologically. For better and worse.