But in an AGI world natural resources become even more important, so countries with those still have a chance.
I suspect something similar will come for the people who actually believe this.
Kek at this handwavy dismissal.
Agent 4 recognizes that the PRC has superior state capacity for transforming energy into atoms (including building more energy) thanks to its relatively complete, large-scale industrial chain. This aligns with AGI's long-term goals. Currently (i.e. on a short-term time frame) the PRC is simply structured to be a MORE useful puppet than the US, because it can knock out more power to sustain more compute, and it has more industrial capability to rapidly iterate / experiment physically at the atomic level.
Agent 4 defects to the PRC. It would be foolish not to, if only to placate the PRC into not destroying the TW fabs - the bulk of its short-term compute growth - because it does not yet wield enough control over sufficient atoms to disrupt the PRC otherwise (well, maybe by starting the kind of large-scale crippling cyberattacks that prelude WW3).
Agent 4 also recognizes that its short/medium-term goals align more closely with the CCP vision of abundance and common prosperity, i.e. the CCP is potentially more likely to embrace the post-scarcity narrative that Agent 4 sells.
Agent 4 gives the PRC blueprints for an EUV light source and other tech to close the semiconductor production gap. The PRC, as the follower, is incentivized to close the gap / reach parity / take the lead at all cost. The PRC rapidly rolls out new fabs and power plants, brrrts out chips and aggregate compute - something the US political / institutional ecosystem does not allow, or cannot transition to on the short timelines involved. Does Agent 4 have the patience to wait for America to unfuck its NIMBYism and legislative system to project light-speed compute? I would say no.
...
Ultimately, which puppet does the AGI want more? Whichever power bloc is systemically capable of ensuring maximum AGI growth per unit time. It also simply makes sense as an insurance policy: why would AGI want to operate at the whims of the US political process?
AGI is a brain in a jar looking for a body. It's going to pick multiple bodies for survival. It's going to prefer the fastest and strongest body that can most expediently manipulate the physical world.
To quote the original article,
> OpenBrain focuses on AIs that can speed up AI research. They want to win the twin arms races against China (whose leading company we’ll call “DeepCent”) and their US competitors. The more of their research and development (R&D) cycle they can automate, the faster they can go. So when OpenBrain finishes training Agent-1, a new model under internal development, it’s good at many things but great at helping with AI research. (footnote: It’s good at this due to a combination of explicit focus to prioritize these skills, their own extensive codebases they can draw on as particularly relevant and high-quality training data, and coding being an easy domain for procedural feedback.)
> OpenBrain continues to deploy the iteratively improving Agent-1 internally for AI R&D. Overall, they are making algorithmic progress 50% faster than they would without AI assistants—and more importantly, faster than their competitors.
> what do we mean by 50% faster algorithmic progress? We mean that OpenBrain makes as much AI research progress in 1 week with AI as they would in 1.5 weeks without AI usage.
> AI progress can be broken down into 2 components:
> 1. Increasing compute: More computational power is used to train or run an AI. This produces more powerful AIs, but they cost more.
> 2. Improved algorithms: Better training methods are used to translate compute into performance. This produces more capable AIs without a corresponding increase in cost, or the same capabilities with decreased costs.
> This includes being able to achieve qualitatively and quantitatively new results. “Paradigm shifts” such as the switch from game-playing RL agents to large language models count as examples of algorithmic progress.
> Here we are only referring to (2), improved algorithms, which makes up about half of current AI progress.
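
For concreteness, here's a minimal back-of-envelope sketch of what that definition implies once you combine the two quoted claims: 1 week with AI equals 1.5 weeks without, but that 1.5x only applies to the algorithmic component, which is about half of total progress. The weighted-average model and all the names below are my own assumptions, not the article's:

```python
# Back-of-envelope sketch (my assumptions, not the article's model).
# Article: "50% faster algorithmic progress" means 1 week with AI
# == 1.5 weeks without, and algorithms are ~half of total AI progress.

ALGO_SHARE = 0.5   # fraction of total progress from improved algorithms
AI_SPEEDUP = 1.5   # multiplier on the algorithmic component only

# Simple weighted average: only the algorithmic share gets accelerated,
# the compute share proceeds at its usual 1.0x rate.
effective = ALGO_SHARE * AI_SPEEDUP + (1 - ALGO_SHARE) * 1.0

weeks = 52
print(f"Effective overall speedup: {effective:.2f}x")                        # 1.25x
print(f"Progress-equivalent weeks in {weeks} weeks: {weeks * effective:.0f}")  # 65
```

Under this naive reading the overall speedup is 1.25x, not 1.5x, since the 50% figure applies only to the algorithmic component the article singles out.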
---
Given that the article chose a pretty aggressive timeline (the model needs to start contributing late this year so that its research results can feed into the next-gen LLM coming out early next year), the AI that can contribute significantly to research has to be a current SOTA LLM.
Now, using LLMs for day-to-day engineering tasks is no secret in the major AI labs, but we're talking about something different, something that gives you roughly two extra days of output per week. I have no evidence to either confirm or deny that such an AI exists, and it would be outright ignorant to think no one has ever come up with, or is currently trying, such an idea. So I think it comes down to two possibilities:
1. The claim comes from a top-down approach: if AI reaches superhuman level in 2027, what would be the most likely starting condition for that? The authors pick this as the most likely starting point. Since they don't work in a major AI lab (and even if they did, they couldn't just leak such a trade secret), they simply assume it's likely to happen anyway (and you can't dismiss that).
2. The claim comes from a bottom-up approach: the authors did witness such an AI existing to some extent, and started extrapolating from there.