Posted by thoughtpeddler 11 hours ago

Access to frontier AI will soon be limited by economic and security constraints (writing.antonleicht.me)
174 points | 163 comments
BrtByte 7 hours ago|
The uncomfortable implication is that "AI sovereignty" may end up being less about training your own GPT-class model and more about securing compute, energy, datacenter security and contractual access
AndroTux 5 hours ago||
Yeah, so it's just business as usual: If you have ungodly amounts of money, you can essentially do anything, and if you don't, you can't. It's always been this way, and it'll always be this way. I don't see this as a world-ending issue.
chii 5 hours ago||
It's the same as energy sovereignty.
evdubs 8 hours ago||
What's the likelihood that universities eventually become open model providers?
baq 5 hours ago||
Zero, but update priors when you see campus football stadiums replaced with datacenters and gas turbines
pjc50 4 hours ago|||
Universities are struggling to prevent their students using AI, because it makes both learning and evaluation extremely difficult.
mold_aid 31 minutes ago|||
Universities provide the AI to students. We have CoPilot in our 365 and the Ed majors get free AI crap all the time. Hard to make drugs illegal when you're the dealer
davrosthedalek 1 hour ago|||
This. It's extremely frustrating, AI can be a 24/7 tutor, but too many students use it to do their work instead.

We really have to rethink how and what we teach, and how we evaluate. Scoring (non-handwritten) homework is pointless, even counterproductive, because it incentivizes cheating — even for students who don't want to cheat, just so they aren't outscored by the cheaters. Hand-written homework means the students at least have to have read the work once...

And soon, with AI glasses, even in-classroom tests will be difficult.

root_axis 4 hours ago|||
Way too expensive.
digitaltrees 5 hours ago||
Oh. That's an interesting concept.
mc-serious 4 hours ago||
Open source will handle access to models; someone will find a way. Security through obscurity has never worked.
Garlef 2 hours ago||
I'm not so sure about "soon" - the big labs are profiting from the discovery and experimentation efforts by independent contributors (openclaw, etc) and reducing their capabilities also reduces input from this side.
nl 7 hours ago||
Quote:

> “The two AI superpowers are going to start talking. We’re going to set up a protocol in terms of how do we go forward with best practices for AI to make sure nonstate actors don’t get a hold of these models,” Bessent told Joe Kernen on Thursday, on the sidelines of President Donald Trump’s two-day meeting in Beijing with Chinese President Xi Jinping.

https://www.cnbc.com/2026/05/14/us-china-ai-rules-bessent-us...

OpenAI is already talking openly about gated access to their models (see this OpenAI podcast episode for example: https://openai.com/podcast/#oai-podcast-episode-16)

Separately there's also a very active effort to stop open weight releases.

This is a dangerous trend for anyone who thinks access to frontier intelligence is important.

threepts 4 hours ago||
When intelligence is a commercial commodity, it is only bound to happen that the rich gatekeep it to secure their socioeconomic status.

But, I think, with every revolution, hierarchies have historically fallen only for the former serfs to rise.

The industrial revolution and the renaissance were both marked by a massive shift in socioeconomic status and the rise of the middle class.

I think AGI, when it happens, will only raise equality. I may be wrong.

kklisura 3 hours ago||
It took almost two centuries after the Industrial Revolution for broad middle-class living standards to become common for a large population — and that happened only after an intense fight for rights and a fair share of economic gains.

So, sure, AGI might raise equality — but only if we fight for it.

tornikeo 4 hours ago||
I think so too. The rich will be richer, but also more people will have more at the same time. As Civilization put it: 'Just as it has always been'
tactlesscamel 3 hours ago||
How much money are you all paying to use this tech? Last I even tried, it would cost my entire salary. Yet, everyone and their newborns are using it every day for everything. How is this possible?
kouteiheika 1 hour ago||
Depends on which exact model we're talking about, and on your salary.

For example, with the $40/month Kimi Code subscription the limits are so generous that you can use it every day all the time for everything (basically just have an agent constantly running doing something) and never run out of tokens/hit the limits.

duskdozer 2 hours ago|||
Probably a combination of:

- people living in places with higher cost of living and corresponding salaries

- people whose employers are paying for it

- people who aren't actually using it but are being paid to hype it up online

- bots

Havoc 3 hours ago||
Think this somewhat underestimates economic pressures the US labs are under.

OpenAI etc need to make crazy revenue to get their investment math to work. Perhaps you can sell some tokens to privileged partners at a premium rate but I think they’ll need global scale ultimately

phantomathkg 5 hours ago||
Instead of soon, how about just "now"?

I would imagine that not everyone on HN has enough disposable income to subscribe to Claude Max, or a similar max plan from another provider, without thinking about it.

Some people mentioned open weight models, but there are two hurdles. First, in the current economy, securing the best hardware is already stupidly expensive compared to a year or two ago. Second, the open weight models lack the magic that Claude/Gemini/OpenAI put into their proprietary ones, meaning one has to build their own agent that is clever enough to search the internet when it knows its training data is stale.
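The "search the internet when the training data is stale" idea can be sketched as a simple routing check. This is a minimal illustrative sketch, not any real provider's API: the cutoff date, `is_stale`, `answer`, and the `web_search` callback are all hypothetical names invented here for demonstration.

```python
# Hypothetical sketch: route a query to a web-search tool when the topic
# postdates the model's training data. All names are invented for illustration.
from datetime import date

KNOWLEDGE_CUTOFF = date(2024, 6, 1)  # assumed training-data cutoff


def is_stale(question_date: date) -> bool:
    """Events after the cutoff can't be answered from model weights alone."""
    return question_date > KNOWLEDGE_CUTOFF


def answer(question: str, question_date: date, web_search=None) -> str:
    """Answer from weights, or fall back to a search tool for stale topics."""
    if is_stale(question_date):
        if web_search is None:
            return "unknown: topic postdates training data, no search tool available"
        return f"from search: {web_search(question)}"
    return f"from weights: (model's own answer to {question!r})"


# Demo with a stub search tool standing in for a real one.
print(answer("latest GPU prices", date(2025, 3, 1),
             web_search=lambda q: f"top result for {q!r}"))
```

A real agent would let the model itself decide when to call the tool rather than keying off a date, but the routing structure is the same.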

kouteiheika 1 hour ago|
> I would imagine that not everyone on HN has enough disposable income to subscribe to Claude Max, or a similar max plan from another provider, without thinking about it.

You don't need the Max plan with other models if you're not going completely crazy. Other providers have much more generous limits than Anthropic.

nikhilpareek13 3 hours ago|
The piece focuses on closed frontier models but skips that Llama, Mistral, DeepSeek and Qwen run reliably 6 to 9 months behind. For most countries and most use cases, that's what people actually run, and it's not gated by US policy. The "frontier haves vs have-nots" divide is true for the top 5% of capabilities. The other 95% of the economy will run on open weights regardless of what Mythos rollout policy looks like.