Posted by allenleee 20 hours ago

RTX 5090 and M4 MacBook Air: Can It Game? (scottjg.com)
613 points | 145 comments | page 2
coder68 20 hours ago|
This seems pretty useful for AI inference if it can pass Apple approval. I've wanted to use my Nvidia GPUs with a Mac Mini, and this would enable it to run CUDA directly. Very cool!
frollogaston 20 hours ago||
I'm guessing the x86 emu is because Windows games are rarely built for ARM, right? Was kinda curious how an ARM VM would fare. Anyway, awesome article.
hparadiz 20 hours ago|
Yes. Valve has done a ton of work here because it's required to be able to run x86 games on the Steam Frame, which has an ARM CPU.
hypercube33 20 hours ago|||
The Steam Deck runs a full x86-64 AMD APU. The work Valve has done for that was to get Windows games to run seamlessly on Linux.

Hopefully in 2026, with the Valve Index VR headset, which is ARM (Qualcomm?), we get what you're talking about here - basically Proton for Win32/64 on Linux ARM64.

Side note: Windows on ARM isn't bad, it's just priced out of its league, and cooling is awful for gaming on current laptops. The only issue I had was OpenGL needing some obscure GL-on-DirectX thing from Mesa3D to get games to work.

delecti 19 hours ago||
To keep the chain of Cunningham's Law going, Valve's 2026 headset is called the Steam Frame, not the Index (which came out in 2019).

But Valve's ARM efforts even mean that Android devices can play some (mostly less graphically intensive) Steam games. That makes me very excited about the prospects for the future of gaming handhelds.

sva_ 20 hours ago||||
As sibling pointed out, the Steam Deck basically runs a Ryzen 3 7335U, which is x86.
bigyabai 20 hours ago|||
The Steam Deck is pure x86; it's not an ARM-based CPU. The Steam Frame might be what you're thinking of.
hparadiz 19 hours ago||
You're right. I was thinking of what I was reading about the Steam Frame.
mywittyname 20 hours ago||
> As much as I hate to admit it, step one in most of my projects now is to ask AI about it. Maybe it’ll tell me something I don’t know.

Or, more likely, it will tell you something it doesn't know.

Reminds me of yesterday, when I was arguing with ChatGPT that the 5070 Ti was an actual video card. It kept trying to correct me by saying I must have meant a 4070 Ti, since no such 5070 Ti card exists.

collabs 19 hours ago||
Or, it will acknowledge that it made a mistake and continue to make the same mistake again.

I asked Claude to generate an HTML page about PowerShell 7. It gave me a page saying 7.4 was the latest LTS release. I corrected it with links showing 7.6 was released in March and asked it to regenerate with the latest information.

It generated basically the same page with the same claim that 7.4 was the latest release.

ericmay 19 hours ago|||
> Or, it will acknowledge that it made a mistake and continue to make the same mistake again.

People do this too though. At least the AI generally tries to follow instructions that you give it even when you are lacking clarity in the details.

I feel like it's similar to the self-driving car problem. The car could have 99.9999% reliability and drive much better and safer than a human, yet folks will still freak out about a single mistake, even though you have actual humans today driving the wrong way down the highway, crashing into buildings, drunk driving, stealing cars, and all sorts of other just absolutely stupid things.

We need to move away from this idea that because it's an AI system it should give you perfect responses. It's not a deterministic system, and it can be wrong, though it should get better over time. Your Google search results are wrong all the time too. The NYT writes things that are factually incorrect. Why do we have such high standards for these models when we don't apply them elsewhere?

bryceacc 19 hours ago|||
>I corrected it with links

It should be reasonably expected that you can give a source and fix an error in the AI's output.

I would even go as far as to say if a human directly told the AI "no, use 7.6 as the latest version", the AI should absolutely follow direct instructions no matter what it thinks is true. What if this human was working on a slide about the upcoming release of 7.6 that has no public documentation?

xp84 15 hours ago||||
I see a lot of angry responses in your replies, but I do think you have a good point. It seems like those arguing with you are mostly vigorously opposing a strawman: the idea that AI is perfect and that trusting AI to be perfect is the right move. Only crazy people think that, though.

For me, I ask AI questions about taxes and my health all the time. In the case of taxes, getting a basic handle on the relevant tax law is made 1000 times easier. I can always refer directly to the IRS publications to verify, once I know what I’m looking for.

For health, frankly, it would be impractical for me to ever get as much useful information from doctors as I can easily get from AI. Four years ago, I would have a bunch of health questions and simply never know the answers to any of them, because I had nobody to ask. Now I get them all answered, and if it suggested I actually do anything that sounded even slightly risky, I'd go to the doctor, armed with much more context than I had before, to verify it.

applfanboysbgon 19 hours ago||||
> Your Google search results are wrong all the time too. The NYT writes things that are factually incorrect.

This is also very bad and people complain about these things all the fucking time.

> Why do we have such a high standard for these models

Because Altman and Amodei are defrauding investors out of hundreds of billions of dollars on the promise that they will replace the entire workforce. Of course people are going to point out the emperor has no clothes when half of our society is engaged in mass hysteria worshipping these fucking things as the next industrial revolution, diverting massive amounts of resources to them, and ruining HN with 10 articles on the front page per day about how software engineering is dead.

dvlsg 18 hours ago|||
> ruining HN with 10 articles on the front page per day about how software engineering is dead.

Even this article, which is theoretically about playing games on a MacBook and not about AI, has devolved into AI discussions. It's honestly kind of tiring.

I suppose the article invites it by putting an AI blurb up top, and I suppose I'm also not helping by adding my own comment, but _still_.

ericmay 18 hours ago|||
> This is also very bad and people complain about these things all the fucking time.

So at worst these AI tools are as bad as the existing system. Worth complaining about? Absolutely. Worth holding to much higher standards? Nah, I don't think so. Not at this stage, at least. And folks are just disappointing themselves by setting up straw man expectations.

These tools are non-deterministic systems (like humans) which sometimes don't do exactly what you want (like humans) but are also extremely fast, much cheaper (for now), and have domain knowledge generation that is much broader than any single human has. Like anything else, there are pros and cons.

applfanboysbgon 18 hours ago||
They aren't "straw man expectations" when the entire US economy is now oriented around those expectations.
reaperducer 18 hours ago|||
> The NYT writes things that are factually incorrect. Why do we have such high standards for these models when we don't apply them elsewhere?

The New York Times publishes a "corrections" section in each issue. Let me know where I can view the 60TB file where ChatGPT fesses up to its daily fails.

ericmay 17 hours ago||
"Things exist as they are today and can't possible change or improve in the future".

People lie all the time too. You're just radicalizing yourself to create a bias for no reason other than a straw man expectation you made up for yourself. What's the point of that?

dakolli 16 hours ago|||
But people want to do their taxes with these things lmfao
corry 19 hours ago|||
LLMs are (broadly speaking) poorly positioned to give you a strong verdict on the plausibility of a frontier topic. That said, ChatGPT was exactly right in its response to OP!

"Very deep", "border-line impractical" "in a research-sense" is the perfect summary of this article itself! :)

funimpoded 19 hours ago|||
Watching the entire economy of a superpower and ~all of online culture go absolutely ga-ga over Furbys has been one of the weirdest things I've ever witnessed.
dakolli 16 hours ago|||
Watching the entire economy of a superpower bet its entire future on SOTA text-autocomplete models has been interesting (which I think is what you're referring to).

Previous empires naively bet their entire futures on the words of magicians, or people who claimed they could look into water, the sky, and fire and tell you what the future would be.

Machine learning engineers are the modern-day empire's court magicians.

Apocryphon 19 hours ago|||
Eh, in this use case it's more like a goofy search engine.
perarneng 19 hours ago|||
This is why I use Grok expert mode. It aggressively goes out searching the web for info. It's so much better than relying on year-old data.
_blk 19 hours ago||
Yes, I really like that about Grok. It had a few good qualities, but it was too verbose, so now it's mostly Claude.
JumpCrisscross 19 hours ago||
Solid compromise is Kagi's research assistant. Aggressively cites, unlike Claude. Concise, unlike Grok.
Tsiklon 17 hours ago|||
I argued with GPT-OSS 120B about Cascade Lake Xeon workstation CPU parts not having a GPU when it vehemently said otherwise.
amluto 19 hours ago|||
At least ChatGPT is now aware that Codex exists. I have a chat, still in my history, from a few months ago, in which I asked for help wrangling npm to get @openai/codex working, and ChatGPT said:

> Important: Codex CLI no longer exists

> OpenAI discontinued the Codex model + CLI a while back. There is no official binary named codex in any current OpenAI npm packages. OpenAI’s current CLI tool is:

    npm install -g openai
> which installs the openai command, not codex.

The world knowledge of these models is not necessarily up to date :)
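
(For reference, the install I was actually after, assuming the scoped package name @openai/codex I mentioned is still the current one, is simply:

    npm install -g @openai/codex

which puts the codex command on the PATH.)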

edit: I replayed the same prompt into current ChatGPT and it is less clueless now. Maybe OpenAI noticed that it was utterly dumb that GPT-5.whatever didn't believe that Codex existed and fine-tuned it.

sigmoid10 19 hours ago||
>The world knowledge of these models is not necessarily up to date :)

It's amazing how this still needs to be said. Codex was released in April 2025. The initial GPT-5 and 5.1 still had a knowledge cutoff in late 2024. Like, what did you expect? Always beware the knowledge cutoff for LLMs (although recent releases have gotten much better at researching the web for updates before answering questions about modern software).

z2 18 hours ago||
OpenAI being more aware of the implications would help too--last year I also struggled with using Codex to write scripts to run Codex headless, because it kept insisting that Codex was a retired model from the GPT-3 days and not a program that could be called by a script.
simonh 19 hours ago||
Its training data only goes up to late 2024 or early 2025, so that might be why, though it does have access to the internet.
mywittyname 19 hours ago|||
Yeah, the solution was to link it to the Nvidia page for the card, and then it was like, 'oh, okay.' But at that point, I lost faith in its ability to provide me with the information I was looking for. If its information is so out of date that it doesn't know about the 5000 series, how could I be confident that it knew the details I was asking about (game engine related research)?
asats 19 hours ago||
Are you using the instant model?
mh- 14 hours ago|||
No one is ever willing to share a link to the ChatGPT or Claude session when I ask follow-up questions.
reaperducer 18 hours ago|||
You're holding it wrong.
anomaly_ 11 hours ago||
[flagged]
weird-eye-issue 19 hours ago|||
Depending on your ChatGPT settings...
SamiahAman 18 hours ago||
Very nice effort. This has incredible technical depth, particularly in the DMA and QEMU sections. I also like that you didn't oversell it as the ideal Mac gaming solution. I found the AI inference results to be the most fascinating. Overall, it was a great read.
rballpug 10 hours ago||
It renders according to the Blackwell and Hopper 100.
lenerdenator 15 hours ago||
The lack of native games on Apple Silicon is one of the greatest crimes ever committed against computing.

I got Fallout 3 working on my M2 MBP as well as it did on Windows back in the day. Temps were cool, battery was decent. If they sold my college-years gaming collection (15-ish years ago) in a way that ran natively through GOG or Steam, I'd buy every single title.

nottorp 3 hours ago||
Skyrim runs well [1] on my M2 Mac mini through CrossOver and Rosetta, so most older games will run even better.

The real question is what happens when they drop Rosetta. They promised they'll keep the APIs related to running 32-bit games, but can we trust them?

[1] Not at 8k 240 fps of course.

bigyabai 10 hours ago||
Porting games natively to macOS is a waste of developer time. Apple has already deprecated vast swathes of 32-bit games that were never updated to support 64-bit x86 or Apple Silicon. Developers that give macOS the same level of attention as Windows don't get the same level of support that Microsoft offers in return.

Not to mention that Mac owners are a minority share of the PC gaming market. Linux has the right idea: if you don't translate the games, you'll never have true preservation.

inforemix 12 hours ago||
Awesome dude! Extra fan on the desk too :)
zer0zzz 19 hours ago||
Once eGPUs work on Apple Silicon, there will be little reason to own a PC.
traderj0e 16 hours ago||
Been hearing this for over a decade, except back then it was eGPUs on Intel Macs, which were closer to other PCs if anything. Even if this didn't require so much DIY, and even if Thunderbolt could do full PCIe speeds, most people don't want to add drama when they can just use a PC with regular PCIe slots and native Nvidia compatibility. The native way already has enough edge cases without adding an unusual setup.
bel8 18 hours ago|||
Mac GPU isn't the bottleneck for most games. Compatibility is.
_blk 18 hours ago|||
I assume your reasons are different from mine, so for your reasons it might very well be true. But for my reasons, definitely not, as long as Apple Silicon can't run Linux natively at least somewhat decently - and even then, it's still an Apple...
jaimex2 11 hours ago|||
The only thing Apple Silicon has going for it is power use, and that gap is getting closed. I can't really see any reason why I would switch to Mac; it just seems like you pay a lot more for a closed, expensive environment that fights you at every step.

I'll never pay anyone for a developer licence or fee either. They can sponsor me to port my software to their platform.

lowbloodsugar 17 hours ago|||
Just built a workstation with an older Threadripper Pro. It has 128 PCIe lanes, enough for seven 16-lane PCIe slots. An eGPU gets 4 lanes. I have one GPU, at x16, and I can add more.

Most people don't need that, but most people don't need an eGPU either. The number of gamers who would switch to MacBook+eGPU is negligible. It's just not compelling. For LLMs, hanging a 5090 off the Thunderbolt port makes prompt processing fast, but I will be surprised if the M6 doesn't come with silicon just for that, as it's the current gap. The M5 is quite adequate for token generation for the price, given the RAM quantity and bandwidth. An M6 that accelerates TTFT would make an eGPU irrelevant.

For gaming, the Threadripper gets at least +50 FPS on Windows vs. Linux, and some games just freeze for periods of time on Linux with things like dynamic frame generation. I have an SSD with Windows on it just for gaming.

bigyabai 16 hours ago||
> The number of gamers who would switch to MacBook+eGPU is negligible. It's just not compelling.

This. eGPUs fade in and out of relevance every few years, and even back in the Intel MacBook days there were people advocating for eGPU gaming with Boot Camp. It was a terrible solution; there is every reason to avoid macOS with a dGPU when you have something like Linux or even Windows as an alternative.

Melatonic 15 hours ago|||
That's also because we keep trying to use terrible interconnects. If we get an interconnect with a proper latency spec, things might change.
hacker_mar 14 hours ago|||
[dead]
ActorNightly 11 hours ago||
Man, Apple fans are still proving the stereotype to be accurate after 20 years.

Ignoring the fact that macOS gets in your way every time you try to do something Apple doesn't like, with no guarantee that an update won't break anything existing; ignoring the fact that Macs are non-repairable and non-upgradable; ignoring the fact that they don't support multiple displays flawlessly - I hope you realize that native eGPU support is NEVER coming to Macs, because why the fuck would they enable it when they can just charge you full price for a desktop computer? Apple is built on the sole image that Apple users have money, so buying another Mac Mini or Mac Pro in addition to your laptop is what you are supposed to do.

Android is way ahead of Mac with Android Desktop mode and Samsung DeX, to the point where you don't even need to own a laptop anymore. I've been using my S24/S25 with a lapdock as a laptop for over 3 years now, and it works flawlessly. Apple can easily do this with the iPhone, but they won't, because that means one less MacBook purchase.

neuroelectron 10 hours ago|
I just want to point out that anything you ask ChatGPT about that hasn't been discussed 1000 times on Reddit or Wikipedia is going to be wrong, and it will only be "right" in the sense that it aligns with the artificial consensus created on those platforms.

Of course the author probably did that as a joke.

MikeNotThePope 10 hours ago|
Pretty much! A precedent-fueled prediction engine can’t predict the unprecedented.
neuroelectron 10 hours ago||
It (LLMs in general) actually can make some very prescient hallucinations by making similar inferences across dissimilar domains, but they have since removed that feature to prevent liability and libel. GPT-3 was much more useful in this capacity, especially before they started stress-testing it on 4chan (Jan 2023).