Hopefully in 2026, with the Valve Index VR headset being ARM-based (Qualcomm?), we get what you're talking about here - basically Proton for Win32/64 to Linux ARM64.
Side note: Windows on ARM isn't bad, it's just priced out of its league, and cooling is awful for gaming on current laptops. The only issue I had was OpenGL needing some obscure GL-on-DirectX layer for Maya3D to get games to work.
But Valve's ARM efforts even mean that Android devices can play some (mostly less graphically intensive) Steam games. That makes me very excited about the prospects for the future of gaming handhelds.
Or, more likely, it will tell you something it doesn't know.
Reminds me of yesterday, when I was arguing with ChatGPT that the 5070 Ti was an actual video card. It kept trying to correct me by saying I must have meant a 4070 Ti, since no such 5070 Ti card exists.
I asked Claude to generate an HTML page about PowerShell 7. It gave me a page saying 7.4 was the latest LTS release. I corrected it with links showing 7.6 was released in March and asked it to regenerate with the latest information.
It generated basically the same page with the same claim that 7.4 was the latest release.
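(If you'd rather not argue with the model over what's current, the GitHub releases API is the quickest ground truth - a minimal sketch in Python, using nothing beyond the standard endpoint for the PowerShell/PowerShell repo:)

  # Minimal sketch: ask GitHub what the actual latest PowerShell release is,
  # instead of trusting the model's training-data memory.
  import json, urllib.request

  url = "https://api.github.com/repos/PowerShell/PowerShell/releases/latest"
  with urllib.request.urlopen(url) as resp:
      release = json.load(resp)

  print("Latest release:", release["tag_name"])  # tag looks like "v7.x.y"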
People do this too though. At least the AI generally tries to follow instructions that you give it even when you are lacking clarity in the details.
I feel like it's similar to the self-driving car problem. The car could have 99.9999% reliability, drive much better and safer than a human, yet folks will still freak out about a single mistake that's made even though you have actual humans today driving the wrong way down the highway, crashing into buildings, drunk driving, stealing cars, and all sorts of other just absolutely stupid things.
We need to move away from this idea that because it's an AI system it should give you perfect responses. It's not a deterministic system and it can be wrong, though it should get better over time. Your Google search results are wrong all the time too. The NYT writes things that are factually incorrect. Why do we have such a high standard for these models when we don't apply them elsewhere?
It should be reasonably expected that you can give a source and fix an error in the AI output.
I would even go as far as to say if a human directly told the AI "no, use 7.6 as the latest version", the AI should absolutely follow direct instructions no matter what it thinks is true. What if this human was working on a slide about the upcoming release of 7.6 that has no public documentation?
For me, I ask AI questions about taxes and my health all the time. In the case of taxes, getting a basic handle on the relevant tax law is made 1000 times easier. I can always refer directly to the IRS publications to verify, once I know what I’m looking for.
For health, frankly, it would be impractical for me to ever get as much useful information from doctors as what I can easily get from AI. Four years ago, I would have a bunch of health questions and simply never know the answers to any of them because I would have nobody to ask. Now I get them all answered, and if it suggested I actually do anything that sounded even slightly risky, I'd go to the doctor, armed with much more context than I had before, to verify it.
This is also very bad and people complain about these things all the fucking time.
> Why do we have such a high standard for these models
Because Altman and Amodei are defrauding investors out of hundreds of billions of dollars on the promise that they will replace the entire workforce. Of course people are going to point out the emperor has no clothes when half of our society is engaged in mass hysteria worshipping these fucking things as the next industrial revolution, diverting massive amounts of resources to them, and ruining HN with 10 articles on the front page per day about how software engineering is dead.
Even this article, which is theoretically about playing games on a MacBook and not about AI, has devolved into AI discussions. It's honestly kind of tiring.
I suppose the article invites it by putting an AI blurb up top, and I suppose I'm also not helping by adding my own comment, but _still_.
So at worst these AI tools are as bad as the existing system. Worth complaining about? Absolutely. Worth holding to much higher standards? Nah, I don't think so. Not at this stage at least. And folks are just disappointing themselves by setting up straw-man expectations.
These tools are non-deterministic systems (like humans) which sometimes don't do exactly what you want (like humans) but are also extremely fast, much cheaper (for now), and have domain knowledge generation that is much broader than any single human has. Like anything else, there are pros and cons.
The New York Times publishes a "corrections" section in each issue. Let me know where I can view the 60TB file where ChatGPT fesses up to its daily fails.
People lie all the time too. You're just radicalizing yourself into a bias for no reason other than a straw-man expectation you made up for yourself. What's the point of that?
"Very deep", "border-line impractical" "in a research-sense" is the perfect summary of this article itself! :)
Previous Empires naively bet their entire future on the words of magicians, or people who claimed they could look into water, the sky and fire and tell you what the future is going to be.
Machine Learning Engineers are the modern day Empire's court magician.
> Important: Codex CLI no longer exists
> OpenAI discontinued the Codex model + CLI a while back. There is no official binary named codex in any current OpenAI npm packages. OpenAI’s current CLI tool is:
npm install -g openai
> which installs the openai command, not codex.

The world knowledge of these models is not necessarily up to date :)
edit: I replayed the same prompt into current ChatGPT and it is less clueless now. Maybe OpenAI noticed that it was utterly dumb that GPT-5.whatever didn't believe that Codex existed and fine-tuned it.
It's amazing how this still needs to be said. Codex was released in April 2025. The initial GPT-5 and 5.1 still had a knowledge cutoff in late 2024. Like, what did you expect? Always beware the knowledge cutoff for LLMs (although recent releases have gotten much better with researching the web for updates before answering modern software topics).
I got Fallout 3 working on my M2 MBP as well as it did on Windows back in the day. Temps were cool, battery was decent. If they sold my college years gaming collection (15-ish years ago) in a way that ran natively through GoG or Steam, I'd buy every single title.
The real question is what happens when they drop Rosetta. They promised they'll keep the APIs related to running 32-bit games, but can we trust them?
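If you're wondering which of your games would actually be stranded, you can at least check whether a given binary ships an arm64 slice at all - a rough sketch using lipo (the app path in the example is made up):

  # Rough sketch: list the architectures in a Mach-O binary with `lipo -archs`.
  # Anything without an arm64 slice is relying on Rosetta 2 today.
  import subprocess, sys

  def archs(binary_path: str) -> list[str]:
      out = subprocess.run(["lipo", "-archs", binary_path],
                           capture_output=True, text=True, check=True)
      return out.stdout.split()

  path = sys.argv[1]  # e.g. /Applications/SomeGame.app/Contents/MacOS/SomeGame
  found = archs(path)
  print(path, "->", found, "(needs Rosetta)" if "arm64" not in found else "(native)")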
[1] Not at 8k 240 fps of course.
Not to mention that Mac owners are a minority share of the PC gaming market. Linux has the right idea: if you don't translate the games, you'll never have true preservation.
I'll never pay anyone for a developer licence or fee either. They can sponsor me to port my software to their platform.
Most people don't need that, but most people don't need an eGPU either. The number of gamers who would switch to Macbook+eGPU is negligible. It's just not compelling. For LLMs, hanging a 5090 off the Thunderbolt port makes prompt processing fast, but I will be surprised if the M6 doesn't come with silicon just for that, as it's the current gap. M5 is quite adequate for token generation for the price, given the RAM quantity and bandwidth. An M6 that accelerates TTFT would make an eGPU irrelevant.
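Back-of-envelope for why that split makes sense: decode is memory-bandwidth bound (you stream the weights once per generated token) while prefill/TTFT is compute bound. A rough sketch with illustrative numbers - the model size, bandwidth, and FLOP/s figures are assumptions, not measurements:

  # All numbers are illustrative assumptions, just to show the shape of the math.
  model_bytes   = 70e9 * 0.5              # ~70B params at 4-bit ≈ 35 GB of weights
  mem_bandwidth = 400e9                   # bytes/s of unified memory (assumed)
  decode_tps    = mem_bandwidth / model_bytes   # weights streamed once per token
  print(f"decode: ~{decode_tps:.0f} tok/s (bandwidth bound)")

  prompt_tokens = 20_000
  flops_per_tok = 2 * 70e9                # ~2 * params FLOPs per prefill token
  compute       = 30e12                   # sustained FLOP/s (assumed)
  ttft_s        = prompt_tokens * flops_per_tok / compute
  print(f"prefill: ~{ttft_s:.0f} s to first token (compute bound)")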
For gaming, the Threadripper gets at least +50 FPS on Windows vs Linux, and some games just freeze for periods of time on Linux with things like dynamic frame generation. I have an SSD for Windows just for gaming.
This. eGPUs fade in and out of relevance every few years, and even back in the Intel MacBook days there were people advocating for eGPU gaming with Boot Camp. It was a terrible solution; there is every reason to avoid macOS with a dGPU when you have something like Linux or even Windows as an alternative.
Ignoring the fact that macOS gets in your way every time you try to do something that Apple doesn't like, with no guarantee that an update won't break anything existing; ignoring the fact that Macs are non-repairable and non-upgradable; ignoring the fact that they don't support multiple displays flawlessly; I hope you realize that native eGPU support is NEVER coming to Macs, because why the fuck would they enable it when they can just charge you full price for a desktop computer? Apple is built on the sole image that Apple users have money, so buying another Mac Mini or Mac Pro in addition to your laptop is what you are supposed to do.
Android is way ahead of Mac with Android Desktop mode and Samsung DeX, to the point where you don't even need to own a laptop anymore. I've been using my S24/S25 with a lapdock for over 3 years now as a laptop, and it works flawlessly. Apple can easily do this with the iPhone, but they won't, because that means one less MacBook purchase.
Of course the author probably did that as a joke.