Posted by naves 13 hours ago

Apple approves driver that lets Nvidia eGPUs work with Arm Macs(www.theverge.com)
330 points | 147 comments
wmf 12 hours ago|
Pretty misleading. This driver is only for compute not graphics.
polotics 12 hours ago||
As a sizable share of the market is going to want to use this for local LLMs, I do not think this is that misleading.
bigyabai 1 hour ago||
Most people I know are not using TinyGrad for inference, but CUDA or Vulkan (neither of which are provided here).
comboy 12 hours ago|||
GPUs can do graphics too?
aobdev 8 hours ago||
I can’t tell if you’re making a joke about the current state of AI and GPUs or refuting the purpose of this driver
manmal 12 hours ago||
Graphics was not what came to mind when I saw the headline.
mort96 10 hours ago|||
Graphics is typically what comes to my mind when people talk about graphics processing units
Fnoord 11 hours ago|||
The term eGPU gives it away, but is inaccurate.

Something like eNPU or eTPU seems more appropriate here.

vondur 11 hours ago||
If you could get Nvidia driver support on Macs, I bet Apple would have sold more Mac Pros.
ProllyInfamous 4 hours ago|
If unfamiliar: it is a big deal that AAPL & NVDA again have an official relationship.

For well over a decade, Apple has not allowed newer Nvidia GPUs (by not signing their drivers).

A seven-year-old GPU (e.g. Vega 64, GTX 1080 Ti) can still process more tokens/second than most Apple Silicon (particularly the lower-end chips).

As discussed elsewhere, Apple Max/Ultra processors are best suited for huge models (but are not as fast as, e.g., an RTX 5090).

bigyabai 2 hours ago||
This is not an official relationship, this is a third-party effort by tiny corp with no Nvidia involvement.
the__alchemist 11 hours ago||
I'm writing scientific software that has components (molecular dynamics) that are much faster on GPU. I'm using CUDA only, as it's the easiest to code for. I'd assumed this meant no-go on ARM Macs. Does this news change that?
wmf 11 hours ago|
This driver doesn't support CUDA.
ksec 10 hours ago|||
This comment should be pinned at the top.
brcmthrowaway 9 hours ago|||
Isn't MLX a CUDA translation layer?
ykl 7 hours ago|||
No, MLX is nothing like a CUDA translation layer at all. It'd be more accurate to describe MLX as a NumPy translation layer: it lets you write high-level code dealing with NumPy-style arrays and under the hood will use a Metal GPU or CUDA GPU for execution. It doesn't translate existing CUDA code to run on non-CUDA devices.
superb_dev 8 hours ago||||
My understanding is that MLX is Apple’s CUDA, so a CUDA translation layer would target MLX
ykl 7 hours ago||
No, it’s not. MLX is Apple’s NumPy more or less.
wmf 8 hours ago|||
Does tinygrad support MLX?
frankc 12 hours ago||
My main thought is: would this allow me to speed up prompt processing for large MoE models? That is the real bottleneck on the M3 Ultra; the tokens per second is pretty good.
embedding-shape 11 hours ago|
tinygrad does have pretty neat support for sharding things across various devices relatively easily, so that'd help. I'm guessing you'd hit the bandwidth ceiling transferring stuff back and forth, though.
ece 7 hours ago||
Apple should update this page for ARM Macs, now that tinygrad runs on eGPUs: https://support.apple.com/en-us/102363
brcmthrowaway 12 hours ago||
What are the limitations of USB4/Thunderbolt compared with a regular PCIe slot?
embedding-shape 12 hours ago||
Well, for starters, PCIe 5.0 x16 does something like 60 GB/s each way, while Thunderbolt 4 does 4 GB/s each way and TB5 does 8 GB/s each way. If you don't actually hit the bandwidth limits, it obviously matters less. Whether you'd notice a large difference depends heavily on the type of workload.
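[Ed.: taking the each-way figures quoted above at face value, a back-of-envelope sketch of the gap (line rates, not measured throughput):]

```python
# Back-of-envelope comparison using the each-way figures quoted above
# (line rates, not measured throughput).
bandwidth_gb_s = {
    "PCIe 5.0 x16": 60.0,
    "Thunderbolt 4": 4.0,
    "Thunderbolt 5": 8.0,
}

pcie = bandwidth_gb_s["PCIe 5.0 x16"]
for name, bw in bandwidth_gb_s.items():
    # How many times more headroom the PCIe slot has over each link
    print(f"{name}: {bw:g} GB/s each way ({pcie / bw:g}x vs PCIe 5.0 x16)")
```

So a TB4 tunnel has roughly 1/15 the each-way bandwidth of a full PCIe 5.0 x16 slot, and TB5 roughly 1/7.5.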
givinguflac 11 hours ago||
I think you missed a zero, TB5 does 80GB/s.
Tepix 11 hours ago|||
No. It does 80Gbps.

https://www.convertunits.com/from/Gbps/to/GB/s

givinguflac 10 hours ago||
Derp, didn’t read closely enough. Thanks
mch17 11 hours ago||||
No, it does 80 Gb/s. With encoding loss it’s closer to 8GB/s
yonatan8070 10 hours ago|||
I can speak to my own experience, YMMV

I hooked up a Radeon RX 9060 XT to my Fedora KDE laptop (Yoga Pro 7 14ASP9) using a Razer Core X Chroma (40 Gbps), and the performance when using the eGPU was very similar to using the Radeon 880M built into the laptop's Ryzen AI 9 365 APU.

So at least with my setup, performance is not great at all.

On paper, TB4 is capable of pushing 5 GB/s, somewhere between a x4 and a x8 PCIe 3.0 link, while a x16 PCIe 4.0 link can do ~31.5 GB/s.

For numbers about all PCIe generations and lane counts, see the "History and revisions" section here: https://en.wikipedia.org/wiki/PCI_Express

Edit to add: the performance I measured is in gaming workloads, not compute

jasomill 7 hours ago||
For gaming, lots of things can affect Thunderbolt eGPU performance.

First, you need to connect the display directly to the eGPU rather than to the laptop.

Second, you need to make sure you have enough VRAM to minimize texture streaming during gameplay.

Third, you'll typically see better performance in terms of higher settings/resolutions vs higher framerates at lower settings/resolutions.

Fourth, depending on your system, you may be bottlenecked by other peripherals sharing PCH lanes with the Thunderbolt connection.

Finally, depending on the Thunderbolt version, PCIe bandwidth can be significantly lower than the advertised bandwidth of the Thunderbolt link. For example, while Thunderbolt 3 advertises 40 Gbps, and typically connects via x4 PCIe 3.0 (~32 Gbps), for whatever reason it imposes a 22 Gbps cap on PCIe data over the Thunderbolt link.
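[Ed.: the arithmetic behind that last point, as a quick sketch; the numbers are the ones quoted in the paragraph above:]

```python
# Thunderbolt 3's advertised rate vs what its PCIe tunnel actually gets.
advertised_gbps = 40        # Thunderbolt 3 marketing figure
pcie_x4_gen3_gbps = 32      # a x4 PCIe 3.0 link (~8 Gbps effective per lane)
tb3_pcie_cap_gbps = 22      # cap imposed on PCIe data over the TB3 link

cap_gb_s = tb3_pcie_cap_gbps / 8                    # Gbps -> GB/s
share_of_slot = tb3_pcie_cap_gbps / pcie_x4_gen3_gbps

print(f"PCIe over TB3: ~{cap_gb_s:.2f} GB/s "
      f"({share_of_slot:.0%} of a native x4 gen3 slot)")
```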

Even taking all this into account, you'll still see a significant performance drop on a current-gen GPU when running over Thunderbolt, though I'd still expect a useful performance improvement over integrated graphics in most cases (though not necessarily worth the cost of the eGPU enclosure vs just buying a cheap used minitower PC on eBay and gaming on that instead of a laptop).

justincormack 11 hours ago||
It carries PCIe, but only at x4. Thunderbolt 4 tunnels PCIe gen 3 and Thunderbolt 5 tunnels PCIe gen 4.
brcmthrowaway 7 hours ago||
That's poor. It's just copper; why can't it be as fast as a PCIe slot?
vegabook 9 hours ago||
now can they please approve the linux kernel
yjftsjthsd-h 8 hours ago|
They... do? Or rather, they built a system where they don't need to; Macs happily run Linux on bare metal or in VMs. (Whether Linux supports Apple hardware well is another matter.)
userbinator 9 hours ago||
[flagged]
amelius 9 hours ago||
You only own the hardware if you can use it as advertised even after breaking all ties with the vendor. Otherwise you bought a service not a product.
mrits 9 hours ago|||
You aren't restricted at a hardware level.
ddtaylor 9 hours ago||
Apple has hardware level DRM in some of their products.
llm_nerd 9 hours ago||
So you're just replying to the headline, not the actual article. Useful.

Apple, just like Microsoft, has a driver signing process because drivers have basically system-wide access to a system. There is no evidence that nvidia has tried to get eGPU drivers signed for years, but now someone did and Apple signed it. So?

And you could always, precisely as the article states in the very first paragraph, disable System Integrity Protection if you want to run drivers that aren't signed.

u_fucking_dork 9 hours ago||
[flagged]
bigyabai 13 hours ago|
The opportunity cost of Apple refusing to sign Nvidia's OEM AArch64 drivers is probably reaching the trillion-dollar mark, now that Nvidia and ARM have their own server hardware.
chuckadams 12 hours ago|
Apple got out of the server game long before they adopted aarch64, so that's a trillion worth of server hardware they never would have sold anyway. And probably not actually a trillion.
bigyabai 12 hours ago|||
Apple was the only one stopping themselves from getting back in. It's not like the Mac is a trillion-dollar market segment to begin with.
QuantumNomad_ 11 hours ago||
Almost everyone including myself had MacBook Pros at my last place of work.

If Apple was in the high-end server market, I see no reason why the company I was working for would not be running macOS on Apple hardware as servers, instead of the fleet of Linux based servers they had.

bigyabai 8 hours ago|||
Why wait? You can go run macOS as a server right now. It will take you a few hours to get Docker working, and disable mdworker_shared() and turn off SIP, and then install a package manager/XCode utilities, and finally configure macOS to run as a headless UNIX box, but it's attainable.

Despite how easy Apple makes it, nobody is really using Macs as a server in production. Apple[0] is not using them as a server in production. They would need a radically different strategy to replace Linux, because their efforts on macOS still haven't replaced Windows.

[0] https://9to5mac.com/2026/03/02/some-apple-ai-servers-are-rep...

varispeed 12 hours ago|||
USD starts sounding more and more like meaningless tokens. Billion here, trillion there. I still have 100 trillion Zimbabwean dollars somewhere.
altairprime 12 hours ago||
Feels like that here in the U.S., too.