
Posted by guiand 3 days ago

macOS 26.2 enables fast AI clusters with RDMA over Thunderbolt (developer.apple.com)
535 points | 290 comments
0manrho 3 days ago|
Just for reference:

Thunderbolt 5's stated "80Gbps" bandwidth comes with some caveats. That figure is either DisplayPort bandwidth on its own or, as more often realized in practice, the data channel (PCIe 4.0 x4, ~64Gbps) combined with the display channels (≤80Gbps when used alongside the data channel), and it can potentially also do unidirectional 120Gbps for some display output scenarios.
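
For a rough sense of where that ~64Gbps data figure comes from, here's a back-of-the-envelope sketch; the PCIe 4.0 rate, 128b/130b encoding, and 80Gbps per-direction budget are generic spec numbers, not Apple-specific measurements:

    # Back-of-the-envelope only; generic spec numbers, not measurements.
    PCIE4_GTS_PER_LANE = 16.0        # PCIe 4.0 raw rate per lane, GT/s
    ENCODING = 128 / 130             # 128b/130b line encoding overhead
    LANES = 4                        # TB5 tunnels a PCIe 4.0 x4 link

    pcie_payload_gbps = PCIE4_GTS_PER_LANE * ENCODING * LANES
    print(f"PCIe 4.0 x4 payload bandwidth: ~{pcie_payload_gbps:.0f} Gbps")  # ~63 Gbps

    TB5_PER_DIRECTION_GBPS = 80.0    # shared between data and display tunnels
    print(f"Left over for display tunnels: ~{TB5_PER_DIRECTION_GBPS - pcie_payload_gbps:.0f} Gbps")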

If Apple's silicon follows spec, you're most likely limited to PCIe 4.0 x4 (~64Gbps) of data bandwidth per TB port, with a slight latency hit from the controller. How big that hit is depends: if no other I/O shares the controller/cable (such as DisplayPort), it's likely under 15% overhead versus native PCIe on average, but exact figures vary with drivers, firmware, configuration, use case, cable length, how Apple implemented TB5, and so on. And just as a 60FPS average doesn't mean every frame lasts exactly 1/60th of a second, individual packets or niche scenarios could see significantly more latency/overhead.

As a point of reference, Nvidia RTX Pro (formerly Quadro) workstation cards of the Ada generation and older, along with most modern consumer graphics cards, are PCIe 4.0 (or less, depending on how old we're talking), while the new RTX Pro Blackwell cards are PCIe 5.0. Though comparing, say, a Mac Studio M4 Max to an Nvidia GPU is akin to comparing apples to green oranges.

However, I mention GPUs not just to acknowledge the 800lb AI-compute gorilla in the room, but because while it's possible to pool a pair of 24GB GPUs into a shared 48GB VRAM pool (whether over a shared PCIe bus or NVLink), performance does not scale linearly due to PCIe/NVLink limitations, to say nothing of the software, configuration, and optimization work needed to realize maximum throughput in practice.

The same holds here: a pair of TB5-equipped Macs with 128GB of memory each, pooled over TB5 into 256GB, will take a substantial performance hit compared to an otherwise equivalent single Mac with 256GB (the capacities are arbitrary, just to illustrate the point). The exact penalty depends on the use case and how sensitive it is to TB5's latency overhead as well as its bandwidth limitation.
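
To put rough numbers on that penalty, here's an illustrative comparison; the link rate, local memory bandwidth, and the 2GB of cross-device traffic per step are all invented for the sketch, not benchmarks:

    # Illustrative only: time to move data between two pooled Macs over TB5
    # versus touching it in local unified memory. All figures are assumptions.
    TB5_GBYTES_PER_S = 64 / 8        # ~PCIe 4.0 x4 over TB5, optimistic
    LOCAL_MEM_GBYTES_PER_S = 800     # ballpark unified-memory bandwidth on a big Mac SoC
    TRAFFIC_GB = 2.0                 # hypothetical cross-device traffic per step

    print(f"Over TB5:        {TRAFFIC_GB / TB5_GBYTES_PER_S * 1000:.1f} ms")        # 250.0 ms
    print(f"In local memory: {TRAFFIC_GB / LOCAL_MEM_GBYTES_PER_S * 1000:.1f} ms")  # 2.5 ms

In practice a pipeline-parallel split only ships a small activation tensor across the link per token, which is why a two-Mac TB5 setup can still be usable; bandwidth-hungry schemes run straight into that gap, which is part of why scaling is nowhere near linear.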

It's also worth noting that with RDMA solutions (no matter the specifics) it's entirely possible to see worse performance than a single machine if you haven't properly optimized and configured things. This isn't hating on the technology, but a warning from experience for people who have never dabbled: don't expect things to just "2x", or even beat 1x performance, simply by stringing a cable between two devices.

All that said, glad to see this from Apple. Long overdue in my opinion, as I doubt we'll see them implement an optical network port with anywhere near that bandwidth or RoCEv2 support, much less expose a native (not via TB) PCIe port on anything that's a non-Pro model.

EDIT: Note that many Mac SKUs have multiple TB5 ports, but it's unclear to me what the underlying architecture/topology is, so I can't speculate on the overhead or total capacity any given device supports when using multiple TB links for more bandwidth/parallelism. If anyone's got an SoC diagram or similar reference data that shows how the TB controller(s) are uplinked to the rest of the SoC, I could go into more depth there. I'm not an Apple silicon/macOS expert. I do, however, have lots of experience with RDMA/RoCE/IB clusters, NVMe-oF deployments, SXM/NVLink'd devices, and generally engineering low-latency/high-performance network fabrics for distributed compute and storage (more on the infrastructure/hardware/ops side than the software side), so this is my general wheelhouse, but Apple has been a relative blind spot for me because their ecosystem has generally lacked features/support for things like this.

givemeethekeys 3 days ago||
Would this also work for gaming?
AndroTux 3 days ago|
No
londons_explore 3 days ago||
Nobody's gonna take them seriously till they make something rack-mounted that isn't made of titanium with pentalobe screws...
moralestapia 3 days ago|
You might dismiss this, but for a while Mac Mini clusters were a thing, and they were capex- and opex-effective. That same setup is kind of making a comeback.
fennecbutt 3 days ago|||
They were only a thing for CI/compilation targeting Apple's OSes, because their walled garden locked other platforms out. Building an iPhone or Mac app? Well, your CI needs to run on a cluster of Apple machines.
londons_explore 3 days ago|||
It's in a similar vein to the PS2 Linux cluster, or someone trying to use vape CPUs as web servers...

It might be cost effective, but the supplier is still saying "you get no support, and in fact we might even put roadblocks in your way because you aren't the target customer".

moralestapia 3 days ago||
True.

I'm sure Apple could make a killing on the server side; unfortunately their income from their other products is so big that even if that's a $10B/year opportunity they'll be like "yawn, yeah, whatever".

fennecbutt 3 days ago||
Doubt. A $10B idea is still a promotion. And if capitalism is shrinkflationing hard, which it is atm, then capitalists would not leave something like that on the table.
unit149 3 days ago||
GarageBand DAW + macOS 14.4 Roland Juno-D7 synthesizer, for 8-bit audio complementary compact disc format as AIFF, WAV, or MIDI appliance, in which, under SLA-royalties licenses, a binary 44.1 kHz sample rate sets the reproducer for reference level.

[1]: https://www.apple.com/legal/sla/docs/GarageBand.pdf

sora2video 2 days ago||
[dead]
schmuckonwheels 3 days ago||
That's nice but

Liquid (gl)ass still sucks.

nodesocket 3 days ago||
Can we get proper HDR support first in macOS? If I enable HDR on my LG OLED monitor it looks completely washed out and blacks are grey. Windows 11 HDR works fine.
Razengan 3 days ago||
Really? I thought it had always been that HDR was notorious on Windows, hopeless on Linux, and only really worked in a plug-and-play manner on Mac, unless your display has an incorrect profile or something?

https://www.youtube.com/shorts/sx9TUNv80RE

masspro 3 days ago|||
macOS does wash out SDR content in HDR mode, specifically on non-Apple monitors. An HDR video playing in windowed mode will look fine, but all the UI around it has black and white levels very close to grey.

Edit: to be clear, macOS itself (Cocoa elements) is all SDR content and thus washed out.

crazygringo 3 days ago|||
Define "washed out"?

The white and black levels of the UX are supposed to stay in SDR. That's a feature, not a bug.

If you mean the interface isn't bright enough, that's intended behavior.

If the black point is somehow raised, then that's bizarre and definitely unintended behavior. And I honestly can't even imagine what could be causing that to happen. It seems like it would have to be a serious macOS bug.

You should post a photo of your monitor, comparing a black #000 image in Preview with a pitch-black frame from a video. People edit HDR video on Macs, and I've never heard of this happening before.

Starmina 3 days ago||||
That's intended behavior for monitors limited in peak brightness.
nodesocket 3 days ago|||
I don't think so. Windows 11 has an HDR calibration utility that lets you adjust brightness and HDR while keeping blacks perfectly black (especially with my OLED). When I enable HDR on macOS, whatever settings I try, including adjusting brightness and contrast on the monitor, the blacks look completely washed out and grey. HDR DOES seem to work correctly on macOS, but only if you use Mac displays.
masspro 3 days ago||||
That’s the statement I found last time I went down this rabbit hole: that they don’t have physical brightness info for third-party displays, so it just can’t be done any better. But I don’t understand how this can lead to making the black point terrible. Black should be the one color every emissive colorspace agrees on.
kmeisthax 3 days ago|||
Actually, intended behavior in general. Even on their own displays the UI looks grey when HDR is playing.

Which, personally, I find to be extremely ugly and gross and I do not understand why they thought this was a good idea.

robflynn 3 days ago||||
Oh, that explains why it looked so odd when I enabled HDR on my Studio.
adastra22 3 days ago|||
Huh, so that’s why HDR looks like shit on my Mac Studio.
heavyset_go 3 days ago|||
Works well on Linux, just toggle a checkmark in the settings.
m-ack-toddler 3 days ago||
AI is arguably more important than whatever gaming gimmick you're talking about.
djdkdldl 3 days ago||
[flagged]
stego-tech 3 days ago|
This doesn’t remotely surprise me, and I can guess Apple’s AI endgame:

* They already cleared the first hurdle to adoption by shoving inference accelerators into their chip designs by default. It’s why Apple is so far ahead of their peers in local device AI compute, and will be for some time.

* I suspect this introduction isn’t just for large clusters, but also a testing ground of sorts to see where the bottlenecks lie for distributed inference in practice.

* Depending on the telemetry they get back from OSes using this feature, my suspicion is they’ll deploy some form of distributed local AI inference system that leverages the devices tied to a given iCloud account or on the LAN to perform inference against larger models, but without bogging down any individual device (or at least the primary device in use).

For the endgame, I’m picturing a dynamically sharded model across local devices that shifts how much of the model is loaded on any given device depending on utilization, essentially creating local-only inferencing for privacy and security of their end users. Throw the same engines into, say, HomePods or AppleTVs, or even a local AI box, and voila, you’re golden.
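
As a toy illustration of what "dynamically sharded ... depending on utilization" could mean, here's a made-up proportional-split policy; the device names, memory figures, and the policy itself are invented for the sketch and don't reflect anything Apple has described:

    # Toy policy: split a model's layers across household devices in proportion
    # to how much memory each one currently has free. Purely illustrative.
    def assign_shards(total_layers: int, free_mem_gb: dict[str, float]) -> dict[str, int]:
        total_free = sum(free_mem_gb.values())
        assignment = {dev: int(total_layers * free / total_free)
                      for dev, free in free_mem_gb.items()}
        # Hand rounding leftovers to the device with the most free memory.
        leftover = total_layers - sum(assignment.values())
        assignment[max(free_mem_gb, key=free_mem_gb.get)] += leftover
        return assignment

    # Hypothetical household; the figures are invented.
    print(assign_shards(32, {"mac-studio": 96.0, "macbook": 24.0, "apple-tv": 4.0}))
    # -> {'mac-studio': 25, 'macbook': 6, 'apple-tv': 1}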

EDIT: If you're thinking, "but big models need the bandwidth and low latency of Thunderbolt" or "you can't do that over Wi-Fi for such huge models", you're thinking too narrowly. Think about the devices Apple consumers own, their interconnectedness, and the underutilized but standardized hardware within them running predictable OSes. Suddenly you're not jamming existing models onto substandard hardware or networks, but rethinking how to run models effectively over consumer distributed compute. Different set of problems.

wmf 3 days ago||
inference accelerators ... It’s why Apple is so far ahead of their peers in local device AI compute, and will be for some time.

Not really. llama.cpp was just using the GPU when it took off. Apple's advantage is more VRAM capacity.

this introduction isn’t just for large clusters

It doesn't work for large clusters at all; it's limited to 6-7 Macs and most people will probably use just 2 Macs.

fwip 3 days ago|||
RDMA over Thunderbolt has so much more bandwidth (and lower latency) than Apple's ecosystem of mostly-wireless devices that I can't see how any learnings here would transfer.
stego-tech 3 days ago||
You're thinking, "You can't put modern models on that sort of distributed compute network", which is technically correct.

I was thinking, "How could we package or run these kinds of large models or workloads across a consumer's distributed compute?" The engineer in me got as far as "enumerate devices on the network via mDNS/Bonjour, check keys against iCloud device keys or otherwise authenticate, share utilization telemetry, and permit workload scheduling/balancing" before I realized that's probably what they're testing here to a degree, even if they're using RDMA.
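
For flavor, that enumerate-and-authenticate step could look something like this with the third-party python-zeroconf package; the "_aicluster._tcp.local." service type and the device_key TXT record are invented placeholders, not anything Apple ships:

    # Minimal Bonjour/mDNS discovery sketch using the third-party `zeroconf`
    # package (pip install zeroconf). Service type and key check are placeholders.
    from zeroconf import ServiceBrowser, ServiceListener, Zeroconf

    SERVICE_TYPE = "_aicluster._tcp.local."  # hypothetical service advertised by peers

    class ClusterListener(ServiceListener):
        def add_service(self, zc: Zeroconf, type_: str, name: str) -> None:
            info = zc.get_service_info(type_, name)
            if info is None:
                return
            # A real implementation would verify this key against trusted
            # (e.g. account-linked) device keys before scheduling any work.
            device_key = (info.properties or {}).get(b"device_key", b"") or b""
            print(f"found {name} at {info.parsed_addresses()}:{info.port}, "
                  f"key={device_key.decode()!r}")

        def update_service(self, zc: Zeroconf, type_: str, name: str) -> None: ...
        def remove_service(self, zc: Zeroconf, type_: str, name: str) -> None: ...

    zc = Zeroconf()
    browser = ServiceBrowser(zc, SERVICE_TYPE, ClusterListener())
    input("Browsing for cluster peers; press Enter to stop.\n")
    zc.close()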

threecheese 3 days ago||
I think you are spot on, and this fits perfectly within my mental model of HomeKit; tasks are distributed to various devices within the network based on capabilities and authentication, and given a very fast bus Apple can scale the heck out of this.
stego-tech 3 days ago||
Consumers generally have far more compute than they think; it's just all distributed across devices and hard to utilize effectively over unreliable interfaces (e.g. Wi-Fi). If Apple (or anyone, really) could figure out a way to utilize that at modern scales, I wager privacy-conscious consumers would gladly trade some latency in responses in favor of superior overall model performance - heck, branding it as "deep thinking" might even pull more customers in via marketing alone ("thinks longer, for better results" or some vaguely-not-suable marketing slogan). It could even be made into an API for things like batch image or video rendering, but without the hassle of setting up an app-specific render farm.

There's definitely something there, but Apple's really the only player set up to capitalize on it via their halo effect with devices and operating systems. Everyone else is too fragmented to make it happen.