Posted by guiand 3 days ago
I'd have some other uses for RDMA between Macs.
https://github.com/Anemll/mlx-rdma/commit/a901dbd3f9eeefc628...
Don’t get me wrong... It’s super cool, but I fail to understand why money is being spent on this.
The way this capability is exposed in the OS is that the computers negotiate an Ethernet bridge on top of the TB link. I suspect they're actually exposing PCIe Ethernet NICs to each other, but I'm not sure. Either way, a "Thunderbolt router" would just be a computer with a shitton of USB-C ports (in the same way that an "Ethernet router" is just a computer with a shitton of Ethernet ports). I suspect the biggest hurdle would actually be sourcing an SoC with a lot of switching fabric but not a lot of compute. Like, you'd need Threadripper levels of connectivity but only one or two actual CPU cores.
[0] Like, last time I had to swap work laptops, I just plugged a TB cable between them and did an `rsync`.
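Out of curiosity about the "it's just an Ethernet bridge" point above: here's a minimal C sketch (plain `getifaddrs`, nothing Apple-specific) that lists the machine's IPv4 interfaces so you can spot the Thunderbolt bridge. The `bridge0` name is an assumption based on the default "Thunderbolt Bridge" network service; check your Network settings if yours differs.

```c
/* Minimal sketch: enumerate IPv4 interfaces and flag the one that looks
 * like the Thunderbolt bridge. The "bridge0" name is an assumption about
 * the default macOS config, not something guaranteed on every machine.
 * Build: cc ifdump.c -o ifdump */
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <ifaddrs.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct ifaddrs *ifs, *ifa;
    if (getifaddrs(&ifs) != 0) {
        perror("getifaddrs");
        return 1;
    }

    /* Walk every interface; at this layer the TB "bridge" is
     * indistinguishable from an ordinary Ethernet NIC, which is the point. */
    for (ifa = ifs; ifa; ifa = ifa->ifa_next) {
        if (!ifa->ifa_addr || ifa->ifa_addr->sa_family != AF_INET)
            continue;
        char ip[INET_ADDRSTRLEN];
        struct sockaddr_in *sin = (struct sockaddr_in *)ifa->ifa_addr;
        inet_ntop(AF_INET, &sin->sin_addr, ip, sizeof ip);
        printf("%-10s %s%s\n", ifa->ifa_name, ip,
               strncmp(ifa->ifa_name, "bridge", 6) == 0
                   ? "   <- likely the Thunderbolt bridge" : "");
    }

    freeifaddrs(ifs);
    return 0;
}
```

Because the link shows up as just another Ethernet interface, plain `rsync`/`scp` over it works with zero extra setup.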
https://docs.nvidia.com/cuda/gpudirect-rdma/index.html
The "R" in RDMA means there are multiple DMA controllers who can "transparently" share address spaces. You can certainly share address spaces across nodes with RoCE or Infiniband, but thats a layer on top