Posted by xaskasdf 8 hours ago
This is the result of that question and some weekend vibecoding (the linked library repository is in the readme as well). It seems to work, even on consumer GPUs, though it should work better on professional ones.
I know you said you're involved in some retrogaming and were experimenting, but as someone who works in a world where hardware is pretty heavily abstracted away, even if I got into retrogaming I don't know that it would occur to me that there might be a systems improvement lying around. Beyond the creative aspect, it feels like some systems and hardware background helped put the idea together (and I'd be interested in going and learning some of that systems/hardware knowledge myself).
The 0.3 tok/s for 70B Q4_K_M on a single 3090 is slow for interactive use, but the architecture itself is what matters here. PCIe Gen3 x8 at ~6.5 GB/s is the clear bottleneck - I'd be very curious to see numbers on a Gen5 NVMe setup where sequential reads can hit 12+ GB/s. That alone could potentially double throughput.
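Back of the envelope (my assumptions, not numbers from the repo): if most of the quantized weights have to cross the link every decode step, the token rate is roughly bandwidth divided by bytes streamed per token.

    # Streaming roofline: tok/s ~= link_bandwidth / bytes_streamed_per_token.
    # Q4_K_M averages ~4.8 bits/weight; fraction_streamed < 1.0 would model
    # layers kept resident in VRAM or skipped. All values are assumptions.
    def tokens_per_sec(params=70e9, bits_per_weight=4.8,
                       bandwidth_gbs=6.5, fraction_streamed=1.0):
        bytes_per_token = params * bits_per_weight / 8 * fraction_streamed
        return bandwidth_gbs * 1e9 / bytes_per_token

    print(tokens_per_sec(bandwidth_gbs=6.5))   # PCIe Gen3 x8 -> ~0.15 tok/s
    print(tokens_per_sec(bandwidth_gbs=12.0))  # Gen5 NVMe    -> ~0.29 tok/s

The measured 0.3 tok/s sits above the naive Gen3 figure, presumably because some layers stay resident and 20 of 80 are skipped.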
The layer skip via cosine similarity calibration (20/80 layers skipped) is a clever trick. Reminds me of early work on adaptive computation in transformers. The quality tradeoff at threshold 0.98 would be interesting to benchmark more rigorously - for many inference tasks like summarization or classification, you could probably push that much further.
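For anyone curious what that calibration might look like, here's a minimal sketch of the idea (mine, not the repo's code): run calibration inputs through the stack and mark any layer whose output is nearly parallel to its input as skippable. It assumes a plain list of modules mapping a (batch, seq, dim) tensor to a tensor of the same shape, which is an illustration-only interface.

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def find_skippable_layers(layers, calib_hidden_states, threshold=0.98):
        # A layer whose output has mean cosine similarity >= threshold with
        # its input contributes little and gets marked skippable.
        sims = torch.zeros(len(layers))
        for h in calib_hidden_states:          # pre-embedded calibration inputs
            for i, layer in enumerate(layers):
                out = layer(h)
                sims[i] += F.cosine_similarity(h.flatten(0, 1),
                                               out.flatten(0, 1), dim=-1).mean()
                h = out
        sims /= len(calib_hidden_states)
        return [i for i, s in enumerate(sims.tolist()) if s >= threshold]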
Also worth noting: zero external dependencies beyond CUDA Toolkit is a bold design choice. No cuBLAS means they wrote their own GEMM kernels, which is a massive undertaking but gives full control over the memory access patterns needed for this streaming architecture.
A great achievement for private inference nonetheless.
It probably won't matter much here though.
I wonder... what if the M.2 storage were actually DRAM? You probably don't need persistence for spilling a model off the GPU. How would it fare vs. just adding more host memory? The M.2 RAM would be less flexible, but it would keep system RAM free for the CPU.
You can get lots of tokens per second on the CPU if the entire network fits in L1 cache. Unfortunately the sub 64 kiB model segment isn't looking so hot.
But actually ... 3000? Did GP misplace one or two zeros there?
But 5 seconds/token is quite slow, yeah. I guess this is for low-RAM machines? I'm pretty sure my 5950X with 128 GB of RAM could run this faster on the CPU, with some layers / prefill on the 3060 GPU I have.
I also see they claim the process is compute-bound at 2 seconds/token, but that doesn't seem right for a 3090?
DDR4 tops out at about 27 GB/s
DDR5 can do around 40 GB/s
So for a 70B model at 8-bit quant (~70 GB of weights read per token), you'll get around 0.3-0.5 tokens per second using RAM alone.
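The same roofline in code, with my assumed numbers:

    # Memory-bandwidth-bound decode reads every weight once per token,
    # so tok/s ~= RAM bandwidth / model size. 70B at 8-bit ~ 70 GB.
    model_bytes = 70e9
    for name, bw in [("DDR4 ~27 GB/s", 27e9), ("DDR5 ~40 GB/s", 40e9)]:
        print(name, round(bw / model_bytes, 2), "tok/s")  # ~0.39 and ~0.57

Real-world bandwidth lands below those peaks, hence the 0.3-0.5 range.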
One analysis suggested it should be theoretically possible to modify part of SGLang's routing layer to support JIT, predict-ahead expert swaps from Gen5 NVMe storage straight into GPU memory.
I'm hoping that proves true. The setup relies on NVIDIA Dynamo, so NIXL primitives are available to support that.
Curious if anyone's tried this already.
And that's a good thing, because it pushes AI toward democratization and away from the silos that are being built.
I've also wondered why the routers aren't trained to be serially consistent, so you could predict which layers to swap into VRAM a few layers ahead and maximize the available bandwidth.
Unless you're handling that in some kind of fancy way, you'll be holding up the batch while waiting on host memory, which will kill your throughput.
It makes much more sense for non-batched local inference, especially if you can keep the MoE routing stable like you say, but most folks aren't optimising for that.
But for experts that light up at, say, 1% frequency per batch, you're doing an awful lot of transfers from DRAM which you amortize over a single token, instead of reads from HBM which you amortize over 32 tokens.
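To put rough numbers on that amortization point (expert size, batch size, and bandwidths are all just assumptions):

    # Cost of touching one expert's weights, per token that actually used it.
    expert_gb = 2.0
    hot_ms  = expert_gb / 900 * 1e3 / 32  # resident in GPU memory (~900 GB/s), serves ~32 tokens/batch
    cold_ms = expert_gb / 25  * 1e3 / 1   # pulled over PCIe Gen4 x16 (~25 GB/s), ~1 token hit it
    print(round(hot_ms, 3), "ms vs", round(cold_ms, 1), "ms")  # ~0.07 ms vs ~80 ms

Roughly three orders of magnitude per useful token, which is where the throughput goes.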
1) This is basically the intent of several recent MoE models: keep a particular set of generally useful experts hot in VRAM.
2) Unless you can swap layers in faster than you consume them, there's no point in predicting layers (what does that even mean here? Did you mean predicting experts?).
It seems like the best you can do at the moment is keep the experts and layers most likely to be used for a given query in VRAM and offload the rest, but that's workload-dependent.
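A toy sketch of what that could look like (assumed interfaces, PyTorch-style, not any particular framework's API): keep the most frequently routed experts resident, park the rest in pinned host memory, and kick off non-blocking copies on a side stream for whatever experts the router predicts for an upcoming layer.

    import torch

    class ExpertCache:
        # Toy sketch: hot experts stay resident in VRAM, cold experts live in
        # pinned host memory and are prefetched on a side CUDA stream so the
        # host-to-device copy overlaps with compute for the current layer.
        # Eviction is omitted to keep the sketch short.
        def __init__(self, experts_cpu, hot_ids, device="cuda"):
            self.device = device
            self.copy_stream = torch.cuda.Stream()
            self.cpu = {i: w.pin_memory() for i, w in experts_cpu.items()}
            self.gpu = {i: self.cpu[i].to(device) for i in hot_ids}  # resident set

        def prefetch(self, expert_ids):
            # Called as soon as the router has predicted experts for a later layer.
            with torch.cuda.stream(self.copy_stream):
                for i in expert_ids:
                    if i not in self.gpu:
                        self.gpu[i] = self.cpu[i].to(self.device, non_blocking=True)

        def get(self, expert_id):
            # Make sure any in-flight prefetch for this expert has landed.
            torch.cuda.current_stream().wait_stream(self.copy_stream)
            if expert_id not in self.gpu:          # miss: pay for a blocking copy
                self.gpu[expert_id] = self.cpu[expert_id].to(self.device)
            return self.gpu[expert_id]

Whether the overlap actually hides the copy depends on how far ahead the router's predictions hold up, which goes back to the serial-consistency point above.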