Posted by xaskasdf 8 hours ago

Show HN: Llama 3.1 70B on a single RTX 3090 via NVMe-to-GPU bypassing the CPU (github.com)
Hi everyone, I'm kinda involved in some retrogaming, and during some experiments I ran into the following question: "Would it be possible to run transformer models bypassing the CPU/RAM, connecting the GPU directly to the NVMe?"

This is the result of that question plus some weekend vibecoding (the linked library repository is in the README as well). It seems to work, even on consumer GPUs; it should work better on professional ones tho

170 points | 39 comments
civicsquid 4 minutes ago|
Really cool. I'm wondering: what background did you need to be able to think of the question that resulted in this project?

I know you said you're involved in some retrogaming and were experimenting, but as someone who works in a world where hardware is pretty heavily abstracted away, even if I got into retrogaming I don't know that I'd consider that there may be a systems improvement lying around. Beyond the creative aspect, it feels like there is some systems and hardware background that helped put the idea together (and I'd be interested to go learn some of that systems/hardware knowledge myself).

turingsroot 34 minutes ago||
This is a really impressive piece of systems engineering. The 3-tier adaptive caching (VRAM resident > pinned RAM > NVMe/mmap) is essentially reimplementing what the Linux kernel's page cache does, but with GPU-awareness baked in.
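Conceptually it's a lookup chain with promotion, something like this toy Python (my own reconstruction, not the repo's code; the real thing presumably uses pinned buffers and async CUDA copies):

    import mmap
    import os

    class TieredWeightCache:
        """Toy 3-tier lookup: VRAM dict -> pinned host RAM dict -> mmap'd NVMe file."""

        def __init__(self, weight_file, vram_budget, ram_budget):
            self.vram = {}         # layer_id -> bytes "resident on the GPU"
            self.pinned = {}       # layer_id -> bytes staged in host RAM
            self.vram_budget = vram_budget
            self.ram_budget = ram_budget
            fd = os.open(weight_file, os.O_RDONLY)
            self.nvme = mmap.mmap(fd, 0, prot=mmap.PROT_READ)

        def get_layer(self, layer_id, offset, size):
            if layer_id in self.vram:            # tier 1: already on the GPU
                return self.vram[layer_id]
            data = self.pinned.get(layer_id)     # tier 2: host RAM hit
            if data is None:                     # tier 3: stream from NVMe
                data = self.nvme[offset:offset + size]
                if sum(map(len, self.pinned.values())) + size <= self.ram_budget:
                    self.pinned[layer_id] = data         # promote to pinned RAM
            if sum(map(len, self.vram.values())) + size <= self.vram_budget:
                self.vram[layer_id] = data               # promote to VRAM if it fits
            return data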

The 0.3 tok/s for 70B Q4_K_M on a single 3090 is slow for interactive use, but the architecture itself is what matters here. PCIe Gen3 x8 at ~6.5 GB/s is the clear bottleneck - I'd be very curious to see numbers on a Gen5 NVMe setup where sequential reads can hit 12+ GB/s. That alone could potentially double throughput.

The layer skip via cosine similarity calibration (20/80 layers skipped) is a clever trick. Reminds me of early work on adaptive computation in transformers. The quality tradeoff at threshold 0.98 would be interesting to benchmark more rigorously - for many inference tasks like summarization or classification, you could probably push that much further.
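For anyone curious how that calibration works mechanically: run a few prompts, compare each layer's input and output hidden states, and mark layers whose output is nearly parallel to their input as skippable. A rough numpy sketch (my reconstruction; only the 0.98 threshold and 80-layer count come from the post):

    import numpy as np

    def skippable_layers(hidden_states, threshold=0.98):
        # hidden_states[i] is the input to layer i, hidden_states[i+1] its output,
        # each of shape (tokens, hidden_dim). A layer that barely rotates its
        # input did little work, so it is a candidate for skipping.
        skip = []
        for i in range(len(hidden_states) - 1):
            x, y = hidden_states[i], hidden_states[i + 1]
            cos = np.sum(x * y, axis=-1) / (
                np.linalg.norm(x, axis=-1) * np.linalg.norm(y, axis=-1) + 1e-8)
            if cos.mean() >= threshold:
                skip.append(i)
        return skip

    # Random activations just to show the shapes; a real run uses calibration prompts.
    acts = [np.random.randn(16, 8192) for _ in range(81)]  # 80 layers -> 81 states
    print(skippable_layers(acts))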

Also worth noting: zero external dependencies beyond CUDA Toolkit is a bold design choice. No cuBLAS means they wrote their own GEMM kernels, which is a massive undertaking but gives full control over the memory access patterns needed for this streaming architecture.

umairnadeem123 1 hour ago||
0.2 tok/s is slow for chat but perfectly fine for batch/async workloads. I run automated content generation pipelines where a single job kicks off dozens of LLM calls (script generation, metadata, descriptions) and none of them need to be interactive. The whole job takes 20 minutes anyway because of image generation bottlenecks. Being able to run a 70B model locally for those batch calls instead of paying per-token API costs would be a significant cost reduction, even at this speed.
esquire_900 28 minutes ago|
Cost-wise it does not seem very effective. 0.5 tokens/sec (the optimized figure) is 1800 tokens an hour, which takes roughly 200-300 watt-hours for an active 3090 + system. Running 1800 tokens on OpenRouter at $0.40 per million tokens for Llama 3.1 (3.3 costs less) is about $0.0007. That money buys you about 2 watt-hours of electricity (in the Netherlands).
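Back-of-envelope in Python, with my own assumptions for power draw, API price, and electricity price:

    tok_per_s = 0.5                            # optimized local throughput
    tokens_per_hour = tok_per_s * 3600         # 1800 tokens
    system_power_w = 250                       # active 3090 + rest of the system
    local_wh = system_power_w * 1.0            # ~250 Wh for that hour of generation

    api_price_per_mtok = 0.40                  # rough OpenRouter price for Llama 3.1 70B
    api_cost = tokens_per_hour / 1e6 * api_price_per_mtok   # ~$0.0007

    price_per_kwh = 0.35                       # roughly Dutch household electricity,
    wh_bought = api_cost / price_per_kwh * 1000   # treating USD and EUR as ~equal -> ~2 Wh

    print(tokens_per_hour, round(api_cost, 5), round(wh_bought, 1), local_wh)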

Great achievement for privacy inference nonetheless.

Aerroon 6 minutes ago||
Something to consider is that input tokens have a cost too. They are typically processed much faster than output tokens. If you have long conversations then input tokens will end up being a significant part of the cost.

It probably won't matter much here though.

01100011 3 hours ago||
Yeah, GPUDirect should allow you to DMA straight to a storage device.

I wonder... what if the m.2 storage was actually DRAM? You probably don't need persistence for spilling a model off the GPU. How would it fare vs just adding more host memory? The m.2 ram would be less flexible, but would keep the system ram free for the CPU.

javchz 2 hours ago||
Yeah, a ramdisk would probably work wonders. It's a shame Intel Optane didn't become a standard; it would be amazing for these types of workflows.
TechSquidTV 1 hour ago|||
Ahhh damn it. Intel! Come back!
ElectricalUnion 23 minutes ago||
Doesn't "m.2 storage but DRAM" - hopefully meaning NVMe/PCIe rather than SATA speeds - already exist as Compute Express Link (CXL), just not in this specific m.2 form factor? If only RAM wasn't silly expensive right now, one could use 31 GB/s of additional bandwidth per NVMe connector.
randomtoast 6 hours ago||
0.2 tok/s is fine for experimentation, but it is not interactive in any meaningful sense. For many use cases, a well-quantized 8B or 13B that stays resident will simply deliver a better latency-quality tradeoff
Wuzado 5 hours ago||
I can imagine a couple scenarios in which a high-quality, large model would be much preferred over lower latency models, primarily when you need the quality.
xaskasdf 4 hours ago|||
yeah, actually I wanted to see if this was possible at all. I managed to get around 3000 tokens/s on a PS2 with classic transformers, since the Emotion Engine is capable of 32-bit addresses, but it has like 32gb of ram. So I ran into the question of why that was fast when I couldn't get that speed even with small models, and the deal is that the instructions went straight from memory to the GPU, which is the main difference from what a regular computer does during inference: it has to go through the CPU every time. As I mentioned too, on professional cards you can avoid these problems naturally, since they have instructions precisely for this, but sadly I don't have 30k bucks to spare on a GPU :(
derstander 4 hours ago|||
*32MB of RAM (plus 4MB of video RAM and a little sound and IOP memory).
anoncow 1 hour ago|||
3000 tokens per sec on 32 mb Ram?
fc417fc802 39 minutes ago||
fast != practical

You can get lots of tokens per second on the CPU if the entire network fits in L1 cache. Unfortunately the sub 64 kiB model segment isn't looking so hot.

But actually ... 3000? Did GP misplace one or two zeros there?

tyfon 6 hours ago|||
I didn't really understand the performance table until I saw the top ones were 8B models.

But 5 seconds / token is quite slow yeah. I guess this is for low ram machines? I'm pretty sure my 5950x with 128 gb ram can run this faster on the CPU with some layers / prefill on the 3060 gpu I have.

I also see that they claim the process is compute bound at 2 seconds/token, but that doesn't seem correct with a 3090?

tgrowazay 5 hours ago||
LLM speed is roughly <memory_bandwidth> / <model_size> tok/s.

DDR4 tops out at about 27 GB/s

DDR5 can do around 40 GB/s

So for a 70B model at 8-bit quant, you will get around 0.3-0.5 tokens per second using RAM alone.
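Spelled out (these are peak bandwidth figures, so an upper bound; sustained bandwidth is lower, hence the conservative 0.3-0.5 range):

    def tok_per_s(mem_bandwidth_gb_s, model_size_gb):
        # Decode is memory-bound: each generated token reads every weight once.
        return mem_bandwidth_gb_s / model_size_gb

    model_gb = 70                     # 70B params at 8-bit quant ~ 70 GB of weights
    print(tok_per_s(27, model_gb))    # DDR4-ish peak -> ~0.39 tok/s
    print(tok_per_s(40, model_gb))    # DDR5-ish peak -> ~0.57 tok/s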

uf00lme 4 hours ago|||
Channels matter a lot; quad-channel DDR4 is going to beat dual-channel DDR5 most of the time.
wtallis 3 hours ago||
Four channels of DDR4-3200 vs two channels of DDR5-6400 (four subchannels) should come out pretty close. I don't see any reason why the DDR4 configuration would be consistently faster; you might have more bank groups on DDR4, but I'm not sure that would outweigh other factors like the topology and bandwidth of the interconnects between the memory controller and the CPU cores.
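Peak theoretical numbers, assuming DDR4-3200 vs DDR5-6400 (my arithmetic):

    ddr4_3200_channel = 3200 * 8 / 1000   # 25.6 GB/s per 64-bit channel
    ddr5_6400_dimm = 6400 * 8 / 1000      # 51.2 GB/s per DIMM (2x 32-bit subchannels)

    print(4 * ddr4_3200_channel)   # quad-channel DDR4: 102.4 GB/s
    print(2 * ddr5_6400_dimm)      # dual-channel DDR5: 102.4 GB/s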
someguy2026 5 hours ago||||
DRAM speed is one thing, but you should also account for the data rate of the PCIe bus (and/or VRAM speed). But yes, holding it "lukewarm" in DRAM rather than on NVMe storage is obviously faster.
vlovich123 5 hours ago||||
Faster than the 0.2 tok/s this approach manages
zozbot234 5 hours ago||||
Should be active param size, not model size.
xaskasdf 4 hours ago|||
yeah, actually, I'm bottlenecked af since my mobo got pcie3 only :(
fluoridation 1 hour ago||
That's slower than just running it off CPU+GPU. I can easily hit 1.5 tokens/s on a 7950X+3090 and a 20480-token context.
rl3 5 hours ago||
Nice. I've been looking at doing something similar, more on the order of running a 1T model with less than half the available VRAM.

One workup indicated it was theoretically possible to modify a piece of SGLang's routing layer to support JIT predict-ahead expert swaps from Gen5 NVMe storage straight into GPU memory.

I'm hoping that proves true. The setup relies on NVIDIA Dynamo, so NIXL primitives are available to support that.

Curious if anyone's tried this already.

xaskasdf 4 hours ago|
That would be nice to see. Actually I was thinking about getting another 3090 and a mobo upgrade, since I'm bottlenecked by PCIe 3, to try running GLM 4.7 or 5 at Q4_K_M; it should be possible.
jacquesm 3 hours ago||
This is an interesting area for experiments. I suspect that in the longer term, model optimization (knowing which bits you can leave out without affecting the functioning of the model) will become the dominant area of research, just as it did with compression algorithms, because effectively a model is a lossy compression scheme.

And that's good because that increases democratization of AI away from the silos that are being created.

Wuzado 5 hours ago||
I wonder - could this be used for multi-tier MoE? Eg. active + most used in VRAM, often used in RAM and less used in NVMe?
rao-v 5 hours ago|
Yeah I’ve often wondered why folks aren’t training two-tier MoEs for VRAM + RAM. We already have designs for shared experts, so it can't be hard to implement a router that allocates 10x or 100x as often to “core” experts vs the “nice to have” experts. I suppose balancing during training is tricky, but some sort of custom loss on the router layers should work.
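Something like this for the router loss, as a rough sketch in PyTorch (names and the weighting are made up; a real setup would combine it with the usual load-balancing loss):

    import torch

    def core_bias_loss(router_logits, core_expert_ids, target_core_frac=0.9):
        # Push ~90% of routing probability mass onto the "core" (VRAM-resident)
        # experts, leaving ~10% for the long tail kept in RAM or on NVMe.
        probs = torch.softmax(router_logits, dim=-1)        # (tokens, num_experts)
        core_mass = probs[:, core_expert_ids].sum(dim=-1)   # mass on core experts
        return ((core_mass - target_core_frac) ** 2).mean()

    # Hypothetical usage during training: total = lm_loss + 0.01 * core_bias_loss(...)
    logits = torch.randn(8, 64)                        # 8 tokens, 64 experts
    print(core_bias_loss(logits, torch.arange(8)))     # first 8 experts are "core"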

I’ve also wondered why the routers aren’t trained to be serially consistent so you can predict layers to swap into VRAM a few layers ahead to maximize available bandwidth.
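The prefetch side could look roughly like this (toy sketch with placeholder callables; it assumes you can cheaply predict what layer i+2 will want, which is exactly the serial-consistency property I mean):

    from concurrent.futures import ThreadPoolExecutor

    def run_layers(layers, predict_experts, load_to_vram, lookahead=2):
        # Overlap compute with transfers: while layer i runs, a background
        # worker copies the experts predicted for layer i+lookahead into VRAM.
        pool = ThreadPoolExecutor(max_workers=1)
        pending = {}
        for i, layer in enumerate(layers):
            j = i + lookahead
            if j < len(layers):
                pending[j] = pool.submit(load_to_vram, predict_experts(j))
            if i in pending:
                pending.pop(i).result()   # block only if the copy hasn't finished
            layer()                       # compute layer i; the first `lookahead`
                                          # layers are assumed resident already
        pool.shutdown()

    # Toy usage: four no-op layers, "experts" are just layer indices.
    run_layers([lambda: None] * 4, predict_experts=lambda j: {j}, load_to_vram=print)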

reitzensteinm 5 hours ago|||
I think part of the issue is that in production deployments, you're batching high enough that you'll be paging in those long tail experts constantly.

Unless you're handling that in some kind of fancy way, you'll be holding up the batch while waiting for host memory, which will kill your throughput.

It makes much more sense for non batched local inference, especially if you can keep the MoE routing stable like you say, but most folks aren't optimising for that.

zozbot234 5 hours ago||
Ideally, you should rearrange batches so that inference steps that rely on the same experts get batched together, then inferences that would "hold up" a batch simply wait for that one "long tail" expert to be loaded, whereupon they can progress. This might require checkpointing partial inference steps more often, but that ought to be doable.
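Roughly this kind of regrouping (toy sketch; needed_expert stands in for whatever the router predicts one step ahead):

    from collections import defaultdict

    def regroup_by_expert(requests, needed_expert):
        # Group in-flight requests so that ones needing the same expert for their
        # next step share a micro-batch, amortizing one expert load across them.
        groups = defaultdict(list)
        for req in requests:
            groups[needed_expert(req)].append(req)
        # Big groups run now; small long-tail groups wait until their expert has
        # been paged in, then get their own micro-batch.
        return sorted(groups.values(), key=len, reverse=True)

    print(regroup_by_expert(range(10), lambda r: r % 3))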
reitzensteinm 4 hours ago||
I think this is doable for very long tail experts that get swapped in for specialised topics - say, orbital mechanics.

But for experts that light up at, say, 1% frequency per batch, you're doing an awful lot of transfers from DRAM which you amortize over a single token, instead of reads from HBM which you amortize over 32 tokens.

svnt 5 hours ago||||
Maybe I am misunderstanding something but:

1) This is basically the intention of several recent MoE models: keep particular generally useful experts hot in VRAM.

2) Unless you can swap layers in faster than you consume them there is no point to predicting layers (what does this even really mean? did you mean predicting experts?).

It seems at the moment the best you can do is keep experts and layers more likely to be used for a given query in VRAM and offload the rest, but this is workload-dependent.

hedgehog 5 hours ago|||
I don't have links handy but there is active research in this area.
exabrial 4 hours ago||
I feel like we need an entirely new type of silicon for LLMs. Something completely focused on bandwidth and storage probably at the sacrifice of raw computation power.
throwaway2027 6 hours ago|
Didn't DirectX add an API for loading assets directly to GPU memory? Would that work?
someguy2026 5 hours ago|
My impression is that that is limited to assets and really needs to fit into the DirectX framework. From what I can tell, the gpu-nvme-direct is mostly similar to https://github.com/enfiskutensykkel/ssd-gpu-dma and https://github.com/ZaidQureshi/bam
xaskasdf 2 hours ago||
Actually this idea was fueled by those; I went to check if there was anything close to what I wanted to achieve. Pretty useful, tho