Posted by jxmorris12 3 days ago
import torch

A = torch.randn(2048, 2048, device='cuda', dtype=torch.bfloat16)
B = torch.randn(2048, 2048, device='cuda', dtype=torch.bfloat16)
ref = torch.mm(A, B)
for _ in range(1000):
    assert (torch.mm(A, B) - ref).abs().max().item() == 0
I’m sort of surprised that Torch doesn’t have some kind of lazy evaluation thing to avoid computing anything here. I thought that was one of the nice things about all these fancy frameworks (if I wanted the computer to actually do silly things when I asked it to, I would use BLAS directly, right?).

What would you hope to achieve by making this case lazy? If you wanted these to run in parallel on a multi-GPU system, you would use the appropriate parallel interface.
You're computing the .abs().max().item() of something that can be identified as definitionally zero. The parallel interface, which is async, is probably what you're looking for.
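To be clear, the CUDA path is already asynchronous: something like the sketch below only blocks when you actually pull the number back to the host (it assumes a CUDA device is available).

import torch
# Sketch, assuming a CUDA device: kernels are queued asynchronously and the
# Python call returns immediately; only .item() forces a sync with the GPU.
A = torch.randn(2048, 2048, device='cuda')
B = torch.randn(2048, 2048, device='cuda')
C = torch.mm(A, B)           # enqueued on the current stream, returns right away
diff = (C - C).abs().max()   # still asynchronous, nothing has synced yet
print(diff.item())           # .item() is the point where the host waits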
If evaluation is lazy, then the subtraction operator gets fed two unevaluated matrix multiplies.
If it's a dumb subtraction operator, this gives us no benefit. Eventually it evaluates both and then subtracts. And it has some extra overhead like you said.
But if it's a smart subtraction operator, it can realize that both parameters are the same equation, and then it can return all 0s without evaluating anything.
And even better than just skipping the matrix math, "all 0s" can be a stub object that takes O(1) time to set up. And then .abs().max() will be instant too.
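Roughly what that "smart" lazy framework could look like; to be clear, LazyMatMul and ZeroStub are made-up toy classes for illustration, not anything PyTorch actually provides:

import torch

class ZeroStub:
    """O(1) stand-in for an all-zeros result; .abs().max().item() is instant."""
    def __init__(self, shape):
        self.shape = shape
    def abs(self):
        return self
    def max(self):
        return self
    def item(self):
        return 0.0

class LazyMatMul:
    """Unevaluated matmul node in a toy expression graph."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __sub__(self, other):
        # "Smart" subtraction: two unevaluated matmuls of the very same
        # operands cancel to a zero stub without launching any kernels.
        if isinstance(other, LazyMatMul) and self.a is other.a and self.b is other.b:
            return ZeroStub((self.a.shape[0], self.b.shape[1]))
        return self.evaluate() - other.evaluate()
    def evaluate(self):
        return torch.mm(self.a, self.b)

A = torch.randn(2048, 2048)
B = torch.randn(2048, 2048)
assert (LazyMatMul(A, B) - LazyMatMul(A, B)).abs().max().item() == 0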
So even if some progress is achieved right now, I think this will remain a dead end for the foreseeable future.
Great post.
I hope all the model providers adopt this.
I had no problem getting deterministic LLM outputs when I experimented with this 6 months ago.
Run two of these with the same prompts and same seed and you get the same results.
Obviously in GPU clusters with different hardware things get more complicated.
"I had no problem getting deterministic LLM outputs when I experimented with this 6 months ago" looks like you're using llama-cpp in that repo. This is about vllm serving many requests at once, at long sequence lengths.
> As it turns out, our request’s output does depend on the parallel user requests. Not because we’re somehow leaking information across batches — instead, it’s because our forward pass lacks “batch invariance”, causing our request’s output to depend on the batch size of our forward pass.
Your situation isn't really comparable.
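If you want to see what the quoted passage is talking about, here is a rough illustration of batch-size dependence (needs a CUDA GPU; whether the difference actually shows up depends on which kernels get picked for each shape):

import torch

torch.manual_seed(0)
x = torch.randn(1, 4096, device='cuda', dtype=torch.bfloat16)
W = torch.randn(4096, 4096, device='cuda', dtype=torch.bfloat16)
pad = torch.randn(63, 4096, device='cuda', dtype=torch.bfloat16)

out_alone = torch.mm(x, W)                           # our "request" on its own
out_batched = torch.mm(torch.cat([x, pad]), W)[:1]   # same row inside a bigger batch
# Different shapes can dispatch to different kernels with different
# reduction orders, so this is often nonzero in bf16.
print((out_alone - out_batched).abs().max().item())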
TL;DR
Seed your PRNGs and call torch.use_deterministic_algorithms(True) to get deterministic kernels. They may be slightly slower, but in practice you probably won't notice.
Note that results will still differ between different drivers and GPUs. It would be great if NVIDIA tried harder in that regard.
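For reference, a minimal version of that setup (the CUBLAS_WORKSPACE_CONFIG value is what the PyTorch docs suggest for deterministic cuBLAS matmuls):

import os
# Must be set before CUDA work starts, otherwise deterministic mode
# raises an error on cuBLAS matmuls.
os.environ.setdefault("CUBLAS_WORKSPACE_CONFIG", ":4096:8")

import torch

torch.manual_seed(1234)                   # seeds the CPU and all CUDA generators
torch.use_deterministic_algorithms(True)  # pick deterministic kernels, error if none exists
torch.backends.cudnn.benchmark = False    # don't let cuDNN autotune kernel selection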