Posted by jxmorris12 3 days ago
Nondeterminism is what currently keeps me from working with other developers.
As I wrote in "Prompt Coding" [1], these days I am not looking for good code. I am looking for prompts that create good code. But how do you share prompts among developers when they produce different code every time? You cannot simply state "Here, I found a prompt that makes gpt-5-2025-08-07 output a solution with all the desired attributes".
It's similar with images. At the moment, for most image models, you cannot outsource the task of writing prompts that create the desired images, because most image models will not create the same image when given the same prompt and parameters.
A deterministic LLM isn't going to behave appreciably differently from a nondeterministic one if your input or context varies by even a tiny bit (pun intended) each time.
I'm hoping that it becomes more useful as models improve and become more reliable at producing working code (though determinism would be great for improving prompts).
Really? If you include the seed as one of the parameters, most produce pixel-identical output.
E.g. "Generate deterministic images" https://cloud.google.com/vertex-ai/generative-ai/docs/image/...
It took me ages to get the prediction for the second token after "hello" to match the prediction at the same position when running the model on the string "hello world", despite the fact that I was using a causal model. I tried all kinds of things before discovering that `quantized: false` was the important setting.
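For anyone who wants to reproduce the check: with a causal model, the logits at a given position depend only on the prefix, so the next-token distribution after "hello" should match whether or not " world" follows. Here's a sketch in Python with Hugging Face transformers (the `quantized: false` flag is from transformers.js; the Python library loads full-precision weights by default):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

with torch.no_grad():
    short = model(**tok("hello", return_tensors="pt")).logits
    full = model(**tok("hello world", return_tensors="pt")).logits

# Compare the predictions at the position right after "hello".
n = tok("hello", return_tensors="pt").input_ids.shape[1]
print(torch.allclose(short[0, n - 1], full[0, n - 1], atol=1e-5))
```

Exact bitwise equality may still fail, since different sequence lengths can dispatch to different kernels; hence the tolerance.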
I don't fully understand what you said, but I guess higher-probability tokens are encoded with fewer bits. If your text is the LLM's own output, then you may need only a bit or two per token?
In terms of performance, I've not done any serious testing, but e.g. the Wikipedia article on volcanoes compresses to about 20% of its original size using GPT-2. I've seen other strings compress even further.
The big issue is that while encoding is not unreasonable, decoding any significant amount of data is incredibly slow, since I'm doing a model run for every token in the output. It's bad enough that the scheme is probably unworkable as it is. I'm thinking about changing my code so that it streams out the tokens as it decodes them, so you're not just left there waiting for ages.
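The core idea, in a sketch (mine, not the parent's code): an arithmetic coder spends about -log2 p(token) bits per token, so the ideal compressed size is the model's total surprisal on the text. This just measures that bound; a real codec would feed the same probabilities into an arithmetic coder.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

text = "The volcano erupted in 1883, devastating the surrounding islands."
ids = tok(text, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits

# log p(token_i | tokens_<i) for every position after the first.
logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
token_lp = logprobs[torch.arange(ids.shape[1] - 1), ids[0, 1:]]

bits = -token_lp.sum().item() / math.log(2)
n_bytes = len(text.encode())
print(f"{bits:.0f} bits for {n_bytes} bytes "
      f"({bits / (8 * n_bytes):.0%} of original)")
```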
I supervised a student project whose goal was exactly that: implement compression with LLMs using arithmetic coding (AC).
Since AC is optimal, if your LLM has an average cross entropy of x nats per token on some dataset, you can expect the compressor to spend about x nats (i.e., x / ln 2 bits) per token on average!
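A back-of-envelope illustration (the 3.0 nats/token and 4 bytes/token figures are made-up assumptions, not measurements):

```python
import math

x_nats = 3.0                        # assumed average cross entropy per token
bits_per_token = x_nats / math.log(2)
bytes_per_token = 4.0               # assumed average token length in bytes
ratio = bits_per_token / (8 * bytes_per_token)
print(f"{bits_per_token:.2f} bits/token, ~{ratio:.0%} of original size")
```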
> For example, you might observe that asking ChatGPT the same question multiple times provides different results.
Even with temperature 0.0: MoE models route at the batch level, and you're very unlikely to get the same batch composition twice.
> Not because we’re somehow leaking information across batches — instead, it’s because our forward pass lacks “batch invariance”, causing our request’s output to depend on the batch size of our forward pass.
The router also leaks batch-level information across sequences.
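To make the quoted point concrete, a tiny sketch (mine, not the blog's code): the same row fed through a matmul alone vs. inside a larger batch can produce bitwise-different results, because different batch shapes dispatch to kernels with different reduction orders. Whether the values actually differ depends on backend and shapes.

```python
import torch

torch.manual_seed(0)
a = torch.randn(64, 4096)
w = torch.randn(4096, 4096)

alone = a[:1] @ w          # "batch size 1"
batched = (a @ w)[:1]      # same row, computed inside a batch of 64

print(torch.equal(alone, batched))            # often False
print((alone - batched).abs().max().item())   # tiny but nonzero difference
```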
I don't think this is correct: MoE routing happens on a per-token basis. It can be non-deterministic and batch-dependent if you try to balance expert load within a batch, but that's a performance optimization (just like everything else in the blog post), not the way the models are trained to work.
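For reference, a minimal sketch of what standard per-token top-k routing looks like (hypothetical shapes, plain PyTorch, not any particular model's code). Nothing here depends on the batch; batch-level load balancing, e.g. capacity limits that drop tokens, is layered on top:

```python
import torch

def route(hidden, gate_weight, k=2):
    # hidden: [tokens, d_model], gate_weight: [d_model, n_experts]
    gate_logits = hidden @ gate_weight            # [tokens, n_experts]
    weights, experts = gate_logits.topk(k, dim=-1)
    weights = torch.softmax(weights, dim=-1)      # mixing weights for chosen experts
    return experts, weights                       # per-token, batch-independent

hidden = torch.randn(10, 512)                     # 10 tokens
gate_w = torch.randn(512, 8)                      # 8 experts
experts, weights = route(hidden, gate_w)
print(experts[0], weights[0])                     # routing for token 0
```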
What I would have loved is a discussion of collectives/multi-node setups, showing how to get determinism at a low performance penalty for multi-node reduction collectives.
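The usual trick, in a sketch: impose a fixed combine order (here, a binary tree over rank index) instead of reducing values in whatever order they arrive. This is plain Python standing in for a real collective, not any library's API; the ordering idea is what matters.

```python
def tree_reduce(values):
    # values[i] is rank i's partial sum; combine in a fixed pairwise tree.
    while len(values) > 1:
        paired = []
        for i in range(0, len(values) - 1, 2):
            paired.append(values[i] + values[i + 1])  # fixed pairing order
        if len(values) % 2:
            paired.append(values[-1])                 # odd rank carries over
        values = paired
    return values[0]

partials = [0.1, 0.2, 0.3, 0.4, 0.5]  # one float per "rank"
print(tree_reduce(partials))          # same result every run, any arrival order
```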
Focus on correctness, not determinism.
Compilers can also reorder operations, but in practice this is rarely an issue because kernels typically synchronize frequently, which limits the compiler's ability to reorder things. That isn't to say it never happens, but when it does, it's usually because the compiler itself changed; the code a given compiler generates is generally run-to-run identical.
With revisions, you're trying to ensure a consistent floating-point environment where the operations used are deterministic and applied in the same order to the same inputs. The best way to do that is to use operations that adhere to a mostly deterministic standard like IEEE-754.
Valid point. Floating-point summation is not associative, so the grouping of a reduction changes the result, even though each individual addition is commutative.
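A quick demonstration of why grouping matters in parallel reductions:

```python
# IEEE-754 addition is commutative, but it is not associative.
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)   # 1.0
print(a + (b + c))   # 0.0  (the 1.0 is absorbed by -1e16 before the cancel)
```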