Posted by FranckDernoncou 13 hours ago
What's the catch?
Their approach is essentially a form of speculative decoding: multiple tokens are predicted at once and then verified, so tokens are generated at a speed closer to the prompt-processing speed.
It seems to be special because their approach yields the exact same output distribution as the base model, and it only takes a negligible amount of additional memory.
The main catch is that if your prompt processing speed is already bad, it will not help you all that much.
For example, the M-series Macs (up to M4) have a relatively high generation speed compared to their prompt-processing speed. That means they will not benefit as much (if at all). With the M5 the prompt-processing speed has increased 4x, so those can expect to see a good uplift.
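To make that concrete, here's a back-of-envelope model (all numbers hypothetical, not measurements): one verify step scores K drafted tokens in a single forward pass, which runs at roughly prompt-processing speed, but can never be faster than a single decode step since the weights get loaded once either way.

```python
def speculative_speedup(decode_tps, prefill_tps, k, accepted):
    """Toy model: ratio of plain decode time to speculative decode time
    for the same output length. Ignores the drafter's own overhead."""
    plain_per_token = 1.0 / decode_tps
    # One verify pass scores k drafted tokens at prefill speed, floored
    # at the cost of a single decode step (weights are streamed once).
    verify_step = max(k / prefill_tps, 1.0 / decode_tps)
    spec_per_token = verify_step / accepted
    return plain_per_token / spec_per_token

# GPU-like box: prefill much faster than decode -> speedup capped by
# the average number of accepted tokens per verify step.
print(round(speculative_speedup(decode_tps=50, prefill_tps=2000, k=8, accepted=5), 2))  # 5.0
# M-series-like box: prefill only ~2x decode -> barely any gain.
print(round(speculative_speedup(decode_tps=50, prefill_tps=100, k=8, accepted=5), 2))   # 1.25
```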
No, quite the opposite actually. As with speculative decoding, this model computes more tokens and discards the invalid ones.
> What's the catch?
LLMs[1] are limited by memory bandwidth rather than by compute[2]: because they process tokens one at a time, you spend more time streaming the weights from VRAM into the GPU's compute units than actually computing. Techniques like this one process multiple tokens in parallel instead of one by one, and so make better use of the graphics card's compute. They do it by predicting which tokens are likely to come next and then verifying that the guess was correct.
For instance, suppose the previous token is “hello”.
A regular autoregressive LLM will compute:
“hello” => “! ”,
then “hello! ” => “how ”,
“hello! how ” => “are ”,
“hello! how are ” => “you”.
and finally “hello! how are you” => “?<end>”
One at a time, loading and unloading all the weights 5 times between GPU memory and the compute units.
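That weight streaming is why decoding is bandwidth-bound: every generated token requires moving all the weights through the compute units once, so memory bandwidth caps tokens/second no matter how fast the ALUs are. A quick illustration with made-up but plausible numbers:

```python
# Illustrative numbers only -- not any specific piece of hardware.
weights_gb = 16        # e.g. an ~8B-parameter model at 2 bytes/parameter
bandwidth_gbps = 800   # hypothetical GPU memory bandwidth

# Each decoded token streams all weights from VRAM once, so:
max_tokens_per_s = bandwidth_gbps / weights_gb
print(max_tokens_per_s)  # 50.0 -- the compute units sit idle most of the time
```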
With speculative decoding (I'd say this one isn't strictly speculative decoding, but it's a variant of the same principle), you have something that guesses that the whole sentence is going to be “how are you today?”, so the LLM can generate
“hello” => “! ”,
“hello! ” => “how ”,
“hello! how ” => “are ”,
“hello! how are ” => “you”.
“hello! how are you” => “?<end>”
“hello! how are you today” => “?<end>”
In parallel. So each weight is loaded from VRAM only once instead of 5 times.
The last token will be discarded though, as the prefix “how are you today” doesn't match what was actually generated. So in this particular example, you get your 5 tokens 5 times faster than with pure autoregressive inference, at the expense of a 6th token being generated and immediately discarded. So 5 times more token throughput, but a 20% compute cost increase per token.
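The verify-and-accept step can be sketched like this (toy code for the greedy case; `model_argmax` is a hard-coded stand-in for the base model's batched forward pass, not a real API — with sampling, real implementations use rejection sampling to keep the output distribution exact):

```python
# What the base model would greedily generate after "hello", one token
# per prefix length. In reality all these predictions come out of ONE
# batched forward pass over the drafted prefixes.
SENTENCE = ["!", "how", "are", "you", "?<end>"]

def model_argmax(prefix_len):
    """Toy stand-in for the base model's next-token prediction."""
    return SENTENCE[prefix_len] if prefix_len < len(SENTENCE) else "<end>"

def verify(draft):
    """Accept the longest prefix of `draft` matching the base model,
    plus the model's own token at the first mismatch. This is what
    makes the output identical to plain autoregressive decoding."""
    accepted = []
    for i, tok in enumerate(draft):
        target = model_argmax(i)
        if tok == target:
            accepted.append(tok)
        else:
            accepted.append(target)  # keep the model's token, drop the rest
            break
    return accepted

# Drafter guessed "! how are you today ?"; "today" onwards is discarded.
print(verify(["!", "how", "are", "you", "today", "?"]))
# ['!', 'how', 'are', 'you', '?<end>']
```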
[1]: autoregressive LLMs, that is. Which are the ones everybody uses because they are the most performant.
[2]: at least when run at low batch size, on your own computer for your personal use. On a datacenter, with many concurrent users, GPUs are actually compute-bound.
I was wondering what would be involved in getting it to work with GGUF files, rather than safetensor files...
At the moment not even MTP is merged into llama.cpp, so I wouldn't quite hold my breath for it.
I haven't read the paper, but of course DTree tricks work here as well.
Idea: Inject a trainable diffusion attention module into each layer of a frozen AR Transformer. Both heads share one KV cache. Diffusion head projects K=32 tokens in parallel; AR head verifies in a second pass and accepts the longest matching prefix. Output distribution is provably identical to the base model.
Results:
- Up to 7.8x TPF (tokens per forward pass), ~6x wall-clock on MATH-500.
- 16% of params trained, <1B tokens, 24h on 8xH200.
- vs. diffusion LMs (Dream, Fast-dLLM-v2, SDAR, Mercury, Gemini Diffusion): they modify base weights and lose accuracy (Fast-dLLM-v2: -11 pts on MATH-500). Orthrus freezes the backbone; accuracy matches Qwen3-8B exactly.
- vs. Speculative Decoding (EAGLE-3, DFlash): no external drafter, no separate cache, zero TTFT penalty (no drafter to init/sync). KV overhead is O(1) (~4.5 MiB flat). Acceptance length on MATH-500: 11.7 vs. 7.9 (DFlash) vs. 3.5 (EAGLE-3).
- Single-step denoising beats multi-step (6.35 vs. 3.53 TPF). KL distillation beats CE on acceptance rate.
Limitations: strictly bounded by the frozen base model (inherits its biases, hallucinations, knowledge gaps); Qwen3-only evaluation; greedy + rejection sampling only.