There's PyTorch's FlexAttention, which could maybe make this practical, but currently it's just way too buggy.
Nvidia isn't likely to start releasing updated firmware for an obscure architecture for which there is limited evidence of improvement, and even less adoption.
Also note that, depending on your model dimensions and sequence lengths, the attention computation often plays only a minor role (maybe 10% of the total or so), and the MLP computation dominates.
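For intuition, here's a rough per-token FLOP tally for one transformer block; the dimensions (roughly Llama-7B-like) and the split are my own assumptions, so treat it as a back-of-envelope sketch, not exact numbers:

```python
# Rough per-token FLOP count for one transformer block, to show when the
# quadratic attention part is a minor cost vs. the MLP. Assumed dimensions.
d_model = 4096          # hidden size
d_ff = 11008            # MLP intermediate size (SwiGLU-style)
seq_len = 4096          # context length

# QKV + output projections: 4 matmuls of d_model x d_model per token
attn_proj_flops = 4 * 2 * d_model * d_model
# Quadratic part: scores + weighted sum over the whole sequence per token
attn_score_flops = 2 * 2 * seq_len * d_model
# Gated MLP: 3 matmuls of d_model x d_ff per token
mlp_flops = 3 * 2 * d_model * d_ff

total = attn_proj_flops + attn_score_flops + mlp_flops
print(f"quadratic attention part: {attn_score_flops / total:.0%}")   # ~14% here
print(f"projections + MLP:        {(total - attn_score_flops) / total:.0%}")
# The quadratic term only starts to dominate once seq_len grows well past d_model.
```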
Maybe it's better now, but I'd still consider using FlexAttention without a corresponding unit test checking its accuracy against an equivalent eager implementation completely irresponsible.
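Something like the following is what I mean; a minimal parity-test sketch, assuming a recent PyTorch that ships torch.nn.attention.flex_attention (shapes and tolerances are just illustrative, and depending on the version you may need torch.compile and/or a GPU):

```python
# Compare FlexAttention against a plain eager reference on random inputs.
import torch
from torch.nn.attention.flex_attention import flex_attention

def eager_attention(q, k, v):
    # Plain softmax attention, no masking, as the reference implementation.
    scale = q.shape[-1] ** -0.5
    scores = (q @ k.transpose(-2, -1)) * scale
    return scores.softmax(dim=-1) @ v

def test_flex_matches_eager():
    torch.manual_seed(0)
    b, h, s, d = 2, 4, 128, 64
    q, k, v = (torch.randn(b, h, s, d) for _ in range(3))
    out_flex = flex_attention(q, k, v)    # no score_mod: should be vanilla attention
    out_ref = eager_attention(q, k, v)
    torch.testing.assert_close(out_flex, out_ref, atol=1e-4, rtol=1e-4)

test_flex_matches_eager()
```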
1. https://ai.meta.com/research/publications/byte-latent-transf...
“With the current implementation of Evo2, we do not have the heavily optimized kernels in place for convolution operators like we do for attention layers in a model like llama2. Even with this shortcoming, we see that the benefit from including more convolutional layers makes up for the earlier stage of optimization at around the 64k context length. Beyond that point we see an improvement in performance even compared to a highly optimized transformer model.”
https://docs.nvidia.com/bionemo-framework/latest/models/evo2...
I think we've already got a bit of a bottleneck in terms of memory bandwidth utilization.
https://arxiv.org/abs/2104.09864
The difference RoPE makes vs. traditional positional encoding is that you only care about relative distances between tokens, and the attention can be attenuated over great distances.
Instead of making the model look at every token in the entire sequence all at once (which gets expensive fast), you can break the text into logical chunks—like sentences or paragraphs—and run self-attention within each chunk. That keeps things efficient while still capturing local meaning. Then, for each chunk, you create a summary—either by pooling or using a small learned head—and pass those summaries into a second layer of attention that operates on a much smaller scale. This gives you higher-level context across the document, kind of like moving from sentences to sections to the whole thing. Optionally, you can even send that higher-level context back down to influence the lower layers.

This approach shows up in models like Longformer and BigBird (which use attention windows), hierarchical models (like HANs), and newer architectures like RetNet and Mamba that compress information over time or scale. RoPE fits neatly into this by helping each chunk handle relative positions more naturally.
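A rough sketch of that two-level setup (local attention within chunks, mean-pooled summaries, attention across summaries, optional top-down feedback); all module names and sizes here are made up for illustration:

```python
import torch
import torch.nn as nn

class HierarchicalAttention(nn.Module):
    def __init__(self, dim=256, heads=4, chunk=128):
        super().__init__()
        self.chunk = chunk
        self.local_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.global_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                        # x: (batch, seq, dim), seq % chunk == 0
        b, s, d = x.shape
        chunks = x.view(b * s // self.chunk, self.chunk, d)
        local, _ = self.local_attn(chunks, chunks, chunks)    # attention inside each chunk
        summaries = local.mean(dim=1).view(b, -1, d)          # one pooled vector per chunk
        ctx, _ = self.global_attn(summaries, summaries, summaries)  # attention across chunks
        # Optionally broadcast the chunk-level context back down to the tokens.
        local = local.view(b, s, d) + ctx.repeat_interleave(self.chunk, dim=1)
        return local

x = torch.randn(2, 512, 256)
print(HierarchicalAttention()(x).shape)          # torch.Size([2, 512, 256])
```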
RoPE is kind of perfect for this setup because it handles relative positions directly in the attention mechanism, which means each chunk can still understand the order and spacing of tokens without relying on fixed position embeddings. It's especially useful when you're working with long sequences or chunked inputs, because it doesn't care where the chunk is in the overall document—it just cares about how tokens relate to each other within that chunk. RoPE also makes it easier for models to generalize to longer inputs than they were trained on, since the rotational math behind it extends beyond the original context window (in practice usually with some help from interpolation tricks). Plus, because it's baked into the dot product itself, it adds essentially no extra memory and only negligible computation, and it plays well with hierarchical or multi-scale attention setups. Basically, it's a clean, efficient way to inject positional awareness that doesn't break when you start slicing things up.
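A minimal toy sketch of the idea, following the RoFormer formulation linked above (my own illustration, not LLaMA's implementation): rotate each query/key pair of dimensions by a position-dependent angle, and the resulting dot product depends only on the relative offset:

```python
import torch

def rope(x, base=10000.0):
    """Apply rotary position embedding to x of shape (seq, dim), dim even."""
    seq, dim = x.shape
    pos = torch.arange(seq, dtype=torch.float32)[:, None]                   # (seq, 1)
    freqs = base ** (-torch.arange(0, dim, 2, dtype=torch.float32) / dim)   # (dim/2,)
    angles = pos * freqs                                                    # (seq, dim/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = torch.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin    # 2-D rotation of each (even, odd) pair
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# Only the relative offset matters: the same query/key vectors placed at
# positions (3, 7) and at (103, 107) give the same attention score.
a, b = torch.randn(64), torch.randn(64)

def score_at(m, n):
    q = torch.zeros(max(m, n) + 1, 64); q[m] = a
    k = torch.zeros(max(m, n) + 1, 64); k[n] = b
    return (rope(q)[m] @ rope(k)[n]).item()

print(score_at(3, 7), score_at(103, 107))   # equal up to float error
```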
PS: LLaMA's RoPE may be a bit off but it still works great: https://discuss.huggingface.co/t/is-llama-rotary-embedding-i...
If it's only nearby tokens, that's just a constant multiplicative factor, right? Not cubic scaling with context length or anything.
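For scale, a back-of-envelope comparison of attended token pairs (the window size w here is an arbitrary choice for illustration):

```python
# Full attention touches n*n pairs; a local window of w tokens touches ~n*w,
# i.e. linear in n with a constant factor set by the window size.
w = 256
for n in (4_096, 32_768, 262_144):
    print(f"n={n:>7}: full={n * n:>14,}  windowed={n * w:>12,}  savings={n / w:>6.0f}x")
```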
DeepSeek got a training performance increase by predicting two tokens at a time, though that doesn't carry over into the final model's inference like this. They did say it can be used for speculative decoding to reduce inference costs, though.
They may get away with fewer attention heads with this new approach too.
I have been working on a classification problem on audio data (with context sizes somewhere between 1000 and 3000, and the potential to expand later), experimenting with adding attention on top of a CNN.
I tried training a vanilla transformer, but at the sizes I am aiming for (5-30M parameters) the training is incredibly unstable and doesn't reach the performance of an LSTM.
So I went back to CNNs, which are fast to train but don't achieve the losses of LSTMs (which are much slower to train, and for larger context sizes you run into the vanishing gradient problem). A CNN-GRU hybrid worked much better, giving me my best result.
The GRU layer I used had a size of 512. For increasing context sizes, I'd have to make the convolutional layers deeper so as not to make the GRU too large. Instead, I decided to swap the GRU out for a MultiHeadAttention layer. The results are great - better than the CNN-GRU (my previous best). Plus, for equivalent sizes the model is faster to train, though it hogs a lot of memory.
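A rough sketch of that kind of setup in PyTorch (conv frontend, self-attention in place of the GRU, mean-pool plus classifier head); all the sizes here are my guesses, not the actual model:

```python
import torch
import torch.nn as nn

class ConvAttnClassifier(nn.Module):
    def __init__(self, in_ch=64, dim=256, heads=8, n_classes=10):
        super().__init__()
        self.conv = nn.Sequential(                    # (batch, in_ch, T) -> (batch, dim, ~T/4)
            nn.Conv1d(in_ch, dim, kernel_size=5, stride=2, padding=2), nn.GELU(),
            nn.Conv1d(dim, dim, kernel_size=5, stride=2, padding=2), nn.GELU(),
        )
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):                             # x: (batch, channels, time)
        h = self.conv(x).transpose(1, 2)              # (batch, ~T/4, dim)
        a, _ = self.attn(h, h, h)                     # self-attention in place of the GRU
        h = self.norm(h + a)                          # residual + norm
        return self.head(h.mean(dim=1))               # mean-pool over time, then classify

model = ConvAttnClassifier()
logits = model(torch.randn(4, 64, 2000))              # e.g. 2000-frame clips
print(logits.shape)                                   # torch.Size([4, 10])
```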
The future of AI and all of ML in general likely does exist beyond tokenization, but I find it unlikely we will get there without moving past LLMs as a whole.
We need to focus on the strengths of LLMs and abandon the incredibly wasteful effort being put into making them produce convincing facsimiles of things they can't actually do, just because the output is in natural language and easily fools humans at first glance.
https://ai.meta.com/research/publications/byte-latent-transf...
Personally I think LLMs will be relegated to transforming output and input from whatever new logic system is brought forth, rather than pretending they're doing logic by aggregating static corpora like we are now.
E.g., yes, the magically relevant point is the third word of the fifth paragraph on page 183 of the document, but having a good representation of that whole page is more helpful than the single word alone.
A borderline tautological answer might be “because the network learns that putting related things next to each other increases the usefulness of the convolutions”
Cool to see convolutions making such a comeback lately in the LLM world. See also the recent StripedHyena 2 architecture, which uses the convolution-based Hyena operator to great success:
Skimming the paper, I don’t see them testing against e.g. a normal decoder with an extra layer or something.
I don't see the same logic applying to an embedding, where it's the individual indices that matter. Adjacent indices in an embedding have no relationship to each other, unlike adjacent pixels in an image.
Put all the GPUs in cloud(s) controlled by international scientists (now you can use your GPU on any device, can earn money by renting it out when you don’t need it, and nothing changes except that you need to be online to use it, but we’ll have 5G and better worldwide. You can develop, sell, or release free math-proven-safe AI models in this cloud “AI App Store”, etc).
Because the main risk is an AI agent botnet - current GPUs are like nukes that are 100% unprotected - any hacker can make a virus with an AI agent component just to steal money; this AI will not be aligned at all and will become a perpetual and eventually autonomous botnet.