Posted by auraham 11 hours ago

The Illustrated Transformer(jalammar.github.io)
338 points | 71 comments
libraryofbabel 9 hours ago|
I read this article back when I was learning the basics of transformers; the visualizations were really helpful. Although, in retrospect, knowing how a transformer works wasn't very useful at all in my day job applying LLMs, except as a sort of deep background: reassurance that I had some idea of how the big black box producing the tokens was put together, and a mathematical basis for things like context-size limitations.

I would strongly caution anyone who thinks that they will be able to understand or explain LLM behavior better by studying the architecture closely. That is a trap. Big SotA models these days exhibit so many nontrivial emergent phenomena (in part due to the massive application of reinforcement learning techniques) that they have capabilities very few people expected ever to see when this architecture first arrived. Most of us confidently claimed, even back in 2023, that, based on LLM architecture and training algorithms, LLMs would never be able to perform well on novel coding or mathematics tasks. We were wrong. That points towards some caution and humility about using network architecture alone to reason about how LLMs work and what they can do. You'd really need to be able to poke at the weights inside a big SotA model to even begin to answer those kinds of questions, but unfortunately that's only really possible if you're a "mechanistic interpretability" researcher at one of the major labs.

Regardless, this is a nice article, and this stuff is worth learning because it's interesting for its own sake! Right now I'm actually spending some vacation time implementing a transformer in PyTorch just to refresh my memory of it all. It's a lot of fun! If anyone else wants to get started with that, I would highly recommend Sebastian Raschka's book and youtube videos as a way into the subject: https://github.com/rasbt/LLMs-from-scratch .

Has anyone read TFA author Jay Alammar's book (published Oct 2024) and would they recommend it for a more up-to-date picture?

holtkam2 4 hours ago||
I agree and disagree. In my day job as an AI engineer I rarely if ever need to use any “classic” deep learning to get things done. However, I’m a firm believer that understanding the internals of an LLM can set you apart as a gen AI engineer, if you’re interested in becoming the top 1% in your field. There can and will be situations where your intuition about the constraints of your model is superior to that of peers who consider the LLM a black box. I had this advice given directly to me years ago, in person, by Clem Delangue of Hugging Face - I took it seriously and really doubled down on understanding the guts of LLMs. I think it’s served me well.

I’d give similar advice to any coding bootcamp grad: yes you can get far by just knowing python and React, but to reach the absolute peak of your potential and join the ranks of the very best in the world in your field, you’ll eventually want to dive deep into computer architecture and lower level languages. Knowing these deeply will help you apply your higher level code more effectively than your coding bootcamp classmates over the course of a career.

libraryofbabel 2 hours ago||
I suppose I actually agree with you, and I would give the same advice to junior engineers too. I've spent my career going further down the stack than I really needed to for my job and it has paid off: everything from assembly language to database internals to details of unix syscalls to distributed consensus algorithms to how garbage collection works inside CPython. It's only useful occasionally, but when it is useful, it's for the most difficult performance problems or nasty bugs that other engineers have had trouble solving. If you're the best technical troubleshooter at your company, people do notice. And going deeper helps with system design too: distributed systems have all kinds of subtleties.

I mostly do it because it's interesting and I don't like mysteries, and that's why I'm relearning transformers, but I hope knowing LLM internals will be useful one day too.

crystal_revenge 1 hour ago|||
> massive application of reinforcement learning techniques

So sad that "reinforcement learning" is another term whose meaning has been completely destroyed by uneducated hype around LLMs (very similar to "agents"). Five years ago, nobody familiar with RL would have considered what these companies are doing to be "reinforcement learning".

RLHF and similar techniques are much, much closer to traditional fine-tuning than they are to reinforcement learning. RL almost always, historically, assumes online training and interaction with an environment. RLHF is collecting data from users and using it to teach the LLM to be more engaging.

This fine-tuning also doesn't magically transform LLMs into something different, but it is largely responsible for their sycophantic behavior. RLHF makes LLMs more pleasing to humans (and of course can be exploited to help move the needle on benchmarks).

It's really unfortunate that people will throw away their knowledge of computing in order to maintain a belief that LLMs are something more than they are. LLMs are great, very useful, but they're not producing "nontrivial emergent phenomena". They're increasingly trained as products to increase engagement. I've found LLMs less useful in 2025 than in 2024. And the trend of people not opening them up under the hood and playing around with them to explore what they can do has basically made me leave the field (I used to work in AI-related research).

libraryofbabel 1 hour ago||
I wasn't referring to RLHF, which people were of course already doing heavily in 2023, but RLVR, aka LLMs solving tons of coding and math problems with a reward function after pre-training. I discussed that in another reply, so I won't repeat it here; instead I'd just refer you to Andrej Karpathy's 2025 LLM Year in Review which discusses it. https://karpathy.bearblog.dev/year-in-review-2025/

> I've found LLMs less useful in 2025 than in 2024.

I really don't know how to reply to this part without sounding insulting, so I won't.

crystal_revenge 19 minutes ago|||
While RLVR is neat, it is still an 'offline' learning method that just borrows a reward function, similar to RL.

And did you not read the entire post? Karpathy basically calls out the same point that I am making regarding RL which "of course can be exploited to help move the needle on benchmarks":

> Related to all this is my general apathy and loss of trust in benchmarks in 2025. The core issue is that benchmarks are almost by construction verifiable environments and are therefore immediately susceptible to RLVR and weaker forms of it via synthetic data generation. In the typical benchmaxxing process, teams in LLM labs inevitably construct environments adjacent to little pockets of the embedding space occupied by benchmarks and grow jaggies to cover them. Training on the test set is a new art form

Regarding:

> I really don't know how to reply to this part without sounding insulting, so I won't.

Relevant to citing him: Karpathy has publicly praised some of my past research in LLMs, so please don't hold back your insults. A poster on HN telling me I'm "not using them right!!!" won't shake my confidence terribly. I use LLMs less this year than last year and have been much more productive. I still use them, LLMs are interesting, and very useful. I just don't understand why people have to get into hysterics trying to make them more than that.

I also agree with Karpathy's statement:

> In any case they are extremely useful and I don't think the industry has realized anywhere near 10% of their potential even at present capability.

But magical thinking around them is slowing down progress imho. Your original comment itself is evidence of this:

> I would strongly caution anyone who thinks that they will be able to understand or explain LLM behavior better by studying the architecture closely.

I would say "Rip them open! Start playing around with the internals! Mess around with sampling algorithms! Ignore the 'win market share' hype and benchmark gaming and see just what you can make these models do!" Even if restricted to just open, relatively small models, there's so much more interesting work in this space.

energy123 4 hours ago|||
An example of why a basic understanding is helpful:

A common sentiment on HN is that LLMs generate too many comments in code.

But comment spam is going to help code quality, due to the way causal transformers and positional encoding work. The model has learned to dump locally-specific reasoning tokens where they're needed, in a tightly scoped cluster that can be attended to easily, and forgotten just as easily later on. It's like a disposable scratchpad to reduce the errors in the code it's about to write.

The solution to comment spam is textual/AST post-processing of the generated code, rather than prompting the LLM to handicap itself by not generating as many comments.
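As a rough sketch of that post-processing idea (assuming the generated code is Python; the stdlib `tokenize` module is just one way to do it):

```python
import io
import tokenize

def strip_comments(source: str) -> str:
    """Remove comment tokens from Python source, leaving the code intact."""
    tokens = [
        tok for tok in tokenize.generate_tokens(io.StringIO(source).readline)
        if tok.type != tokenize.COMMENT  # drop only comments, keep everything else
    ]
    return tokenize.untokenize(tokens)

# Hypothetical LLM output, heavy on scratchpad-style comments
generated = (
    "total = 0  # accumulate the running sum\n"
    "for x in [1, 2, 3]:\n"
    "    total += x  # add each element\n"
)
print(strip_comments(generated))
```

This lets the model keep its scratchpad during generation while the committed code stays clean; `untokenize` may leave some trailing whitespace where comments were, which a formatter pass would tidy up.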

libraryofbabel 2 hours ago|||
Unless you have evidence from a mechanistic interpretability study showing what's happening inside the model when it creates comments, this is really only a plausible-sounding just-so story.

Like I said, it's a trap to reason from architecture alone to behavior.

energy123 1 hour ago||
Yes I should have made it clear that it is an untested hypothesis.
minikomi 2 hours ago||||
An example of why a basic understanding is helpful:

A common sentiment on HN is that LLMs generate too many comments in code.

For good reason -- comment sparsity improves code quality, due to the way causal transformers and positional encoding work. The model has learned that real, in-distribution code carries meaning in structure, naming, and control flow, not dense commentary. Fewer comments keep next-token prediction closer to the statistical shape of the code it was trained on.

Comments aren’t a free scratchpad. They inject natural-language tokens into the context window, compete for attention, and bias generation toward explanation rather than implementation, increasing drift over longer spans.

The solution to comment spam isn’t post-processing. It’s keeping generation in-distribution. Less commentary forces intent into the code itself, producing outputs that better match how code is written in the wild, and forcing the model into more realistic context avenues.

p1esk 3 hours ago|||
You’re describing this as if you actually knew what’s going on in these models. In reality it’s just a guess, and not a very convincing one.
ozgung 8 hours ago|||
I think the biggest problem is that most tutorials use words to illustrate how the attention mechanism works. In reality, there are no word-associated tokens inside a Transformer. Tokens != word parts. An LLM does not perform language processing inside the Transformer blocks, and a Vision Transformer does not perform image processing. Words and pixels are only relevant at the input. I think this misunderstanding was a root cause of underestimating their capabilities.
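A toy illustration of that point (the vocab and embeddings here are made up; real tokenizers are learned from data): by the time text reaches the transformer blocks, it has already become integer ids and then vectors of floats, with no words in sight.

```python
import random
random.seed(0)

# Hypothetical subword vocab; real tokenizers (BPE etc.) learn theirs from data.
VOCAB = {"trans": 0, "form": 1, "er": 2, "the": 3}

def tokenize(text):
    """Greedy longest-match subword tokenization -> list of integer ids."""
    ids, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest piece first
            if text[i:j] in VOCAB:
                ids.append(VOCAB[text[i:j]])
                i = j
                break
        else:
            raise ValueError(f"no token covers {text[i:]!r}")
    return ids

# Embedding table: each id maps to a vector of floats. This is all the
# transformer blocks ever see -- not words, not pixels.
DIM = 4
embedding = {tid: [random.gauss(0, 1) for _ in range(DIM)] for tid in VOCAB.values()}

ids = tokenize("transformer")
print(ids)  # [0, 1, 2] with this toy vocab
vectors = [embedding[t] for t in ids]
print(len(vectors), len(vectors[0]))  # 3 4
```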
melagonster 1 hour ago|||
Maybe most of the benefit is that it gives people enough background knowledge to read the next new paper.
lugu 7 hours ago|||
Nice video on mechanistic interpretability from Welch Labs:

https://youtu.be/D8GOeCFFby4?si=2rWnwv4M2bjkpEoc

miki123211 8 hours ago|||
> Most of us confidently claimed even back in 2023 that, based on LLM architecture and training algorithms, LLMs would never be able to perform well on novel coding or mathematics tasks.

I feel like there are three groups of people:

1. Those who think that LLMs are stupid slop-generating machines which couldn't ever possibly be of any use to anybody, because there's some problem that is simple for humans but hard for LLMs, which makes them unintelligent by definition.

2. Those who think we have already achieved AGI and don't need human programmers any more.

3. Those who believe LLMs will destroy the world in the next 5 years.

I feel like the composition of these three groups has been pretty much constant since the release of ChatGPT, and, like with most political fights, evidence doesn't convince people either way.

libraryofbabel 8 hours ago||
Those three positions are all extreme viewpoints. There are certainly people who hold them, and they tend to be loud and confident and have an outsize presence in HN and other places online.

But a lot of us have a more nuanced take! It's perfectly possible to believe simultaneously that 1) LLMs are more than stochastic parrots 2) LLMs are useful for software development 3) LLMs have all sorts of limitations and risks (you can produce unmaintainable slop with them, and many people will, there are massive security issues, I can go on and on...) 4) We're not getting AGI or world-destroying super-intelligence anytime soon, if ever 5) We're in a bubble and it's going to pop and cause a big mess 6) This tech is still going to be transformative long term, on a similar level to the web and smartphones.

Don't let the noise from the extreme people who formed their opinions back when ChatGPT came out drown out serious discussion! A lot of us try and walk a middle course with this and have been and still are open to changing our minds.

brcmthrowaway 7 hours ago|||
How was reinforcement learning used as a gamechanger?

What happens to an LLM without reinforcement learning?

libraryofbabel 6 hours ago|||
The essence of it is that after the "read the whole internet and predict the next token" pre-training step (and the chat fine-tuning), SotA LLMs now have a training step where they solve huge numbers of tasks that have verifiable answers (especially programming and math). The model therefore gets the very broad general knowledge and natural language abilities from pre-training and gets good at solving actual problems (problems that can't be bullshitted or hallucinated through because they have some verifiable right answer) from the RL step. In ways that still aren't really understood, it develops internal models of mathematics and coding that allow it to generalize to solve things it hasn't seen before. That is why LLMs got so much better at coding in 2025; the success of tools like Claude Code (to pick just one example) is built upon it. Of course, the LLMs still have a lot of limitations (the internal models are not perfect and aren't like how humans think at all), but RL has taken us pretty far.
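A deliberately tiny caricature of the verifiable-reward idea (a two-armed bandit standing in for an LLM; the strategy names and update rule here are invented for illustration - real RLVR adjusts billions of weights with policy-gradient methods):

```python
import random
random.seed(0)

# The "policy" picks between two answer strategies for arithmetic problems;
# the verifier pays reward 1 only for exact answers, which can't be bullshitted.
weights = {"compute": 1.0, "guess": 1.0}

def sample_strategy():
    """Sample a strategy in proportion to its current weight."""
    r = random.uniform(0, sum(weights.values()))
    for name, w in weights.items():
        r -= w
        if r <= 0:
            return name
    return name  # floating-point edge case

def answer(a, b, strategy):
    return a + b if strategy == "compute" else random.randint(0, 20)

for _ in range(500):
    a, b = random.randint(0, 9), random.randint(0, 9)
    strategy = sample_strategy()
    reward = 1.0 if answer(a, b, strategy) == a + b else 0.0
    weights[strategy] += 0.1 * reward  # reinforce whatever the verifier paid for

print(weights["compute"] > weights["guess"])  # True: verifiable skill wins out
```

The point of the toy: nothing tells the policy *how* to add; the verifier only checks answers, and behavior that reliably passes verification gets reinforced.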

Unfortunately the really interesting details of this are mostly secret sauce stuff locked up inside the big AI labs. But there are still people who know far more than I do who do post about it, e.g. Andrej Karpathy discusses RL a bit in his 2025 LLMs Year in Review: https://karpathy.bearblog.dev/year-in-review-2025/

brcmthrowaway 5 hours ago||
Do you have the answer to the second question? Is an LLM trained on the internet just GPT-3?
libraryofbabel 5 hours ago||
I don't know - perhaps someone who's more of an expert or who's worked a lot with open source models that haven't been RL-ed can weigh in here!

But certainly without the RL step, the LLM would be much worse at coding and would hallucinate more.

malaya_zemlya 4 hours ago|||
You can download a base model (aka foundation, aka pretrain-only) from huggingface and test it out. These were produced without any RL.

However, most modern LLMs, even base models, are not just trained on raw internet text. Most of them were also fed a huge amount of synthetic data. You often can see the exact details in their model cards. As a result, if you sample from them, you will notice that they love to output text that looks like:

  6. **You will win millions playing bingo.**
     - **Sentiment Classification: Positive**
     - **Reasoning:** This statement is positive as it suggests a highly favorable outcome for the person playing bingo.
This is not your typical internet page.
octoberfranklin 4 hours ago||
> You often can see the exact details in their model cards.

Bwahahahaaha. Lol.

/me falls off of chair laughing

Come on, I've never found "exact details" about anything in a model card, except maybe the number of weights.

nrhrjrjrjtntbt 8 hours ago||
It is almost like understanding wood at a molecular level and being a carpenter. It may help the carpentry, but you can be a great one without it. And a bad one with the knowledge.
boltzmann_ 11 hours ago||
Kudos also to the Transformer Explainer team for putting together some amazing visualizations https://poloclub.github.io/transformer-explainer/ It really clicked for me after reading these two and watching 3blue1brown's videos
gzer0 8 hours ago|
This is hands down one of the best visualizations I have ever come across.
laser9 11 hours ago||
Here's the comment from the author himself (jayalammar) talking about other good resources on learning Transformers:

https://news.ycombinator.com/item?id=35990118

Koshkin 10 hours ago||
(Going on a tangent.) The number of transformer explanations/tutorials is becoming overwhelming. Reminds me of monads (or maybe calculus). Someone feels a spark of enlightenment at some point (while, often, in fact, remaining deeply confused), and an urge to share their newly acquired (mis)understanding with a wide audience.
kadushka 9 hours ago||
Maybe so, but this particular blog post was the first and is still the best explanation of how transformers work.
nospice 9 hours ago||
So?

There's no rule that the internet is limited to a single explanation. Find the one that clicks for you, ignore the rest. Whenever I'm trying to learn about concepts in mathematics, computer science, physics, or electronics, I often find that the first or the "canonical" explanation is hard for me to parse. I'm thankful for having options 2 through 10.

some_guy_nobel 7 hours ago||
Great article, must be the inspiration for the recent Illustrated Evo 2: https://research.nvidia.com/labs/dbr/blog/illustrated-evo2/
gustavoaca1997 11 hours ago||
I have this book. Really a life saver that helped me catch up a few months ago when my team decided to use LLMs in our systems.
qoez 10 hours ago|
Don't really see why you'd need to understand how the transformer works to do LLMs at work. An LLM is just a synthetic human performing reasoning, with failure modes that in-depth knowledge of transformer internals won't help you predict (you just have to get a sense from experience with the output, or from other people's experiments).
roadside_picnic 10 hours ago|||
In my experience there is a substantial difference, in the ability to really get performance out of LLM-related engineering work, between people who really understand how LLMs work and people who think it's a magic box.

If your mental model of an LLM is:

> a synthetic human performing reasoning

You are severely overestimating the capabilities of these models and not recognizing potential areas of failure (even if your prompt works for now in the happy case). Understanding how transformers work absolutely can help debug problems (or avoid them in the first place). People without a deep understanding of LLMs also tend to get fooled by them more frequently. When you have internalized the fact that LLMs are literally optimized to trick you, you tend to be much more skeptical of the initial results (which results in better eval suites, etc.).

Then there are people who actually do AI engineering. If you're working with local/open-weights models or on the inference end of things, you can't just play around with an API; you have a lot more control and observability into the model and should be making use of it.

I still hold that the best test of an AI Engineer, at any level of the "AI" stack, is how well they understand speculative decoding. It involves understanding quite a bit about how LLMs work and can still be implemented on a cheap laptop.

amelius 9 hours ago|||
But that AI engineer who is implementing speculative decoding is still just doing basic plumbing that has little to do with the actual reasoning. Yes, he/she might make the process faster, but they will know just as little about why/how the reasoning works as when they implemented a naive, slow version of the inference.
roadside_picnic 8 hours ago||
What "actual reasoning" are you referring to? I believe you're making my point for me.

Speculative decoding requires the implementer to understand:

- How the initial prompt is processed by the LLM

- How to retrieve all the probabilities of previously observed tokens in the prompt (this also helps people understand things like the probability of the entire prompt itself, the entropy of the prompt, etc.)

- Details of how the logits generate the distribution of next tokens

- Precise details of the sampling process + the rejection sampling logic for comparing the two models

- How each step of the LLM is run under-the-hood as the response is processed.

Hardly just plumbing, especially since, to my knowledge, there are not a lot of hand-holding tutorials on this topic. You need to really internalize what's going on and how this is going to lead to a 2-5x speed up in inference.

Building all of this yourself gives you a lot of visibility into how the model behaves and how "reasoning" emerges from the sampling process.

edit: Anyone who can perform speculative decoding work also has the ability to inspect the reasoning steps of an LLM and do experiments such as rewinding the thought process of the LLM and substituting a reasoning step to see how it impacts the results. If you're just prompt hacking you're not going to be able to perform these types of experiments to understand exactly how the model is reasoning and what's important to it.
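For concreteness, a pure-Python sketch of the accept/reject core of speculative decoding (the fixed toy distributions stand in for the draft and target models; real implementations condition both on the full context and batch the target's verification pass):

```python
import random
random.seed(0)

VOCAB = ["the", "cat", "sat", "on", "mat"]

# Toy next-token distributions: a cheap draft model and an expensive target.
DRAFT  = {"the": 0.3, "cat": 0.3, "sat": 0.2, "on": 0.1,  "mat": 0.1}
TARGET = {"the": 0.4, "cat": 0.3, "sat": 0.2, "on": 0.05, "mat": 0.05}

def sample(dist):
    """Sample one token from a {token: probability} distribution."""
    r = random.random()
    for tok, p in dist.items():
        r -= p
        if r <= 0:
            return tok
    return tok  # floating-point edge case

def speculative_step(k=4):
    """Draft proposes k tokens; target accepts each with prob min(1, p/q)."""
    out = []
    for tok in [sample(DRAFT) for _ in range(k)]:
        p, q = TARGET[tok], DRAFT[tok]
        if random.random() < min(1.0, p / q):
            out.append(tok)  # accepted: emitted without its own target step
        else:
            # rejected: resample from the normalized residual max(p - q, 0);
            # this correction makes the output distribution exactly the target's
            resid = {w: max(TARGET[w] - DRAFT[w], 0.0) for w in VOCAB}
            z = sum(resid.values())
            out.append(sample({w: v / z for w, v in resid.items()}))
            break  # everything after the first rejection is discarded
    return out

print(speculative_step())  # up to k tokens per expensive target evaluation
```

The speed-up comes from the accepted prefix: one target-model evaluation can validate several cheap draft tokens at once, while the rejection-sampling correction guarantees the output distribution is unchanged.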

amelius 8 hours ago||
But I can make a similar argument about a simple multiplication:

- You have to know how the inputs are processed.

- You have to left-shift one of the operands by 0, 1, ... N-1 times.

- Add those together, depending on the bits in the other operand.

- Use an addition tree to make the whole process faster.

Does not mean that knowing the above process gives you a good insight in the concept of A*B and all the related math and certainly will not make you better at calculus.

roadside_picnic 7 hours ago||
I'm still confused by what you meant by "actual reasoning", which you didn't answer.

I also fail to understand how building what you described would not help your understanding of multiplication, I think it would mean you understand multiplication much better than most people. I would also say that if you want to be a "multiplication engineer" then, yes you should absolutely know how to do what you've described there.

I also suspect you might have lost the main point. The original comment I was replying to stated:

> Don't really see why you'd need to understand how the transformer works to do LLMs at work.

I'm not saying implementing speculative decoding is enough to "fully understand LLMs". I'm saying if you can't at least implement that, you don't understand enough about LLMs to really get the most out of them. No amount of twiddling around with prompts is going to give you adequate insight into how an LLMs works to be able to build good AI tools/solutions.

machinationu 9 hours ago|||
speculative decoding is 1+1

transformer attention is integrals

bonesss 9 hours ago||||
> LLMs is just a synthetic human

1) ‘human’ encompasses behaviours that include revenge cannibalism and recurrent sexual violence —- wish carefully.

2) not even a little bit, and if you want to pretend, then pretend they’re a deranged, delusional psych patient who will look you in the eye and say, genuinely, “oops, I guess I was lying, it won’t ever happen again,” and then lie to you again, while making sure it happens again.

3) don’t anthropomorphize LLMs, they don’t like it.

Koshkin 9 hours ago|||
> is just a synthetic human performing reasoning

The future is now! (Not because of "a synthetic human" per se but because of people thinking of them as something unremarkable.)

edge17 3 hours ago||
Maybe I'm out of touch, but have transformers replaced all traditional deep learning architectures (U-Nets, etc.)?
D-Machine 3 hours ago|
No, not at all. There is a transformer obsession that is quite possibly not supported by the actual facts (CNNs can still do just as well: https://arxiv.org/abs/2310.16764), and CNNs definitely remain preferable for smaller and more specialized tasks (e.g. computer vision on medical data).

If you also get into more robust and/or specialized tasks (e.g. rotation invariant computer vision models, graph neural networks, models working on point-cloud data, etc) then transformers are also not obviously the right choice at all (or even usable in the first place). So plenty of other useful architectures out there.

edge17 56 minutes ago||
Is there something I can read to get a better sense of what types of models are most suitable for which problems? All I hear about are transformers nowadays, but what are the types of problems for which transformers are the right architecture choice?
ActorNightly 10 hours ago||
People need to get away from this idea of Key/Query/Value as being special.

Whereas a standard deep layer in a network is matrix * input, where each row of the matrix is the weights of a particular neuron in the next layer, a transformer is basically input*MatrixA, input*MatrixB, input*MatrixC (each input*matrix product yielding a matrix), with the output combining those three products. Just simply more dimensions in a layer.

And consequently, you can represent the entire transformer architecture with a set of deep layers as you unroll the matrices, with a lot of zeros for the multiplication pieces that are not needed.

This is a fairly complex blog post, but it shows that it's just all matrix multiplication all the way down. https://pytorch.org/blog/inside-the-matrix/
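A numpy sketch of a single attention head written as nothing but matrix products plus a row-wise softmax (the weights are random; the shapes and the "it's all matmuls" structure are the point):

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model, d_head = 6, 16, 8

X  = rng.normal(size=(seq_len, d_model))   # one token embedding per row
Wq = rng.normal(size=(d_model, d_head))
Wk = rng.normal(size=(d_model, d_head))
Wv = rng.normal(size=(d_model, d_head))

Q, K, V = X @ Wq, X @ Wk, X @ Wv           # three plain matmuls

scores = Q @ K.T / np.sqrt(d_head)         # another matmul
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax

out = weights @ V                          # and one more matmul
print(out.shape)  # (6, 8)
```

Apart from the softmax nonlinearity, everything here really is matrix multiplication, which is why the unrolling argument above works.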

throw310822 9 hours ago|
I might be completely off base, but I can't help thinking of convolutions as my mental model for the K/Q/V mechanism. Attention has the same property as a convolution kernel of being trained independently of position; it learns how to translate a large, rolling portion of an input into a new "digested" value; and you can train multiple ones in parallel so that they learn to focus on different aspects of the input ("kernels" in the case of convolution, "heads" in the case of attention).
krackers 5 hours ago||
I think there are two key differences though: 1) Attention doesn't use a fixed distance-dependent weight for the aggregation; instead the weight becomes "semantically dependent", based on the association between q/k. 2) A single convolution step is a local operation (only pulling from nearby pixels), whereas attention is a "global" operation, pulling from the hidden states of all previous tokens. (Maybe sliding-window attention schemes muddy this distinction, but in general the degree of connectivity seems far higher.)

There might be some unifying way to look at things though, maybe GNNs. I found this talk [1] and at 4:17 it shows how convolution and attention would be modeled in a GNN formalism

[1] https://www.youtube.com/watch?v=J1YCdVogd14

sifar 4 hours ago||
Nested convolutions and dilated convolutions can both pull in data from further afield.
zkmon 9 hours ago||
I think the internals of transformers will become less relevant, like the internals of compilers, as programmers come to care only about how to "use" them instead of how to develop them.
crystal_revenge 1 hour ago||
Have you written a compiler? I ask because for me writing a compiler was absolutely an inflection point in my journey as a programmer. Being able to look at code and reason about it all the way down to bytecode/IL/asm etc absolutely improved my skill as a programmer and ability to reason about software. For me this was the first time I felt like a real programmer.
esafak 8 hours ago|||
Practitioners already do not need to know about it to run let alone use LLMs. I bet most don't even know the fundamentals of machine learning. Hands up if you know bias from variance...
rvz 9 hours ago||
Their internals are just as relevant (now even more so) as those of any other technology, since they always need to be improved to the SOTA (state of the art), meaning that someone has to understand them.

It also means more jobs for the people who understand them at a deeper level to advance the SOTA of specific widely used technologies such as operating systems, compilers, neural network architectures and hardware such as GPUs or TPU chips.

Someone has to maintain and improve them.

prashant418 6 hours ago|
This guide is such a beast. Try pairing it with, say, Claude Code and asking it to generate sample mini PyTorch pseudo-code; you can spend hours just learning/re-learning and mentally visualizing a lot of these concepts. I am a big fan