I would strongly caution anyone who thinks that they will be able to understand or explain LLM behavior better by studying the architecture closely. That is a trap. Big SotA models these days exhibit so many nontrivial emergent phenomena (in part due to the massive application of reinforcement learning techniques) that they have capabilities very few people expected to ever see when this architecture first arrived. Most of us confidently claimed even back in 2023 that, based on LLM architecture and training algorithms, LLMs would never be able to perform well on novel coding or mathematics tasks. We were wrong. That points towards some caution and humility about using network architecture alone to reason about how LLMs work and what they can do. You'd really need to be able to poke at the weights inside a big SotA model to even begin to answer those kinds of questions, but unfortunately that's only really possible if you're a "mechanistic interpretability" researcher at one of the major labs.
Regardless, this is a nice article, and this stuff is worth learning because it's interesting for its own sake! Right now I'm actually spending some vacation time implementing a transformer in PyTorch just to refresh my memory of it all. It's a lot of fun! If anyone else wants to get started with that, I would highly recommend Sebastian Raschka's book and YouTube videos as a way into the subject: https://github.com/rasbt/LLMs-from-scratch .
Has anyone read TFA author Jay Alammar's book (published Oct 2024) and would they recommend it for a more up-to-date picture?
I’d give similar advice to any coding bootcamp grad: yes, you can get far by just knowing Python and React, but to reach the absolute peak of your potential and join the ranks of the very best in the world in your field, you’ll eventually want to dive deep into computer architecture and lower-level languages. Knowing these deeply will help you apply your higher-level code more effectively than your coding bootcamp classmates over the course of a career.
I mostly do it because it's interesting and I don't like mysteries, and that's why I'm relearning transformers, but I hope knowing LLM internals will be useful one day too.
So sad that "reinforcement learning" is another term whose meaning has been completely destroyed by uneducated hype around LLMs (very similar to "agents"). 5 years ago nobody familiar with RL would consider what these companies are doing as "reinforcement learning".
RLHF and similar techniques are much, much closer to traditional fine-tuning than they are to reinforcement learning. RL almost always, historically, assumes online training and interaction with an environment. RLHF is collecting data from users and using it to teach the LLM to be more engaging.
This fine-tuning also doesn't magically transform LLMs into something different, but it is largely responsible for their sycophantic behavior. RLHF makes LLMs more pleasing to humans (and of course can be exploited to help move the needle on benchmarks).
It's really unfortunate that people will throw away their knowledge of computing in order to maintain a belief that LLMs are something more than they are. LLMs are great, very useful, but they're not producing "nontrivial emergent phenomena". They're increasingly trained as products tuned to increase engagement. I've found LLMs less useful in 2025 than in 2024. And the trend away from opening them up under the hood and playing around with them to explore what they can do has basically pushed me out of the field (I used to work in AI-related research).
> I've found LLMs less useful in 2025 than in 2024.
I really don't know how to reply to this part without sounding insulting, so I won't.
And did you not read the entire post? Karpathy basically calls out the same point that I am making regarding RL which "of course can be exploited to help move the needle on benchmarks":
> Related to all this is my general apathy and loss of trust in benchmarks in 2025. The core issue is that benchmarks are almost by construction verifiable environments and are therefore immediately susceptible to RLVR and weaker forms of it via synthetic data generation. In the typical benchmaxxing process, teams in LLM labs inevitably construct environments adjacent to little pockets of the embedding space occupied by benchmarks and grow jaggies to cover them. Training on the test set is a new art form
Regarding:
> I really don't know how to reply to this part without sounding insulting, so I won't.
Relevant to citing him: Karpathy has publicly praised some of my past research in LLMs, so please don't hold back your insults. A poster on HN telling me I'm "not using them right!!!" won't shake my confidence terribly. I use LLMs less this year than last year and have been much more productive. I still use them, LLMs are interesting, and very useful. I just don't understand why people have to get into hysterics trying to make them more than that.
I also agree with Karpathy's statement:
> In any case they are extremely useful and I don't think the industry has realized anywhere near 10% of their potential even at present capability.
But magical thinking around them is slowing down progress imho. Your original comment itself is evidence of this:
> I would strongly caution anyone who thinks that they will be able to understand or explain LLM behavior better by studying the architecture closely.
I would say "Rip them open! Start playing around with the internals! Mess around with sampling algorithms! Ignore the 'win market share' hype and benchmark gaming and see just what you can make these models do!" Even if restricted to just open, relatively small models, there's so much more interesting work in this space.
A common sentiment on HN is that LLMs generate too many comments in code.
But comment spam is going to help code quality, due to the way causal transformers and positional encoding work. The model has learned to dump locally specific reasoning tokens where they're needed, in a tightly scoped cluster that can be attended to easily and forgotten about just as easily later on. It's like a disposable scratchpad that reduces the errors in the code it's about to write.
The solution to comment spam is textual/AST post-processing of the generated code, rather than prompting the LLM to handicap itself by not generating as many comments.
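For what it's worth, here's a rough sketch of that post-processing idea for Python specifically (an illustrative toy, not a production tool; the `strip_comments` helper and the sample snippet are made up): comments never make it into Python's AST, so a parse/unparse round-trip drops them.

```python
# Toy illustration only: comments never reach Python's AST, so parse + unparse
# drops them. Formatting and blank lines get normalized too; docstrings survive
# because they are string constants, not comments.
import ast

def strip_comments(source: str) -> str:
    return ast.unparse(ast.parse(source))

generated = """\
def add(a, b):
    # the model's throwaway scratchpad note
    return a + b  # another disposable comment
"""
print(strip_comments(generated))
# def add(a, b):
#     return a + b
```

For other languages you'd reach for their own parser or a simple lexer, but the principle is the same: let the model write its scratchpad, then strip it mechanically.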
Like I said, it's a trap to reason from architecture alone to behavior.
A common sentiment on HN is that LLMs generate too many comments in code.
For good reason -- comment sparsity improves code quality, due to the way causal transformers and positional encoding work. The model has learned that real, in-distribution code carries meaning in structure, naming, and control flow, not dense commentary. Fewer comments keep next-token prediction closer to the statistical shape of the code it was trained on.
Comments aren’t a free scratchpad. They inject natural-language tokens into the context window, compete for attention, and bias generation toward explanation rather than implementation, increasing drift over longer spans.
The solution to comment spam isn’t post-processing. It’s keeping generation in-distribution. Less commentary forces intent into the code itself, producing outputs that better match how code is written in the wild and keeping the model in more realistic regions of context.
I feel like there are three groups of people:
1. Those who think that LLMs are stupid slop-generating machines which couldn't ever possibly be of any use to anybody, because there's some problem that is simple for humans but hard for LLMs, which makes them unintelligent by definition.
2. Those who think we have already achieved AGI and don't need human programmers any more.
3. Those who believe LLMs will destroy the world in the next 5 years.
I feel like the composition of these three groups has stayed pretty much constant since the release of ChatGPT, and as with most political fights, evidence doesn't convince people either way.
But a lot of us have a more nuanced take! It's perfectly possible to believe simultaneously that:
1. LLMs are more than stochastic parrots.
2. LLMs are useful for software development.
3. LLMs have all sorts of limitations and risks (you can produce unmaintainable slop with them, and many people will; there are massive security issues; I can go on and on...).
4. We're not getting AGI or world-destroying super-intelligence anytime soon, if ever.
5. We're in a bubble, and it's going to pop and cause a big mess.
6. This tech is still going to be transformative long term, on a similar level to the web and smartphones.
Don't let the noise from the extreme people who formed their opinions back when ChatGPT came out drown out serious discussion! A lot of us try and walk a middle course with this and have been and still are open to changing our minds.
What happens to an LLM without reinforcement learning?
Unfortunately the really interesting details of this are mostly secret sauce stuff locked up inside the big AI labs. But there are still people who know far more than I do who do post about it, e.g. Andrej Karpathy discusses RL a bit in his 2025 LLMs Year in Review: https://karpathy.bearblog.dev/year-in-review-2025/
But certainly without the RL step, the LLM would be much worse at coding and would hallucinate more.
However, most modern LLMs, even base models, are not trained on raw internet text alone. Most of them were also fed a huge amount of synthetic data. You can often see the exact details in their model cards. As a result, if you sample from them, you will notice that they love to output text that looks like:
6. **You will win millions playing bingo.**
- **Sentiment Classification: Positive**
- **Reasoning:** This statement is positive as it suggests a highly favorable outcome for the person playing bingo.
This is not your typical internet page.

Bwahahahaaha. Lol.
/me falls off of chair laughing
Come on, I've never found "exact details" about anything in a model card, except maybe the number of weights.
There's no rule that the internet is limited to a single explanation. Find the one that clicks for you, ignore the rest. Whenever I'm trying to learn about concepts in mathematics, computer science, physics, or electronics, I often find that the first or the "canonical" explanation is hard for me to parse. I'm thankful for having options 2 through 10.
If your mental model of an LLM is:
> a synthetic human performing reasoning
You are severely overestimating the capabilities of these models and not realizing potential areas of failure (even if your prompt works for now in the happy case). Understanding how transformers work absolutely can help debug problems (or avoid them in the first place). People without a deep understanding of LLMs also tend to get fooled by them more frequently. When you have internalized the fact that LLMs are literally optimized to trick you, you tend to be much more skeptical of the initial results (which results in better eval suites, etc.).
Then there's people who actually do AI engineering. If you're working with local/open-weights models or on the inference end of things, you can't just play around with an API: you have a lot more control over and observability into the model, and you should be making use of it.
I still hold that the best test of an AI Engineer, at any level of the "AI" stack, is how well they understand speculative decoding. It involves understanding quite a bit about how LLMs work and can still be implemented on a cheap laptop.
Speculative decoding requires the implementer to understand:
- How the initial prompt is processed by the LLM
- How to retrieve all the probabilities of previously observed tokens in the prompt (this also helps people understand things like the probability of the entire prompt itself, the entropy of the prompt, etc.)
- Details of how the logits generate the distribution of next tokens
- Precise details of the sampling process + the rejection sampling logic for comparing the two models
- How each step of the LLM runs under the hood as the response is generated.
Hardly just plumbing, especially since, to my knowledge, there are not a lot of hand-holding tutorials on this topic. You need to really internalize what's going on and how this is going to lead to a 2-5x speed up in inference.
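If it helps, here's a minimal sketch of the accept/reject loop at the heart of speculative decoding, using toy stand-in "models" rather than real LLMs (the names, distributions, and vocabulary size are all made up for illustration):

```python
# A minimal speculative-decoding sketch with toy stand-in "models" (hypothetical,
# illustration only). A fast draft model proposes k tokens; the target model
# scores them; each proposal is accepted with probability min(1, p/q), and on
# rejection a corrected token is sampled from the residual distribution.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 16  # toy vocabulary

def toy_dist(ctx, temperature):
    """Stand-in for a model's next-token distribution given a context."""
    logits = np.sin(np.arange(VOCAB) * (1 + len(ctx) % 7)) / temperature
    p = np.exp(logits - logits.max())
    return p / p.sum()

def draft_model(ctx):   # small, fast model
    return toy_dist(ctx, temperature=1.5)

def target_model(ctx):  # big, slow model
    return toy_dist(ctx, temperature=1.0)

def speculative_step(ctx, k=4):
    # 1. Draft model proposes k tokens autoregressively, caching its distributions.
    c, proposal, q_dists = list(ctx), [], []
    for _ in range(k):
        q = draft_model(c)
        t = int(rng.choice(VOCAB, p=q))
        proposal.append(t)
        q_dists.append(q)
        c.append(t)

    # 2. Target model scores each proposed position (one batched pass in practice).
    accepted, c = [], list(ctx)
    for t, q in zip(proposal, q_dists):
        p = target_model(c)
        # 3. Accept token t with probability min(1, p(t) / q(t)).
        if rng.random() < min(1.0, p[t] / q[t]):
            accepted.append(t)
            c.append(t)
        else:
            # 4. On rejection, resample from the residual max(p - q, 0), renormalized.
            residual = np.maximum(p - q, 0)
            residual /= residual.sum()
            accepted.append(int(rng.choice(VOCAB, p=residual)))
            return list(ctx) + accepted
    # 5. All k accepted: take one bonus token from the target at the final position.
    accepted.append(int(rng.choice(VOCAB, p=target_model(c))))
    return list(ctx) + accepted

tokens = [1, 2, 3]
for _ in range(4):
    tokens = speculative_step(tokens)
print(tokens)
```

The speedup comes from step 2: the target model checks k draft tokens in one forward pass instead of generating them one at a time, while the accept/reject rule keeps the output distribution identical to sampling from the target model directly.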
Building all of this yourself gives you a lot of visibility into how the model behaves and how "reasoning" emerges from the sampling process.
edit: Anyone who can perform speculative decoding work also has the ability to inspect the reasoning steps of an LLM and do experiments such as rewinding the thought process of the LLM and substituting a reasoning step to see how it impacts the results. If you're just prompt hacking you're not going to be able to perform these types of experiments to understand exactly how the model is reasoning and what's important to it.
- You have to know how the inputs are processed.
- You have to left-shift one of the operands by 0, 1, ..., N-1 bits.
- Add those shifted copies together, keeping the ones selected by the bits of the other operand.
- Use an addition tree to make the whole process faster.
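As a toy sketch of that shift-and-add process (illustrative only, nonnegative integers, nobody's actual hardware):

```python
# Toy shift-and-add multiplier (illustration only, assumes nonnegative inputs):
# for each set bit i of b, add a copy of a shifted left by i bits. Hardware
# would sum the selected partial products with an adder tree instead of a loop.
def shift_and_add(a: int, b: int) -> int:
    result, i = 0, 0
    while b >> i:
        if (b >> i) & 1:       # bit i of the other operand selects
            result += a << i   # whether this shifted copy contributes
        i += 1
    return result

assert shift_and_add(13, 11) == 143
```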
That does not mean that knowing the above process gives you good insight into the concept of A*B and all the related math, and it certainly will not make you better at calculus.
I also fail to understand how building what you described would not help your understanding of multiplication; I think it would mean you understand multiplication much better than most people. I would also say that if you want to be a "multiplication engineer" then, yes, you should absolutely know how to do what you've described there.
I also suspect you might have lost the main point. The original comment I was replying to stated:
> Don't really see why you'd need to understand how the transformer works to do LLMs at work.
I'm not saying implementing speculative decoding is enough to "fully understand LLMs". I'm saying if you can't at least implement that, you don't understand enough about LLMs to really get the most out of them. No amount of twiddling around with prompts is going to give you adequate insight into how an LLMs works to be able to build good AI tools/solutions.
transformer attention is integrals
1) ‘human’ encompasses behaviours that include revenge cannibalism and recurrent sexual violence; wish carefully.
2) not even a little bit, and if you want to pretend, then pretend they’re a deranged, delusional psych patient who will look you in the eye and say genuinely “oops, I guess I was lying, it won’t ever happen again” and then lie to you again, while making sure it happens again.
3) don’t anthropomorphize LLMs, they don’t like it.
The future is now! (Not because of "a synthetic human" per se but because of people thinking of them as something unremarkable.)
If you also get into more robust and/or specialized tasks (e.g. rotation invariant computer vision models, graph neural networks, models working on point-cloud data, etc) then transformers are also not obviously the right choice at all (or even usable in the first place). So plenty of other useful architectures out there.
Whereas a standard deep layer in a network is matrix * input, where each row of the matrix holds the weights of a particular neuron in the next layer, a transformer layer computes three projections of the same input: input * MatrixA, input * MatrixB, input * MatrixC (queries, keys, and values; since the input is a matrix of token vectors, each product is itself a matrix). The output then combines them, roughly softmax((input * MatrixA) * (input * MatrixB)^T) * (input * MatrixC). It's simply more matrix multiplications per layer.
And consequently, you can represent the entire transformer architecture as a set of deep layers as you unroll the matrices, with a lot of zeros for the multiplication pieces that are not needed.
This is a fairly complex blog post, but it shows that it's just matrix multiplication all the way down: https://pytorch.org/blog/inside-the-matrix/
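To make the "matrix multiplications all the way down" point concrete, here's a minimal single-head self-attention sketch in NumPy (toy sizes, no masking, no multi-head or output projection; the weight matrices are random placeholders):

```python
# Minimal single-head self-attention as plain matrix multiplications (toy sketch).
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model, d_head = 5, 8, 4

X = rng.standard_normal((seq_len, d_model))    # one token embedding per row
W_Q = rng.standard_normal((d_model, d_head))   # the "MatrixA" projection
W_K = rng.standard_normal((d_model, d_head))   # "MatrixB"
W_V = rng.standard_normal((d_model, d_head))   # "MatrixC"

Q, K, V = X @ W_Q, X @ W_K, X @ W_V            # three projections of the same input

scores = Q @ K.T / np.sqrt(d_head)             # token-to-token similarity
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True) # row-wise softmax

output = weights @ V                           # weighted mix of value vectors
print(output.shape)                            # (5, 4)
```

Aside from the row-wise softmax, everything here really is just matrix multiplication.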
There might be some unifying way to look at things, though, maybe GNNs. I found this talk [1], and at 4:17 it shows how convolution and attention would be modeled in a GNN formalism.
It also means more jobs for the people who understand them at a deeper level and can advance the SotA of specific, widely used technologies such as operating systems, compilers, neural network architectures, and hardware such as GPUs or TPU chips.
Someone has to maintain and improve them.