Posted by daigoba66 3 days ago
I think there is a valid insight here which many already know: LLMs are much more reliable at creating scripts and automation to do certain tasks than doing these tasks themselves.
For example if I provide an LLM my database schema and tell it to scan for redundant indexes and point out wrong naming conventions, it might do a passable but incomplete job.
But if I tell the LLM to write a Python or Node.js script to do the same, I get significantly better results. And it's often faster to generate and run the script than to have the LLM process large SQL files directly.
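As a concrete illustration of the kind of script I mean, here is a minimal sketch for the redundant-index case. The input format and names are hypothetical (a real script would pull index definitions from information_schema or pg_indexes); the rule it encodes is the common one that an index is redundant when its column list is a prefix of another index's column list on the same table:

```python
# Flag "redundant" indexes: an index is considered redundant if its
# column list is a prefix of another index's column list on the same
# table. Input format is illustrative, not from any real tool.

def find_redundant_indexes(indexes):
    """indexes: list of (index_name, table, tuple_of_columns)."""
    redundant = []
    for name, table, cols in indexes:
        for other_name, other_table, other_cols in indexes:
            if (name != other_name and table == other_table
                    and len(cols) < len(other_cols)
                    and other_cols[:len(cols)] == cols):
                redundant.append((name, other_name))
    return redundant

indexes = [
    ("ix_users_email",      "users",  ("email",)),
    ("ix_users_email_name", "users",  ("email", "name")),
    ("ix_orders_user",      "orders", ("user_id",)),
]
# ix_users_email is covered by ix_users_email_name
print(find_redundant_indexes(indexes))
```

The point is that the LLM only has to get this logic right once; the script then applies it uniformly to a schema of any size, with no attention budget involved.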
There's a massive issue with extrapolating to more complex tasks, however: either you run the risk of prompt injection by granting your agent internet access, or, more commonly, you hit an exponential degradation in coherence over long contexts.
This is important information for anyone who thinks these systems are thinking, reasoning, and learning, or that they're having a conversation with them, i.e. 90% of LLM users.
Why do you think the results of this paper contradict these claims at all?
(Confabulation is IMO a much bigger problem, but it's unrelated to architecture - it's an artifact of how models are currently trained.)
and the C-suite
Reminder that "thinking" is an ill-defined term like others, and the question whether they "think" is basically irrelevant. No intelligent system, human or machine, will ever have zero error rate, due to the very nature of intelligence (another vague term). You have to deal with that the same way you deal with it in humans - either treat bugs as bugs and build systems resilient to bugs, or accept the baseline error rate if it's low enough.
This is well known and not that interesting to me - ask the model to use python to solve any of these questions and it will get it right every time.
An LLM is a router and completely stateless aside from the context you feed into it. Attention is just routing the probability distribution of the next token, and I'm not sure that's going to accumulate much in a single pass.
Not the latest SSM and hybrid attention ones.
A more accurate analogy for humans would be to imagine that every word has a colour. You are told that there is also a sequence of different colours that corresponds to the same colour as that word. You are even given a book showing every combination to memorise.
You learn the colours well enough that you can read and write coherently using them.
Then comes the question of how many chocolate-browns are in teal-with-a-hint-of-red. You know that teal-with-a-hint-of-red is a fruit, and you know that the colour can also be constructed as crimson followed by Disney-blond. Now, do both of those contain chocolate-brown, or just one of them? How many?
It requires exercising memory to do a task that is underrepresented in the training data, because humans simply never have to do it: the answer can be read straight off the question's representation. Humans don't have the ability that LLMs need here, but with a letter-based representation they don't need it.
The issue is that people claim the performance is representative of a human's performance in the same situation. That gives an incorrect overall estimation of ability.
> For the multiplication task, note that agents that make external calls to a calculator tool may have ZEH = ∞. While ZEH = ∞ does have meaning, in this paper we primarily evaluate the LLM itself without external tool calls
The models can count to infinity if you give them access to tools. The production models do this.
Not that the paper is wrong, it is still interesting to measure the core neural network of a model. But modern models use tools.
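The reason a calculator tool pushes the error horizon to infinity is that the tool's arithmetic is exact at any length. A small illustration (mine, not from the paper): Python integers are arbitrary precision, so a tool call that just evaluates the expression cannot drift the way token-by-token digit generation can:

```python
# Python ints are arbitrary precision, so evaluating the expression
# directly is exact regardless of operand size.
a = 123456789
b = 987654321
print(a * b)              # exact product, no rounding at any scale
print(len(str(2**1000)))  # 302 digits, still exact
```

This is why measuring the bare network (as the paper does) and measuring the deployed system (network + tools) answer two different questions.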
Humans can fly, they just need wings!
More generally, the ability to use tools is a form of intelligence, just like when humans and crows do it. Being able to craft the right Python script and use the result is non-trivial.
There’s plenty of math that I couldn’t even begin to solve without a calculator or other tool. Doesn’t mean I’m not solving math problems.
In woodworking, the advice is to let the tool do the work. Does someone using a power saw have less claim to having built something than a handsaw user? Does a CNC user not count as a woodworker because the machine is doing the part that would be hard or impossible for a human?
What the user sees is the total behavior of the entire system, not whether the system has internal divisions and separations.
At least in theory.
It would be interesting to actively track how far each successive model gets...
> Yes — ((((()))))) is balanced.
> It has 6 opening ( and 6 closing ), and they’re properly nested.
Though it did work when using "Extensive Thinking". The model wrote a Python program to solve this.
> Almost balanced — ((((()))))) has 5 opening parentheses and 6 closing parentheses, so it has one extra ).
> A balanced version would be: ((((()))))
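The Python program a model writes for this is typically a one-pass depth counter. A minimal sketch (my own, not the model's actual output):

```python
def is_balanced(s):
    """Check whether the parentheses in s are balanced."""
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:   # a closing paren before its opener
                return False
    return depth == 0       # balanced iff nothing left open

print(is_balanced("((((()))))"))   # True: 5 opens, 5 closes
print(is_balanced("((((())))))"))  # False: 5 opens, 6 closes
```

The counter makes the answer mechanical, which is exactly why offloading to a script beats asking the model to eyeball the string.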
Testing a couple of different models without a harness, so that no tool calls are possible, would be interesting.
The one thing I did trip it up on was "Is there the sh sound in the word transportation?" It said no, and then realized I had asked about the sound, not the letters. It subsequently got the rest of the "sounds-like" tests right.
Clearly, my ChatGPT is just better than yours.
When LLMs can't count r's: see? LLMs can't think. Hoax!
When LLMs count r's: see? They patched and benchmark-maxxed. Hoax!
You just can't reason with the anti-LLM group.
Followed by lots of "works perfectly for me, why are people even talking about this?"
I can't say what exactly they're doing behind the scenes but it's a consistent pattern among the big SOTA model providers. With obvious incentive to "fix" the problem so users will then organically "debunk" the meme as they try it themselves and share their experiences.
>You just can't reason with the anti-LLM group.
On the contrary, the reasoning is simple and consistent:
That LLMs can't count r's shows that LLMs don't actually think the way we understand thought (since nobody with their level of skill in other areas would fail at that). And because of that, there are (likely) patches for commonly reported cases, since it's a race to IPO and benchmark-maxxing is very much conceivable.
Is tokenization extremely efficient? Yes. Does it fundamentally break character-level understanding? Also yes. The only fix is endless memorization.
So yes.
And the valuations. Trillion dollar grifter industry.
Try it for yourself: under the most popular tokenizer vocabulary (https://tiktokenizer.vercel.app/?model=cl100k_base), "strawberry" becomes [str][aw][berry]. Or, from the model's perspective, [496, 675, 15717]. The model doesn't know any more than you do how those numbers correspond to letters! It never gets sat down and told "[15717] <=> [b][e][r][r][y]", with single-byte tokens on the right. (In fact, those single-byte tokens appear extremely rarely in the training data, so the model rarely learns to do anything with them.)
Note that LLMs can predictably count the number of r's in "s t r a w b e r r y", because <Count the number of r's in "s t r a w b e r r y"> becomes [Count][ the][ number][ of][ r]['s][ in][ "][s][ t][ r][ a][ w][ b][ e][ r][ r][ y]["]. And that's just a matching problem — [ r] tokens for [ r] tokens, no token-correspondence-mapping needed.
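For comparison, once you have character-level access the task is trivial in either form; a quick sketch of both counts (mine, just to make the matching-problem point concrete):

```python
word = "strawberry"
# With direct character access, the count is a one-liner...
print(word.count("r"))            # 3

# ...and the spaced-out form described above is the same count,
# now over single-character "tokens".
spaced = " ".join(word)           # "s t r a w b e r r y"
print(spaced.split().count("r"))  # 3
```

The difficulty lives entirely in the tokenized representation, not in the task itself.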
This is clearly not the case: any modern (non-reasoning) model easily decomposes words into individual characters (try separating them with e.g. Braille spaces...) and handles arbitrary tokenization variants if forced with a sampler. The problem goes deeper than tokenization: models struggle exactly with counting items in a list, exact ordering, retrieving scattered data, etc. LLM context works much more like associative memory than like a sequence that can be iterated over. There are also fundamental biases and model-specific quirks that contribute to this.
No it doesn't. It makes sense that they can't count the r's, because they don't have access to the actual word, only tokens that might represent parts or the whole of it.
> are the following parenthesis balanced? ((())))
> No, the parentheses are not balanced.
> Here is the breakdown:
> Opening parentheses (: 3
> Closing parentheses ): 4
... following up with:

> what about these? ((((())))
> Yes, the parentheses are balanced.
> Here is the breakdown:
> Opening parentheses (: 5
> Closing parentheses ): 5
... and uses ~5,000 tokens to get the wrong answer.

You wouldn't say that a human who doesn't know how to read is unreliable at everything, just at reading.
Counting is something that even humans need to learn how to do. Toddlers also don't understand quantity. If a 2 year old is able to count to even 10 it's through memorization and not understanding. It takes them like 2 more years of learning before they're able to comprehend things like numerical correspondence. But they do still know how to do other things that aren't counting before then.
No human who can program, solve advanced math problems, or talk about advanced problem domains at an expert level would, however, fail to count to 5.
This is not a mere "LLMs, like humans, also need to be taught this" but points to a fundamental mismatch about how humans and LLMs learn.
(And even if they merely needed to be taught, why would their huge corpus fail to cover that "teaching", but cover way more advanced topics in math solving and other domains?)
What this points at is the abstraction/emergence crux of it all. Why does an otherwise very capable LLM like the GPT-5 series, despite having been trained on vastly more examples of frontend code of all shapes, sizes and quality levels, struggle to abstract that training data to the point of producing any frontend that deviates from the clearly used examples?
If LLMs, as they are now, were comparable with human learning, there'd be no scenario where a model that can provide output solving highly advanced equations can not count properly.
Similarly, a model such as GPT-5 trained on nearly all frontend code ever committed to any repo online, would have internalised more than that one template OpenAI predominantly leaned on.
These models, I think at this point there is little doubt, are impressive tools, but they still do not generalise or abstract information in the way a human mind does. Doesn't make them less impactful for industries, etc. but it makes any comparison to humans not very suitable.
This paper has nothing to do with any questions starting with "why". It provides a metric for quantifying error on specific tasks.
> If LLMs, as they are now, were comparable with human learning
I think I missed the part where they need to be.
> struggle to abstract all that training data to the point where outputting any frontend that deviates from the clearly used examples? ... a model such as GPT-5 trained on nearly all frontend code ever committed to any repo online, would have internalised more than that one template OpenAI predominantly leaned on
There is a very big and very important difference between producing the same thing again and not being able to produce something else. When not given any reason to produce something else, humans also generate the same thing over and over. That's a problem of missing constraints, not of missing ability.
Long before AI there was this thing called Twitter Bootstrap. It dominated the web for...much longer than it should have. And that tragedy was done entirely by us meatsacks (not me personally). Where there's no goal for different output there's no reason to produce different output, and LLMs don't have their own goals because they don't have any mechanisms for desire (we hope).
[I've edited this comment for content and format]
Ok, that's better than comparing LLMs to humans. ZSL, however, did not prove anything of that sort false years ago; it was mainly concerned with assessing whether LLMs rely solely on precise instruction training or can generalise, to a very limited degree, beyond the initial tuning. That never allowed for comparing human learning to LLM training.
Ironically, you are writing this under a paper that shows just that:
A model that cannot determine a short string's parity cannot have abstracted from the training data to arrive at the far more impressive and complicated maths challenges it successfully solves in output. Some of the solutions we have seen require such innate understanding that, if there is no generalisation far deeper than ZSL has ever shown, then they must come from training. Simple multiplication, etc., maybe; not the tasks people such as Easy Riders [0] throw at these models.
This paper shows exactly that even with ZSL, these models do only abstract in an incredibly limited manner and a lot of capabilities we see in the output are specifically trained, not generalised. Yes, generalisation in a limited capacity can happen, but no, it is not nearly close enough to yield some of the results we are seeing. I have also, neither here, nor in my initial comment, said that LLMs are only capable of outputting what their training data provides, merely that given what GPT-5 has been trained with, if there was any deeper abstraction these models gained during training, it'd be able to provide more than one frontend style.
Or to put it more simply: if the output can be useful for maths at the Bachelor level and beyond, and this capability is generalised as you believe, then these tasks would not be a struggle for the model.
> When not given any reason to produce something else, humans also generate the same thing over and over. That's a problem of missing constraints, not of missing ability.
Ignoring the comparison with humans, yes, LLMs don't output something unless prompted specifically, of course. My point with GPT-5 was that, no matter how you prompt, you cannot get salvageable frontend code from this line of models.
OpenAI themselves tried and failed appallingly [0]. Call it "constraints", call it "reason", call it "prompting": you cannot get frontend code that deviates significantly from their card-laden training data. Despite GPT-5 having been trained on more high-quality frontend code examples than any human could read in a lifetime, that one template is overrepresented, because the model never generalised anything akin to an understanding of UI principles or of what code yields a specific design.
These are solvable problems, mind you, but not because a model at some stage gains anything that one could call an abstract understanding of these concepts. Instead, by providing better training data or being clever in how you provide existing training data.
Gemini 3 and Claude 4 class models have a more varied training set, specifically of frontend templates, yielding better results. But if you do any extended testing you will see these repeat constantly, because, again, these models never abstract from that template collection [1].
Moonshot, meanwhile, made a major leap with K2.5 by tying their frontend code tightly to visual input, leveraging the added vision encoder [2]. They are likely not the only ones doing that, but they are the first to state it clearly in their system cards. Even there, the gains are limited to a selection of very specific templates.
In either case, more specific data, not abstractions by these models yield improvements.
> Twitter Bootstrap [...] entirely by us meatsacks (not me personally). Where there's no goal for different output there's no reason to produce different output, and LLMs don't have their own goals because they don't have any mechanisms for desire (we hope).
What? So because some devs relied on Bootstrap that means, what exactly? That no one asked/told them to leverage a different solution, be more creative, what?
Again, ignoring the comparison to humans, which just isn't appropriate for this tech: we can and do prompt models for specific frontend output. We are, if you must, providing the goal. The model, however, cannot accomplish said goal; even OpenAI cannot get GPT-5's lineage to deviate from their one template.
If we must stick with the human comparison, and further limit it to Bootstrap: GPT-5, despite being specifically prompted never to use the Bootstrap Carousel, cannot output any website without including one, because the template it was trained on included one. Any human developer asked to do so would simply not include a Carousel, because their abilities are abstracted beyond the one Bootstrap template they first learned with. To make the comparison truly fair, it would have to be a human trained on thousands of Bootstrap example pages, but on just one template really well, who never connected anything between that one and the others. Which isn't very human, but then again, that's why this comparison isn't a solid one.
[0] Subjectively not one good result; objectively, even their team of experts could not get their own model to avoid the telltale signs of GPT frontend slop that originated from a template they have been training with since Horizon: https://developers.openai.com/blog/designing-delightful-fron...
Many animals can count. Counting is recognizing that the box with 3 apples is preferable to the one with 2 apples.
Yes, 2 year olds might struggle with the externalization of numeric identities but if you have 1 M&M in one hand and 5 in the other and ask which they want, they’ll take the 5.
LLMs have the language part down, but fundamentally can’t count.
However many animals can distinguish independently small numbers, like 3 or 5, and recognize them whenever they see them.
So in this respect, there is little difference between humans and many animals. Humans learn to count to arbitrarily big numbers, but they can still easily recognize only small numbers.
This is called subitizing. It's distinct from counting. We can see the difference in humans with Simultanagnosia, who are unable to count beyond the subitizing range. Subitizing is categorizing the scale of a small gestalt group.
The only thing I've ever seen where an animal appeared to demonstrate counting (up to 3) without training was in rhesus monkeys (maybe also chimpanzees?), but even that experiment could be explained through temporal gestalt. (It's the only reason I know of for them to not have been able to go higher than 3 in that experiment in the context of many other things that they can do.)
The overeager do quite often confuse subitizing and size discrimination for counting, though. That's its own problem.
I completely agree with you. LLMs are regurgitation machines with less intellect than a toddler, you nailed it.
AI is here!
“Model can count to 5”… tick.
“Model can count to 10”… sorry you gotta wait til 2028.