When autocorrect is wrong, it is usually because it picks the words it believes are used more frequently in that context. Authors of scientific or technical texts are hit especially hard by its wrong guesses, because they use less common words.
"Right" and "wrong" aren't binary states. In many cases, if the data is at least in small part correct, that small part can be used to improve correctness in an automated way.
If you think about it through the lens of VC dimension, where learnability and set shattering come down to a choice function, it can help.
Most of us have serious cognitive dissonance with dropping the principle of the excluded middle, since Aristotle's and Plato's assumptions are baked into our minds.
You can look at why ZFC asserts that some sets are non-constructible, or at how type theory or category theory differ from classical logic.
But the difference between RE and coRE using left and right in place of true and false seems to work for many.
While we can build on that choice function, significantly improving our ability to approximate and our numerical stability, the limits of that original trinity of the laws of thought still lie underneath.
The intersection of RE and coRE is the class of recursive sets, and that is where "p or not p" and "not not p implies p" hold.
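For reference, the standard characterisations behind that claim, written out (nothing here is specific to LLMs):

    x \in S \iff \exists n.\, R(x, n) \qquad \text{(RE, i.e. } \Sigma^0_1 \text{, for some decidable } R\text{)}
    x \in S \iff \forall n.\, R(x, n) \qquad \text{(coRE, i.e. } \Pi^0_1\text{)}
    \text{Recursive} = \mathrm{RE} \cap \mathrm{coRE}

The universal quantifier in the coRE form is the "for any" on the right-hand side mentioned further down.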
There is a reason constructivist logic, lambda calculus, and category theory are effectively the same thing.
But for most people it is a challenging path to figure out why.
As single layer perceptrons depend on linearly separable sets, and multilayer perceptrons are not convex, I personally think the constructivist path is the best way to understand the intrinsic limits despite the very real challenges with moving to a mindset that doesn't assume PEM and AC.
There are actually stronger forms of choice in that path, but they simply cannot be assumed.
There are more trivial examples, even with perfect training data.
An LLM will never be able to tell you unknowable unknowns like "will it rain tomorrow", or answer underspecified questions like "should I drive on the left side of the road".
But it also won't be able to reliably shatter sets for problems that aren't in R with next token prediction, especially with problems that aren't in RE, as even coRE requires 'for any' universal quantification on the right side.
An LLM will never be total, so the above question applies but isn't sufficient to capture the problem.
While we can arbitrarily assign tokens to natural numbers, that is not unique and is a forgetful functor, which is why compression is considered equivalent to the set shattering I used above for learnability.
The above question's framing, with just addition and an assumption of finite precision, is why there is a disconnect for some people.
Like, the "machine" is a calculator, and I want to ask 5+5, but I put in the "wrong figures", e.g. 4+4. Is the "right answer" 8 or 10? Is the right answer the answer you want to the question you wanted to ask, or the answer to the question you actually asked?
Imagine you ask your friend “hey, what’s twenty divided by five?”, and they say “four” and then you realise you misspoke and meant to say “what’s twenty divided by four?” Is your friend wrong?
Of course not, in both cases.
People think they understand what "AI" is supposed to do, then "AI" turns out to not do what they expect and they call it broken.
The optillm authors suggest that the additional computation in Entropix doesn't bring any better results compared with simple CoT decoding (though I am not sure if they also check efficiency): https://x.com/asankhaya/status/1846736390152949966
It looks to me that many problems with LLMs come from something like semantic leakage, or distraction by irrelevant information (like in the GSM-Symbolic paper); maybe there is also some room for improving attention.
I wrote a couple of blog posts on these subjects: https://zzbbyy.substack.com/p/semantic-leakage-quick-notes, https://zzbbyy.substack.com/p/llms-and-reasoning, https://zzbbyy.substack.com/p/o1-inference-time-turing-machi...
I'd like to see this applied to coding or math: see whether the samplers work better on, say, olympiad math problems, with thorough benchmarks before and after.
It’s the same measure we judge human writers on so it’s not necessarily the worst.
Unless I'm reading Table 2 (page 7 of the PDF version) wrong, on math, min_p is shown to score worse than top_p.
At temp 0.7 it scores 1 point lower than top_p. And from temp 1.0 and up, while it scores higher than top_p at the same temp, it scores much lower (6 points and up) than top_p at 0.7. So overall, if you want accurate answers (and for math you kind of do), min_p is worse? Unless I misunderstand something.
I agree with the authors that if you want a tradeoff between accuracy and diversity, min_p might help, but if you're looking for precise answers, the results will be slightly worse. It's a tradeoff, but as I said above, people often fail to mention it as such, and instead proclaim it to be "better" across the board.
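For anyone who hasn't looked at the sampler itself, here is a rough numpy sketch of the difference being discussed (my own toy version, not the paper's code; the thresholds and the placement of temperature are illustrative):

    import numpy as np

    def top_p_filter(probs, p=0.9):
        # Keep the smallest set of tokens whose cumulative probability reaches p.
        order = np.argsort(probs)[::-1]
        cum = np.cumsum(probs[order])
        cutoff = np.searchsorted(cum, p) + 1
        keep = np.zeros(len(probs), dtype=bool)
        keep[order[:cutoff]] = True
        return keep

    def min_p_filter(probs, min_p=0.05):
        # Keep tokens whose probability is at least min_p times the top token's,
        # so the cutoff scales with how peaked the distribution is.
        return probs >= min_p * probs.max()

    def sample(logits, temperature=1.0, filter_fn=min_p_filter):
        probs = np.exp((logits - logits.max()) / temperature)  # stable softmax
        probs /= probs.sum()
        probs = np.where(filter_fn(probs), probs, 0.0)
        probs /= probs.sum()
        return np.random.choice(len(probs), p=probs)

The point is that min_p's cutoff adapts to how confident the model is at each step, which is presumably where the accuracy/diversity trade-off above comes from.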
Or maybe it's a more fundamental weakness of the attention mechanism? (There are alternatives to that now.)
This recent work is highly relevant: https://learnandburn.ai/p/how-to-tell-if-an-llm-is-just-gues...
It uses an idea called semantic entropy which is more sophisticated than the standard entropy of the token logits, and is more appropriate as a statistical quantification of when an LLM is guessing or has high certainty. The original paper is in Nature, by authors from Oxford.
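As I understand it, the core computation is roughly this (a toy sketch, not the paper's code; `are_equivalent` stands in for the bidirectional-entailment model the authors use):

    import math

    def semantic_entropy(samples, are_equivalent):
        # samples: list of (answer_text, sequence_probability) from repeated sampling.
        # are_equivalent: judges whether two answers mean the same thing; the paper
        # uses an NLI entailment model, any stand-in works for this sketch.
        clusters = []
        for i, (text, _) in enumerate(samples):
            for cluster in clusters:
                if are_equivalent(samples[cluster[0]][0], text):
                    cluster.append(i)
                    break
            else:
                clusters.append([i])
        total = sum(p for _, p in samples)
        cluster_mass = [sum(samples[i][1] for i in c) / total for c in clusters]
        # Entropy over meaning-clusters rather than over surface strings.
        return -sum(p * math.log(p) for p in cluster_mass if p > 0)

High semantic entropy then means the sampled answers disagree in meaning, not merely in wording, which is the signal used for "the model is guessing".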
But even with this in mind, there are caveats. We have recently published [2] a comprehensive benchmark of SOTA approaches to estimating uncertainty of LLMs, and have reported that while in many cases these semantic-aware methods do perform very well, on other tasks simple baselines, like the average entropy of token distributions, perform on par with or better than complex techniques.
We have also developed an open-source Python library [3] (still in early development) that offers implementations of all modern UE techniques applicable to LLMs, and allows easy benchmarking of uncertainty estimation methods as well as estimating output uncertainty for deployed models in production.
[1] https://arxiv.org/abs/2307.01379
I have been following this quite closely; it has been very interesting, as it seems smaller models can be more efficient with this sampler. Worth going through the posts if you are interested in this. I have a feeling that this kind of sampling is a big deal.
I don't say that to be a hater or discourage them because they may well be on to something, and it's good for unique approaches like this to be tried. But I'm also not surprised there aren't academic papers about this approach because if it had no positive effects for the reasons I mention, it probably wouldn't get published.
When people in this field compare various methods of quantifying model uncertainty, they often perform what is called rejection verification. Basically, you continuously reject data points where uncertainty is high, and see how average quality of the remaining outputs increases. A good uncertainty estimate is highly correlated with output quality, and thus low-uncertainty outputs should have higher average quality.
We use exactly this approach in our recent benchmark of uncertainty estimation approaches for LLMs [1] and have an open-source library under development [2] which allows such benchmarking. It can also produce uncertainty scores for a given model output, so people in industry can integrate it into their applications as well.
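For concreteness, a minimal version of that rejection-verification loop (variable names are mine, not the library's API):

    import numpy as np

    def rejection_curve(uncertainty, quality, fractions=(0.0, 0.1, 0.2, 0.3, 0.4, 0.5)):
        # uncertainty, quality: one value per model output.
        # For each rejection fraction f, drop the most uncertain f of the outputs
        # and report the mean quality of what is kept; a good uncertainty
        # estimate makes this curve rise as f grows.
        order = np.argsort(uncertainty)              # most certain first
        q = np.asarray(quality, dtype=float)[order]
        n = len(q)
        return [(f, q[: max(1, int(round(n * (1 - f))))].mean()) for f in fractions]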
I'm not an expert in LLMs though, this is just my understanding of classifiers in general. Maybe with enough data this consideration no longer applies? I'd be interested to know.
My best guess is that somewhere close to the root of the problem is that language models still don't really distinguish syntagmatic and paradigmatic relationships. The examples in this article are a little bit forced in that respect because the alternatives it shows in the illustrations are all paradigmatic alternatives but roughly equivalent from a syntax perspective.
This might relate to why, within a given GPT model generation, the earlier versions with more parameters tend to be more prone to hallucination than the newer, smaller, more distilled ones. At least for the old non-context-aware language models (the last time I really spent any serious time digging deep into language models), it was definitely the case that models with more parameters would tend to latch onto syntagmatic information so firmly that it could kind of "overwhelm" the fidelity of representation of semantics. Kind of like a special case of overfitting just for language models.
Here's an example of someone doing that for 9.9 > 9.11: https://x.com/mengk20/status/1849213929924513905
4 can be an absolute demonic hallucinating machine.
You absolutely could experiment with pushing it into a denial, and I highly encourage you to try it out. The smollm-entropix repo[1] implements the whole thing in a Jupyter notebook, so it's easier to try out ideas.
Transformers are generative AI, not classifiers. They throw out a lot of statistics in the service of forward progress and completing the generative task. This project is a rudimentary attempt to regenerate those stats.
There are definitely times when entropy can be high but not actually be uncertain (again synonyms are the best), but it seems promising. I want to build a visualizer using the OpenAI endpoints.
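If it helps anyone prototyping such a visualiser: given the top-k log-probabilities an endpoint returns for one position, the entropy estimate is a few lines (the numbers below are made up, not real API output):

    import math

    def token_entropy(top_logprobs):
        # top_logprobs: log-probabilities of the top-k candidate tokens at one
        # position. Renormalising over only the returned candidates gives an
        # approximation of the next-token entropy.
        probs = [math.exp(lp) for lp in top_logprobs]
        z = sum(probs)
        return -sum((p / z) * math.log(p / z) for p in probs if p > 0)

    print(token_entropy([-0.01, -5.2, -6.0]))   # peaked distribution: low entropy
    print(token_entropy([-1.0, -1.1, -1.2]))    # near-synonyms: high entropy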
This was a problem not only studied but in which fast and impressive progress was happening until they just turned it off.
It’s a fucking gigantic business to be the best at this. And it’s exactly what a startup should be: unlikely to have a well-heeled incumbent competitor, not because well-heeled firms ignore the market, but because they actively don’t want it to exist.
Honestly it goes counter to the Bitter Lesson (http://www.incompleteideas.net/IncIdeas/BitterLesson.html), which stems from getting too fancy about search in chess. But at the scale LLMs are at right now, the improvements might be worth it.
This is as opposed to pure sampling + next token prediction which basically randomly chooses a token. So if a model does 1274 x 8275 and it's not very sure of the answer, it still confidently gives an answer even though it's uncertain and needs to do more working.
I am not a programmer. No one at my company is a programmer. It writes code that works and does exactly what we asked it to do. When the code choked while I was "developing" it, I just fed it back into chatgpt to figure out. And it eventually solved everything. Took a day or so, whereas it would probably take me a month or a contractor $10,000 and a week.
LLMs might be bad for high-level, salary-grade programming projects. But for those of us who use computers to do stuff, but can't get past the language barrier preventing us from telling the computer what to do, it's a godsend.
For this very constrained subset of a problem domain LLMs are indeed very suitable but this doesn't scale at all.
Of course. It's not a hypothetical question. Almost all of my code is written by Claude 3.5 Sonnet. It's much more robust and accurate than my regular code and I've been programming for 20 years.
It's just another hype, people. Just like Client/Server, Industry 4.0, Machine Learning, Microservices, Cloud, Crypto ...
For example, whenever certainty drops below a threshold the sampler backtracks and chooses different tokens, such that at the end every single token has above-threshold certainty.
I doubt it would entirely eliminate undesirable outputs, but it would be interesting.
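A rough sketch of what that backtracking loop could look like (purely illustrative: `sample_token` and `certainty` are placeholders for whatever model interface you have, and a real version would need an end-of-sequence check):

    def generate_with_backtracking(sample_token, certainty, max_len=256,
                                   threshold=0.6, max_retries=5, budget=10_000):
        # sample_token(prefix) -> a new token; certainty(prefix, token) -> float in [0, 1].
        # Resample a position while its certainty is below the threshold; if retries
        # run out, backtrack one step so every kept token clears the threshold.
        tokens, steps = [], 0
        while len(tokens) < max_len and steps < budget:
            steps += 1
            for _ in range(max_retries):
                candidate = sample_token(tokens)
                if certainty(tokens, candidate) >= threshold:
                    tokens.append(candidate)
                    break
            else:
                if not tokens:          # nothing left to undo
                    break
                tokens.pop()            # undo the previous choice and try again
        return tokens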
Or maybe it just says "I don't know" with full certainty.
If that's not the case then it might just trigger bad risk compensation behavior in the model's human operators.
I think there's a human tendency to reduce the problem one has answering a given question to just "uncertainty", and so we look at LLM answers as involving a single level of uncertainty. But that's anthropomorphism.
AI images (and photography before them) showed us new, unimagined ways an image can be wrong (or rather, real-seeming but wrong). AI language interactions do this too, but in a more subtle way.
So far this has mostly been done using reinforcement learning, but catching it and doing it at inference time seems like it could be interesting to explore. It is also much more approachable for open source, since only the big ML labs can do this sort of RL.
    if probability(sum(tokens[:5])) < 0.5:
        respond("I'm sorry, I don't quite understand what you mean.")
I feel anthropomorphism is part of the marketing strategy for LLMs
I've seen "bullshitting" suggested, but this of course still implies intent, which AIs do not have in any typical sense of the word.
I think we as a community have settled on hallucination as the best English word that approximately conveys the idea. I've seen folks on here making up words to describe it, as if that is any more useful to the victim here. The victim being the uninformed (w.r.t AI tech) layperson.
Humans do this too, of course. The LLMs are simply emulating this human behavior.
In humans, hallucination is about losing the relationship with an underlying physical world: a world whose model we have in our heads and that we interact with in intentional ways when we are not hallucinating.
That means using the word "hallucinating" implies that the thing could also not be hallucinating and have a grip on reality. And this was my criticism: an LLM spits out plausible phrases, and if the graph didn't consider an output plausible it wouldn't return it. That means that for the LLM there is no difference between plausible bogus and a factually correct statement; this is something humans interpret into the output from the outside.
It’s a better alternative than “bullshitting”, because “confabulating” does not have that kind of connotation of intent.
Wrong or inaccurate are alternatives.
It's also true that uncertainty can be decomposed into "flavours". The simplest and most discussed decomposition is into aleatoric and epistemic uncertainty. Epistemic (or model-based) uncertainty usually refers to the case when poor output results from the model being presented with a kind of input it never saw before and should not be expected to handle correctly. Aleatoric uncertainty, on the other hand, is thought to be intrinsic to the data itself: think of the natural ambiguity of the task, or noisy labelling.
People in the field of uncertainty estimation are very much concerned with developing methods of quantifying these different types of uncertainty, and different methods can be more sensitive to one or the other.
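One common way to make that decomposition concrete is over an ensemble (or several stochastic forward passes): total predictive entropy splits into the average per-member entropy (aleatoric) plus the remainder, the mutual information (epistemic). A small numpy sketch:

    import numpy as np

    def uncertainty_decomposition(member_probs):
        # member_probs: shape (n_members, n_classes), each row one ensemble
        # member's predictive distribution for the same input.
        member_probs = np.asarray(member_probs, dtype=float)
        entropy = lambda p: -np.sum(p * np.log(p + 1e-12))
        total = entropy(member_probs.mean(axis=0))               # predictive entropy
        aleatoric = np.mean([entropy(p) for p in member_probs])  # expected entropy
        epistemic = total - aleatoric                            # mutual information
        return total, aleatoric, epistemic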
The article itself is about uncertainty at the level of the next token rather than of the entire response, which is different: "Capital of Germany is" followed by "Berlin" is correct, but it would also have been valid for the full answer to be ", since reunification in 1990, Berlin; before this…", which is correct at the conceptual level but uncertain at the token level.
Most of the users aren't aware of the maths and use words in more every-day manners, to the annoyance of those of us who care about the precise technical definitions.
The listed types of uncertainty can and do have different uses in different cases.
Especially the difference between "I don't know the answer" and "I do know absolutely that the answer is that nobody knows".
As a chatbot it's also important to say "I don't understand your question" when appropriate, rather than to say "dunno" in response to e.g. "how do I flopragate my lycanthrope?"
It's difficult to prove because it's difficult to state clearly what is "better" and it's expensive to collect preference data (or similar).
You could use common sense after looking at lots of samples and say "this method seems to work better if you are trying to optimize for X".