Posted by salkahfi 11/4/2025

Language models cannot reliably distinguish belief from knowledge and fact(www.nature.com)
22 points | 6 comments
fuzzfactor 11/4/2025|
Naturally, as expected, language models can leverage fiction more readily than fact, much as people fluent in ordinary language have done since the beginning of time.

Often, the more fluent the output, the more easily the fiction flies under the radar.

For AI, this is likely most pronounced when the models are as close to human-like as possible.

Anything less, and the performance will be judged lower by one opinion or another.

flebitz 11/4/2025||
> Most models lack a robust understanding of the factive nature of knowledge, that knowledge inherently requires truth.

I’d say that LLMs perhaps understand better than we do that belief and fact are tightly interwoven, because they lack our grandstanding classification of information.

There is a dichotomy: truth can exist while fiction is widely accepted as truth, with humans unable to distinguish which is which, all the while believing that some or most of us can.

I’m not pushing David Hume on you, but I think this is a learning opportunity.

scrubs 11/4/2025|
Pretty boys talking nonsense on TV (or social media), with all the implied grandstanding, is a problem. But good lord, we have to aim a lot higher than that.
letwhile 11/4/2025||
Just like humans
more_corn 11/5/2025||
I mean people suck at it too.

The only way we’ve learned is by referencing previously established, trustworthy knowledge. The scientific consensus is merely a system that vigorously tests and discards previously held beliefs when they don’t match new evidence. We’ve spent thousands of years living in a world of make-believe. We only learned to emerge from it relatively recently.

It would be unreasonable to expect an LLM to do it without the tools we have.

It shouldn’t be hard to teach an LLM that if you can’t verify something by reference to an evidence-based source, it’s not fact.
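
For what it’s worth, a minimal sketch of that rule as a post-hoc gate, in Python. Everything here is hypothetical: `retrieve_evidence` stands in for whatever search or knowledge-base lookup a real system would use.

```python
from dataclasses import dataclass


@dataclass
class Claim:
    text: str
    sources: list[str]  # citations or URLs the claim can point to


def retrieve_evidence(claim_text: str) -> list[str]:
    """Hypothetical lookup against an evidence-based corpus; returns matching sources."""
    trusted_corpus = {
        "water boils at 100 c at sea level": ["https://example.org/physics-handbook"],
    }
    return trusted_corpus.get(claim_text.lower(), [])


def label_claim(claim_text: str) -> Claim:
    # The rule from the comment above: no verifiable source, no "fact" label.
    return Claim(text=claim_text, sources=retrieve_evidence(claim_text))


if __name__ == "__main__":
    for text in ["Water boils at 100 C at sea level", "The moon is made of cheese"]:
        claim = label_claim(text)
        status = "fact (sourced)" if claim.sources else "unverified belief"
        print(f"{text!r}: {status}")
```

The hard part, of course, is the retrieval and the judgment of what counts as an evidence-based source, not the gate itself.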

mock-possum 11/4/2025|
Well no, of course not: people seemingly can’t, or don’t care to, do that, and LLMs can only generate what they’ve seen people say.

It’s just another round of garbage in, garbage out.