
Posted by bookofjoe 1/26/2026

Google AI Overviews cite YouTube more than any medical site for health queries(www.theguardian.com)
415 points | 208 comments
coulix 1/26/2026|
The YouTube citation thing feels like a quality regression. For medical stuff especially, I've found tools that anchor on papers (not videos) to be way more usable; incitefulmed.com is one example I've tried recently.
neom 1/26/2026||
Further context: https://health.youtube/ and https://support.google.com/youtube/answer/12796915?hl=en and https://www.theverge.com/2022/10/27/23426353/youtube-doctors... (2022)
alex1138 1/26/2026|
[flagged]
sofixa 1/26/2026||
> Oh, you mean like removing scores of covid videos from real doctors and scientists which were deemed to be misinformation

The credentials don't matter; the actual content does. And if it's misinformation, then yes, you can be a quadruple doctor and it's still misinformation.

In France, there was a real doctor, an epidemiologist, who became famous because he was pushing a cure for Covid. He ran some underground, barely legal medical trials on his own, proclaimed victory, and claimed the "big bad government doesn't want you to know!". Well, the actual proper study finished, found basically no difference, and his cure wasn't adopted. He didn't get fully deplatformed, but he was definitely marginalised and fell into the "disinformation" category. Nonetheless, he kept spouting his version after it was proven wrong. And years later, he's still wrong.

Fun fact about him: he's in the top 10 of scientists with the most papers retracted for inaccuracies.

gumboshoes 1/26/2026|||
How is any non-expert supposed to judge the content without some kind of guide, like, say, credentials? Credentials do matter when the author is unknown.
input_sh 1/26/2026|||
A good first step would be to distrust each and every individual. This excludes every blog, every non-peer-reviewed paper, every self-published book, pretty much every YouTube channel and so on. This isn't to say you can't find a nugget of truth somewhere in there, but you shouldn't trust yourself to be able to differentiate between that nugget of truth and everything surrounding it.

Even the most well-intentioned and best-credentialed individuals have blind spots that only a different pair of eyes can catch through rigorous editing. Rigorous editing only happens in serious organizations, so a good first step would be to ignore every publication that doesn't at the very least have an easy-to-find impressum with a publicly listed editor-in-chief.

The next step would be to never blame the people listed as writers, but their editors. For example, if a shitty article makes its way into a Nature journal, it's the editor who is responsible for letting it through. A good editorial team is what builds up the reputation of a publication; the people below them (who do most of the work) are largely irrelevant.

To go back to this example: you should ignore this guy's shitty study until it's published by a professional journal. Even if it does get published in a serious journal, that doesn't guarantee it's The Truth, only that it has passed some level of scrutiny it wouldn't have otherwise.

As with website uptime, no editorial team can claim that 100% of the work that has passed through their hands is The Truth, so you then need to look at how transparently they deal with mistakes (AKA retractions), and so on.

add-sub-mul-div 1/26/2026|||
Separating credentialed-but-bad-faith covid grift from legitimate medical advice that evolved with the best information available at the time required nothing but common sense and freedom from demagoguery.
wizzwizz4 1/26/2026||
And when I'm nice and relaxed, my common sense is fully operational. I'm pretty good at researching medical topics that do not affect me! However, as soon as it's both relevant to me, and urgent, I become extremely incapable of distinguishing truthful information from blatant malpractice. At this point, I default to extreme scepticism, and generally do nothing about the urgent medical problem.
alex1138 1/26/2026|||
[flagged]
hobs 1/26/2026||
You mean people like this - The COVID vaccine “has been proven to have negative efficacy.”

https://www.politifact.com/factchecks/2023/jun/07/ron-johnso...

This is called disinformation that will get you killed, so yeah, probably not good to have on youtube.

- After saying he was attacked for claiming that natural immunity from infection would be "stronger" than the vaccine, Johnson threw in a new argument. The vaccine "has been proven to have negative efficacy," he said. -

alex1138 1/26/2026||
Unfortunately it's not disinformation; it's going to take a while for people to discover how many things they were lied to about.
hobs 1/26/2026||
https://www.wpr.org/health/health-experts-officials-slam-ron...

Extraordinary claims require extraordinary evidence, not just BS posted on Rumble.

jdlyga 1/26/2026||
It's tough convincing people that Google AI Overviews are often very wrong. People think that if it's displayed so prominently on Google, it must be factually accurate, right?

"AI responses may include mistakes. Learn more"

It's not "mistakes"; half the time it's completely wrong, total bullshit information. Even compared to other AI: if you put the same question into GPT 5.2 or Gemini, you get much more accurate answers.

alex1138 1/26/2026||
It absolutely baffles me that they didn't do more work or testing on this. Their (unofficial??) motto is literally Search. That's what they're known for. The fact that it's trash is an unbelievably damning indictment of what they are.
danudey 1/26/2026|||
Testing on what? It produces answers; that's all it's meant to do. Not correct answers or factual answers, just answers.

Every AI company seems to push two points:

1. (Loudly) Our AI can accelerate human learning and understanding and push humanity into a new age of enlightenment.

2. (Fine print) Our AI cannot be relied on for any learning or understanding and it's entirely up to you to figure out if what our AI has confidently told you, and is vehemently arguing is factual, is even remotely correct in any sense whatsoever.

bethekidyouwant 1/26/2026|||
Testing what, every possible combination of words? Did they test their search results this way before AI?
AlienRobot 1/26/2026|||
My favorite part of the AI Overview is when it says "X is Y (20 sources)", you click on the sources, Ctrl+F "X is Y", and none of them contain verbatim what the AI says they said, so you're left wondering whether the AI made it up completely or paraphrased something that actually is written in one of the sources.

If only we had the technology to display verbatim the text from a webpage in another webpage.
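
If you wanted to automate that Ctrl+F, here's a rough sketch in Python (the claim and URLs are made-up placeholders; it's a plain substring match, so an honest paraphrase will, correctly, not count as a hit):

  # Check whether the quoted claim appears verbatim in each cited page.
  # Placeholder claim and URLs; needs the "requests" package.
  import re
  import requests

  claim = "x is y"  # the sentence the overview attributes to its sources
  sources = [
      "https://example.com/cited-source-1",
      "https://example.com/cited-source-2",
  ]

  for url in sources:
      html = requests.get(url, timeout=10).text
      text = re.sub(r"<[^>]+>", " ", html)      # crude tag strip
      text = re.sub(r"\s+", " ", text).lower()  # collapse whitespace
      print(("FOUND   " if claim.lower() in text else "missing ") + url)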

gowld 1/26/2026|||
That's because decent (but still flawed) GenAI is expensive. The AI Overview model is even cheaper than the AI Mode model, which is cheaper than the free Gemini model, which is cheaper than the Gemini Thinking model, which is cheaper than the Gemini Pro model, which is still very misleading when working on human-language source content. (It's much better at math and code.)
WarmWash 1/26/2026||
I have yet to see a single person in my day-to-day life not immediately reference AI Overviews when looking something up.
seanalltogether 1/26/2026||
I've also noticed lately that it parrots a lot of content straight from Reddit; usually the answer it gives is directly above the Reddit link leading to the same discussion.
bjourne 1/26/2026||
The basic problem with Google's AI is that it never says "you can't" or "I don't know". So many times it comes up with plausible-sounding but incorrect BS in answer to "how to" questions. E.g., "In a Facebook group, how do you whitelist posts from certain users?" The answer is "you can't", but the AI won't tell you that.
htx80nerd 1/26/2026||
I ask Gemini health questions nonstop and never see it using YouTube as a source. Quickly looking over some recent chats:

- chat 1: 2 sources are NIH, the other isn't YouTube.

- chat 2: PNAS, PubMed, Cochrane, Frontiers, and PubMed again several more times.

- chat 3: 4 random websites I've never heard of, no YouTube.

- chat 4: a few random websites and NIH, no YouTube.
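
If anyone wants to do this check less anecdotally, here's a quick sketch (the citation URLs below are made-up examples; assumes you've copied the links out of your own chats):

  # Tally which domains were cited across saved chats.
  from collections import Counter
  from urllib.parse import urlparse

  citations = [
      "https://pubmed.ncbi.nlm.nih.gov/12345678/",
      "https://www.cochranelibrary.com/example-review",
      "https://www.youtube.com/watch?v=example",
  ]

  counts = Counter(urlparse(url).netloc for url in citations)
  for domain, n in counts.most_common():
      print(f"{n:3d}  {domain}")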

aix1 1/27/2026|
To clear up potential confusion, you seem to be talking about the Gemini app (https://gemini.google.com), the chat app formerly known as Bard.

The article is about AI Overviews, a feature of Google Search (the LLM-generated box that sometimes shows up above the search results).

They're powered by the same pretrained model but, in true Google style, are two otherwise unrelated products built by two completely separate orgs.

Then there's AI Mode (https://www.google.com/ai), NotebookLM and probably some others I'm forgetting right now. :)

nicce 1/26/2026||
I would guess they're doing this on purpose: they control YouTube's servers and can cache content that way, so less latency. And once people figure it out, it pushes more information into Google's control, since the AI prefers it and people want their content used as a reference.
josefritzishere 1/26/2026||
Google AI cannot be trusted for medical advice. It has killed before and it will kill again.
PlatoIsADisease 1/26/2026|
Maybe Google, but GPT-3 diagnosed a patient who had been misdiagnosed by 6 doctors over 2 years. To be fair, 1 of those 6 doctors should have figured it out; the other 5 were out of their element. Doctor number 7 was married to me and got a top-10 list of most likely diagnoses from GPT-3.
jesse__ 1/26/2026||
With the general lack of scientific rigour, accountability, and totally borked incentive structure in academia, I'm really not sure if I'd trust whitepapers any more than I'd trust YouTube videos at this point.
not_good_coder 1/26/2026|
What counts as an authoritative source of medical information is debatable in general. Chatting with the initial results to ask for a breakdown of sources with classified recommendations is a logical second step for context.