Posted by burkaman 3 hours ago
"Ironically, the site integrates a Grok AI chatbot (from xAI) for answering nutrition questions, and reports indicate Grok sometimes provides responses that contradict or qualify parts of the site's own guidelines (e.g., noting concerns about evidence quality for certain emphases or that most Americans already get sufficient protein)."
Overall it was pretty positive about the site. Then I asked it, "Is HHS Secretary Robert F. Kennedy Jr. a trustworthy source of nutrition info?" It responded with some positive things, but was happy to call out his bullshit as well, and concluded:
"In summary, RFK Jr. is a mixed bag as a nutrition source: authoritative by virtue of his position, with some valid points on processed foods that resonate with experts, but his lack of specialized expertise, history of misinformation, and controversial guideline changes make him unreliable for many in the scientific community. For personalized nutrition advice, it's best to cross-reference with sources like registered dietitians, peer-reviewed studies, or organizations such as the American Heart Association, rather than relying solely on any single figure or policy."
I wonder if they know what it "thinks" about him.
It would be more shocking if it had not mentioned that.
I have emotional reactions when people are trying to mislead me. I have the same when I hear populist demagogues lie on TV.
Evolution or something.
I had ChatGPT tell me I was imagining an HR problem related to the women.
Grok got them right. My executive team got them right.
I'm not defending Elon, but after those two ChatGPT failings due to moral coating, I unsubscribed and got Claude.
Grok will also sometimes tell you it's MechaHitler, that Musk is fitter than LeBron James, and that he "would have risen from the dead faster than Jesus". https://www.theguardian.com/technology/2025/nov/21/elon-musk...
Maybe don't use chatbots for HR at all?
How are people not making this distinction?
> Maybe don't use chatbots for HR at all?
Probably not!
RFK Jr's Nutrition Chatbot Recommends Best Foods to Insert Into Your Rectum: https://www.404media.co/rfk-jrs-nutrition-chatbot-recommends...
AI is very good at conforming to your own biases and pulling out the subtext of a prompt.
If your prompt goes something like "I think x is healthy, plan a meal for x", Grok (and other AIs) will happily affirm that you are correct and really smart for recognizing that "x" is the healthiest diet, and then it'll give you that diet.
That's a biased answer. AI bends to your own biases.
Put another way: AI starts from the baseline assumption that you are an expert and that your prompt is correct. It can be hard to get an AI to call you out for being wrong about something.
I earnestly can't imagine what specific information diet someone could have that would lead them to so strongly assume Google DeepMind (of all the various AI companies) is the clear and sole foil to Grok that they'd take anyone who didn't share that perspective to be feigning ignorance in bad faith.
Wherever you're having these discussions, it's entirely unfamiliar to me (and evidently others). (I don't say this with scorn or malice!)
On the greater topic of "bias", it's kind of meaningless. There are correct answers and there are incorrect answers, and "bias" refers to some tendency away from an assumed default distribution. Randomly generated strings might be the only "unbiased" responses. This is really more a difficult epistemic question, and I'd prefer something that is biased toward what's most likely to be true (e.g. Wikipedia > someone's LiveJournal).
Given that Grok has been intentionally made to generate text praising Hitler, and I have very, very high confidence that Hitler actually sucks, I have very, very low confidence in the ability of the Grok program to reliably generate text that's worth reading.