Posted by nabla9 10/28/2024
The freakonomics guys have been proven wrong before, and update their texts when they can to reflect those things. I'm sure if some contrary evidence were presented to them, they would gladly consider it, and maybe even have this person on their podcast to refute it!
I agree (I think) with what you're getting at, but then I reread TFA and I've come to see it from the author's point of view.
That is, reading between the lines of what you've written, I think you're saying that TFA doesn't actually talk in specifics about (a) what Ellen Langer's research purports to say, and (b) what the particular objections to the conclusions of her research actually are. And I totally agree that talking in specifics, giving real examples (and beyond just links to dense, 18-page studies and research where the point is buried somewhere within), etc. makes it much easier for the reader to actually figure out why they should care about this in the first place.
But on the other hand, I think the author of TFA is purposefully trying to refrain from "getting into the weeds" as he puts it because his main point is really along the lines of "extraordinary results should require extraordinary evidence". That is, he gives quotes from Steven Levitt about how Langer's research is completely contrary to what he would expect, but then Levitt barely even challenges how she got such unusual results to begin with. I.e. his point is rather than give an unfiltered bullhorn to a researcher with questionable (or at least controversial) results, why aren't you pushing back by at least asking how she would respond to her critics?
So the article is really about "How and why you should be skeptical of unexpected results", and much less so about any singular instance of unexpected results. That said, again I agree that going into more detail of a singular instance would have helped the author's argument immensely.
Isn't that pretty similar to what you just did? Reduced his entire oeuvre to a snappy criticism without nuance and then used it to dismiss him.
We don't hold short HN discussion comments to the same standard as full articles.
>And, as I’ve said many times before, Freakonomics has so much good stuff. That’s why I’m disappointed, first when they lower their standards and second when they don’t acknowledge or wrestle with their past mistakes. It’s not too late! They could still do a few shows—or even write a book!—on various erroneous claims they’ve promoted over the years. It would be interesting, it would fit their brand, it could be educational and also lots of fun.
https://statmodeling.stat.columbia.edu/2024/09/14/freakonomi...
The other question I have is that the paper you linked makes some very clear, "no gray area" arguments about why some t-values that Langer calculated for one of her papers are just flat-out wrong. He's saying "I'm a statistician, and you did the statistics wrong." I'm very curious if Langer ever responded to this, because the argument seems pretty black-and-white.
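To illustrate what a "black-and-white" check like that can look like, here's a generic sketch in Python (made-up numbers, and not Langer's actual data or necessarily the exact test her paper used) of recomputing a two-sample t statistic from the summary numbers a paper reports:

    import math

    def welch_t(mean1, sd1, n1, mean2, sd2, n2):
        # Welch's t statistic computed from the group summaries a paper typically reports
        se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
        return (mean1 - mean2) / se

    # Hypothetical reported summaries; if a paper claimed t = 3.1 here,
    # the arithmetic below shows that doesn't follow from its own numbers.
    print(round(welch_t(5.2, 1.4, 24, 4.6, 1.5, 23), 2))  # -> 1.42

If the published value can't be reproduced from the paper's own means, SDs, and group sizes, there isn't much room for interpretation.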
> First, Levitt starts out by accepting that a certain suspect claim “actually has been replicated a number of times.” Going with your interviewee can make sense in a podcast, but, again, it’s counter to Levitt’s earlier goal of asking, “How do you know whether you should believe surprising results?”
I haven't listened to the episode, but Levitt really should have pushed back a bit more. The conversation went (paraphrased) "Here's a surprising result" "Really? Has that been replicated?" "Yes, many times sort of, now here's another surprising result from myself". She really should have been asked about replications done by other groups at least. The replication question kind of gets dodged and then they drop it for the rest of the hour-long discussion (according to the transcript: https://freakonomics.com/podcast/pay-attention-your-body-wil... ), which is a bit odd when he made “How do you know whether you should believe surprising results?” a theme of the episode. If her work had good replications, it would be more believable.
https://freakonomics.com/podcast/the-power-of-poop
This guy claims he can cure Parkinson's with a fecal transplant, and they just let him frame the conversation and the medical community's skepticism however he wants, like it's a given that he is a misunderstood genius. It was so obvious the hosts lacked the basic tools of skepticism. Now if you look up that doctor 13 years later, he's been promoting ivermectin as a cure for COVID, which is totally on brand.
The medical community will almost always be skeptical of new theories; sometimes this is well placed, but sometimes it isn't.
That's neither here nor there though; the main takeaway is that you probably shouldn't go to economists for medical advice.
1. The exercise, diet, sunlight, zinc, and vitamin D results all seem plausible. They are well known to improve your immune system.
2. Remdesivir was originally recommended for COVID-19 but then they stopped using it. I'm surprised the plasma has such a poor result, because that was heavily hailed originally too.
3. Paxlovid and HCQ are both widely recommended, in line with the results presented.
So as far as I can tell (which isn't far!) it seems like a reasonable analysis. The main point I wanted to make is that Borody is far from alone in being interested in COVID-19 and ivermectin.
In my opinion, Levitt didn't even say he agreed with Langer, although he did compliment her work.
Disclaimer: I'm a huge fan of all the Freakonomics shows. I appreciate the author pointing out some opposing views and think the post is well-written, although exaggerated and overly emotional.
He might, he might not. What he definitely does think is that there have been several in-depth critiques of Langer's work, and that it does the listener of Freakonomics a disservice by apparently not taking them into account in any way (certainly not mentioning them).
The critiques are not of the form "Langer is wrong". They are of the form "the experimental design, sample size and statistical analysis do not support the claims Langer is making".
And one of those in-depth critiques, which is linked to in the post, is by the author of the article himself.
> The critiques are not of the form "Langer is wrong". They are of the form "the experimental design, sample size and statistical analysis do not support the claims Langer is making".
This seems like a distinction that's not really worth making. The author of the post is a statistician, and he's published a detailed critique (see https://news.ycombinator.com/item?id=41974050 ) that says that Langer did the stats wrong. So sure, he is saying "the experimental design, sample size and statistical analysis do not support the claims Langer is making", which seems equivalent to "she is wrong" when you're a statistician.
> 4.4. Statistical and conceptual problems go together
> We have focused our inquiry on the Aungle and Langer (2023) paper, which, despite the evident care that went into it, has many problems that we have often seen elsewhere in the human sciences: weak theory, noisy data, a data structure necessitating a complicated statistical analysis that was done wrong, uncontrolled researcher degrees of freedom, lack of preregistration or replication, and an uncritical reliance on a literature that also has all these problems.
> Any one or two of these problems would raise a concern, but we argue that it is no coincidence that they all have happened together in one paper, and, as we noted earlier, this was by no means the only example we could have chosen to illustrate these issues. Weak theory often goes with noisy data: it is hard to know to collect relevant data to test a theory that is not well specified. Such studies often have a scattershot flavor with many different predictors and outcomes being measured in the hope that something will come up, thus yielding difficult data structures requiring complicated analyses with many researcher degrees of freedom. When underlying effects are small and highly variable, direct replications are often unsuccessful, leading to literatures that are full of unreplicated studies that continue to get cited without qualification. This seems to be a particular problem with claims about the potentially beneficial effects of emotional states on physical health outcomes; indeed, one of us found enough material for an entire Ph.D. dissertation on this topic (N. J. L. Brown, 2019).
> Finally, all of this occurs in the context of what we believe is a sincere and highly motivated research program. The work being done in this literature can feel like science: a continual refinement of hypotheses in light of data, theory, and previous knowledge. It is through a combination of statistics (recognizing the biases and uncertainty in estimates in the context of variation and selection effects) and reality checks (including direct replications) that we have learned that this work, which looks and feels so much like science, can be missing some crucial components. This is why we believe there is general value in the effort taken in the present article to look carefully at the details of what went wrong in this one study and in the literature on which it is based.
He goes through several paragraphs of criticising Levitt for believing his interviewee without mentioning what claim the interviewee is making. So some chambermaids were told their work is exercise, didn't change their behaviour and then.... What?
I think what these authors do is apply science to human interactions -- a tilt toward social science -- but science to me is usually surprising. (Or I'm just really bad at science).
It's not like there can only be one podcast. If you see that the world is missing something, create it! Nobody else is going to do it. If they were going to, you wouldn't have found it lacking in the first place.
>It's not like there can only be one podcast. If you see that the world is missing something, create it! Nobody else is going to do it. If they were going to, you wouldn't have found it lacking in the first place.
I would have thought the past decade would have put this marketplace of ideas hokum to rest, but here we are.
Not sure why this rant was posted to HN.
Also, the podcast wasn't Freakonomics; it was an offshoot where they don't critique a guest's work, they just interview them in a friendly conversation.
Getting upset like the author did indicates that the author doesn't know the difference between a podcast and an academic paper.
Yes, it is. There is a reason the reproducibility/replication crisis, especially in the social sciences, is such a hot topic. The podcast doesn't need to "meet the high standard of peer review", but there are plenty of published objections and discussions about Langer's unexpected results, and Levitt should have reviewed that and brought that up before essentially saying "Wow, your results are so unexpected! OK I'm pretty sold!"
Is that expected of Freakonomics? I don't know how much rigor they do with their interview subjects, nor how much of a subject matter expert they are when it comes to pushing back.
I think the whole problem is that he presents the podcast as being very factual, data-driven and scientific, while on the other hand he just lacks rigour in some cases - like this one.
Basic research has become rare in journalism, but they should either stop pretending to be data-driven or do their homework.
Umm, of course? Shouldn't that be expected of any interviewer? I mean, they invited a guest onto their show specifically because they keep coming up with unexpected results - shouldn't they have done at least a little bit of their homework to see why a gaggle of people are condemning their results as non-reproducible?
No? Imagine how ridiculous that would become if interviewers actually followed that logic. "Great gameplay out there, <insert professional sports star>, but never mind the sport we are all watching, my research identified that you erroneously wrote 1+1=3 in kindergarten. What was your thought process?"
The podcast in question is known as "People I (Mostly) Admire" from the Freakonomics podcast network. The name should tell you that it is going to be about the people, not diving deep into their work. Perhaps there is room for a podcast that scrutinizes the work of scientists, but one that literally tells you right in its name that it is going to be about people is probably not it.
A better example, to piggyback off your sports analogy: Suppose a podcast titled "People I (Mostly) Admire" decided to interview Barry Bonds, and the interviewer asked "Wow, how did you get to be so good in the second half of your career?" and Bonds responded "Just a lot of hard work!" Yeah, I would totally expect the interviewer to push back at that point and say "So, your steroid use didn't have anything to do with it?"
Point being, I'm not asking the interviewer to be knowledgeable about the subject's kindergarten grades. I do think they should do some basic, cursory research about the specific topic and subject they brought the interviewee on to talk about in the first place.
Are you confusing expectation with desire? I can understand why you might prefer to listen to a podcast like that – and nothing says you can't – but that isn't necessarily on brand with the specific product in question.
In the same vein, you might prefer fine dining, but you wouldn't expect McDonalds to offer you fine dining. It is quite clearly not the product they sell.
So, I guess the question is: What is it about "People I (Mostly) Admire" that has given you the impression that it is normally the metaphorical fine dining restaurant and not the McDonalds it turned out to be here?
Yes...? Comes with not understanding the subject very well. I mean, logically, if I were an expert I wouldn't be here wasting my time talking about what I already know, would I? That would be a pointless waste of time. Obviously if I am going to talk about something I am going to struggle to talk about it in an effort to learn.
> These other comments put it better:
These other comments don't even try to answer the question...? Wrong links? Perhaps I didn't explain myself well enough? I can try again: What is it about this particular podcast that has given you the impression that it normally asks the hard hitting questions? Be specific.
The background of the people involved is irrelevant to the nature of the product. Someone who works on developing a cure for cancer by day can very well go home and build a fart app at night. There is no reason why you have to constrain yourself to just one thing.
The former is impractical for a lot of formats (i.e. podcasts), but the latter is clearly harmful in the context of a popular podcast or some other medium that amplifies the dubious message.
I'm not sure why the podcast author is being held to a standard that should be applied to other subject matter experts, who come way before he ever reaches out for an interview.
What is less clear is whether X was good experimental design, whether the measurements of Y were appropriate, relevant and correct, and thus whether or not Z can be concluded.
> "I’ve got a model in my head of how the world works — a broad framework for making sense of the world around me. I’m sure you’ve got one, too."
Anyone with scientific training should know that you should have multiple working hypotheses; you shouldn't wed yourself to one preferred model (which leads to idée fixe, rejecting evidence that doesn't fit your model and even inventing evidence which does). People who fall into this trap start seeing their mental model in the world around them, thinking they're engaging in pattern recognition when they're really doing pattern projection. Their emotions, ego and pride all converge at this point - there are dozens of examples throughout scientific history of people falling into this trap, who end up shaking their fists at experimental data that upsets their apple cart.
It's not that hard to hold two conflicting models in your mind at the same time, or more, without ending up emotionally attached to any of them.
Specifically, Langer has suggested that merely thinking about things can lead to physical changes in the world. This is at odds with not just some specific model of something, but with the broadest conception of post-Renaissance science.
Her claims are hardly akin to moving objects through telekinesis. Your brain is part of your nervous system, which has massive control over your body. What happens in your brain obviously leads to changes in your body. Beyond the obvious "I think about moving my arm and then my arm moves," there is a ton of hard research to back up the ability of thoughts and moods to influence the autonomic systems of the body. Why are the things Langer is suggesting fundamentally different?
Clearly more evidence is needed to prove many of her specific claims, and many of them may turn out to be noise, but the basic premise hardly seems worth dismissing. Descartes was 400 years ago.
It's tough, because communicating science in all its depth and uncertainty is tough. You want to communicate the beauty and excitement, but don't want to mislead people, and the balance there just seems super hard to find.
Note this talk by another pop-sci personality, Robert Sapolsky, where he talks about the limitations of Western reductionism.
https://www.youtube.com/watch?v=_njf8jwEGRo
Yet his latest book on free will depended exclusively on a reductionist viewpoint.
While I don't know his motivations for those changes, the fact that the paper he mentioned was so extremely unpopular that I was one of only a handful who read it surely provided some incentive:
> "REDUCTIONISM AND VARIABILITY IN DATA: A META-ANALYSIS ROBERT SAPOLSKY and STEVEN BALT"
Or you can go back to math and look at the Brouwer–Hilbert controversy, which was purely about whether we should, universally, accept PEM a priori, which Church, Post, Gödel, and others proved wasn't a safe assumption for many problems.
Luckily ZFC helped with some of that, but Hilbert won that war of words, to the point where even suggesting a constructivist approach produces so much cognitive dissonance that it is often branded as heresy.
Fortunately with the Curry–Howard–Lambek correspondence you can shift to types or categories with someone who understands them to avoid that land mine, but even on here people get frustrated when people say something is 'undecidable' and then go silent. It is not that labeling it as 'undecidable' wins an argument, but that it is so painful to move on because from Plato onward PEM was part of the trinity of thought that is sacrosanct.
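For anyone who hasn't seen what that split looks like concretely, here is a minimal Lean 4 sketch (just an illustration of the standard result, nothing specific to this thread): constructively you can only prove the double negation of PEM, while classically PEM is simply available as an axiom:

    -- Constructively (no extra axioms) you cannot prove `p ∨ ¬p` outright,
    -- but you can prove its double negation:
    theorem not_not_em (p : Prop) : ¬¬(p ∨ ¬p) :=
      fun h => h (Or.inr (fun hp => h (Or.inl hp)))

    -- Classically, PEM is assumed; `Classical.em` is the axiom:
    example (p : Prop) : p ∨ ¬p := Classical.em p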
To be clear, I am not a strict constructivist, but view this as horses for courses, with the reductionist view being insanely useful for many needs.
If you look at the link that jeffbee posted, the mention of "garden of forking paths" is a way of stepping on eggshells around the above.
https://stat.columbia.edu/~gelman/research/unpublished/heali...
Overfitting and underfitting are often explained as symptoms of the bias-variance trade-off, and even with PhDs it is hard to invoke indecomposability, decidability, or non-triviality; all of which should be easy to explain as cases where PEM doesn't hold for some reason.
While mistaking the map for the territory is an easy way for the Freakonomics authors to make a living, it can be viewed as an unfortunate outcome due to the assumption of PEM and abuse of the principle of sufficient reason.
While there are most certainly other approaches, and obviously not everything can be proven or even found with the constructivist approach, whenever something is found that is surprising, there should be an attempt to not accept PEM before making a claim that something is not just epistemically possible but epistemically necessary.
To me this is just checking your assumptions; obviously the staunchly anti-constructivist viewpoint has members that are far smarter and more knowledgeable than I will ever be.
IMHO, for-profit or donation-based pop science will always look for the man-bites-dog stories... I do agree that sharing the beauty while avoiding misleading is challenging and important.
But the false premise that you either do or do not accept constructive mathematics also blocks the ease with which you could show that these types of farcical claims the authors make are false.
That simply doesn't exist today where the many worlds ideas are popular in the press, but pointing out that many efforts appear to be an attempt to maintain the illusion of Laplacian determinism, which we know has counterexamples, is so counter to the Platonic zeitgeist that most people bite their tongues when they should be providing counterexamples to help find a better theory.
I know that the true believers in any camp help drive things forward, and they need to be encouraged too.
But the point is that there is a real deeper problem that is helping drive this particular communication problem and something needs to change so that we can move forward with the majority of individuals having larger toolboxes vs dogmatic schools of thought.
</rant>
The Mindlessness of Ostensibly Thoughtful Action (1978) [pdf] - https://news.ycombinator.com/item?id=41947985 - Oct 2024 (7 comments)
I don't agree with all of their criticism, but it contains many valid points.
You should be skeptical of surprising results, and seek to disconfirm them rather than accepting and repeating them at face value.
A specific title is important because otherwise specific discussion (i.e. about what's different in this article) is preferable to generic discussion. We get plenty of the latter in any case, but it's best if it doesn't dominate the thread.
[1] in this context 'good' := accurate, neutral, and preferably using representative language from the article
> If the findings consistently surprise you, and they seriously challenge the beliefs of mainstream science, then maybe you should more seriously consider the possibility that these findings are wrong!
It seems that "You should seriously consider surprising findings" could be a good title. Maybe "results" instead of "findings" but the article doesn't actually use that word outside quotes.
It's more or less the conclusion of the critique that's mentioned in the current title.
"How do you know whether you should believe surprising [scientific] results?"
The article comes back around to that question a couple of times, in arguing that Levitt should not trust Langer's results because either there's not enough evidence, or the evidence doesn't support the conclusion.
That's basically the Freakonomics approach. It's bad science, but it appeals to "skeptics" and "contrarian" midwit liberals.