Posted by timr 1/25/2026
ResearchGate says 3936 citations. I'm not sure what they are counting, probably all of the PDFs uploaded to ResearchGate.
I'm not sure how they count 6000 citations, but I guess they are counting everything, including quotes by the vice president. Probably 6001 after my comment.
Quoted in the article:
>> 1. Journals should disclose comments, complaints, corrections, and retraction requests. Universities should report research integrity complaints and outcomes.
All comments, complaints, corrections, and retraction requests? Unmoderated? Einstein articles will be full of comments explaining why he is wrong, from racists to people who can't spell Minkowski to save their lives. In /newest there is about one post per week from someone who has discovered a new physics theory with the help of ChatGPT. Sometimes it's the same guy, sometimes it's a new one.
[1] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1964011
[2] https://www.researchgate.net/publication/279944386_The_Impac...
The number appears to be from Google Scholar, which currently reports 6269 citations for the paper.
Judging from PubPeer, which allows people to post all of the above anonymously and with minimal moderation, this is not an issue in practice.
It has 0 comments, for an article that forgot "not" in "the result is *** statistical significative".
Even if nobody cheated or massaged data, we would still have studies that do not replicate on new data. Working at 95% confidence means that roughly one in twenty studies of an effect that doesn't exist will still find a "significant" result that is only noise. Reporting failed hypothesis tests would really help to find these cases.
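As a rough illustration (my own sketch, not from the article or the thread): simulate many studies where the true effect is zero and count how often a standard two-sample t-test at alpha = 0.05 comes out "significant". By construction you get about one false positive in twenty.

    # Hypothetical simulation: no real effect exists, yet ~5% of studies
    # will still report p < 0.05 purely by chance.
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)
    alpha, n_studies, n_per_group = 0.05, 10_000, 50

    false_positives = 0
    for _ in range(n_studies):
        a = rng.normal(0.0, 1.0, n_per_group)  # "treatment" group, drawn from
        b = rng.normal(0.0, 1.0, n_per_group)  # the same distribution as "control"
        if ttest_ind(a, b).pvalue < alpha:
            false_positives += 1

    print(f"'significant' studies: {false_positives / n_studies:.1%}")  # ~5%

Publishing failed tests doesn't change that arithmetic, but it makes the noise visible instead of letting only the lucky one-in-twenty results get written up.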
So pre-registration helps, and it would also help to establish the standard that everything needed to replicate must be published, if not in the article itself, then in an accompanying repository.
But in the brutal fight for promotion and resources, of course labs won't share all their tricks and process knowledge. The same problem arises if there is an interest in using the results commercially. E.g. in EE, the method is often described in general terms, but crucial parts of the code or circuit design are held back.
These probably have a bigger chance of being published, since you are providing a "novel" result instead of fighting the get-along culture (which, honestly, is present in the workplace as well). But ultimately they are harder to do (research-wise, not politically), because they possibly mean you have figured out an actual thing.
Not saying this is the "right" approach, but it might be a cheaper, more practical way to get a paper turned around.
Whether we can work this out in research in a proper way is linked to whether we can work it out everywhere else. How many times have you seen people pat each other on the back despite lousy performance and no results? It's just easier to switch private-sector positions than research positions, so you'll have more people there who aren't afraid to call out a bad job, and, well, there's also that profit that needs to pay your salary.
As I said, harder from a research perspective, but if you can show with a better study, for instance, that sustainable companies are less profitable, you have basically contradicted the original one.
The paper in question shows - credibly or not - that companies focusing on sustainability perform better on a variety of metrics, including generating revenue. In other words: not only can you have companies that do less harm, but these ethically superior companies also make more money. You can have your cake and eat it too. It has likely given many people a way to align their moral compass with their need to gain status and perform well within our system.
Even if the paper is a complete fabrication, I'm convinced it has made the world a better place. I can't help but wonder if Gelman and King paused to consider the possible repercussions of their actions, and what kinds of motivations they might have had. The linked post briefly dips into ethics, benevolently proclaiming that the original authors of the paper are not necessarily bad people.
Which feels ironic, as it seems to me that Gelman and King are the ones doing wrong here.
That is not at all how science is supposed to work.
If a result can't be replicated, it is useless. Replicators should not be told to "tread lightly"; they should be encouraged. And replication papers should be published regardless of the result (assuming they are of good quality).
No, we shouldn't. Research fraud is committed by people, who must be held accountable. In this specific case, if the issues had truly been accidental, the authors would have responded and revised their paper. They did not; ergo, their false claims were likely deliberate.
That the school and the journal show no interest is equally bad, and deserving of public shaming.
Of course, this is also a consequence of "publish or perish."
But if you're going to quote the whole thing, it seems easier to just say so rather than quoting it bit by bit, interspersed with "King continues" and annotating each "I" with [King].
Institutions could do something, surely. Require that one in n papers be a replication. Only give prizes to replicated studies. Award prize money split between the first two or three independent groups demonstrating a result.
The 6k citations though ... I suspect most of those instances would just assert the result if a citation wasn't available.
If the flow of tax, student debt and philanthropic money were cut off, the journals would all be wiped out because there's no organic demand for what they're doing.
They are pushed to publish a lot, which means journals have to review a lot of stuff (and they cannot replicate findings on their own). Once a paper is published in a decent journal, other researchers may not "waste time" replicating all of its findings, because they also want to publish a lot. The result is papers getting popular even if no one has actually bothered to replicate the results, especially if those papers are cited by a lot of people and/or are written by otherwise reputable people or universities.
I often say that the "hard sciences" have progressed much more than the social/human sciences.
[1] https://en.wikipedia.org/wiki/Replication_crisis#In_medicine
With the above, I think we've empirically proven that we can't trust mathematicians any more than any other humans. We should still rigorously verify their work with diverse, logical, and empirical methods. Also, build from the ground up on solid ideas that are highly vetted. (Which linear algebra actually does.)
The other approach people are taking is foundational, machine-checked proof assistants. These use a vetted logic, and the assistant produces a series of steps that can be checked by a tiny, highly-verified checker. They'll also often use a reliable formalism to check other formalisms. The people doing this have been building everything from proof checkers to compilers to assembly languages to code extraction in those tools, so they are highly trustworthy.
But we still need people to look at the specs of all that to see if there are spec errors. There are fewer people who can vet the specs than can check the original English-and-code combos. So, are they more trustworthy? (Who knows, except when tested empirically on many programs or proofs, like CompCert was.)
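For a flavor of what that looks like, here is a trivial, hypothetical Lean 4 snippet (mine, not from the thread): the theorem statement is the spec, and the kernel mechanically checks the proof term. If the statement itself encodes the wrong thing, the checker will still happily accept it, which is exactly the spec-vetting problem above.

    -- The statement (the "spec") says addition on Nat is commutative;
    -- the proof term is checked step by step by Lean's small kernel.
    theorem sum_comm (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b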
This had been assigned many times previously. When my friend disproved the lemma, he asked the professor what he had done wrong. It turns out the lemma was in fact false, despite dozens of grad students having turned in "proofs" of it already. The paper itself still stood, as a weaker form of the lemma was sufficient for its findings, but it was still very interesting.
Yet, I believe there hasn't been much progress as compared with STEM. But it is just a belief at the end of the day. There might be some study about this out there.
All the talks they were invited to give, all the followers they had, all the courses they sold and the impact factor they built. They are not going to come forward and say, "I misinterpreted the data and made far-reaching conclusions that are nonsense, sorry for misleading you and thousands of others".
The process protects them as well. Someone can publish another paper and draw different conclusions. There is zero effort to get to the truth, to tell people what is and isn't current consensus and what is reasonable to believe. Even if it's clear to anyone who digs a bit deeper, it will not be communicated to the audience academia is supposed to serve. The consensus will just quietly shift while the heavily cited paper is still there. The talks are still out there, the false information is still propagated, while the author enjoys all the benefits and suffers none of the negative consequences.
If it functions like that, I don't think it's fair that the taxpayer funds it. It's there to serve the population, not to exist in its own world and play its own politics and power games.