Posted by timr 1 day ago

A flawed paper in management science has been cited more than 6k times (statmodeling.stat.columbia.edu)
700 points | 361 comments
psychoslave 1 day ago|
Social fame is fundamentally unscalable: there is only limited room on the stage, and even less in the few spotlights.

The benefits we can get from collective work, including scientific endeavors, are indefinitely large, far exceeding what can be held in the head of any individual.

Incentives are simply irrelevant as far as the global social good is concerned.

dgxyz 1 day ago||
Not even surprised. My daughter tried to reproduce a well-cited paper a couple of years back as part of her research project. It was not possible. They pushed for a retraction, but the university didn't want to do it because it would cause political issues, as one of the peer reviewers is tenured at a closely associated university. She almost immediately fucked off and went to work in the private sector.
jruohonen 1 day ago||
> They pushed for a retraction ...

That's not right; retractions should be reserved for cases of research misconduct. This is a problem with the article's recommendations too. Even if a correction is published noting that the results may not hold, the article should stay where it is.

But I agree with the point about replications, which are much needed. That was also the best part of the article: "stop citing single studies as definitive".

dgxyz 1 day ago||
I will add that it's a little more complicated than I let on here, as I don't want to identify anyone in the process. But it definitely was misconduct in this case.

I read the paper as well. My background is in mathematics and statistics, and the data was quite frankly synthesised.

jruohonen 1 day ago||
Okay, but to return to replications: publishers could incentivize them by linking replication studies directly on a paper's landing page. In fact, you could even have a collection of DOIs for these purposes, including for datasets. With this point in mind, what I find depressing is that the journal declined a follow-up comment.
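
To make the linking concrete, here is a minimal sketch of what a tool could do today: pull a paper's Crossref metadata and print any related DOIs it records. The "is-replicated-by" relation type and the DOI below are placeholders, since I'm not sure what Crossref's actual relation vocabulary covers.

    # A minimal sketch: list related DOIs from Crossref's public works API.
    # The relation name and the DOI are placeholders, not verified values.
    import requests

    def related_dois(doi):
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        resp.raise_for_status()
        # Crossref returns relations as {"relation-type": [{"id": ...}, ...]}
        return resp.json()["message"].get("relation", {})

    for rel_type, targets in related_dois("10.1234/placeholder").items():
        for target in targets:
            print(rel_type, target.get("id"))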

But the article itself is generally weird, even harmful in places. Going to social media with these things and all; we have enough of that "pretty" stuff already.

dgxyz 1 day ago||
Agree completely on all points.

However, there are two problems with it. Firstly, it's a step towards gamification, and having tried that model on reputation scoring at a fintech, it was a bit of a disaster. Secondly, very few studies are replicated in the first place unless there is demand from linked research to replicate them before building on them.

There are also entire fields which are mostly populated by bullshit generators. And they actively avoid replication studies. Certain branches of psychology are rather interesting in that space.

jruohonen 1 day ago||
> Certain branches of psychology are rather interesting in that space.

Maybe, I cannot say, but what I can say is that CS is in the midst of a huge replication crisis because LLM research cannot be replicated by definition. So I'd perhaps tone down the claims about other fields.

dgxyz 1 day ago||
Another good example of that, for sure. You won't find me making any positive comments about LLMs.
dekhn 1 day ago|||
A single failure to reproduce a well-cited paper does not constitute grounds for a retraction unless the failure somehow demonstrates the paper is provably incorrect.
kelipso 1 day ago||
It's much, much more likely that she did something wrong trying to replicate it than that the paper was wrong. Did she try to contact the authors, or discuss it with her advisor?

Pushing for retraction just like that and going off to the private sector is…idk, it's a decision.

dgxyz 1 day ago||
It went on for a few months. The source data for the paper was synthesised, and getting hold of it was like trying to get blood out of a stone, clearly because they knew they were in trouble. Lots of research money was wasted trying to reproduce it.

She was just done with it then and a pharma company said "hey you fed up with this shit and like money?" and she was and does.

edit: as per the other comment, my background is mathematics and statistics after engineering. I went into software but still have connections back to academia which I left many years ago because it was a political mess more than anything. Oh and I also like money.

cloche 1 day ago||
> Because published articles frequently omit key details

This is a frustrating aspect of studies: you have to contact the authors for full datasets. I can see why it would not have been possible to publish them in the past, given limited space in printed publications. In today's world, though, every paper should be required to have its full dataset published to a website for others to access, so they can verify and replicate.
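
The bar for this is low. A minimal sketch, assuming the dataset were deposited on Zenodo (the record ID below is a placeholder, not a real deposit):

    # A minimal sketch: list a Zenodo record's files and download links.
    # RECORD_ID is a placeholder, not a real deposit.
    import requests

    RECORD_ID = "1234567"
    rec = requests.get(f"https://zenodo.org/api/records/{RECORD_ID}", timeout=10)
    rec.raise_for_status()

    # Each file entry carries its name ("key") and a direct download link.
    for f in rec.json()["files"]:
        print(f["key"], f["links"]["self"])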

poemxo 1 day ago||
> They intended to type “not significant” but omitted the word “not.”

This one is pretty egregious.
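
One mundane safeguard against exactly this class of typo: let the analysis script generate the wording from the p-value instead of typing it by hand. A minimal sketch with made-up numbers:

    # A minimal sketch: derive "significant"/"not significant" from the
    # statistic itself, so a hand-typed "not" can never go missing.
    from scipy import stats

    a = [5.1, 4.9, 5.3, 5.0, 5.2]  # made-up sample data
    b = [5.0, 5.1, 4.8, 5.2, 4.9]

    t, p = stats.ttest_ind(a, b)
    word = "significant" if p < 0.05 else "not significant"
    print(f"The difference was {word} (t = {t:.2f}, p = {p:.3f}).")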

B1FIDO 1 day ago|
Once, back around 2011 or 2012, I was using Google Translate for a speech I was to deliver in church. It was shorter than one page printed out.

I only needed the Spanish translation. Now I am proficient in spoken and written Spanish, and I can perfectly understand what is said, and yet I still ran the English through Google Translate and printed it out without really checking through it.

I got to the podium and there was a line where I said "electricity is in the air" (a metaphor, obviously) and the Spanish translation said "electricidad no está en el aire" ("electricity is not in the air"). I was able to correct that on the fly, but I was pissed at Translate, and I badmouthed it for months. And sure, it was my fault for not proofing and vetting the entire output, but come on!

aidenn0 1 hour ago||
Similar timeframe, I used it for translating some German into English. I'm a native English speaker who has spent some time in Germany (but had not spoken any German in over a decade at this point) and quickly noticed some things were off. After reviewing the original text I realized that every single separable verb[1] that was not in infinitive form was mistranslated. This is an astoundingly bad systematic error for a machine translation program to have.

1: https://en.wikipedia.org/wiki/Separable_verb

wisty 1 day ago||
So 6,000 people cited a paper and either didn't properly read it (IMO that's academic dishonesty) or weren't able to determine that the methodology was infeasible.

No real surprise. I'm pretty sure most academics spend little time critically reading sources and just scan to see if it broadly supports their point (like an undergrad would). Or just cite a source if another paper says it supports a point.

I've heard the most brutal thing an examiner can do in a viva voce is to ask what a cited paper is about, lol.

lbcadden3 1 day ago||
>There’s a horrible sort of comfort in thinking that whatever you’ve published is already written and can’t be changed. Sometimes this is viewed as a forward-looking stance, but science that can’t be fixed isn’t past science; it’s dead science.

Actually it’s not science at all.

bronlund 1 day ago||
This likely represents only a fragment of a larger pattern. Research contradicting prevailing political narratives faces significant professional obstacles, and as this article shows, so do critiques of research that doesn't.
drob518 1 day ago||
We’ve developed a “leaning tower of science.” Someday, it’s going to fall.
Havoc 1 day ago||
Maybe that's why it gets cited? People starting with an answer and backfilling?
renewiltord 1 day ago|
Family member tried to do work relying on previous results from a biotech lab. Couldn’t do it. Tried to reproduce. Doesn’t work. Checked work carefully. Faked. Switched labs and research subject. Risky career move, but. Now has a career. Old lab is in mental black box. Never to be touched again.

Talked about it years ago https://news.ycombinator.com/item?id=26125867

Others said they'd never seen it. So maybe it's rare. But no one will tell you even if they encounter it. Guaranteed career blackball.

rcxdude 1 day ago||
I haven't identified an outright fake one, but in my experience (mainly in sensor development) most papers are at the very least optimistic or gloss over some major limitations in the approach. They should be treated as a source of ideas to try rather than counted on.

I've also seen the resistance that results from trying to investigate or even correct an issue in a key result of a paper. Even before it's published, the barrier can be quite high (and I must admit that, since it was not my primary focus and my name was not on it, I did not push as hard as I could have).

dekhn 1 day ago||
When I was a postdoc, I wrote up the results for a paper based on theories from my advisor. The paper wasn't very good: all the results were bad. Overnight, my advisor rewrote all the results, partly juicing them and partly obscuring the problems, all while glossing over the limitations. She then submitted it to a (very low prestige) journal.

I read the submitted version and told her it wasn't OK. She withdrew the paper and I left her lab shortly after. I simply could not stand the tendency to juice up papers, and I didn't want to have my reputation tainted by a paper that was false (I'm OK with my reputation being tainted by a paper that was just not very good).

What really bothers me is when authors intentionally leave out details of their method. There was a hot paper (this was ~20 years ago) about a computational biology technique ("evolutionary trace"), and when we did the journal club, we tried to reproduce their results, which started with writing an implementation from their description. About halfway through, we realized that the paper left out several key steps. We were able to infer roughly what they did, but as far as we could tell, it was an intentional omission made to keep the competition from catching up quickly.

projektfu 1 day ago|||
For original research, a researcher is supposed to replicate studies that form the building blocks of their research. For example, if a drug is reported to increase expression of some mRNA in a cell, and your research derives from that, you will start by replicating that step, but it will just be a note in your introduction and not published as a finding on its own.

When a junior researcher, e.g. a grad student, fails to replicate a study, they assume it's their technique. If they can't get it after many tries, they just move on and try some other research approach. If they claim the original study is flawed, people will just assume they don't have the skills to replicate it.

One of the problems is that science doesn't have great collaborative infrastructure. The only way to learn that nobody can reproduce a finding is to go to conferences and have informal chats with people about the paper. Or maybe if you're lucky there's an email list for people in your field where they routinely troubleshoot each other's technique. But most of the time there's just not enough time to waste chasing these things down.

I can't speak to whether people get blackballed. There's a lot of strong personalities in science, but mostly people are direct and efficient. You can ask pretty pointed questions in a session and get pretty direct answers. But accusing someone of fraud is a serious accusation and you probably don't want to get a reputation for being an accuser, FWIW.

MaxBarraclough 1 day ago||
I've read of a few cases like this on Hacker News. There's often that assumption, sometimes unstated: if a junior scientist discovers clear evidence of academic misconduct by a senior scientist, it would be career suicide for the junior scientist to make their discovery public.

The replication crisis is largely particular to psychology, but I wonder about the scope of the "don't rock the boat" issue.

mike_hearn 1 day ago||
It's not particular to psychology; the modern discussion of it just happened to start there. It affects all fields and is more like a validity crisis than a replication crisis.

https://blog.plan99.net/replication-studies-cant-fix-science...

renewiltord 1 day ago||
He's not saying it's psychology the field. He's saying the replication crisis may be because the junior scientist (most often the one involved in replication) is afraid of retribution: it's a psychological reason for fraud persistence.

I think perhaps blackball is guaranteed. No one likes a snitch. “We’re all just here to do work and get paid. He’s just doing what they make us do”. Scientist is just job. Most people are just “I put thing in tube. Make money by telling government about tube thing. No need to be religious about Science”.

MaxBarraclough 1 day ago||
I see my phrasing was ambiguous; for what it's worth, I'm afraid mike_hearn had it right: I was saying the replication crisis largely just affects research in psychology. I see now that was too narrow, but I think it's fair to say psychology is likely the most affected field.

In terms of solutions, the practice of 'preregistration' seems like a move in the right direction.
