Posted by timr 1 day ago
The benefits we can get from collective works, including scientific endeavors, are indefinitely large; they far exceed what can be held in the head of any one individual.
Incentives are simply irrelevant as far as global social good is concerned.
That's not right; retractions should be reserved for cases of research misconduct. This is a problem with the article's recommendations too: even if a correction is published noting that the results may not hold, the article should stay where it is.
But I agree with the point about replications, which are much needed. That was also the best part of the article, i.e. "stop citing single studies as definitive".
I read the paper as well. My background is in mathematics and statistics, and the data was, quite frankly, synthesised.
But the article is generally weird or even harmful too. Going to social media with these things and all; we have enough of that "pretty" stuff already.
However, there are two problems with it. Firstly, it's a step towards gamification, and having tried that model for reputation scoring at a fintech, it was a bit of a disaster. Secondly, very few studies are replicated in the first place unless follow-on research needs to replicate them before building on them.
There are also entire fields which are mostly populated by bullshit generators. And they actively avoid replication studies. Certain branches of psychology are rather interesting in that space.
Maybe, I cannot say, but what I can say is that CS is in the midst of a huge replication crisis because LLM research cannot be replicated by definition. So I'd perhaps tone down the claims about other fields.
Pushing for retraction just like that and going off to the private sector is… idk, it's a decision.
She was just done with it then and a pharma company said "hey you fed up with this shit and like money?" and she was and does.
edit: as per the other comment, my background is mathematics and statistics after engineering. I went into software but still have connections back to academia which I left many years ago because it was a political mess more than anything. Oh and I also like money.
This is a frustrating aspect of studies: you have to contact the authors for the full datasets. I can see why publishing them wasn't possible in the past, given the limited space in printed publications. In today's world, though, every paper should be required to publish its full dataset to a website so others can access it to verify and replicate the work.
This one is pretty egregious.
I only needed the Spanish translation. Now, I am proficient in spoken and written Spanish and can perfectly understand what is said, and yet I still ran the English through Google Translate and printed it out without really checking it.
I got to the podium and there was a line where I said "electricity is in the air" (a metaphor, obviously), and the Spanish translation said "electricidad no está en el aire" ("electricity is not in the air"). I was able to correct that on the fly, but I was pissed at Translate, and I badmouthed it for months. And sure, it was my fault for not proofing and vetting the entire output, but come on!
No real surprise. I'm pretty sure most academics spend little time critically reading sources and just scan to see if it broadly supports their point (like an undergrad would). Or just cite a source if another paper says it supports a point.
I've heard the most brutal thing an examiner can do in a viva voce is to ask what a cited paper is about, lol.
Actually it’s not science at all.
Talked about it years ago https://news.ycombinator.com/item?id=26125867
Others said they’d never seen it, so maybe it’s rare. But no one will tell you even if they encounter it: guaranteed career blackball.
I've also seen the resistance that results from trying to investigate, or even correct, an issue in a key result of a paper. Even before it's published, the barrier can be quite high (and I must admit that, since it wasn't my primary focus and my name wasn't on the paper, I did not push as hard as I could have).
I read the submitted version and told her it wasn't OK. She withdrew the paper and I left her lab shortly after. I simply could not stand the tendency to juice up papers, and I didn't want to have my reputation tainted by a paper that was false (I'm OK with my reputation being tainted by a paper that was just not very good).
What really bothers me is when authors intentionally leave out details of their method. There was a hot paper (this was ~20 years ago) about a computational biology technique ("evolutionary trace"), and when we did the journal club, we tried to reproduce their results, which started with writing an implementation from their description. About halfway through, we realized that the paper left out several key steps. We were able to infer roughly what they did, but as far as we could tell, it was an intentional omission made to keep the competition from catching up quickly.
When a junior researcher, e.g. a grad student, fails to replicate a study, they assume it's their technique. If they can't get it after many tries, they just move on and try some other research approach. If they claim it's because the original study is flawed, people will just assume they don't have the skills to replicate it.
One of the problems is that science doesn't have great collaborative infrastructure. The only way to learn that nobody can reproduce a finding is to go to conferences and have informal chats with people about the paper. Or maybe if you're lucky there's an email list for people in your field where they routinely troubleshoot each other's technique. But most of the time there's just not enough time to waste chasing these things down.
I can't speak to whether people get blackballed. There's a lot of strong personalities in science, but mostly people are direct and efficient. You can ask pretty pointed questions in a session and get pretty direct answers. But accusing someone of fraud is a serious accusation and you probably don't want to get a reputation for being an accuser, FWIW.
The replication crisis is largely particular to psychology, but I wonder about the scope of the "don't rock the boat" issue.
https://blog.plan99.net/replication-studies-cant-fix-science...
I think perhaps blackball is guaranteed. No one likes a snitch. “We’re all just here to do work and get paid. He’s just doing what they make us do”. Scientist is just job. Most people are just “I put thing in tube. Make money by telling government about tube thing. No need to be religious about Science”.
In terms of solutions, the practice of 'preregistration' seems like a move in the right direction.