I was nervous while reading the assigned articles about bias and irreproducibility. Because the word “fraud” appeared so many times, it seems not hard for researchers to fall into that category. As an undergraduate student doing independent research, I am of two minds about these critiques of the unreliability of scientific research. On the one hand, I understand the eagerness to prove the worth of one’s work, and results can never be perfect. On the other hand, I think that what makes scientific research special is the objectivity we claim for it. Despite all the critiques, alarming comments, and concerns, I found those articles constructive in two respects.
First, researchers need to give enough detail. Anecdotes, highlights of the experimental design, and the like can reduce irreproducibility. In other words, honesty does not simply mean reporting one’s findings as they are, but also coming as close as possible to sharing them in full. I think the right attitude toward publishing research is to share it like a diary, as Jeremy Berg suggested in “The reliability of scientific research”: “[investigators] made comments indicating that the experiment ‘worked’ only one out of 10 times but that successful result is the result that they published”. Adequate information reveals the presence of imperfect results, even failures.
Second, we need peer review on a broader scale. Recent studies show that people omit or rarely admit their own experimental mistakes, yet often catch mistakes made by colleagues; this shows why attention from peers is necessary. But at what point in a study would it be most helpful? Currently, post-publication peer review is becoming more and more popular, in addition to the traditional pre-publication kind, which many articles pointed out can be more or less perfunctory. Public comment sections, such as those on the eLife journal, PubMed Commons, and similar venues, give people in related fields a chance to respond. PubPeer is another platform for post-publication peer review; in my view it is more public and less field-specific, which can be helpful for future studies. I find the soundest form of peer review to be reproducing the original experiment. Failures to reproduce original experiments have been used as warning signs against published claims, but a systematic way of sharing the results of those attempts could be even more helpful.