Tuesday, January 19, 2016

To not be a fraud: notes from critiques of scientific research

I was nervous reading the assigned articles about bias and irreproducibility. Because the word “fraud” appeared so many times, it seems easy for researchers to fall into that category. As an undergraduate student doing independent research, I am of two minds about these critiques of the unreliability of scientific research. On the one hand, I understand the eagerness to prove the worth of one’s work, and results can never be perfect. On the other hand, I think what makes scientific research special is the objectivity we claim for it. Despite all the critiques, alarming comments, and concerns, I found the articles constructive in two respects.

First, researchers need to give enough detail. Anecdotes, notes on experimental design, and the like can reduce irreproducibility. In other words, honesty does not simply mean reporting one’s findings as they are, but also coming as close as possible to sharing them in full. I think the right attitude toward publishing research is to share it like a diary, as Jeremy Berg suggested in “The reliability of scientific research”: “[investigators] made comments indicating that the experiment ‘worked’ only one out of 10 times but that successful result is the result that they published”. Adequate information reveals the presence of imperfect results, even failures.

Second, we need peer review on a broader scale. Recent studies show that people omit or rarely admit their own experimental mistakes but can often catch mistakes made by colleagues; this shows why attention from peers is necessary. But at what point in a study would it be most helpful? Post-publication peer review is becoming more and more popular, in addition to the traditional pre-publication kind, which many of the articles described as more or less perfunctory. Public comment sections, such as those on the eLife journal and PubMed Commons, give people in related fields a chance to respond. PubPeer is another platform for post-publication peer review, one that in my view is more public and less field-specific, which can be helpful for future studies. I find the soundest form of peer review to be reproducing the data from the same experiment. Failures to reproduce original experiments have been used as warning signs against published claims, but systematic sharing of the results of those attempts would be even more helpful.


  1. I think you touch on a very interesting idea that has plagued me in reading papers for a long time now. The methods sections of papers are small, somewhat vague snapshots into the life of a scientist and researcher. In an ideal world, and with a little know-how, you should be able to reproduce any result in the paper by following the methods section as a guide. However, methods sections are pared down to fit the word count of a paper and can sometimes be circuitous in nature: a sub-section of the methods is brief and refers to another paper in which you would assume there is a more detailed account of how the experiment or analysis was done. But in looking up that paper, you find no more detail than in the paper you started with. It makes for an infuriating runaround, and you end up no further along than if you hadn't looked up the technique at all. Now I haven't determined whether these small methods sections are simply meant to allow more space for the text of the paper (the results and discussion sections), or whether they are purposely vague so that the authors can maintain the upper hand as the only group able to perform a perhaps new, novel, and difficult assay. Science is meant to build on top of itself, and researchers to build on top of each other. If that is the case, why have we not allotted more space to the methods section and allowed the reproducibility of the data to (ideally) go up as the details become better known? Is this a mark of our desire to be unique and competitive, and to not give others the same knowledge that we have? Is it a selfish practice, or is it simply conformity to the word limits of the text?

  2. I completely agree with Madeline that methods sections can be frustratingly brief and abridged. It seems that all labs are willing to throw 10 additional figures into the supplemental section (which can often be an additional paper in and of itself), but never 10 additional facts about their methods that could make them reproducible. I work primarily with cell culture, and depending on the media condition, the source of your serum (whether from ATCC or Invitrogen, etc.), or the volume you use, your cells can display tremendously different physiology or expression patterns. Take transfecting cells, for instance. A paper will often say only that the authors used Kit X or Kit Y to transfect cells with a vector, and that they obtained Z% transfection efficiency. Transfection kits often have to be optimized, from the volume of transfection reagent to the buffer and cell density. It would save science as a whole so much time and effort if little details such as "we used 10 uL of transfection reagent for 1x10^6 cells" were incorporated into the methods, and it would also help produce reproducible results and assuage any claims of scientific fraud.