Sunday, January 15, 2017

Reproducibility… or rather accountability “crisis”

This week’s readings on bias and the reproducibility “crisis” were perhaps more eye-opening than anything else. I realized how unaware I, as part of the larger scientific community, had been of the issue of experimental reproducibility. I knew reproducibility was a problem, but I generally hear about it in the context of the relatively small percentage of researchers who fabricate data and have their papers retracted, rather than the unknowing majority who are unsure of how to properly design experiments and process the resulting data. The incentive to publish work that reproduces or reaffirms an existing finding, rather than a novel discovery, is simply non-existent, which compounds the problem. To think that I, personally, could unknowingly be contributing to this frankly made me sick.

We as scientists are pushed to publish… or perish. We need to publish papers to receive grants, we need funding to pay researchers, and the vicious cycle repeats. All of this, however, is built on the expectation that we publish almost exclusively positive results; a slew of negative results is not going to pay the bills. The Economist article “Unreliable Research: Trouble at the Lab” broke this publication bias down into numbers, and only then did I get a real look at the glaring problem. Because only a small fraction of the hypotheses we test turn out to be true, even a modest false-positive rate can mean that a surprisingly large share of published positive results are wrong; negative results, by contrast, are far more trustworthy but, unfortunately, often remain unpublished.
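The arithmetic behind that argument is simple enough to sketch. The numbers below are illustrative assumptions in the spirit of the Economist’s example (1,000 hypotheses, 10% of them true, 80% statistical power, a 5% significance threshold), not figures quoted from the article:

```python
# Back-of-the-envelope calculation: how trustworthy are positive vs.
# negative results under typical testing assumptions? (All parameter
# values here are illustrative, not taken from the Economist article.)

n_hypotheses = 1000   # hypotheses tested
p_true = 0.10         # fraction of hypotheses that are actually true
power = 0.80          # chance a true effect is detected
alpha = 0.05          # chance a false hypothesis still yields a "positive"

n_true = n_hypotheses * p_true       # 100 true hypotheses
n_false = n_hypotheses - n_true      # 900 false hypotheses

true_pos = n_true * power            # correct positive results
false_pos = n_false * alpha          # spurious positive results
false_neg = n_true * (1 - power)     # missed true effects
true_neg = n_false * (1 - alpha)     # correct negative results

positives = true_pos + false_pos     # all "significant" findings
negatives = false_neg + true_neg     # all null findings

print(f"Positive results that are wrong: {false_pos / positives:.0%}")
print(f"Negative results that are wrong: {false_neg / negatives:.0%}")
```

Under these assumptions, roughly a third of the positive results are false, while only about 2% of the negative results are — which is exactly why the unpublished negatives can be the more reliable half of the literature.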

Jared Horvath penned a more optimistic perspective, arguing that many studies by the likes of Galileo, Dalton, and others were not reproducible by the scientists who followed. In these cases, the failure to reproduce the original work did not make those scientists’ discoveries incorrect; instead, their findings and theories sparked further questions and new theories for future scientists to pursue. Horvath’s outlook was refreshing: “if replication were the gold standard of scientific progress, we would still be banging our heads against our benches trying to arrive at the precise values Galileo reported.” We should focus less on how reproducible a specific result is and more on holding ourselves accountable for the experiments we design, the results we publish, and the literature we review, taking the proper precautions to combat our inherent biases.
