This week’s readings on bias and
the reproducibility "crisis" were perhaps more eye-opening than anything else.
I realized how unaware I have been of the issue of experimental reproducibility, despite being part of the
larger scientific community. I knew that reproducibility was
a problem, but I generally heard about it in the context of the relatively
small percentage of researchers who fabricate data and have their papers retracted, rather
than the unknowing majority who are unsure of how to properly design
experiments and process the resulting data. The market for publishing
work that reproduces or reaffirms an existing finding, rather than reports a
novel discovery, is simply non-existent, which compounds the problem. To think
that I, personally, could unknowingly be contributing to this frankly made me
sick.
We as scientists are pushed to
publish… or perish. We need to publish papers to receive grants, we need
funding to pay researchers, and the vicious cycle repeats. However, all of this
is built on the foundation that we publish almost exclusively positive results.
A slew of negative results is not going to pay the bills. The Economist article "Unreliable
Research: Trouble at the Lab" put numbers to this argument about publishing
only positive results, and only then did I get a real look at the glaring problem. Crunching those numbers, I
realized that even a modest false-positive rate, spread across the many hypotheses researchers test,
can mean that a surprisingly large share of published positive findings are actually false; negative results, by contrast, can prove to
be far more trustworthy but, unfortunately, often remain unpublished.
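
To make that concrete, here is a minimal sketch of the kind of back-of-the-envelope arithmetic the article walks through. The inputs (1,000 hypotheses tested, 10% of them true, 80% statistical power, a 5% significance threshold) are illustrative assumptions for the sketch, not measurements from any particular field:

# Rough sketch of the false-positive arithmetic behind the argument.
# All input numbers below are illustrative assumptions.

n_hypotheses = 1000      # hypotheses tested
true_fraction = 0.10     # share of hypotheses that are actually true
power = 0.80             # chance a real effect is detected
alpha = 0.05             # chance a non-effect still tests "significant"

n_true = n_hypotheses * true_fraction          # 100 real effects
n_false = n_hypotheses - n_true                # 900 non-effects

true_positives = n_true * power                # 80 real effects detected
false_positives = n_false * alpha              # 45 spurious "discoveries"
false_negatives = n_true * (1 - power)         # 20 real effects missed
true_negatives = n_false * (1 - alpha)         # 855 correctly null results

positives = true_positives + false_positives   # 125 "positive" findings
negatives = true_negatives + false_negatives   # 875 "negative" findings

print(f"Positive results that are false: {false_positives / positives:.0%}")  # ~36%
print(f"Negative results that are wrong: {false_negatives / negatives:.0%}")  # ~2%

Under these assumptions, roughly a third of the "positive" findings are false, while only about two percent of the negative results are wrong, which is exactly why unpublished negative results are the more trustworthy half of the record.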