After reading the posted articles
on irreproducibility and bias in science, I am surprised that there are not
more safeguards in place to combat these issues. Anonymous post-publication
peer review and the use of Bayesian statistics to justify redoing an experiment seem like
common-sense measures. Why have they not become standard practice in the
scientific community?
As the article “Trouble at the Lab”
states, “more than half of positive results could be wrong.” This claim traces back
to John Ioannidis’ 2005 paper, “Why Most Published Research Findings Are False,”
which demonstrated how even a seemingly small rate of false positives can swamp the
genuine discoveries (the sketch below works through the arithmetic). When I connect
this thought to my own research, I am horrified.
What if the claims that helped me develop my experimental theory are
unreliable? Though they were published in peer-reviewed journals, perhaps their
results do not reflect reality. These “discoveries” might not have been
discoveries at all, but simply instances in which the data told an incorrect
story. Because research builds on previously published results, a single false
published result can set off a chain reaction of incorrect assumptions. How
can this chain of events be halted?
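To make the arithmetic concrete, here is a minimal sketch using the illustrative numbers from the Economist piece (1,000 hypotheses tested, 10% actually true, 80% statistical power, a 5% false-positive rate); the figures and variable names are assumptions for illustration, not data from any particular study:

```python
# Sketch of the false-positive arithmetic behind "Trouble at the Lab".
# All numbers are illustrative assumptions, not results from any study.

n_hypotheses = 1000   # hypotheses put to the test
prior_true   = 0.10   # fraction of hypotheses that are actually true
power        = 0.80   # chance a real effect is detected
alpha        = 0.05   # chance a null effect yields a false positive

true_effects = n_hypotheses * prior_true    # 100 real effects
null_effects = n_hypotheses - true_effects  # 900 null effects

true_positives  = true_effects * power      # 80 real effects detected
false_positives = null_effects * alpha      # 45 spurious "discoveries"

fraction_wrong = false_positives / (true_positives + false_positives)
print(f"Fraction of positive results that are wrong: {fraction_wrong:.0%}")  # ~36%
```

Even under these charitable assumptions, roughly a third of positive results are wrong; if power drops to 20%, as Ioannidis argued is common in practice, the false positives (45) outnumber the true ones (20), and more than half of positive results are wrong.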
Statistical analysis revealed that
irreproducibility is a rampant issue across life-science research.
I believe that statistics can likewise be used to combat the
problem. Even though “most scientists are not statisticians,” accepting the
explanation laid out by Ioannidis should be a prerequisite for performing
research. It should push scientists to replicate experiments and not be
fooled by false positives. Perhaps, then, greater care will be taken to
distinguish discoveries that are statistical anomalies from discoveries that
reflect the laws of science. Because of the nature of their work, scientists
should take it upon themselves to become as versed as possible in statistics. The
two are intrinsically intertwined, and that fact can no longer be ignored by the
scientific community.
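As a rough illustration of the Bayesian reasoning mentioned at the start, here is a short sketch, again with assumed numbers, showing why redoing an experiment is so valuable: a single independent replication can raise the probability that a positive finding is real from coin-flip territory to near certainty.

```python
# Bayesian updating for a replicated experiment; all inputs are assumptions.

def posterior_true(prior: float, power: float, alpha: float) -> float:
    """P(effect is real | positive result), by Bayes' rule."""
    return (power * prior) / (power * prior + alpha * (1 - prior))

prior = 0.10   # assumed prior probability that the hypothesis is true
power = 0.80   # assumed statistical power of each experiment
alpha = 0.05   # significance threshold

after_one = posterior_true(prior, power, alpha)      # first positive result
after_two = posterior_true(after_one, power, alpha)  # independent replication

print(f"After one positive result: {after_one:.2f}")  # ~0.64
print(f"After a replication:       {after_two:.2f}")  # ~0.97
```

Under these assumptions, one positive result leaves a 36% chance the finding is spurious, while a replication shrinks that chance to about 3%; this is exactly the kind of calculation that could justify redoing an experiment before building on it.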