The Economist article “Trouble at the Lab” raises several interesting points about the current state of scientific endeavor. It addresses the bias toward publishing “groundbreaking” data, the lack of incentives to conduct reproducibility studies, and the shortcomings of the peer review process in vetting erroneous papers. A rebuttal by Jared Horvath published in Scientific American, “The Replication Myth: Shedding Light on One of Science’s Dirty Little Secrets”, asserts that irreproducibility is inherent in the scientific process, and therefore not as big a problem as The Economist claims. However, Horvath’s article confuses negative data with unreliable data, when the two are fundamentally different.
The Scientific American article uses the discovery of Viagra as an example of the importance of publishing irreproducible data. Viagra was originally developed as a treatment for cardiovascular disease, but was repurposed when it proved more effective at treating erectile dysfunction than heart conditions. Horvath presents this story as evidence of the utility of “unreliable” data. However, the example misses the point of The Economist article, which focuses on the publication of data that is fundamentally flawed, mainly through improper statistical analysis and a bias toward believing only dramatic results. Rather than illustrating unreliable, irreproducible data, the Viagra study is actually a good example of scientists overcoming their biases and trusting their negative data. Instead of manipulating their results to support the initial hypothesis that Viagra would make a good cardiovascular drug, these scientists rejected that hypothesis and allowed their data to lead them to new conclusions about the drug.
While the Scientific American article makes a valid point that irreproducibility is a longstanding phenomenon in the sciences, I do not agree with the conclusion that irreproducibility is therefore not a problem. Scientists need stronger incentives to reproduce each other’s work, so that erroneous findings can be identified and the field can move forward. Both articles illustrate the importance of publishing negative data from well-conducted studies: by disproving previously held hypotheses, negative data can still contribute meaningfully to scientific knowledge.