Monday, January 18, 2016

Maybe it's not so bad that experiments can't always be reproduced

As many recent articles have pointed out, much scientific research is not reproducible, which poses a troubling problem for the scientific field. Research that can't be reproduced can waste millions of taxpayers' dollars when scientists chase a hypothesis they believe is well supported by evidence, but which later doesn't pan out. It can lead people to jump excitedly on the next "miracle" cure. It can lead to the disgraceful ruin of careers when scientists must retract their work.

I will not deny those very negative consequences; however, I found that Jacob Hovarth's article in Scientific American added a much-needed optimistic perspective to the discussion.

Given our limitations as human beings, absolute truth is elusive. We will never be able to discover Truth definitively. At best, we can study and observe the world around us and, based on those observations, come up with explanations, which we test again and again, looking for flaws. Most scientists are taught at some point that it is impossible to "prove" a hypothesis. Instead, we gather evidence to suggest that our hypothesis is correct. Even after we gather some evidence in support of a hypothesis, it is always possible that we will come across additional findings that capture some different aspect of the truth and that run counter to the predictions of that hypothesis.

As Hovarth points out, whether new evidence supports or contradicts earlier evidence, we learn something either way. Except in cases of outright fraud or data manipulation (which, in my opinion, are separate issues), contradictory evidence does not negate the value of previous findings. And in many cases, the search for an explanation for discrepancies between results can lead to important discoveries that might not have been explored otherwise.

The scientific field should put mechanisms in place to incentivize researchers to conduct replication experiments. At face value, these experiments may seem like a pointless use of money and resources, but in the long run they could make research more efficient. For example, if replication experiments with a bacterium reveal that some labs are unknowingly using a strain with a mutation in a virulence gene, future researchers will ensure they work with the correct strain and stop wasting money conducting experiments in a strain that lacks clinical relevance.


  1. How the irreproducibility of scientific data impacts the general public and their tax money is an important point that I personally lose sight of a lot. Though we are always trying to be successful in producing data and writing grants, it's easy to forget who actually pays for the grants: the public. Remembering this can help us all understand the urgency of figuring out why some data cannot be reproduced.
    I think you bring up an interesting point that by investing in scientists to reproduce data for quality control, we will be saving money in the long run. With the "publish or perish" attitude in science, it's hard to imagine a career focused solely on reproducing others' work. However, if the scientific community can agree on the importance of replication experiments, perhaps this could become a new "normal." Moreover, by emphasizing that in understanding discrepancies in science we can perhaps learn something new, I think it could become acceptable for this to be a new niche career within science.

  2. I've sometimes thought about whether an occupation devoted solely to reproducing published science would be feasible - it is certainly a very intriguing idea, and in an ideal world, I think it would be extremely useful for the scientific community. In a way, though, I think "scientists attempting to reproduce others' results" already happens while generating new hypotheses. When someone reads a paper or hears about an interesting finding and then comes up with a hypothesis based on that piece of information, one of the first things that happens (or I guess, SHOULD happen) is that this person attempts to reproduce the reported results before continuing to test their own hypothesis. However, this is contingent on others reporting and publishing their materials and methods in enough detail that attempting to reproduce the data is feasible in the first place. My question would be: say we are in this ideal world where there is a group of scientists whose sole job is to reproduce data. What should we do with data that ends up not being reproducible? And how would we go about this so as to make the most of taxpayer money? It would probably be more wasteful to attempt to reproduce EVERY experiment or study submitted to a journal, but doing so could also save the public a lot of heartache and trust issues if a study ended up not being reproducible. Double-edged sword?