“The idea that the same experiments always get the same results, no matter who performs them, is one of the cornerstones of science’s claim to objective truth. If a systematic campaign of replication does not lead to the same results, then either the original research is flawed…or the replications are…Either way, something is awry.”
Perhaps one of the most prevalent biases is the assumption that doing X will always result in Y. A scientist hoping to confirm previous findings repeats protocol X on the assumption that it will replicate result Y. A doctor prescribes treatment X because it is expected to produce outcome Y, alleviating the patient’s ailment.
Relying on definitive X=Y thinking is comforting in everyday life. We drive route X every morning because it leads us to our work at destination Y. Yet we know that in day-to-day commuting, any number of events could keep route X from delivering us to destination Y: on Tuesday morning we may be forced into a detour halfway along route X by a tree that has unexpectedly fallen across the road. In science, we adhere to randomization, proper controls, blinding ourselves to our samples, significance thresholds such as p < 0.05, and so on, to keep our results from being thwarted by unpredictable fallen trees. Yet, as an article in The Economist details, many studies do not repeatedly arrive at result Y, despite adhering to route X.
The author points to a number of reasons why studies may fail to replicate consistently: review processes are less meticulous at some journals than at others, authors omit methodological details that others need to reproduce protocols accurately, and mounting pressure to publish quickly may promote quantity over quality of scientific results. I wonder, though, whether fixing these institutional problems would actually produce marked increases in reproducibility. If so, we could agree with the author’s claim that irreproducibility is the product of flawed original research, flawed replications, or both. But if irreproducibility persisted even where proper protocols, assays, sample populations, and analyses were implemented and institutional issues were corrected, then I believe occasional unforeseen replication outcomes may accurately reflect the complexity of the answers we seek, the illnesses we investigate, and the reality in which we live.