Monday, January 16, 2017

Irreproducible complexity

A recent article in The Economist explored the "replication crisis" in scientific research: many recent papers have pointed out that research often cannot be reproduced by outside groups and, in many cases, not even by the original labs. The article then extrapolates from this "crisis" to a huge blow to the validity of science as a means of assessing claims. I find that a bit of an overreach from what I think many would agree is a real issue in the community.
Yes, we are all taught as young scientists that replication is a key part of scientific exploration. Any result I obtain in my lab should be replicable within my lab and by other researchers. But while replication is a key part of science, it is just that: a part. A single experimental design isn't definitive support for a hypothesis by itself; it should be part of a larger body of experiments, all designed to assess the plausibility of a given hypothesis by probing the idea with different approaches and looking for results that direct us toward accepting or rejecting it. It is an issue if only one lab can get a given result, or worse, if that lab cannot consistently obtain the same result, but the scientific community shouldn't be taking any one experiment or paper as absolute validation of a claim in the first place. Science advances through a winnowing process whereby the next set of experiments inspired by a claim will either strengthen or weaken the hypothesis behind it.
As a young scientist, I tend to think of several things when I read a new paper making a novel claim:
1 - How does this paper fit in with the larger body of research? Is it incremental progress from previous work, or a massive shift in understanding? To quote Carl Sagan, "Extraordinary claims require extraordinary evidence."
2 - Bigger sample sizes tend to be better. While not foolproof, if all else is equal, a bigger sample is probably better than a small one. An example that comes to mind is the now-retracted Andrew Wakefield paper linking the Measles, Mumps, and Rubella vaccine to autism. Among its many issues, it drew conclusions from an n = 12.
3 - What are the effect sizes? In my own work I am quite wary of chasing small effects, and we should be similarly wary of those in the work of others, even if it gets into a major journal.
4 - This is one I hope to improve upon in this class: are the statistics used valid to support the claims and limit bias? Does a statistically significant result mean that the result is of biological significance?
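That last distinction, between statistical and biological significance, is easy to demonstrate with a toy simulation. The sketch below is my own illustrative example, not anything from the papers discussed: with a large enough sample, a biologically trivial difference between two groups (here, 0.02 standard deviations) still yields a tiny p-value, while the effect size (Cohen's d) stays negligible. It uses a Welch-style t statistic with a normal approximation for the p-value, which is reasonable at this sample size.

```python
import math
import random

random.seed(0)

n = 200_000
# Two groups whose true means differ by a trivial 0.02 standard deviations.
a = [random.gauss(0.00, 1.0) for _ in range(n)]
b = [random.gauss(0.02, 1.0) for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Welch's t statistic; with n this large it is approximately standard normal,
# so we can get a two-sided p-value from the normal CDF (via math.erf).
se = math.sqrt(var(a) / n + var(b) / n)
t = (mean(b) - mean(a)) / se
p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

# Cohen's d: the mean difference in pooled-standard-deviation units.
d = (mean(b) - mean(a)) / math.sqrt((var(a) + var(b)) / 2)

print(f"p-value = {p:.2e}, Cohen's d = {d:.3f}")
```

The p-value comes out far below 0.05 even though d is around 0.02, well under any conventional threshold for even a "small" effect. Point 3 above is exactly this situation in reverse: a significant p-value alone tells you nothing about whether the effect is big enough to matter.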
