Monday, January 18, 2016

Different types of unreliable research call for different solutions

Our readings this week looked at multiple facets of unreliable research: deception and cheating, unreplicability, and un-generalizability.

Dan Ariely discusses cheating in the context of Wall Street, but it is easy to see how the behaviors he teases out in a laboratory setting with undergraduates could be relevant to data faking in science as well. Ariely discusses the finding that the propensity to cheat increases dramatically when one sees a member of one’s “in-group” cheating (in this case, Carnegie Mellon students who watch a student in a Carnegie Mellon hoodie cheat). Ariely also discusses how, under most circumstances, people will only cheat a little bit. He describes this as a “personal fudge factor,” related to an individual’s desire to see himself or herself as a good, moral person. You can easily imagine how this might translate into a research setting: the belief that your idea is correct (even if the data don’t support it) could lead a researcher to believe they are only cheating a little bit rather than a lot (which, to be clear, they are).

Unreliable research can also result from sloppy, improper use of statistics or poor data-collection processes that find “significance” where there is none. An article in the Economist summarizes this problem quite well. This is in many ways the most frustrating category of unreliable research, because it is the one that feels most correctable. We can disincentivize cheating, though realistically there will still be people who cheat. But rigorous and appropriate statistical analysis should be achievable for every published paper, if journals are willing to have every paper reviewed by a statistician (and, realistically, if every university is willing to foot the bill via increased paper-submission fees).
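To make the problem concrete, here is a minimal simulation of the multiple-comparisons trap the Economist article describes: run enough tests on pure noise and some of them will come out “significant” at p < 0.05. This is just an illustrative sketch (the sample sizes, number of tests, and threshold are arbitrary assumptions, not figures from the article):

```python
# A minimal simulation of how testing many hypotheses against pure noise
# yields "significant" results by chance alone. All numbers here are
# illustrative assumptions, not drawn from the Economist article.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests = 100   # 100 independent hypotheses, all of which are actually null
alpha = 0.05    # the conventional significance threshold

false_positives = 0
for _ in range(n_tests):
    # Two groups drawn from the SAME distribution: there is no real effect.
    group_a = rng.normal(loc=0.0, scale=1.0, size=30)
    group_b = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p_value = stats.ttest_ind(group_a, group_b)
    if p_value < alpha:
        false_positives += 1

# We expect roughly alpha * n_tests, i.e. about 5, spurious "findings".
print(f"{false_positives} of {n_tests} null comparisons were 'significant' at p < {alpha}")
```

At a threshold of 0.05, roughly five of the hundred null comparisons will clear the significance bar by chance; report only those five and you have manufactured “findings” from noise, which is exactly the machinery a journal statistician is positioned to catch.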

Finally, un-generalizable research is research that was properly collected and analyzed but is sensitive to small changes in experimental conditions. I would argue that these studies are the “features,” rather than the bugs, of unreliable research, as discussed in Jared Horvath’s piece for Scientific American. While research that doesn’t generalize may be disappointing to those who thought they were studying a widespread phenomenon, it still ultimately adds to our knowledge base.

These three types of unreliability differ in both their causes and their solutions, and thus should not be lumped together. Cheating, or faking data, is completely unacceptable and should have serious professional consequences. Poor use of statistics can be corrected with education, and through structural mechanisms such as journals employing professional statisticians specifically to review a paper’s data-analysis methodology. Un-generalizable research should be revealed through replications specifically designed to test the generalizability of the phenomenon: using a different mouse line, a different age range, or data collected at a different time of day. And unlike the previous two types of unreliable research, un-generalizable research shouldn’t reflect poorly on the researchers who conducted the experiments; instead, it’s a familiar trip back to the drawing board.
