The scientific community is built upon the idea that the best way to answer a question is through logical experimental design, redundant testing, and unbiased analysis of results. Many members of the scientific community sought out its membership to sequester themselves from the ideological, emotional, and political turmoil that so frequently drives major discussions in the rest of society. We often see ourselves as the cogent alternative to such trivialities. However, there are two very important points that we should keep in mind whenever we start to feel high and mighty:
Scientists are human beings. And as human beings we are susceptible to all the flaws and quirks of human perception that put us at risk of misrepresenting our results. One particularly concerning tendency is confirmation bias, the propensity to look for evidence only in support of the expected result. Discussions of confirmation bias are often trotted out on the world stage to explain large, conflicting results, but it is equally at play when a scientist chooses to quantify this cell instead of that cell because it looks more like it should.
Just as concerning is the human tendency to cheat, just a little, if it is beneficial and difficult to prove. Dan Ariely, a behavioral economist, discusses this fudge factor in depth in his TED Talk about our "buggy" moral code. Of specific interest is his evidence that we are much more likely to fudge our own results if doing so is an accepted part of our in-group, our community. Therefore, if a practice (even a wrong one) is common among scientists, it is likely to be perpetuated, especially if it is perceived as beneficial.
This conveniently leads us to point number two.
Humans suck at statistics. We do. We can't conceptualize our chance of winning the Powerball. We can't understand how big a million is. And we most definitely can't understand what statistical analyses actually mean. Too often, scientists perform statistics in a retrospective and perfunctory, fill-in-the-blank way. There is a dearth of understanding of how to interpret statistical results, because people can't easily tell the difference between a chance occurrence and a statistically significant one, or how numbers relate to each other.
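To make that last point concrete, here is a minimal sketch in Python (my own illustration, with made-up data, not from any study mentioned here) of why a single p < 0.05 is not the same as a real effect: run enough comparisons on pure noise and some will look "significant" by chance alone.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulate 20 "experiments" where the null hypothesis is true by
# construction: both groups are drawn from the same distribution,
# so any "significant" result is a false positive.
n_experiments = 20
false_positives = 0

for _ in range(n_experiments):
    control = rng.normal(loc=0.0, scale=1.0, size=30)
    treated = rng.normal(loc=0.0, scale=1.0, size=30)  # no real effect
    _, p = stats.ttest_ind(control, treated)
    if p < 0.05:
        false_positives += 1

# With a 0.05 threshold, roughly 1 in 20 null comparisons will cross
# the line by chance alone.
print(f"{false_positives} of {n_experiments} null comparisons had p < 0.05")
```

Run it a few times with different seeds and you will typically see a false positive or two per batch, which is exactly what a 0.05 threshold promises. Cherry-pick which comparison to report, and noise becomes a publishable finding.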
Perhaps this is why sources report that fewer than half (and that's being generous) of published scientific studies can be replicated, which could contribute to the growing trend of public disbelief in scientific findings. As scientists, it is hard not to take this personally or to get angry when confronted with a person who misuses data to fuel their own argument. But really, is there any difference between a congressman reformatting data to attack Planned Parenthood and you avoiding some questionable data to get a Nature paper?