Scientific bias, or the “systematic error introduced into sampling or testing by selecting or encouraging one outcome or answer over others,” remains a significant issue in modern scientific research (Merriam-Webster’s Collegiate Dictionary, n.d.). As Bethany Brookshire wrote in a recent op-ed, “after all, science is an endeavor performed by scientists — people who come with their own beliefs and biases that influence how they see the world.” To support this, Brookshire points to a 2012 study examining gender bias in the hiring process (Moss-Racusin et al., 2012). What is most interesting about this study, though, is not necessarily its results but the conflicting responses to it from surveyed men and women. Men, it turns out, consistently rated the study less favorably than women and, Brookshire writes, also tended to argue with its results and design more than their female counterparts did.
The heightened skepticism that men directed at the study’s results suggests a different path by which bias can emerge: bias expressed through the skepticism we apply to results. Marshall Shepherd, who highlights this phenomenon in a recent piece, states, “the irony is [that] many skeptics will criticize the peer review literature until they find a paper that supports their perspective.”
We, as scientists, are taught to approach scientific results with a healthy level of skepticism (especially since a recent Nature survey showed that a large percentage of labs fail to reproduce published results, calling into question the accuracy of many positive findings), but one has to wonder: at what point does our skepticism hinder our ability to form unbiased critiques? Whether that skepticism stems from our own implicit biases or from a subconscious desire to see our hypotheses supported, the scientific community must ask how harshly we critique unexpected or unfavorable results when those results challenge our own research and beliefs.