Science has a bias problem.
This became clear when I typed the term “scientific bias”
into a Google search bar and 1,470,000 results popped up. Some of the results
were simple definitions of research bias, meant to help scientists recognize
it so they can avoid it. But by and large, the results were articles published
in newspapers and magazines such as The Washington Post, The Economist, Wired,
and Forbes.
However, while scrolling through these articles, I noticed
something: science does have a bias problem, but
not the one I expected (which means I went into this search biased). The real
bias problem is two-pronged, consisting of both bias in research and the bias
of those who consume scientific information.
The first prong, bias in research, has been widely written
about and is a major discussion within the scientific community. One of the
articles I found hypothesizes that some of the bias in science that
leads to false claims, or to experiments that cannot be reproduced, stems from the
pressure within academia to “publish or perish.” The theory is that this mentality
encourages scientists to cut corners on the number of replicates they run when
testing a hypothesis, leading to the publication of false positives. Other
articles attribute some of the bias in science to failings of the
pre-publication peer review process: the increasing demands on scientists
mean reviewers aren’t taking the time to fully assess what they are reading and
to catch mistakes and possible research misconduct. But the consensus seems to be
that these factors contribute to irreproducibility in science and allow for the
publication of unsubstantiated work.
While this is certainly a problem, and one taking
place in labs worldwide, the scientific community is taking steps to fix it. For
instance, Science magazine now requires rigorous description and justification
of replicates and statistical analysis, a trend that is being seen in other journals as well. In
addition to more stringent requirements, third-party websites such as
PubPeer allow researchers to anonymously review published papers
in the hopes of identifying research misconduct that can lead to
irreproducibility. PubMed Commons allows similar reviews to be
made, though they are not anonymous. Hopefully, given time and support from
scientists, the scientific community can crack down on the publication of
biased and irreproducible papers.
However, the second prong, bias in information consumption, is
potentially more problematic. The internet makes it easy to search only for
articles that support your point of view on a particular topic,
which invites confirmation bias. While anyone can fall prey to confirmation
bias, people without a scientific background are especially susceptible to it, as
they don’t have the training to critically analyze the scientific basis of what
they are reading. This allows misinformation to spread, as it did in the wake
of Andrew Wakefield’s now retracted article suggesting that
vaccines cause autism. For someone without scientific training, it is hard to
understand how something that is “wrong” could have gotten published,
especially when there appears to be so much science backing it up. It is this
that allows science to be twisted by the media and used as a weapon, as
asserted in this article. That is why bias in scientific
information consumption could be even more dangerous than bias in the
lab, because there are no real mechanisms in place to correct this sort of
confirmation bias.
Solving, or even beginning to address, science’s bias
problem will require the scientific community to bridge the gap between
academia and the general public.