Aware of it or not, we all have biases. In science, that could mean expecting a certain outcome from our experiments, or hoping to find some new and exciting revelation in our field. I don't think most scientists consciously act on their biases, but bias is hard to avoid. The
demand for cures and miracles, the idea of publish or perish, or just wanting
to graduate 6 years into your PhD can lead even the most well-intentioned
researcher astray. These pressures may push scientists to overstate their results to appeal to high-impact journals, and this is where peer review is meant to intervene. Peer review has been widely hailed as the way the scientific community maintains its integrity, by vetting the validity of experiments and the conclusions drawn from them. Ideally, when a manuscript is submitted, it is sent
to other scientists in that field, who comb through the experiments, results, and conclusions and point out any mistakes, inconsistencies, methodological flaws, and outright fictitious data. The reality is that
this system is flawed and too many mistakes get past this review process. The
fault does not lie entirely with the reviewers, who are typically busy with research of their own. It is impractical to expect them to juggle running their own labs and writing their own manuscripts while also setting aside biases of their own. So
how do we strengthen the peer review process so that fewer mistakes slip
through the cracks? I think platforms like PubPeer can help tip the scales toward maintaining the integrity of science, which matters now more than ever as public trust in science wanes. Opening a paper to the scientific community at large brings a far broader range of expertise to bear, surfacing issues and improvements that benefit science as a whole. They say it takes a village to raise a child, and science is no different.