The recent discussion surrounding
irreproducibility in scientific experiments should come as no surprise to those
in research fields. Indeed, it is a fundamental responsibility of every researcher
to question results and to explore potential sources of error and bias in experiments.
This should be true whether the results come from one's own work or from a
literature search.
As the number of PhD-seeking scientists increases and technology advances, it is not uncommon for researchers and science writers to feel pressure to inflate the generalizability of their results. This is especially true with the rise of fast media. The clickbait culture of the internet can inadvertently encourage the exaggeration of scientific claims, leading to “science” articles with idiotic headlines like “thinking makes you fat” (yes, that journalist was actually writing about their interpretation of a real study).
Yet, this increase in pressure for strong
results should be (and seems to be) coupled with a strengthening of scientific
rigor and scrutiny with regard to interpreting results. I’ve heard countless
anecdotes about the types of papers or results that you “used to be able to get
away with” decades ago that would not withstand peer review today. The scientific
standard may be increasingly challenging, but if our goal as scientists is to
get closer to understanding the truth about cancer biology, neurodegeneration,
etc., then we should be prepared to endure these challenges. Double-blind
studies, control groups, grant review, and other measures to reduce error and
bias may add hurdles, but they are absolutely necessary to produce more
reliable findings. I think these are a good start, but I don’t think they are sufficient. In fact,
all of these human-driven solutions exist to fix a human-generated problem, as mentioned here. In other
words, while peer review certainly helps produce more rigorous studies, the reviewing humans face the same potential pitfalls as the authoring humans.
Apart from the challenge of identifying the sources
of bias and irreproducibility in interpreting data, there is a tremendous
responsibility for those reporting science to the general public. Laypeople without
scientific training may not have as much practice critically analyzing the
controls used in an experiment, or recognizing the potential pitfalls of a
particular method. I feel strongly that people in general, not just scientists,
should understand a great deal more about statistics, bias, and how to think critically
about what they read and watch.
Take, for example, one of my favorite websites,
Spurious Correlations, which graphs correlations such as “the number of
drowning deaths each year correlates with the number of films Nicolas Cage
starred in that year.” This is a case where the output of a
statistical analysis may tempt a naïve reader into a grandiose
inference about the data. Such a misinterpretation could be innocuous, as in
the example above, but it could also take the form of an outright dismissal of scientific evidence
in favor of a popular non-scientific idea (e.g. “vaccines cause autism”) that
could halt progress and put public health at risk. For these reasons, it is the job of scientists and educators to steer the public toward a better understanding of bias in our system and to fight it with whatever tools we have,
while understanding and communicating the limits of scientific experimentation.
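The correlation-versus-causation trap can be made concrete in a few lines of Python. The numbers below are hypothetical stand-ins, not the actual Spurious Correlations figures: two made-up yearly series that happen to move together yield a Pearson coefficient near 1 despite having no causal connection.

```python
# Illustrative sketch with HYPOTHETICAL data (not the real Spurious
# Correlations numbers): two unrelated yearly counts can still show a
# striking Pearson correlation.
films = [2, 2, 3, 1, 1, 2, 3]                # hypothetical films per year
drownings = [98, 97, 110, 85, 88, 99, 112]   # hypothetical drownings per year

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(films, drownings)
print(f"r = {r:.2f}")  # a large r, yet it implies nothing about causation
```

A high r here reflects nothing more than two series that drift together; the coefficient alone cannot distinguish coincidence from mechanism, which is exactly the inference a naïve reader is tempted to make.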