Stepping out of the ivory tower of academia, we come face-to-face with a public that has a growing distrust of science. When bad science makes its way into the media, it feeds that distrust by handing skeptics leverage for their arguments – if scientists said it, it must have been true, right? For example, we can point to study after study showing that vaccines do not cause autism, but one heavily flawed paper from 1998 has convinced thousands of new parents that vaccinating their children is unnecessary and even dangerous, despite the paper's eventual retraction (thanks, Andrew Wakefield). Biomedical science is probably the most publicly discussed field, because bench results are expected, at least in theory, to reach humans as treatments and cures.
The “publish or perish” mentality has driven scientists to value results over process. And who could blame us, when our ability to do research is funded based on our ability to produce novel results, and our skill at winning that funding is what keeps us employed? With the pressure on to achieve the desired results in the shortest possible time, we arbitrarily decide that a sample size of three is enough to detect an effect, if one exists. Results in hand, we open the door for our intrinsic biases to seep into every step of our data analysis as we chase p < 0.05. Worse, we encourage this behavior at every tier of the lab – it starts with the PI, who is thankful the data will fit nicely into the grant renewal, and trickles down to the relieved graduate students, who can write the data up into a manuscript and check a box on their graduation requirements.
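To put a number on that sample size of three, here is a minimal power-calculation sketch in Python using statsmodels. The test choice (two-sample t-test), effect size (Cohen's d = 0.8, conventionally "large"), and alpha level are illustrative assumptions on my part, not a claim about any particular study:

```python
# A rough power-analysis sketch for a two-sample t-test.
# Assumptions (illustrative): standardized effect size d = 0.8,
# two-sided alpha = 0.05, equal group sizes.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power achieved with n = 3 per group:
power_n3 = analysis.power(effect_size=0.8, nobs1=3, alpha=0.05)
print(f"Power with n = 3 per group: {power_n3:.2f}")
# -> roughly 0.12: even a real, large effect would be
#    missed almost 90% of the time.

# Sample size per group needed for the conventional 80% power:
n_needed = analysis.solve_power(effect_size=0.8, power=0.8, alpha=0.05)
print(f"n per group for 80% power: {n_needed:.0f}")
# -> about 26 per group, nearly an order of magnitude more than 3.
```

Under those assumptions, an n of 3 gives the experiment roughly a one-in-eight chance of detecting even a large, real effect – and any p < 0.05 it does produce is disproportionately likely to be noise dressed up as a finding.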
This culture perpetuates inadequate training in the statistical design and analysis of experiments, generates results that cannot be replicated, and sustains a cycle of scientists unable to recognize an inherently flawed study. If the people considered well-informed cannot identify these issues, it is almost certain that a layperson cannot distinguish statistically sound science from corner-cutting science.
We owe it to the public, and to ourselves, to do better.