Tuesday, January 23, 2018

Is the Strained Economics of the Scientific Enterprise a Significant Cause of Scientific Bias?

Economists are the most peculiar kind of scientists.  In fact, if I were not drawn to the world of medical technology innovation, I would certainly indulge my nerdy streak by becoming an economist.  Anyone who has read the award-winning book Freakonomics might agree (aside: Freakonomics is also a brilliant podcast for the intellectual-at-heart). So what exactly do the economy and the scientific enterprise have to do with bias in academic research?

I may be biased, yet I believe economics has everything to do with it.  We as human beings are susceptible to incentives, however benign or malignant. Any science-minded individual who keeps a pulse on the news will know that irreproducibility and, moreover, retractions of manuscripts are on the rise globally.  Where does the culpability lie?

I argue that incentives unduly influence individual investigators, yet publishers are also to blame.  It is well known that, when adjusted for inflation, funding for the scientific enterprise in the United States has been in decline since the NIH budget doubling ended in the early 2000s.  With less access to funding and a glut of Ph.D.s entering the academic job market (a worthy subject for another discussion), researchers must do more with less in order to publish. Fellow blogger Austin Nuckols is wise to note that “the culture of science, especially in the academic setting, follows a mantra of ‘publish or perish.’”  The circle of life for academic research is an ultra-tenuous one driven by supply and demand of the NIH dollar: win grant → perform research → publish → repeat.  One break in that chain is enough to sink a mid-career academic’s productivity (not to mention salary support). When jobs are uncertain every few years, it is easy to see how bias can flow from the top down, pressuring the unempowered graduate student to conduct biased research and reach conclusions “in our own image.”

Publishers are similarly incentivized to avoid reducing bias, despite calls to do so in high-profile journals (e.g., Nature, Cell).  “Novelty” sells, and who can remember the last time a reproducibility study was featured in a high-impact “vanity” journal?

Looking at this dismal state of affairs for the budding researcher, I feel incentivized to launch the inaugural issue of The Journal of Research Reproducibility, or better yet, The Journal of Failed Experiments (And How to Avoid Doing Them).  Perhaps then the odds of academic success in research will be in my favor. 
