Friday, April 28, 2017

Only 46%!


In a 2013 article published in PeerJ entitled "On the reproducibility of science: unique identification of research resources in the biomedical literature," Nicole A. Vasilevsky and colleagues found that only 46% of studies in biomedical journals were transparent enough to provide basic information about a number of critical factors, including: strain of model organism, antibodies used, knockdown reagents, constructs, and cell lines. Furthermore, Vasilevsky et al. checked whether the data from the publications had been deposited in a data repository, and found that most had not. They examined a wide variety of journal metrics, including impact factor, subject matter, and reporting requirements, and found no correlation between any of these factors and whether an author published adequate specifics of the experiment.

On a personal level, I have tried to reproduce a number of publications in my own work, and in every case I have been unable to replicate the results. (And sometimes, even when the methods are clearly laid out, replication still fails!) The most baffling aspect to me is that so much work goes into planning, executing, analyzing, and submitting an experiment that it seems ridiculous not to describe it more clearly. Furthermore, all current work builds on previous work, so letting so much time and effort rest on flimsy assumptions about experimental design is a disservice to the entire scientific community. Scientists with vested interests in patents, corporations, or other conflicting arrangements may not wish to publish the specifics of their methods. However, as a scientist uninfluenced by such external factors, I tend to distrust any author who does NOT publish their full data set, or at the very least invite readers to contact the author if they wish to see it (because not all data types have online repositories, though most do).

What is there to gain by not specifying resources? The fear of being usurped by another group racing toward the same question far outweighs the actual probability of getting scooped. I understand that there are exceptions to these rules because of possible adverse health consequences (for example, identifying certain virulent strains of Ebola or other superbugs). However, the chance that this applies to all 46% of studies underreporting appropriate data is considerably low. More likely, the cause is a lack of rigor in the editing and peer review process, combined with a conscious or semi-conscious assumption that readers will not need or want every piece of data. But if we all took a step back and reorganized how we approach publication and review, there would be no reason not to be as transparent as possible.

Below is a figure from Vasilevsky et al. 2013. It identifies each type of resource and the fraction of studies from each field that adequately identifies the specifics of that resource.
