Monday, January 18, 2016

Modern Science Requires Modern Guidelines

After watching Dan Ariely’s TED talk, Our Buggy Moral Code, I found myself astounded by the clear parallels between Dan’s social psychology experiments and the current state of the scientific community. Briefly, an individual’s propensity to cheat to a certain degree (what Dan calls the personal fudge factor) is governed not only by the associated risk, but also by the perception of others, chiefly those within our own community. These same forces can easily sway scientists when bending the rules is perceived as beneficial to the funding of their work or to the legitimacy of their beliefs and ideas. This can manifest in something as simple as omitting a proper experimental control, or as serious as turning a blind eye to confounding data on a “make or break” grant submission.
While instances of outright data fabrication are few and far between, according to groups such as Retraction Watch and The Scientist, the flood of experiments published in highly regarded journals that turn out to be irreproducible is only beginning to come to light. Editorials on this subject first began to surface from names like Nature in 2012 with the release of “Must try harder,” an article arguing that an epidemic of scientific “sloppiness” has arrived, citing the overwhelming number of novel cancer therapies that fail to reach clinical trials because the underlying pre-clinical data cannot be reproduced. In recent memory, nothing screamed “non-reproducible” at me quite like the story of Dr. Charles Vacanti at Brigham and Women’s Hospital in Boston, MA. The publication was first heralded as the greatest advancement in stem cell technology of this century: STAP. Its retraction, and the verdict of scientific misconduct that followed, destroyed the careers of several highly regarded scientists and contributed to the suicide of one of the Japanese co-authors. This episode could be chalked up to the ever-present “publish or perish” mindset inherent in running a successful lab in today’s funding environment, or simply to the presence of one dishonest scientist with an enlarged personal fudge factor. Regardless of the cause, these events demand a proactive move toward the dissemination of highly reproducible studies, assessed through strict guidelines imposed by the leading scientific journals. This tenet is supported by Dan’s social experiments, in which introducing an honor code reduced cheating across the board.
Other editorials have argued that “Reproducibility will not cure what ails science,” holding that open access to data is the only real “cure.” With the advent of big-data experiments, the opportunity for data mishandling is built into the very nature of the experimental plan. Personally, I could not agree more with the push for more stringent data reporting, covering not only the raw data and calculations but also the final statistical analysis of such studies. PubMed Commons aims to provide a complementary approach, offering a forum in which reproducibility and many other subjects can be discussed within the scientific community in relation to specific publications. At the end of the day, experimental integrity lies in the hands of the scientists, but its affirmation rests at the sole discretion of peer review.
