
Tuesday, January 19, 2016

The Paradox of Trust in Science

As an undergraduate student, I find that most, if not all, of my lectures incorporate information that students naturally presume is credible, reliable evidence. Very rarely does a professor point out the shortcomings of data or urge us to approach publications from a critical perspective. Reading a PubPeer article and learning that 25% of 120 randomly selected published papers contained image data errors made me ponder why we so readily accept literature as truth. I had long been under the impression that if a study was published, it had undergone a rigorous review process and was subsequently immortalized in an impressive journal, so what was there to question? It was only after I joined a lab that I began to pause and truly assess studies within the growing scope of my knowledge. I also became increasingly aware of one of the greatest challenges facing research scientists: striking a balance between the demanding pressure to publish and remaining conscientious about data quality throughout.

The case of the overhyped medical press does not surprise me, however. I have personally fallen prey to such articles, namely those that occupy headlines with "breakthrough findings" that everyday items are carcinogenic. It's no secret that journalists know how to grab the layman's attention, albeit with content that isn't scientifically sound. Often, such articles are followed by disclaimers that, for example, the hot new weight loss drug working "miracles" is contingent upon x, y, and z, and/or has yet to be tested in humans.

On another note, it is reassuring to know that people are making strides towards peer feedback initiatives that draw on widespread expertise (PubPeer) or, in the case of PubMed Commons, establish an ongoing review system. But there is much more to be done to achieve the level of scientific credibility humanity deserves. As we discussed in lecture, a primary objective of statistics is to identify and avoid bias; it is therefore imperative that we, as a scientific community, understand and apply the appropriate tests and tools to prevent unacknowledged flaws from propagating through our understanding. Further, we must better integrate sound statistical knowledge into our view of the research process, specifically what Bayesian analysis tells us a 5% false positive rate means for our ability to reproduce a given result. I hope that all contributors to this dynamic field can become more well-versed in statistics to better uphold ethical standards and, ultimately, allow the public to regain its trust in the currently misleading realm of scientific research.
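
To make that Bayesian point concrete, here is a minimal sketch in Python. The prior (10% of tested hypotheses are true) and the power (80%) are illustrative assumptions on my part, not figures from the lecture:

alpha = 0.05   # false positive rate (the usual significance threshold)
power = 0.80   # probability a study detects a real effect
prior = 0.10   # assumed fraction of tested hypotheses that are true

true_positives = prior * power            # 0.08
false_positives = (1 - prior) * alpha     # 0.045

# Positive predictive value: P(hypothesis is true | significant result)
ppv = true_positives / (true_positives + false_positives)
print(f"PPV = {ppv:.2f}")  # ~0.64

Under these assumptions, a "significant" result reflects a true hypothesis only about 64% of the time, which is precisely why so many published findings fail to reproduce.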

Repercussions in Research



Having perused the articles on biased research, it became evident to me that the lack of repercussions in research contributes to the abundance of invalid data and publications. The problem of incorrect data in publications encompasses two paths of invalidity: 1) the knowing use of incorrect or fraudulent data, and 2) unknowing methodological errors in analyzing data and presenting research findings. Each is equally detrimental to the pursuit of scientific knowledge and counterproductive to the progress of any field. One reason this incorrect data continues to plague research is the lack of true consequences for the multiple parties involved in a publication. The goal of achieving higher rates of truly valid publications lies in three areas of gate-keeping: the authors, the reviewers, and the journals themselves. To increase the amount of veritable new data, it is important to approach this systemically at all three levels of the publication process. While the onus of producing true and verified data still rests, and always will rest, primarily with the authors, there are new ideas and resources to help ensure adherence to the scientific method.
Belluz mentions in her September 2015 article the recent development of PubPeer, which allows readers to anonymously review and critique published research and methods. The article highlights an initial problem of the publication process: lackadaisical peer review contributing to a lack of quality control over published data. The use of PubPeer is a step in the right direction, opening up the critique of research beyond a limited peer review. One intriguing aspect of this process is the anonymity, which allows a third party to examine data free from an attached name or reputation. I support the idea of an open review process, which helps keep all readers engaged in the scientific process and extends review beyond just a handful of individuals.
The publication of improper research rarely results in a retraction, and even when it does, the retraction, unless highly publicized, often carries no true consequence for the original author, the peer review system, or the journal of publication. As many researchers in a variety of fields strive to publish in such highly regarded journals as Nature and Science, one cannot ignore the fact that these journals often have some of the highest rates of retraction. As mentioned in the Economist article, journals such as Nature have begun to put together a checklist that authors must adequately address before publication. This still places responsibility largely on the authors, who, had they been adhering to the scientific method, would have already resolved any issues; thus one should ask whether the journal itself holds any accountability for the publication of incorrect data. As profitable enterprises, journals compete with one another and for readers. They have a substantial incentive to publish new and exciting data and are thus equally involved in the publication of invalid data. An article published by Fang and Casadevall in Infection and Immunity includes the figure below showing the retraction index of major journals. Curiously, some of the most prominent journals in the field rank high on the retraction index, and there is a correlation between impact factor and retractions. One must then ask whether journals should also face scrutiny for publications that are proven invalid. If a journal faced more serious repercussions than simply a self-admitted notice of retraction, would there be an added level of examination of the data in publications?
 
Fig. 1. Retraction index of major journals (Fang & Casadevall, Infect. Immun., October 2011, vol. 79, no. 10, pp. 3855-3859).
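
The retraction index plotted in that figure is straightforward to compute. Here is a hedged Python sketch of it, with hypothetical journal counts purely for illustration (my understanding of the definition: retractions per 1,000 articles published over the same window):

def retraction_index(retractions, articles_published):
    # Retractions per 1,000 articles published over the same period
    # (2001-2010 in the Fang and Casadevall paper).
    return 1000.0 * retractions / articles_published

# Hypothetical counts, only to show the arithmetic:
print(retraction_index(retractions=25, articles_published=25000))  # 1.0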

While the idea that all scientists, editors, reviewers, and journals would work in the pursuit of pure knowledge without bias is noble, Dan Ariely's TED talk makes evident that the human condition is often incapable of being empirically unbiased. Since this is an inherent challenge, it is important to create a system of checks and balances ensuring that all parties are appropriately scrutinized and openly accountable in the publication process. While reading these articles proved to me why authors are so important to the process, it became clearer that accountability for the process itself must be held to a higher standard and shared by all parties involved in publication. This means that improper publications should be detrimental and worrisome not only for an individual author but also for a journal. Allowing broader community involvement and anonymity, as PubPeer does, is a step towards the kind of critique and thorough examination that may steer scientific publishing closer to what has always been expected of it.

Monday, January 18, 2016

The unseen biases of the peer review process

In the article "Why you can't always believe what you read in scientific journals," the author discusses one of the major issues in life science research: the peer review process. The article focuses on the benefits and disadvantages of a new tool called PubPeer. The reporter was able to discuss many questions with the PubPeer founders, and several of the issues raised are particularly relevant to the broader problem of bias in science.

The founders began by stating that the website has proven very useful since it allows for centralized commentary on published work. Interestingly, this tool recently helped uncover a controversial stem cell fiasco: with the help of an anonymous commenter, the public learned of the published fraud. I found this exceptional since I had always wondered how misconduct in already published work gets uncovered. What I believe is even more interesting is the fact that this work was accepted and published in the first place, exposing the flaws in the peer review process.

Another point brought up in the article is the fact that most misconduct occurs in the life sciences. This makes sense: in mathematics, for example, a faulty equation would be a visible mistake that is easy to expose. In biology, on the other hand, there are many areas where misconduct can present itself. The one that best matches this classroom's interests is scientific bias, which might be even harder to pick up since bias can take many forms: in the experimental design, the data collection, the analysis, and even the interpretation. These are difficult to identify once the data have been published, especially if the authors were not detailed enough in their documentation.


Overall, this was a very interesting read, and it pointed out new tools that could one day replace the faulty peer review process currently in use and help avoid biases in science.

Modern Science Requires Modern Guidelines

After viewing Dan Ariely's TED presentation, "The Bugs in Our Moral Code," I found myself astounded by the clear parallels between Dan's social psychology experiments and the current state in which the scientific community finds itself. Briefly, an individual's propensity to cheat to a certain degree (defined by Dan as the personal fudge factor) is controlled not only by the associated risk but also by the perception of others, chiefly those within our own community. This dynamic can easily sway the minds of scientists if cheating is perceived as beneficial either to the funding of their work or to the legitimacy of their beliefs or ideas. It can manifest in anything from something as simple as the lack of a proper experimental control to turning a blind eye to confounding data on a "make or break" grant submission.
While instances of direct data fabrication are few and far between according to groups such as Retraction Watch and The Scientist, the inundation of highly regarded journals with published experiments found to be unreproducible is only beginning to come to light. Editorials on this subject first began to surface from names like Nature in 2012 with the release of "Must try harder." That article argues that an epidemic of scientific "sloppiness" has arrived, citing the overwhelming number of novel cancer therapies that fail to reach clinical trials due to inadequate pre-clinical data that cannot be reproduced. In recent times, nothing screamed "non-reproducible" at me quite as much as the story of Dr. Charles Vacanti at Brigham and Women's Hospital in Boston, MA. The publication was first heralded as the greatest advancement in stem cell technology of this century: STAP. Its retraction and the verdict of scientific misconduct later resulted in the destruction of the careers of many highly regarded scientists, as well as the suicide of one of the Japanese co-authors. This instance could be attributed to the ever-present "publish or perish" mindset inherent in running a successful lab in today's funding environment, or simply to the presence of one dishonest scientist with an enlarged personal fudge factor. Regardless of the cause, these events demand a proactive advance towards the dissemination of highly reproducible studies assessed through strict guidelines imposed by the leading scientific journals. This tenet is supported by Dan's social experiments, in which introducing an honor code reduced cheating across the board.
Other editorials have argued that "Reproducibility will not cure what ails science," stating that open access to data is the only "cure." With the advent of big data experiments, data mishandling becomes inherent to the nature of the experimental plan. Personally, I could not agree more with the push for more stringent data reporting, not only for raw data calculations but also for the final statistical analysis of such studies. PubMed Commons aims to provide a secondary approach in which reproducibility and many other subjects can be discussed in a forum-based format within the scientific community, tied to specific publications. At the end of the day, while experimental integrity lies in the hands of the scientists, its affirmation is at the sole discretion of peer review.