Tuesday, January 23, 2018

Reaction to "A few things that would reduce stress around reproducibility/replicability in science"

There is a major issue regarding reproducibility and bias in science. I believe the primary causes of this issue are limited funding for researchers and a lack of scientific literacy among the general public. Together, these create strong incentives for scientists to publish “high-impact” papers containing novel, important results and to oversell the implications of their work, leading to decreased research quality and a lower likelihood of reproducibility. Science is supposed to be a noble pursuit of truth and knowledge for the good of humankind. However, science is performed by people who have personal career ambitions, and this is naturally a source of trouble for many researchers. For example, it is very easy to publish underpowered studies, particularly when the statistical tests used are complex and might confuse a peer reviewer. Also, there is much more incentive to perform novel experiments than to reproduce prior ones, even though reproducing prior results is extremely important.
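To make “underpowered” concrete, here is a minimal power-analysis sketch using Python’s statsmodels library. The effect size and sample sizes are illustrative assumptions on my part, not figures from any particular study:

```python
# A minimal power-analysis sketch; effect size and n are hypothetical.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group needed to detect a modest effect (Cohen's d = 0.5)
# with the conventional 80% power at alpha = 0.05 (two-sided t-test).
n_needed = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"n per group for 80% power: {n_needed:.0f}")    # ~64

# Power actually achieved by a small study with only 10 subjects per group.
achieved = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=10)
print(f"power with n = 10 per group: {achieved:.2f}")  # ~0.18
```

A study like the second one will miss a real effect of this size more than four times out of five, and the “significant” results it does produce are disproportionately likely to be flukes.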
I enjoyed reading the article “A few things that would reduce stress around reproducibility/replicability in science” by Jeff Leek. In particular, I strongly agreed with the author’s emphasis on acknowledging the difference between exploratory and confirmatory research. A more structured and organized way of confirming exploratory scientific results would be more useful than the current system. I also agreed with the author’s point that “data is valuable but in science you don’t own it.” Very often, scientists become attached to specific results or theories and consciously or unconsciously bias their work toward confirming these beliefs. A good scientist should be judged by his or her ability to reason and interpret results. However, it is extremely difficult to assess researchers’ abilities in these areas, so most researchers are judged on their publication history, again incentivizing less reproducible work. I strongly disagree with the author’s suggestion that we should be more private about our work and data. I believe that a push toward more open data sharing and experimental collaboration is crucial for solving the reproducibility crisis.

Ultimately, I think improved funding for basic science research, together with a strong, conscious effort to address these reproducibility issues, would go a long way toward improving the quality of science performed today.

Is the Strained Economics of the Scientific Enterprise a Significant Cause of Scientific Bias?

Economists are the most peculiar kind of scientists. In fact, if I were not drawn to the world of medical technology innovation, I would certainly entertain my nerdy streak by becoming an economist. Anyone who has read the award-winning book Freakonomics might agree (aside: Freakonomics is a brilliant podcast for the intellectual-at-heart). So what exactly do the economy and the scientific enterprise have to do with bias in academic research?

I may be biased, yet I believe economics has everything to do with it. We as human beings are susceptible to incentives, however benign or malignant. Any science-minded individual who keeps an eye on the news will know that irreproducibility, and moreover retractions of manuscripts, are on the rise globally. Wherein lies the culpability?

I argue that incentives unduly influence individual investigators, yet publishers are also to blame. It is well known that funding for the scientific enterprise in the United States, adjusted for inflation, has been at an all-time low since the end of the NIH budget doubling in the early 2000s. With less access to funding and a glut of Ph.D.s entering the academic job market (a worthy subject for another discussion), researchers must do more with less in order to publish. Fellow blogger Austin Nuckols is wise to note that “the culture of science, especially in the academic setting, follows a mantra of ‘publish or perish’.” The circle of life for academic research is an ultra-tenuous one, driven by the supply and demand of the NIH dollar: win grant → perform research → publish → repeat. One break in that chain is enough to sink a mid-career academic’s productivity (not to mention salary support). When jobs are uncertain every few years, it is easy to see how bias can come from the top down, influencing the unempowered graduate student to conduct research with significant bias, leading to conclusions “in our own image.” Publishers are similarly incentivized to avoid reducing bias, despite calls to do so in high-profile journals (e.g., Nature, Cell). “Novelty” sells; and who can remember the last time a reproducibility study was featured in the high-impact “vanity” journals?


Looking at this dismal state of affairs for the budding researcher, I feel incentivized to launch the inaugural edition of The Journal of Research Reproducibility, or better yet, The Journal of Failed Experiments (And How to Avoid Doing Them). Perhaps then the odds of academic success in research will be in my favor.

Incentive to Care

Scientific discovery and technological innovation can achieve, and have achieved, extraordinary feats. Today, however, we hear constant questioning of the reliability of scientific findings, and countless resources have been wasted funding projects that never reach fruition. When we examine the system of scientific discovery and publication on paper, we find a rigorous process that requires careful planning and execution of experiments designed to answer questions. The same question must be answered from multiple angles, proven and re-proven, with each proof repeated, so that the manuscript sent to the reviewers is the best work the lab can offer. Multiple reviewers must then scrutinize the results and methods and send feedback, often requiring the original authors to run more experiments to cover any holes in the work. Finally, after publication, the article in question offers just one small answer to a problem still layered in questions, and it is the responsibility of other researchers to retest these data as they seek their own answers to the problem.
Why then, with so many checks and balances, does this system seem to fail? In the article from The Economist, “Trouble at the lab”, the author explores some of the specific issues behind these problems, a major one being the lack of incentive for researchers to engage in proper scientific practice. The culture of science, especially in the academic setting, follows a mantra of “publish or perish”, and journals incentivize positive and novel findings over replications or negative findings. These positive findings tend to come from studies with lower statistical power than those reporting negative results, meaning that more bias is published and fewer useful results. Additionally, other researchers spend countless hours and dollars trying similar kinds of experiments, not knowing that those methods have already been tried. But what researcher can afford to publish all their negative results or attempt every replication suggested by the relevant literature?
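To see why low power plus a preference for positive findings pollutes the literature, here is a small back-of-the-envelope calculation in the spirit of the Economist piece. The prior, alpha, and power values are illustrative assumptions, not measured quantities:

```python
# Hypothetical numbers echoing the Economist's example: suppose 10% of the
# hypotheses scientists test are actually true, tests use alpha = 0.05,
# and the typical study has only 40% statistical power.
prior, alpha, power = 0.10, 0.05, 0.40

true_positives = power * prior          # real effects correctly detected
false_positives = alpha * (1 - prior)   # null effects that slip through anyway
ppv = true_positives / (true_positives + false_positives)

print(f"Share of 'positive' findings that are real: {ppv:.0%}")
# ~47% -- more than half of the positive results are false alarms.
```

Under these assumptions, more than half of the positive findings that reach journals are false alarms, before any questionable research practices even enter the picture.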
Looking at Dan Ariely’s “The Honest Truth about Dishonesty”, we can see that the human tendency to look after one’s own interest is a phenomenon as omnipresent as it is complex. Mapping some of his experimental conditions onto the everyday conditions that many scientists face, I cannot blame any one scientist for this behavior. In Ariely’s experiments, when participants saw another test taker (an actor) very obviously cheat on the short math exam and easily profit from it, the incidence of cheating rose drastically. People compete with each other, not with integrity, for survival, and the same applies to a scientist. He might know he needs to replicate an experiment, but there are only so many lab hours and reagents, and the draft to be sent out needs that final spark to push it through, as opposed to another replicate of a previous Western blot. In a system where we feel we are being wronged, and the laboratory lifestyle is easy to imagine as one of those systems, it becomes much easier to justify self-promoting behavior, because it is the only way to compete with colleagues engaging in the same practices.

However, this does not need to be the end of the story. More and more today, there are resources and entities seeking to remedy these problems in the scientific community by incentivizing behavior such as publishing methods, data, and negative results. A fellow blogger, Katherine Bricker, references in her piece the journal Cell’s mandate that investigators list their exact methods and reagents in the “STAR Methods” section. In Ariely’s work, he found that when he asked students to recite the Ten Commandments before taking the exam, the incidence of cheating dropped, astoundingly, to zero. By taking the time to remind investigators and scientists of their obligations to truthful and rigorous scientific practice, and by actually offering incentives for them to do so, we can change this tendency, start using our time and money more effectively, and lay the foundation for stable and meaningful science in the future.

https://www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble
http://www.cell.com/star-methods
https://www.youtube.com/watch?v=G2RKQkAoY3k

Monday, January 22, 2018

Unbiased Research and the Public's Perception

One interesting common thread that many of the articles touched upon was how science is portrayed to the public and the media, and how this creates both misrepresentations of what the data actually say and pressure on scientists to live up to those misrepresentations. Based on conversations I have had with family and friends who work outside the world of science, I think that many people’s perceptions of biomedical research and the scientific process are far different from what actually happens. Most picture that “eureka” moment where one mysterious, colorful liquid is dropped into another and a miracle cure for a terrible disease is created. As we all know, this is not what happens, and very often it is not even the goal of a given project. Media portrayals of exciting and important science often use buzzwords like “miracle” and “breakthrough”, even when, as one of the assigned articles stated, the effects of these drugs do not even have human data to back them up. Marketing science this way puts pressure on researchers to produce results that can make headlines and inevitably introduces bias into what is supposed to be an objective process. Would it be better, then, to give the lay public a more realistic portrayal so that they have a better view of what good science really looks like? Perhaps, but some of the public’s trust in science, and the reason they see it as important, comes from the idea that science provides grand, definitive solutions to serious medical problems. Do you lose public trust and media interest by more accurately billing scientific discoveries as small, incremental steps toward a bigger picture? Can we trust the public to see that bigger picture (one that is sometimes difficult even for the scientists to see)? I think it is a tough but important problem for the scientific community to address in the war against biased research.

Sharing is caring? Open source data as a solution to the reproducibility crisis

As many others on this blog have already discussed, the reproducibility crisis is a serious concern shared by many scientists. While some blame the current culture of scientific achievement and others blame a widespread misapplication of statistics, the exact reasons behind this crisis are difficult to determine. In all of these discussions on scientific reproducibility, the question still remains: how do we fix it?

One proposed solution is sharing data. As Jeff Leek discusses in his article, there is much debate and fear about the idea of sharing data for increased transparency and discovery. For many years, scientific findings have been shared in journals, where researchers present the results and interpretations of their studies via descriptions and figures. This method has been the fundamental means of scientific progress, allowing scientists to build discoveries on the foundations laid by others. However, it is increasingly debated whether papers are enough: in an increasingly connected world, should scientists also share their raw data, allowing others to truly dive into the analyses performed as well as search for new findings of their own? While open-source data could open up a new world of discovery, there are also potential risks: data sharers could lose an advantage in their field if others publish findings before them and without credit, and data analysts without proper training could misinterpret or improperly use a dataset. There are both pitfalls and advantages to data sharing, but as the science community begins to acknowledge and address the reproducibility crisis, open-source data is a viable solution.


What does open-source data look like in practice? In the field of neuroscience, several organizations and research groups are pioneering data sharing. One such group, Neurodata Without Borders, attempts to address the logistical problems of sharing data. One obstacle to open-source data is that different research groups use very specialized techniques and store data in distinct ways that can be difficult for a potential data analyst to understand. The Neurodata Without Borders pilot project attempts “to develop a unified, extensible, open-source data format for cellular-based neurophysiology data.” With a unified format, the organization aims to make data sharing accessible and practical for scientists across the globe. In another pioneering effort to facilitate data sharing, a group of neuroscience laboratories across the world recently came together to form the “International Brain Lab.” This lab is a giant collaboration and reproducibility project in which laboratories in various locations will use the same tasks and protocols to develop a standard model of neural processing. The International Brain Lab’s “standard protocol attempts to address all possible sources of variability… from the mice’s diets to the timing and quantity of light they are exposed to each day and the type of bedding they sleep on. Every experiment will be replicated in at least one separate lab, using identical protocols, before its results and data are made public.” With solutions such as these, perhaps the trend of irreproducibility in science will be replaced with a more positive trend of collaboration and unity in scientific discovery.
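As a concrete illustration of what a unified format enables, here is a minimal sketch using pynwb, the Python library for the NWB format. The session details and the voltage trace are invented placeholders, and this is only a sketch of the idea, not a recommended workflow:

```python
# A minimal NWB read/write sketch with pynwb; all values are placeholders.
from datetime import datetime, timezone

import numpy as np
from pynwb import NWBFile, NWBHDF5IO, TimeSeries

# Describe a (hypothetical) recording session in the standard format.
nwbfile = NWBFile(
    session_description="example cellular neurophysiology session",
    identifier="demo-session-001",
    session_start_time=datetime.now(timezone.utc),
)

# Store a fake 1 kHz voltage trace as a standard TimeSeries.
trace = TimeSeries(
    name="membrane_potential",
    data=np.random.randn(1000),
    unit="mV",
    rate=1000.0,
)
nwbfile.add_acquisition(trace)

# Write the session to disk...
with NWBHDF5IO("session.nwb", "w") as io:
    io.write(nwbfile)

# ...and any other lab can read it back without knowing our conventions.
with NWBHDF5IO("session.nwb", "r") as io:
    shared = io.read()
    print(shared.acquisition["membrane_potential"].data[:5])
```

The point is not the specific calls but that a common container lets an analyst open a stranger’s recording without first reverse-engineering a lab-specific format.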

Publish... and Perish?

We like to think of scientists as innately good people, spending countless hours at the bench to help catapult us into the next generation of life-saving medications and medical procedures. On the surface it seems wholesome and altruistic, but diving deeper into the scientific community, it becomes apparent that there is a very large elephant in the room: the issue of bias and irreproducibility. In an article published by The Economist, the author describes the cut-throat culture that academia has established and how it leads to bias; for example, the motto “publish or perish” may influence researchers to embellish their work in order to publish in high-impact journals, or to publish at all. These high-impact journals then fight back with egregiously high rejection rates for manuscripts, leading researchers to cherry-pick their data further to make the cut. The author goes on to note that companies like Bayer and Amgen failed to replicate more than half of the studies they examined on breakthrough cancer research, an area of research highly esteemed by scientists and the general population alike.

The scientific community has created a vicious cycle that seems to keep growing. This immense pressure is leading scientists to falsify or alter data to fit a specific agenda, and soon it will cost them more than their reputation in the field. Flawed research costs us time, money, resources, and the trust of the general population. This puts scientists at a bit of a crossroads, but I think it is up to us to begin making the changes necessary to fight bias. Ethics training should be taken more seriously and should begin even before graduate school, even though it can sometimes seem like a “no-brainer.” Additionally, a grasp of statistical analysis is imperative for all scientists, not just the PI; if more people understood statistics, it might yield more sound data, or at least make it easier to spot falsified data instead of relying on someone’s best judgment.

The Elephant in the Room – Irreproducibility

As a scientist-in-training, it’s hard to ignore the issues of bias and reproducibility in the field. Whether I am at an ethics workshop or keeping up to date with my social media, I always seem to come across an article reminding me that scientists are living in dark times. But how much of an issue is irreproducibility? According to an article in Nature, 90% of 1,576 researchers surveyed think there is a reproducibility crisis, with 52% believing it is a significant crisis. As for my field, a research article published in PLoS Biology by Freedman et al. estimates that at least 50% of preclinical research is irreproducible. As someone who works with mouse models, this figure hits close to home: it turns out that differences in mouse feed and mouse microbes can make it difficult to replicate in vivo mouse experiments. As if that weren’t problem enough, the authors of the PLoS Biology article also estimate that roughly $28 billion is spent each year on irreproducible research. At a time when funding is extremely tight, this amount is staggering. On top of all this, the lack of reproducibility is costing us time. Even though it is important that biomedical research be replicated, having to replicate most of the research already out there takes time away from more pressing issues and could hinder progress. Although there are factors that can’t be controlled and might lead to a lack of reproducibility, the Nature article also pointed out that most of the factors that do contribute to this problem are ones we can control. It’s hard to believe that scientists have created more problems than they have solved.

After doing more research on the topic, I believe that the problem of reproducibility and bias in science is significant and can’t be underestimated. It is difficult to accept that a lot of research in my field can’t be reproduced, but this does not mean that all hope should be lost. Work is already underway to tackle some of the causes of irreproducibility. I applaud Emory for implementing ethics seminars and requiring an ethics program to be completed before graduation. I also applaud scientific journals such as Nature and Science, and funding bodies such as the NIH, for taking action to ameliorate the situation. Ultimately, however, I believe these changes won’t be enough in the long run. To truly work toward ending this problem, we need to start at the most basic level: with the scientist. This is why I particularly liked Jeff Leek’s article; his suggestions seemed very reasonable and not too tedious to implement. One suggestion that particularly interested me is to stop publicizing our scientific results as miracle solutions, which feeds directly into Julia Belluz’s Vox article about “revolutionary” cancer drugs that are not really what they claim to be. This overselling can be a result of our bias, which is another important issue. Scientists, much like everyone else, are not immune to biases. While we can’t always avoid having them, recognizing and overcoming them is already a step in the right direction. Finally, I believe we need to work together on changing the culture around publishing, which places a lot of stress on scientists and definitely contributes to irreproducibility. I am confident that this new generation of scientists can make headway on these problems!