Friday, January 27, 2017

Why are we biased?

chart by John Manoogian III
Everyone (reasonable) agrees that conducting research properly involves approaching problems within some kind of framework to control for bias. The principles of experimental design are one such operational framework.

But staying inside the framework's boundaries is far more difficult than it sounds. We've already talked a lot about some of the drivers of poor research practices. Many of us speak of corners cut under the high-stakes pressures to publish, to earn our PhDs, and to acquire grant funding.

I would argue that (probably) very common habits, such as quietly trimming off outliers or sidelining inconsistent replicates, are not driven by those high-stakes pressures. I'd be willing to bet they are more likely driven by biases that serve our artistic impulses: the urge to mold misshapen results into an expression of our data as pretty as we imagined it would look when we first set out to run the experiment.
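To put a number on that habit, here is a minimal simulation sketch (my own hypothetical illustration, not from any study cited here) of what conditional outlier trimming does to the false-positive rate when there is no true difference between two groups:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def trimmed_pvalue(x, y, alpha=0.05):
    """If the honest test misses significance, drop each group's most
    extreme point and test again (the 'quiet trimming' habit)."""
    p = stats.ttest_ind(x, y).pvalue
    if p >= alpha:
        x = np.delete(x, np.argmax(np.abs(x - x.mean())))
        y = np.delete(y, np.argmax(np.abs(y - y.mean())))
        p = stats.ttest_ind(x, y).pvalue
    return p

n_sims, n = 10_000, 12
honest = trimmed = 0
for _ in range(n_sims):
    x, y = rng.normal(size=(2, n))  # two groups, no real difference
    honest += stats.ttest_ind(x, y).pvalue < 0.05
    trimmed += trimmed_pvalue(x, y) < 0.05

print(f"false-positive rate, honest test:    {honest / n_sims:.3f}")   # ~0.05
print(f"false-positive rate, after trimming: {trimmed / n_sims:.3f}")  # higher
```

Neither dataset contains a real effect, yet the second bite at the apple reliably pushes the error rate above the nominal 5%.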

Poor (or no) pre-planning of experiments seems to come from a lack of training, or an unwillingness to go back and relearn, or an unwillingness to take on the cost of heavy prescriptions (e.g., a power analysis that says a large sample size is needed).
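For a sense of how heavy that prescription can feel, here is a minimal sketch of such a power analysis (using Python's statsmodels as one possible tool; the effect size of d = 0.3 is my own illustrative assumption):

```python
from statsmodels.stats.power import TTestIndPower

# Sample size per group for a two-sample t-test to detect a modest
# effect (Cohen's d = 0.3) with 80% power at alpha = 0.05.
n_per_group = TTestIndPower().solve_power(effect_size=0.3, power=0.8, alpha=0.05)
print(f"required n per group: {n_per_group:.0f}")  # roughly 175
```

An answer like that, for an experiment one had hoped to run with a handful of samples per group, is exactly the kind of cost that tempts people to skip the analysis altogether.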

But I'm also beginning to give a lot of credence to the idea that the bias we introduce into our research is more subtle and pernicious: that it arises from some very primitive, and inescapable, behavioral functions. See Dan Ariely's thoughts on the rationalization of cheating, for example.

Along comes Buster Benson, who appears to have given bias a lot of thought. He reaches the following conclusion while organizing a large compendium of cognitive biases:
"Every cognitive bias is there for a reason — primarily to save our brains time or energy."
In this view, bias is an adaptation. It's a feature, not a bug, of behavioral biology. Bias is how we solve difficult problems. We have evolved all types of biases because we face all kinds of problems.

Benson isn't just some random blogger operating in a vacuum; there is serious scholarship behind the idea (pdf). For example, the willingness to go through with a severely underpowered experiment could be considered an "error management bias" solution to the problem of blowing the year's budget to test just one hypothesis.

Although bias might be human nature, that doesn't excuse the biased solutions we arrive at as researchers, which are themselves problems. Ones that we'll probably have to solve with some other bias. And so on.

This is why operational frameworks come in so very handy.

Wednesday, January 18, 2017

Which of these proposed solutions for unreliable research do you favor?

Munafo et al.'s recent publication of "A manifesto for reproducible science" represents a turn away from pointing at problems with how we practice science and towards offering some solutions.

I thought it would be interesting to take a poll on a handful of their suggestions.

Of the following, which one do you think should be implemented?

Use the comments in this post to explain why you voted that way.



Tuesday, January 17, 2017

Navigating the Noise of Imperfection

     During my graduate studies, I have frequently felt overwhelmed by the immense amount of data and publications available on any given topic. When I discuss these papers with my colleagues, they point out irregularities that I had mistaken for sound arguments within the publication. Like any respectable graduate student, I found that realizing I had missed numerous flaws in these studies introduced copious doubt in my ability to discern accurate data from publications. Instantly, I began to wonder: Can these studies be trusted? How does one learn to find the truth amongst the noise of imperfect data?
     These imperfections in published data stem from the "publish or perish" phenomenon we are currently witnessing in modern academic research. Even as graduate students, scientists are constantly bombarded with the pressure to publish prolifically while maintaining the utmost integrity in their work. However, as most of us have experienced, the pursuit of scientific truth is riddled with negative results. Unfortunately, the perception in our scientific culture is that negative results are unimportant, useless findings. The Economist article "Trouble in the lab" states that "Negative results account for just 10-30% of published scientific literature, depending on the discipline. This bias may be growing. A study of 4,600 papers from across the sciences conducted by Daniele Fanelli of the University of Edinburgh found that the proportion of negative results dropped from 30% to 14% between 1990 and 2007." This fear of publishing negative results severely limits the availability of scientific truth to the community and enriches for imperfect positive data.

     Jared Horvath presents an interesting perspective in his Scientific American guest blog, “The Replication Myth: Shedding Light on One of Science’s Dirty Little Secrets.”  In his article, he states, “In order for utility to emerge, we must be okay with publishing imperfect and potentially fruitless data…we must trust that the public and granting bodies can handle the truth of our day-to-day reality.” Thus, to successfully navigate and uncover the truth within the expanse of scientific knowledge, the scientific community must collectively learn to not fear imperfect or negative data.  Instead of condemning the existence of imperfect data, we need to disclose this reality and embrace the truths we can uncover from the inherent presence of imperfections within scientific research.  

"Publish or Perish", Human Nature, and Media Hype--A Bad Cocktail

The "publish-or-perish culture" that dominates science today has pushed the field in an undesirable direction. But despite the fact that much of the science we see published today is inconsistently reproducible, this is not all entirely due to the "publish-or-perish" culture, nor maleficent cherry-picking motivated by self-gain. The truth behind the current situation is much more benign, although equally worrisome and harmful.

The first explanation lies simply in the imperfection of science. An article in Scientific American reveals that even the scientists we consider the greatest ran experiments that proved irreproducible when others attempted to repeat them. Rather than stigmatize irreproducibility and experiments that didn't "work," it would be more beneficial to open up discussion and create a space where it is possible to talk about this issue and resolve it, as is being done on PubPeer. By making discussion of the more fallible aspects of science open, researchers wouldn't feel pressured to report only experiments that "worked" and hide those that didn't. Consequently, experiments with statistically insignificant or unexpected results could serve as advice or inspiration for future research.

The effects of the stigma against insignificant results and irreproducibility are compounded by human nature. More insidious and difficult to detect are our conflicts of interest and our tendencies toward dishonesty. While it is easy to detect conflicts of interest in others, as Dan Ariely posits, it is incredibly difficult to detect them in ourselves. Sometimes we simply think we are furthering science by eliminating outliers that are hiding the significance of results we believe to be correct. It is difficult but necessary to take a step back and realize that doing so is not for the benefit of science, but for our own.

The situation is exacerbated by the media. By reporting medical "breakthroughs" and "miracles" from studies that sometimes weren't even done in humans, the media portray science as far more powerful than it actually is. This adds to the pressure on researchers to meet that same misleading bar.

Bias in science needs to be dealt with from square one. Rather than allow ourselves to fall prey to the scientific community's pressure to publish and to our own well-meaning intentions to illustrate hypotheses we believe should be true, we need to accept that science isn't infallible, that breakthroughs don't happen as often as the media report, and that experiments that don't yield significant results aren't failures.

Why are negative results impossible to publish?

One of the critical pillars of scientific research is that it be reproducible and free of bias. That is how other scientists and the public are able to validate and trust new scientific work. More than ever, in this age of "publish or perish," it seems that new research is not held to the same level of scrutiny that it was decades ago. This is largely because there are not enough qualified people to do the unpaid work of rigorously peer-reviewing papers. Research funders and university executives have created an environment in which the number of publications is just as important as (if not more important than) their quality, since publication count is one of the most important criteria for awarding tenure-track positions or research funding.
            Additionally, publishing negative results in scientific journals is very challenging, if not impossible, as journals claim that negative data fail to draw readership. You'd be hard-pressed to find any recent peer-reviewed journal article whose entire results were negative. While I understand that new scientific work needs to "prove" something novel, negative results are often simply not included, or are spun (sometimes using unethical statistical methods) to make the data appear positive, leading to massive amounts of bias in the field.
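To illustrate how spinning can work (with a hypothetical simulation of mine, not data from any study mentioned here): suppose a study with no true effect measures ten unrelated outcomes and reports whichever one clears p < 0.05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n, n_outcomes = 10_000, 20, 10

hits = 0
for _ in range(n_sims):
    # Treated and control groups, ten outcome measures, no true effect anywhere
    treated = rng.normal(size=(n_outcomes, n))
    control = rng.normal(size=(n_outcomes, n))
    pvals = [stats.ttest_ind(t, c).pvalue for t, c in zip(treated, control)]
    hits += min(pvals) < 0.05  # report only the "best" outcome

print(f"chance of a publishable 'positive' result: {hits / n_sims:.2f}")  # ~0.40
```

Roughly two times in five, a study of pure noise yields something that looks positive; the nine negative outcomes simply never appear in the paper.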
            What some people fail to see is that, in some cases, there is nothing wrong with publishing negative results. There are even cases where they can greatly benefit society, such as debunking public health myths. Pharmaceutical research is an area where publishing negative results is incredibly important, yet these results are so often passed over, because a paper will never be selected for a journal if the sum of the research is that "these 1000 molecules are ineffective at agonizing the receptor." In that field, the need to bring new drugs to market to make money for the companies can introduce additional bias, even though the FDA requires drugs to be stringently tested in clinical trials.

            All in all, the publish-or-perish culture of today's scientific research, coupled with the inability to publish negative results, is leading to significant issues with bias and reproducibility that will become detrimental if change is not enacted.

PubPeer: Changing the face of peer review

The scientific community holds the peer review process in high esteem, citing it as the means by which faulty science is weeded out and only the most accurate scientific data are published. However, there is a growing recognition that, given the sheer volume of work produced and the competitive nature of our field, this process is no longer functioning as intended. Brandon Stell, who only very recently revealed his identity, along with several colleagues developed a website with the aim of putting the peer review process back into the hands of all scientists, rather than a select few. An article published on Vox.com describes Stell and colleagues' website, PubPeer.com, where virtually any researcher can comment on already published articles. This forum, while certainly not without pitfalls, puts the scientific screening process into the hands of thousands of individuals. In this way scientists of vastly different backgrounds, approaches, and viewpoints can begin an open discussion about the validity and impact of a published article. The article's authors can view these comments and respond in turn, allowing for a level of worldwide scientific discourse not previously seen.

The problem with this method, however, is that the articles have already been published. They have already been made available to the media and the public, two venues which all too often misinterpret accurate studies or, worse, take inaccurate ones as truth. While comments on PubPeer may occasionally garner wider recognition, this is often not the case. Perhaps when knowledge of the site and recognition of its value become widespread, it can begin to have a greater impact on scientific review and validity.

One of the strengths of PubPeer is that it relies on any and all individuals in the scientific community to post their critiques; however, this may also prove to be a drawback. For a community-wide peer review process to be successful, it must receive input from a representative portion of the population. Scientists are often already overworked and strapped for time. Reading through an article and beginning a discussion about the validity of the work simply may not be a top priority, much less a regular part of their job. Perhaps, to bring in a broader range of voices, scientific training at institutions around the globe could encourage participation in this type of forum. Perhaps as younger scientists are trained with the mindset of post-publication review by the worldwide scientific community, the trend of faulty science will begin to turn around.

Why, Why, Tell 'em That It's Human Nature

Humans: the creators of the seven wonders of the world and the source of global warming. We are perfectly imperfect, whether we know it or not, and, as humans, scientists are not immune to this fact. The simple human nature of scientists arises in many forms. For example, human error appears during experiment implementation, while human desire makes its appearance as we all long to make a discovery that would positively impact our field. Lastly, our natural survival instincts and competitive streak settle in as we feel the pressure to produce quality data in order to secure funding.
Basic human behavior is easy to spot in the sciences, but how do we ensure the line between human nature and scientific ethics remains bold and defined? Neil deGrasse Tyson stated in a short Q&A response that scientists are "trained to minimize the role of our bias in our experiments and interpretations," but in actuality it's impossible to completely remove ourselves from the equation.

To help with this issue, Tyson points out that the scientific process has a verification step built into it, in which our peers hold us accountable to our duty to remain objective and driven by the facts. Lucky us, to have a system that is ready to catch us if we fall; but what price is there to pay if we choose this option? What happens to us scientists if we rely on the scientific process to keep us honest, or if our human traits of error, desire, competition, and survival lead us to the wrong "truth," enabling us to see what we want to see in our experimental setup or data? Oh, well, only the possibility of discounting our future work in the eyes of our colleagues… clearly there's got to be a better way than allowing "the system" to catch and discipline us. The answer sounds very Disney-ish: it all starts with you.

We as HUMAN scientists are able to police ourselves into remaining objective in the name of science. The first steps lie in selecting a testable (and refutable) hypothesis and in the ethical choice to note experimental conditions or results that might make the resulting data invalid. While our bias is inherent to us as human beings, luckily we have both ourselves and "the system" to keep us in line.

Was Galileo a Mad Hatter?

The state of our current scientific climate, as evidenced by 2013 articles in the Economist and, perhaps more tellingly, the heated reaction TO those articles, can leave a budding scientist feeling overwhelmed (Article 1, Article 2). We are already given a laundry list of scientific caveats: to hold in memory the foundational papers leading to our own scientific questions, to thoroughly analyze and critique these papers, and to plan out the statistical analyses of our experiments before even warming up the centrifuge. And to add to that growing list, we now have to contend with the fact that the vast majority of those foundational papers are likely to contain incorrect or unverifiable data!? Why can't I just trust the scientists that came before me and get on with my own contributions? That fact alone might make any reasonable person decide to kick the scientific can and walk away to something seemingly more attainable, such as pursuing a career as an Olympic triathlete.


The articles in the Economist claim that a number of factors contribute to the apparent sloppiness and unreliability of modern science including, but not limited to: professional pressure to “publish or perish,” the drooling of top-tier journals over novel results (replications are not sexy), and plain ol’ not understanding the intricacies and rigors of statistics.

One of the fiercest criticisms of the Economist's articles is a graduate student's acerbic response in Scientific American. He claims that: "In actuality, unreliable research and irreproducible data have been the status quo since the inception of modern science. Far from being ruinous, this unique feature of research is integral to the evolution of science."

Say WHAT? First of all, saying that unreliable research is a unique feature of research does not make sense. Second of all, how can unreliable research and irreproducible results be integral to the evolution of science? Let’s see what his evidence is to back up his claims.

The author goes on to discuss examples (dare I say anecdotes!) of famous scientists, wrong about one thing or another, as his reasoning for why it's okay to have irreproducible results. But is Galileo famous because he rolled a brass ball down a hallway and declared his law of the motion of falling bodies to no one in particular? Further, any ol' mad hatter off the street could say the same, and no one would blink. Why do we trust Galileo? The author gives no reason why the foundations of science rest on men like Galileo, Darwin, and Dalton. He might do better to think about why we trust Galileo in the first place rather than take away the misleading message that "if Galileo can intentionally create unreliable scientific data, then so should we!"

Thomas Kuhn discusses paradigm shifts in his classic book, "The Structure of Scientific Revolutions." Kuhn argues that scientific revolutions occur when anomalies are discovered in well-accepted paradigms, thus casting a new light onto old data. In the case of Galileo, he observed an anomaly in a well-regarded paradigm. Further, instrumentation to measure physical properties was not available to Galileo at the time. Much of science was logic and deduction; it was what we might think of as crude experimentation, with no true controls and no advanced statistical analysis. Kuhn states, "…the analytical thought experimentation that bulks so large in the writings of Galileo, Einstein, Bohr, and others is perfectly calculated to expose the old paradigm to existing knowledge in ways that isolate the root of crisis with a clarity unattainable in the laboratory." In other words, until Galileo created the thought, the means to test it did not exist. It was only after Galileo created this paradigm shift in thought that other scientists were ripe to do the rigorous experimental analysis. Of course, it's also important to note that his scientific career did not hinge on tenure or grant money.

The Scientific American essay misses the point of the Economist's critique. The Economist is not suggesting that the starts and stutters of the scientific process are a hindrance to progress, nor is it saying that science should come only from 100% "truthful" scientific publications (whatever that means); it is merely saying that we ought to be careful in our reporting and analysis of data. This responsibility includes the task of verifying the work that came before us. If we avoid it, we do a disservice to the great fathers of our scientific disciplines. Galileo and Darwin could only have dreamt of the possibilities we have at our modern-day fingertips. We ought not to let them down.

Little Cheaters

In his talk ‘The Honest Truth About Dishonesty’ at The Amaz!ng Meeting 2013, Dan Ariely introduces the concept of little cheaters: people who are dishonest in ways they consider small enough to maintain personal morality while still reaping the benefits of dishonesty. The concept was derived from studies of the general population, which suggest that scientists, too, are prone to such behavior. But what implications does this have for science?

The most likely effect of dishonesty in science is irreproducibility. If experiments are planned, executed, interpreted, or reported with even the slightest amount of dishonesty, they become impossible for others to repeat. The consequences extend beyond those who seek to replicate the work to those who attempt to build on it, since they would be working from likely incorrect information. Such deception is clearly undesirable, but eliminating it can be difficult, as perpetrators may not always be aware of their deception; they perform it while convinced of their own morality. This is further compounded by the inherent conflict of interest that exists in all scientists. Every researcher holds a stake in the success of their work: graduate students benefit from publishing papers and graduating early, while senior investigators gain career advancement and increase their marketability for grant funding by presenting positive results. All these factors color the objectivity of researchers, making it harder to recognize the subtle ways in which they can be dishonest, such as inflating the meaning of their findings or omitting unfavorable results. Proper statistics should be able to check this bias, but it is no secret that many laboratory scientists are not sufficiently conversant in statistical methods.


What then? Does the combination of dishonesty, bias, and poor statistical knowledge mean science is doomed? Should presenting work be put off until these problems are eliminated? No. Rather, science needs to be recognized as the work in progress that it is, not the source of irrefutable answers that many perceive it to be. Efforts should certainly be taken to minimize blatant falsehood in published work, but it should also be acceptable not to be quite certain. Scientists will be more likely to shed their little-cheater identity when it is fine to have work that does not completely make sense.

Science: With a grain of salt

At almost every decision-making point, humans are innately biased beings. Our unique capacity for pattern recognition has allowed us to make great scientific strides, but it has also, undoubtedly, produced bias and false positives in the interpretation of scientific data; and the "publish or perish" culture in academia promotes novel findings and positive results over reliability. These are things most scientists are aware of, but very few take adequate measures to minimize the effects of bias, myself included. Many of the articles reveal the daunting commonality of the false positive, but Horvath presents it as a reality of science, a statement which resonated with me.

It is our responsibility as scientists to conduct investigations and report findings as ethically as possible. But with that said, we are humans and therefore fallible. It would be impossible to expect every reported finding to be true. To attain such a goal, there would need to be a shift in the current expectations of the publication "race." A slower pace of review and publication would allow greater validation of studies and an in-depth peer review process, but it could also slow the rate of discovery.

I am in the "self-correcting" science camp. I think that false theories can be filtered out when validation studies fail to replicate the results. I know that if I've tried a technique and failed to replicate the result after a few attempts, I move on to a different methodology. The emerging trend toward post-publication review will be a great tool for this weed-out process and will ultimately lead to better science. I believe that we as scientists know to take scientific findings with a grain of salt. However, the size of that grain can vary based on whether a finding corroborates or contradicts our own theories, something that should give all of us pause for reflection.
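A back-of-the-envelope sketch of why that filter works (my own toy numbers, assuming independent, honestly run replications at a 0.05 significance threshold):

```python
# A spurious finding (no true effect) must pass each independent
# replication by luck alone, so its survival odds shrink geometrically.
alpha = 0.05
for k in range(1, 4):
    print(f"odds a false positive survives {k} replication(s): {alpha**k:.4%}")
# 5.0000%, 0.2500%, 0.0125%
```

The caveat, of course, is that this filter only runs if someone actually attempts the replications.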