Thursday, January 14, 2016

Bias and the Pressure to Publish



 As an undergraduate I rarely thought to scrutinize data presented in scientific papers. I always assumed the data were honest and accurate; in my mind they had to be in order to get published. It wasn’t until starting graduate school and becoming involved in various journal clubs that I learned to better analyze the data presented and to determine not only whether the data supported the conclusions being made, but also how the results were generated. As I became cognizant of this, I began to notice that some of the grand conclusions made in various papers were not fully supported by the published results. Taking it a step further, I also began to notice how scientists around me would manipulate conditions and data to better support their hypotheses. I fully agree with the point made in the article “Why you can’t always believe what you read in scientific journals” regarding the pressure to publish in high-impact journals as an indication of scientific success. In my opinion, this pressure can compromise not only the quality of the work but also its integrity. Pressure on the PI leads to pressure on the scientists in the lab, and this stress can even create an environment of fear around presenting data that do not fit the theories proposed. Another point, made in Scientific American, touched on the issue of reproducibility in research and how, in many cases, research is irreproducible. The article also addressed the fact that this irreproducibility is swept under the rug out of fear of public opinion and lost funding opportunities. I feel that if less emphasis were put on “perfect research,” that is, research that proves theories, scientists might feel freer to actually perform unbiased research.

 
While peer review can prove useful in combating unreliable research, I agree with implementing other methods, such as the 18-point checklist adopted by Nature and its sister publications and the post-publication review site PubPeer, to help reverse this apparent downward trend in the integrity of scientific research. This secondary layer of accountability may help scientists design better experiments and self-correct so as not to jeopardize their credibility.



The social factor: an important player in research overselling

The article “Half of the cancer drugs journalists called ‘miracles’ and ‘cures’ were not approved by the FDA” (http://www.vox.com/2015/10/29/9637062/media-hype-cancer-drugs) highlights a behavior that is increasingly common, especially in medical research.
Scientists working on human diseases face constant daily pressure, exerted both by a society that wants a cure for certain diseases and by their own awareness that finding the right treatment could alleviate the suffering of many people.
The problem is that no one is immune to this type of pressure. President Obama, during his last State of the Union address, said: “Last year, Vice President Biden said that with a new moonshot, America can cure cancer… For the loved ones we've all lost, for the family we can still save, let's make America the country that cures cancer once and for all.” (https://www.youtube.com/watch?v=EJDyBBGncQc)
This prevailing sentiment affects researchers all around the world, and the quality of research becomes more and more susceptible to bias and overselling.
In addition to the social pressure, there is another big factor: money. The financing of research reaches amounts we can hardly imagine. As a young scientist I am only just encountering these kinds of issues, and I am starting to see the loop in which researchers are trapped: in order to perform experiments, scientists need money, which is given only if good, novel, and interesting results are shown.
Nowadays, it is not only the quality of the work that matters, but how well you can “sell” it to the people who decide whether or not you get funded. If this loop keeps going, universities should start considering a marketing course for their scientific tracks.
Most of these issues arose from the fact that the amount of money given to research has diminished while the number of scientists has increased, making it more difficult for everyone to obtain the necessary financing.

Scientists are constantly under social and financial pressure, and this is affecting the quality of the research that gets published. Scientific journals need to conduct more careful reviews, and journalists should be even more rigorous, because public belief carries enormous weight in shaping the type of research that gets conducted.

Wednesday, January 13, 2016

Unbiased Research

As scientists we are taught that the scientific method is the key to success, but the constant pressure of the "publish-or-perish" environment is detrimental to our scientific inquiry. As presented by Dan Ariely, we must constantly test our intuitions and question our own data with scrutiny. However, this is often not the case; in fact, I was surprised to learn that human nature is to follow a norm of cheating in which we "cheat by a little." But surely scientists are different, right? We pride ourselves on being an intelligent and self-regulating group; furthermore, taxpayers expect integrity from our work. Nonetheless, this is not always the case, and multiple instances have shown us that some publications are invalid or greatly exaggerated. When the public finds out about these scientific disasters, we lose public trust. Since the media largely influences public opinion, I was particularly interested in reading the popular media articles.

It is questionable whether the public would be willing to spend $30.1 billion tax dollars annually on funding institutions like the NIH if they found that scientists are unethical, exaggerate results, or even fabricate data. Therefore, maintaining public trust through the media is extremely important, as is constantly training scientists in ethical methods. As mentioned in "The Replication Myth: Shedding Light on One of Science’s Dirty Little Secrets," there is much to be learned from science that fails, and history has shown that the nature of science is more complex than expected. I found the article "Half of the cancer drugs journalists called 'miracles' and 'cures' were not approved by the FDA" interesting, because it is a non-scientific article that exposes some of the truths about scientific claims. When reading BuzzFeed or other popular media, I often find myself questioning the medical-hype articles, but the general population does not. When we deceive the general population with "miracle cures," we create false hopes, which is far more harmful and goes against the "do no harm" tenet of medicine and of seeking the truth in research. As individuals it is important to hold ourselves to the highest ethical standards, but it is also important to find other solutions for maintaining integrity in our scientific community.

Ethics seminars could help improve the integrity of our research because, as Dan Ariely discussed, cheating decreased when people were reminded about morality. Forums like PubPeer also help the scientific community regulate itself and provide additional scrutiny of our findings. Even though articles like "Why you can't always believe what you read in scientific journals" point out our failures, they also provide us with regulatory measures and an awareness of the importance of unbiased research.


I typically expect rigorous review from NEJM, so it is disappointing to see a study of a convalescent plasma trial in patients with EVD (http://www.nejm.org/doi/full/10.1056/NEJMoa1511812) in which historical controls were used (chronology bias) and the administered product was not quantitated in any way (ELISA, neutralization titers, etc.) to evaluate virus-specific activity prior to patient administration (an issue of internal validity) (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2917255/). The wrong question was asked to begin with. It should not be, “Does convalescent serum administration improve patient outcome in EVD?” It should be, “Does convalescent serum with activity X, given at dose X during acute phase X of EVD, improve patient outcome?”
The authors share some of the blame for poor study design, but they did what they could with what they had to work with – rigorous clinical research during an outbreak in a developing nation is a formidable task.  I put the bulk of the responsibility squarely on the shoulders of the editors and reviewers. Why let this through? On what basis was publication recommended? The journal has incentive to publish on “hot topics”, but this should not be done at the expense of the quality of science.  Now we have two papers in the EVD literature using survivor blood products (convalescent whole blood- http://www.ncbi.nlm.nih.gov/pubmed/9988160, and convalescent plasma- NEJM), neither of which was done correctly, and neither of which provides informative, generalizable data for use by clinicians or researchers.
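To make the chronology-bias point concrete, here is a minimal simulation sketch (in Python, with mortality numbers I invented for illustration; they are not taken from either paper). If supportive care improves between the historical-control era and the treatment era, a therapy with no true effect will still look beneficial when judged against historical controls.

```python
# Hypothetical simulation of chronology bias with historical controls.
# All numbers are invented for illustration; they are not from the NEJM trial.
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Baseline mortality improves over calendar time as supportive care improves.
p_death_early = 0.60   # historical-control era
p_death_late = 0.45    # treatment era (improvement unrelated to the therapy)

# The therapy itself has NO true effect in this simulation.
historical_controls = rng.binomial(1, p_death_early, n)
treated_patients = rng.binomial(1, p_death_late, n)

print("Historical-control mortality:", historical_controls.mean())
print("Treated-group mortality:     ", treated_patients.mean())
# The naive comparison credits the calendar-time improvement to the therapy,
# which is exactly the bias a concurrent control group is designed to avoid.
```

The point of the sketch is simply that a difference between eras can masquerade as a treatment effect whenever the control group is drawn from an earlier period.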

It appears that the topical nature of Ebola virus disease, not the quality of the work, is the underlying reason for such high-profile journal publications. This raises the question of whether or not my own EBOV-related publications fall into the same category. I don't think that this is the case, since my work is purely observational and designed to generate testable hypotheses about pathogenesis rather than prove causality... but maybe I'm wrong?

Scientists Won't Go Against the Grain

I found Dan Ariely’s TED Talk on dishonesty (https://www.youtube.com/watch?v=onLPDegxXx8&feature=youtu.be) very compelling, particularly the part about how group mentality can have a major effect on the prevalence of cheating.

He begins by asserting that most people are likely to cheat, though not in proportion to a cost-risk-benefit analysis. Instead, people tend to cheat a marginal amount in most circumstances in order to both reap the benefits offered by cheating and maintain the image that they are ‘a good person’.

What does affect this marginal amount, it seems, is the perceived level by which other ‘good people’ cheat.  He gave one of his studies as an example, where students taking an exam were likely to cheat more extensively if an obvious cheater was perceived to be part of the in-group (a fellow university student), as opposed to when that person was part of an out-group (a student from a rival university).  

I think this phenomenon has great relevance to the scientific community, particularly in regards to lab environments.  This phenomenon speaks to a relativistic kind of morality, as well as an unwillingness to go against the grain.  If a particular lab has a specific method that they use for analyzing data or preparing figures, it may be difficult for a newcomer to challenge that method, even if the newcomer suspects that it may not be completely accurate.  It is all too easy to accept a method as just the way the lab does things; stirring up trouble about this point could lead to a variety of problems: ridicule at the newcomer’s lack of understanding of the technique, backlash from other lab members, even resentment at the implication that the data the lab generates may be flawed in some way.  As such, it may be difficult for labs to perform quality checks on themselves.  Even if all the members of a lab truly believe that their methods are utterly sound, they may be unwilling to go to great lengths – using time and resources – to try alternate methods to test their belief.  

Any uncertainty at the data-producing level is only compounded as papers move on to higher levels of review, where fact-checking becomes more arduous and more reverse-engineering of protocols and raw data is required. Therefore, I think the lack of replicability in scientific data (as discussed at length in the Economist’s article http://www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble) is largely a ground-up issue. Until researchers are willing to hold themselves, their labs, and their techniques to the highest possible level of integrity, the quality of work that the scientific community as a whole produces cannot be improved.



Bias prevention versus presence: reframing the scientific perspective of bias


First and foremost, it is important to note how Dan Ariely pointed out the inherent human quality that most (if not all) of us are willing to cheat “a little bit” at some point and are resistant to testing our intuitions. While many scientists like to believe that we are extremely ethical and would never falsify data, it is likely that the tendency to “cheat a little bit” and/or not test our intuitions will lead to bias. The bigger problem is that we don’t always realize when this occurs. For example, if a few early sets of data do not fit your hypothesis while the rest of the data do, you may reason that these are due to improper technique while you were learning the protocol. Having concluded this, you do not analyze these inconsistent results along with the rest of the data sets. Though you may think it is ethical to exclude data that you believe to be incorrect, you are also introducing the bias of your “intuition.”
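As a toy illustration of that point, here is a small simulation sketch (hypothetical numbers, not drawn from any of the readings): even when there is no real effect at all, quietly discarding the replicates that "must have been done wrong" pushes the false-positive rate well above the nominal 5%.

```python
# Minimal sketch (hypothetical numbers): dropping "bad-looking" replicates
# from null data manufactures an apparent effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
false_pos_full, false_pos_trimmed = 0, 0
n_sim = 2000

for _ in range(n_sim):
    control = rng.normal(0, 1, 10)
    treated = rng.normal(0, 1, 10)      # no true difference between groups

    # Honest analysis: keep every replicate.
    if stats.ttest_ind(treated, control).pvalue < 0.05:
        false_pos_full += 1

    # Biased analysis: discard the two treated replicates that least support
    # the hoped-for increase (blamed on "improper technique").
    trimmed = np.sort(treated)[2:]
    if stats.ttest_ind(trimmed, control).pvalue < 0.05:
        false_pos_trimmed += 1

print("False-positive rate, all data kept:", false_pos_full / n_sim)
print("False-positive rate, data excluded:", false_pos_trimmed / n_sim)
```

The exclusion rule here is deliberately mild, yet it is enough to make null data look like a real result far more often than chance alone would allow.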

This relates to Pannucci and Wilkins’ statement that “… only the most rigorously conducted trials can completely exclude bias as an alternate explanation for an association.” This does not only apply to the analysis of results; it also applies to the planning of experiments. If a researcher strongly favors a specific hypothesis, they may not have the insight to conduct other assays that would throw a wrench in their proposed mechanism. Thus, the researcher may be blind to other possibilities and cannot truly exclude an alternate explanation for an association. I believe that many scientists are inflexible about their working hypothesis because of the current “publish or perish” emphasis in scientific culture. Unfortunately, this publish-or-perish sentiment may push some researchers to test their ethical boundaries by “cheating a little bit” just so they can get published in a high-impact journal and maintain their career.

It is crucial for the scientific community to focus on how to recognize potential sources of bias and avoid them. I really enjoyed Pannucci and Wilkins’ emphasis on the question of how bias is prevented, not on how it is present. If scientists were to reframe their perspective on bias to reflect this, it would allow for a more thorough approach to planning, executing, and analyzing an experiment. Focusing on the prevention rather than the presence of bias forces a more active thought process in conducting scientific research.

From Enron to Nature: How money clouds our moral code around cheating


Dan Ariely mentioned, albeit briefly, Enron in his discussion of our "buggy moral code" (https://www.youtube.com/watch?v=onLPDegxXx8&feature=youtu.be). I was only eight at the time that Enron began to fall, so I was completely unaware of what was going on in the stock market and the US economy (http://www.investopedia.com/articles/stocks/09/enron-collapse.asp). Nevertheless, I have heard the company name thrown around, usually followed by the word “scandal,” so when Ariely mentioned it, I took the liberty of looking up the Enron scandal to figure out exactly what happened. The higher-ups and the bookkeepers were essentially playing a game of hide-and-seek with the company’s assets. From what little I know of Enron, it seems to have been entirely driven by money and the desire to keep up appearances, in this case the appearance of being a consistently successful company.

In terms of science, then, this raises the question: is all that we do, all our desire to publish, simply to get (and keep) funding? As a young scientist, not yet jaded by the stresses of grant writing and the like, I know that this is not what science is about. Yet many retracted papers are blamed largely on the “publish or perish” mentality (http://blogs.scientificamerican.com/guest-blog/the-replication-myth-shedding-light-on-one-of-sciencee28099s-dirty-little-secrets/). Science is supposed to be a community, much like the economy is supposed to be a community, but the difference is that science is self-policing and structured around peer review, whereas the economy has external regulation. The difficulty with external regulation in science is that people without a vested interest do not understand the data and cannot review it critically. So we come to the same issue of how to give scientists more security in their funding so that they are free to do good research: research that is reproducible and research that can be a diving board for future discoveries. As Jared Horvath said in the blog cited above, scientific reputation needs to be put aside in order to make the scientific community more honest. But without forcing authors to swear on a Bible or recall the Ten Commandments before publishing, as Ariely did in his experiments, how can we discourage cheating and remove the temptation when there is so much money at stake?

Tuesday, January 12, 2016

Scientific funding pressures: Can statistics overcome human nature?

[Embedded xkcd comic; the full comic is available on xkcd.]


How often have we believed “facts” presented in peer-reviewed papers without questioning the methods or statistical analyses – simply trusting that the researchers were honest, that no good or self-respecting scientist would fabricate data, and those who did would ultimately be found out and dealt with? Until I started doing full-time research, I wasn't aware that papers could be retracted – and for seemingly minor reasons, such as technical errors (papers are actually retracted quite often; there’s even a website that tracks them!). It wasn’t until the STAP cell fiasco last year that I started to seriously (and sadly and angrily, all at the same time) consider the reality of data fabrication and the consequences that a retraction as spectacular as that one had on the scientific community as well as the public, who trust scientists to perform research not only with innovation and accuracy, but also honesty and integrity.

Two befuddling questions I had were: How do fabricated results evade the scrutiny of the most knowledgeable scientists in the field during the peer-review process? And what pushes scientists to put their careers at stake and publish fraudulent data?

Several of the assigned articles strongly reflected and addressed my views on the flaws of the peer-review process, the pressures of being a researcher, and the inherent fallibility of human nature and judgment. And I feel that the way funding is awarded plays one of the biggest roles in exacerbating these things.

Research funding is more competitive than ever. With that competitiveness comes the increased pressure to publish more and publish faster (Read: Peter Higgs: I wouldn’t be productive enough for today’s academic system). Standard criteria for grants defined by the NIH include (grouped based on my own observations): your proposal (significance, innovation, approach), your institution (environment), and yourself (investigators and overall impact). When credibility and merit are met with the stress of vying for limited resources that could determine someone's future and career, it is no wonder that data fraud occurs.

I feel that before we can move past the anxiety of accepting scientific data at face value, what needs to change is that scientists have to value the integrity and reliability of unbiased results. In a society and scientific environment where quantity seemingly takes precedence over quality, the pressure to produce copious amounts of papers and data can cripple the advancement of scientific knowledge and efforts to decrease public skepticism of “breakthroughs” and “high impact research.” I wonder: can statistics really overcome human nature's fallibility and bias in the face of personal interest?

Wednesday, January 6, 2016

If Bias Is Hardwired, Use Statistics to Deal With It



Behavioral economist Dan Ariely discusses how, and why, we cheat in this video. My perspective on this is colored by preparing for a new class of mostly biomedical science PhD students in my experimental statistics course this spring.

These thoughts are much less about them cheating in the course than about ensuring they don't join the ranks of scientists generating results that are shrouded in the mists of deception.

Ariely's view is that cheating is an inherent human attribute: there's simply no denying our proclivity to be deceptive. It is human nature.

Therefore, it makes sense that we all should simply operate with the understanding that scientists are like any other humans. We all suffer from inherent impulses to make our work and ourselves seem a little better than they truly are.

This is just one of the many biases that confront scientists as they go about their work.

Fortunately, we can use a tool that helps us deal with bias in a systematic way.

Experimental statistics is a lot of things. It is applied math. It is a variety of models that simplify our data to make them easier to understand and to explain to others. But at its very core, experimental statistics mostly functions as an abstract machine that helps us control our tendency to deceive others... and, perhaps most importantly, ourselves.
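As one small, concrete example of that machine at work, here is a simulation sketch (parameters are made up for illustration) comparing a single pre-specified test against the very human habit of re-testing the data every few samples and stopping as soon as p < 0.05. The pre-specified analysis holds the false-positive rate near 5%; the "peeking" analysis does not.

```python
# Minimal sketch (invented parameters): optional stopping ("peeking") inflates
# the false-positive rate well above the nominal 5%, which is one concrete way
# pre-specified statistical rules protect us from deceiving ourselves.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_sim, max_n = 1000, 50
fixed_hits, peeking_hits = 0, 0

for _ in range(n_sim):
    a = rng.normal(0, 1, max_n)
    b = rng.normal(0, 1, max_n)          # no true difference between groups

    # Pre-specified analysis: one test at the planned sample size.
    if stats.ttest_ind(a, b).pvalue < 0.05:
        fixed_hits += 1

    # Self-deceiving analysis: test after every 5 new samples per group and
    # stop the moment the result looks "significant".
    for n in range(10, max_n + 1, 5):
        if stats.ttest_ind(a[:n], b[:n]).pvalue < 0.05:
            peeking_hits += 1
            break

print("False-positive rate, pre-specified test:", fixed_hits / n_sim)
print("False-positive rate, optional stopping: ", peeking_hits / n_sim)
```

The abstract machine only works if we agree to its rules in advance; the moment we let our hopes decide when to stop or what to keep, the guarantees quietly disappear.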