Unbiased research is the "gold standard" of science: it aims to prevent scientists from achieving "self-fulfilling prophecies" and deceiving themselves into believing an idea or hypothesis is true when it is actually false or entirely due to random chance. Properly applied statistics, sound experimental design, and replication, among other methods, help prevent bias, but doing so actually runs against our normal human nature, as discussed in Dan Ariely's TED Talk.
Additionally, bias takes many forms, known or unknown to the person performing the experiments, that can confound studies and hinder scientific progress. Panucci et al. (2010) review numerous types and examples of bias in tangible research situations that most biomedical researchers and clinicians will encounter during their careers, whether conducting research affected by such biases or reviewing it. Of the many types of research bias, the one I currently face, and am most frustrated by, as a fourth-year graduate student is publication bias: in my view, the preference for publishing positive results and the difficulty of publishing data that go against the current paradigm of a particular field.
Recently,
I designed an experiment to analyze the cytokine response in nonhuman primates
with malaria using two different species of malaria parasites. I went through
the standard process of generating my samples, developing and designing the
protocol, etc. Then, I proceeded to perform the experiment. In one set of
samples, I found that Interferon gamma, a hallmark cytokine in malaria involved
in immune responses, was upregulated as we had predicted. In my other set of samples,
the cytokine was undetectable. I immediately thought, “How could this be? The
assay must be wrong because this has been seen before and is found in all
malaria infections from mice to humans to monkeys.” Like a good student, I
proceeded to validate the data on a different platform and confirmed my results,
which took about three more weeks considering the need to standardize yet
another assay, place orders, and so on. When I presented the results to my advisor, he or she responded, "I am not surprised; I've seen this before." I was shocked because this cytokine is always reported in the literature, and my advisor had even advised me to use it as a "calibrator". However, it
turns out that this cytokine follows temporal dynamics during the infection, so if one misses a certain time frame, the cytokine may be undetectable. To my knowledge, not a single paper addresses this observation, largely because a positive result is needed to get the data published. Obviously, it is much more interesting to report an increase in a particular cytokine than no change at all. Furthermore, the dogma and published literature are skewed toward presenting this cytokine as central to malaria, so if it is not present at detectable levels, researchers and reviewers begin to question the integrity and quality of the experiment and samples.
Publication bias thus cost me time in finishing this component of my project. If the literature were not biased toward current opinions and by the need for a positive result, I, and others in my situation, would not be surprised by such results and would not spend time triple-checking every aspect of an assay, repeating experiments too many times, and so on. Publishing negative results, and results that go against current understanding or dogma, should be welcomed in science; such results are ultimately what will help scientists address new and interesting questions in novel ways. Of course, this opinion is qualified by the requirement that the experiments be designed appropriately and replicated.