From the infamous Tuskegee Syphilis Study of the early 1930s to the much more recent case of a Japanese scientist who claimed to have created a novel method for generating pluripotent stem cells,
the scientific community has seen many extremes of scientific fraud. But what
about the smaller, more realistic challenges we face every day in our own
laboratories? It is easy to say “I would never do that!” after reading an
article about a man using a Sharpie to draw results on his mice. It is a lot
harder to tell your lab mate that you believe they are introducing bias into
their experiment.
According to an article published in The Economist, Dr. Daniele Fanelli of the University of Edinburgh analyzed
surveys of academics conducted between 1987 and 2008 and found
that “…28% of respondents claimed to know of colleagues who engaged in
questionable research practices.”
When it comes to being critical, it is
important not to be hypocritical at the same time. I believe you must first be
able to hold yourself accountable before you can hold others accountable.
However, is it possible to point out mistakes if you were never taught the
correct way to perform an unbiased experiment?
Bruce Alberts (former editor-in-chief of Science)
believes that in order to increase credibility in science “…scientists must be
taught technical skills, including statistics, and must be imbued with scepticism
towards their own results and those of others.” Understanding the problems
and issues we are likely to face as scientists every day, and how to combat them,
will give new scientists the confidence to speak out and to be critical of
themselves and others.
It is important to emphasize negative results,
as websites like Retraction Watch do, so that scientists do
not waste time and energy on a project or an idea that
has already been shown not to work. The
pressure to “publish or perish” is a major factor pushing scientists to
focus only on positive results, which in turn creates even more pressure not to waste
time.
It is disheartening to hear that many
more papers are retracted now than a few decades ago. This leads the
general public to believe that science cannot be trusted and that federal money
is being wasted. However, with the advancements in science and technology
and the increase in the number of journals accepting papers, retractions still “make
up no more than 0.2% of the 1.4m papers published annually in scholarly journals.”
Will there ever be a world of unbiased research?
Probably not. But we can move in that direction by first holding ourselves
accountable, acknowledging our own biases, and working to avoid them.
Could the increasing rate of retractions have to do with the easier dissemination of research and open access, as well as advancements in technology that allow competitors to refute or disprove a previously published idea? It would be interesting to delve into what sets a retraction in motion.
One of the founders of Retraction Watch, Ivan Oransky, actually gave a talk at Emory last semester, and he seemed to agree with the point you are making, Ashley. Even though the retraction numbers are small (0.2% of 1.4m papers), he mentioned that the majority of retractions stem from honest mistakes in data interpretation or from technical complications discovered after the data were collected. Very few retractions are the result of outright fraud. This is good news, because better education in experimental design and statistics can limit the number of honest mistakes made with data. The bigger issue, however, is that there isn't a standardized mechanism for dealing with retractions of published papers. That means that even papers retracted for outright fraud can often still be accessed online, with confusing or inconspicuous notifications about a paper's status. Biased papers can therefore still be cited years after an official process or an author's confession has judged them to be bad science. In order to help bring about "a world of unbiased research", we as a community need a better way to identify the biased work that is already out there. That starts with a standard for how to deal with retractions across all journals.