First and foremost, it is important to note how Dan Ariely pointed out
an inherent human quality: most (if not all) of us are willing to cheat
“a little bit” at some point and are resistant to testing our intuitions. While
many scientists like to believe that we are extremely ethical and would never
falsify data, the tendency to “cheat a little bit” and/or leave our intuitions
untested is likely to introduce bias. The bigger problem is that we don’t always
realize when this occurs. For example, if a few early data sets do not fit
your hypothesis while the rest of the data do, you may reason that these
outliers arose from improper technique while you were still learning the
protocol. Having concluded this, you exclude these inconsistent results from
the rest of your analysis. Though you may believe it is
ethical to exclude data you think are incorrect, you are also
introducing the bias of your “intuition.”
This relates to Pannucci and Wilkins’ statement that “… only the most rigorously conducted trials can completely exclude bias as an alternate explanation for an association.” This applies not only to the analysis of results but also to the planning of experiments. If a researcher is strongly committed to a specific hypothesis, they may lack the insight to conduct other assays that would throw a wrench in their proposed mechanism. Thus, the researcher may be blind to other possibilities and cannot truly exclude an alternate explanation for an association. I believe that many scientists are inflexible about their working hypotheses because of the current “publish or perish” emphasis in scientific culture. Unfortunately, this publish-or-perish sentiment may push some researchers to test their ethical boundaries by “cheating a little bit” just so they can get published in a high-impact journal and maintain their careers.
It is crucial for the scientific community to focus on how to
recognize potential sources of bias and avoid them. I really enjoyed Pannucci
and Wilkins’ emphasis on the question of how bias is prevented, not merely on
how it is present. If scientists reframed their perspective on bias to reflect
this, it would allow for a more thorough approach to planning, executing, and
analyzing an experiment. Focusing on the
prevention rather than the presence of bias forces a more active thought
process in conducting scientific research.