I found Dan Ariely’s TED Talk on dishonesty (https://www.youtube.com/watch?v=onLPDegxXx8&feature=youtu.be) very compelling, particularly his discussion of how group mentality can have a major effect on the prevalence of cheating.
He begins by asserting that most people are likely to cheat, though not in proportion to a rational analysis of cost, risk, and benefit. Instead, people tend to cheat a marginal amount in most circumstances, enough both to reap the benefits offered by cheating and to maintain the image that they are ‘a good person’.
What does affect this marginal amount, it seems, is the perceived extent to which other ‘good people’ cheat. He cited one of his studies as an example, in which students taking an exam cheated more extensively when an obvious cheater was perceived to be part of the in-group (a fellow university student) than when that person was part of an out-group (a student from a rival university).
I think this phenomenon has great relevance to the scientific community, particularly in lab environments. It speaks to a relativistic kind of morality, as well as an unwillingness to go against the grain. If a particular lab uses a specific method for analyzing data or preparing figures, it may be difficult for a newcomer to challenge that method, even if the newcomer suspects it is not completely accurate. It is all too easy to accept a method as just the way the lab does things; stirring up trouble on this point could lead to a variety of problems: ridicule of the newcomer’s lack of understanding of the technique, backlash from other lab members, even resentment at the implication that the data the lab generates may be flawed. As such, it may be difficult for labs to perform quality checks on themselves.
Even if all the members of a lab truly believe that their methods are sound, they may be unwilling to spend the time and resources required to test that belief against alternate methods.
Any uncertainty at the data-producing level is only compounded as papers move to higher levels of review, where fact-checking becomes more arduous because it requires reverse-engineering protocols and raw data. Therefore, I think the lack of replicability in scientific data (as discussed at length in the Economist’s article http://www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble) is largely a ground-up issue. Until researchers are willing to hold themselves, their labs, and their techniques to the highest possible standard of integrity, the quality of the work that the scientific community as a whole produces cannot improve.