Scientists tend to place a lot of importance on the p-value.
PIs lose grants over p-values, grad students have nervous breakdowns over “insignificant
data”, and undergraduates desperately hunt for outliers they can redo to rescue
that p-value. Is the p-value really that important?
As we have learned in class this semester, the p-value is
the probability of obtaining a result equal to or more extreme than the one actually
observed, assuming the null hypothesis is true. So a low p-value indicates that
random chance alone was unlikely to produce such an extreme result. Many scientists would stop when
they see that low number, wipe the sweat off their brows, and submit their grant
or paper.
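To make that definition concrete, here is a minimal sketch with invented numbers; SciPy's two-sample t-test is used only as a convenient example of a test that produces a p-value, not as the "right" analysis for any particular experiment.

```python
# A minimal sketch using made-up data. The two-sample t-test is assumed here
# only as one familiar source of a p-value; any hypothesis test would do.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=10.0, scale=2.0, size=20)  # hypothetical control measurements
group_b = rng.normal(loc=11.0, scale=2.0, size=20)  # hypothetical treated measurements

t_stat, p_value = stats.ttest_ind(group_a, group_b)

# p_value is the probability of seeing a t-statistic at least this extreme
# if the null hypothesis (equal group means) were true.
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```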
However, is a p-value of 0.051 really that much worse
than a p-value of 0.049? Is the difference between those p-values really worth
a nervous breakdown? The 0.05 cutoff is an arbitrary line drawn by scientists, and
young scientists often fail to look beyond it. A p-value is just one statistical
number, to be used in conjunction with other statistics such as
the mean, confidence interval, and standard error when deciding what an experiment
actually implies.
A scientist needs to use his or her judgment to conclude whether an
experiment is worth pursuing further or is a waste of time; a significant
or insignificant p-value should not make that decision. An insignificant p-value
does not mean the data were worthless, but only a clear-minded scientist can
determine what it actually means. Likewise, and much less often considered, a
significant p-value does not mean that the experiment produced a result worth celebrating.
Your significant p-value might indicate that your drug decreased the
stress levels of mice by 10%, but is a 10% decrease in mouse stress really
meaningful for any application?
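As a rough illustration (numbers invented purely for this sketch), a perfectly "significant" p-value can sit on top of a modest effect: with enough mice, a 10% drop in a hypothetical stress score yields a vanishingly small p-value, and only the effect size and your own judgment say whether that drop matters.

```python
# Invented numbers, for illustration only: with a large enough sample,
# even a modest difference produces a tiny p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 5000
control = rng.normal(loc=100.0, scale=40.0, size=n)  # hypothetical stress scores
treated = rng.normal(loc=90.0, scale=40.0, size=n)   # ~10% lower on average

t_stat, p_value = stats.ttest_ind(treated, control)

# Cohen's d puts the size of the difference in a context that p alone does not.
pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
cohens_d = (control.mean() - treated.mean()) / pooled_sd

print(f"p = {p_value:.2e}, Cohen's d = {cohens_d:.2f}")
# The p-value is astronomically small, but whether a ~10% drop in a mouse
# stress score matters for any application is a judgment call, not a statistic.
```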
These are the sorts of questions scientists should be
thinking about. In one experiment the p-value may carry a lot of weight, while in another
the confidence interval might say far more about the results. Scientists
must be able to analyze their data not just statistically but critically, to
know what their statistics are actually telling them.
Great post! I completely agree. A scientist's most unique skill is to think critically about the problems and hypotheses they are interested in (as well as thinking EXTRA critically about other people's problems and hypotheses). However, this ability seems to completely disappear at the end of an experiment, when most of us find ourselves staring at the computer and hoping at least one asterisk pops up in our analysis. I wonder if this is a cognitive issue, a cultural issue, or both. Regardless, I agree that we as a community need to be more conscious about fostering important research rather than just "significant" research.