The Flexibility of Statistical Analysis
The American Statistical Association
assembled a set of ethical guidelines that provide statisticians in all fields
with support in quantitative research. In the Executive Summary of the Guidelines,
they state that, “It is important that all statistical practitioners recognize
their potential impact on the broader society and the attendant ethical
obligations to perform their work responsibly. Furthermore, practitioners are
encouraged to exercise ‘good professional citizenship’ in order to improve the
public climate for, understanding of, and respect for the use of statistics
throughout its range of applications.”
This statement acknowledges that
statistical analysis is highly flexible and that analyses should be designed with
an awareness of their potential impact. I reviewed some of the guidelines and
was surprised to find that there are not many regulations on how to
design a statistical analysis. This is probably because it would be too
difficult to write protocols for every possible statistical design. In
the future I would like to see a more detailed handbook with stricter
regulations on statistical design. Even the standards of science and analysis have been shown to be flexible. For example, the p-value threshold lacks a single standard and can vary from scenario to scenario. I discovered this recently while writing my “Bad Stats Assignment,” which was on p-value adjustments for neuroscience imaging.
P-values are used as a way to separate true findings from ones generated by chance. A p-value of 0.05 or less is considered “statistically significant” in many fields. Although the p-value provides a threshold, it cannot tell you whether the hypothesis is correct. A common misconception is that the p-value gives the probability that the result occurred by chance. This is incorrect: the p-value only tells you the probability of seeing results at least as extreme as yours, given a particular hypothetical explanation. It cannot tell you the probability that the result is true or due to random chance. Nor can it tell you the size of an effect, the strength of the evidence, or the importance of a result. Although it may seem like I am limiting what the p-value represents, it is still used as a way to separate true findings from spurious ones. Unfortunately, researchers have an incentive to “adjust” data and keep trying different analyses until something clears the p-value threshold.
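The incentive to keep trying analyses until something fits can be simulated. Below is a hedged, stdlib-only sketch of one such practice, often called “peeking” or optional stopping: checking for significance after every batch of data and stopping as soon as p < 0.05. The simple known-variance z-test and all variable names are my own illustrative choices:

```python
import math
import random

def z_p(xs):
    """Two-sided p-value for mean zero, assuming known unit variance."""
    n = len(xs)
    z = (sum(xs) / n) * math.sqrt(n)
    return math.erfc(abs(z) / math.sqrt(2))

random.seed(0)
experiments, batches, batch_size = 200, 10, 10

false_pos_final, false_pos_peeking = 0, 0
for _ in range(experiments):
    data, significant_ever = [], False
    for _ in range(batches):
        # Collect another batch of pure noise, then "peek" at the p-value.
        data += [random.gauss(0, 1) for _ in range(batch_size)]
        if z_p(data) < 0.05:
            significant_ever = True  # would stop and publish here
    false_pos_final += z_p(data) < 0.05      # honest: one test at the end
    false_pos_peeking += significant_ever    # peeking: any look "worked"

print(f"false-positive rate, single test: {false_pos_final / experiments:.2f}")
print(f"false-positive rate, peeking:     {false_pos_peeking / experiments:.2f}")
```

Testing once at the end keeps the false-positive rate near the nominal 5%, while taking ten looks at the same growing dataset inflates it well beyond that, even though every experiment is pure noise.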