Comments on Unbiased Research: "P values: Statistically Significant, but Relevant?"

I think your comments about the need for a more thorough understanding of statistical analysis and experimental design are spot on, Jacob, and it seems we all agree that the p-value is overused but not entirely worthless. I'd like to throw out one other possible justification for the p-value. As scientists, we have a greater sense than the average person of just how intricate the systems operating in this world are, and we recognize that even subtle changes can ultimately have big consequences; at least, we can't rule out that possibility. Plus, many of us are interested in understanding everything, even relatively trivial details, especially when we have to work so hard experimentally to get those answers. We also all know that we're riding an upward trend in the amount of data required for a paper, the number of students in research, the number of papers required to graduate, the number of publications expected of postdocs, junior faculty, senior faculty, and so on. Given all this, it can be argued that there's reason to study, and try to publish, just about anything, no matter how inconsequential the results may actually be.

While the p-value is far from perfect, it at least gives a very clear cutoff that even the newest science students can understand. If your data don't meet that conventionally accepted threshold, it forces you to go back and reevaluate your hypothesis, your experimental design, and so on, make changes if possible, and most of us won't try to publish that result if we have a choice. So essentially, the p-value may in a small way help to focus the energies of scientists on more substantial effects rather than allowing us to mire ourselves in the sea of slight variations present in everything.

— Anonymous, 2016-04-15

I feel like the following statement released by the American Statistical Association may be of interest to you: http://amstat.tandfonline.com/doi/pdf/10.1080/00031305.2016.1154108 (sorry, I can't format it as a link).

It addresses head-on the problem of p < 0.05 being taught as a universal law; specifically, that "we teach it because it's what we do, and we do it because it's what we teach."

It should also be noted that this is the first time the ASA has come out specifically on matters of statistical practice, so it's a pretty big deal. And while consensus may have seemed easy for a group of statistics nerds who all basically share the same opinion, that the p-value is dangerous and egregiously misused, it took from October 2015 to February 2016 for them to draft this statement.

The statement is long, but it's worth a skim. At its most basic, it outlines the use and misuse of the p-value, in plain language and with definitive statements like this:

"Researchers often wish to turn a p-value into a statement about the truth of a null hypothesis, or about the probability that random chance produced the observed data. The p-value is neither.
It is a statement about data in relation to a specified hypothetical explanation, and is not a statement about the explanation itself."

It definitely agrees with what you have to say about the importance of designing your experiment and your analysis before starting.

— wozniakje, 2016-04-15

I agree with you. The BASP article from Nature News was really interesting, mostly because of the varied responses people had to it: some graduate students were left despairing, while others saw it as an exciting challenge. But I also think that we as a scientific community need to focus not just on whether there is a difference or an impact, but on what the scientific significance is.

There's nothing worse for me than reading a paper that starts strong and is decently convincing until a small detail catches your eye, and then the whole sweater unravels in an instant. You start to break down the design and analysis of the experiments, then you question the statistical analysis, and then the paper is just pushed to the edge of your desk in disgrace.

But I agree with a lot of what you say, and I've mentioned similar thoughts in my post and in other comments. I think completely eliminating the p-value from statistical analysis is a step too far. Removing it may also be too difficult a change for some scientists, not to mention that its outright banning completely ignores everything it actually does tell us. The shift we make regarding the p-value needs to be more mental than physical. If we start actually understanding what we're reporting, design experiments better, and analyze our findings more comprehensively, then we would probably stop using the p-value as a crutch.

— Anonymous, 2016-04-15
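The ASA's point, that a p-value speaks about the data under a hypothetical explanation rather than about the explanation itself, can be made concrete with a quick simulation. The sketch below (stdlib Python only; the permutation test, sample sizes, and all parameter choices are illustrative assumptions, not anything from the comments above) runs many experiments in which the null hypothesis is true by construction, and counts how often the p < 0.05 cutoff is crossed anyway:

```python
import random
import statistics

random.seed(0)

def perm_test_p(a, b, n_perm=200):
    """Two-sided permutation-test p-value for a difference in means."""
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = a + b
    count = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        x, y = pooled[:len(a)], pooled[len(a):]
        if abs(statistics.mean(x) - statistics.mean(y)) >= observed:
            count += 1
    return count / n_perm

# Simulate experiments where the null is TRUE: both groups are drawn
# from the same distribution, so any "significant" result is chance.
n_experiments = 400
false_positives = 0
for _ in range(n_experiments):
    a = [random.gauss(0, 1) for _ in range(10)]
    b = [random.gauss(0, 1) for _ in range(10)]
    if perm_test_p(a, b) < 0.05:
        false_positives += 1

print(f"'Significant' results with no real effect: "
      f"{false_positives / n_experiments:.1%}")
```

Under the null, roughly one experiment in twenty clears the 0.05 bar by chance alone, which is why a lone "significant" result, on its own, says nothing about scientific significance.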