Comments on Unbiased Research: The cause of feelings of hopelessness and failure in graduate school: P-Values and Statistical Significance (blog by TJ Murphy)

Anonymous (2016-05-01 18:36):
Some of the field's early discoveries were made by scientists with no statistical or related training. Especially for findings that are qualitative rather than quantitative, it is sometimes impossible to run a "significance" check, yet we cannot deny the importance of those findings. I agree that nowadays we focus too much on "statistically significant" data, especially when it comes to getting a good publication. I once showed my thesis committee patient data suggesting a trend of upregulation of my target gene in human patients, but with a p-value greater than 0.05. I thought the data were informative even if not statistically significant, but a few members commented that it was fine to show them, just not good enough to publish. Then I would expect others to ask: would your work be relevant to patients if you only showed us the mouse model data? That contributes, to some extent, to my "feelings of hopelessness or failure". My way of handling it is to use my own judgment. I decide whether to show the "ns" data, and whether to apply statistics in a given part of the project, based on experimental experience and relevant reading, with the statistics in mind. I think that is also part of our graduate training: building up our own unbiased way of doing research.
Anonymous (2016-04-25 14:13):
I do think that science as a field is too focused on statistical significance, but I also agree with Julia that we need some measure of validity or uniformity in our results. You can't just give a talk and claim that, because there is a slight difference between two columns in your figure, you've cured cancer. It is another thing to say that as you increase the dose of a drug you see a general increasing trend in the response variable, even if a one-way ANOVA or other appropriate statistical test doesn't call it "significant".

One thing we also forget is that negative data, or non-significant p-values, are still valuable results. It's not as "sexy" as getting that p < 0.05, but even a non-significant result is a result and tells you more about the phenomenon at hand.

Anonymous (2016-04-08 09:41):
Amen, sistah! Nearly every time I present data from one of my ongoing experiments to faculty, they ask if I have performed statistics on it yet. When I explain that I am waiting until the end, when I have all my data, I am often met with confusion, suggestions to run t-tests on random groups (just to see), or hostility. It is one thing to want to know the results (that's why we do the tests), but it's another to only care about them when they fall below a certain p-value.

I've also recently gone to several student talks where statistically tested, non-significant "trends" were presented as if there were real differences between groups.
Our culture is so obsessed with the idea that everything must be < 0.05 that we pretend we have it even when we don't.

TJ Murphy (2016-04-05 16:06):
I actually think of statistics as empowering. If the null has been given a fair test through a proper experimental design and the outcome is "negative", it's a lot easier to face the music. In fact, it's gratifying. It forces you to move on to another problem.

There's a lot less angst when the process is followed, compared to the typical scenario of a never-ending run of preliminary experiments, each with a slightly different flair than the previous. Alone and together, they are never powered up enough to give the null a fair test. The doubt always nags.

Anonymous (2016-04-05 10:36):
I'm glad you brought up the example of Einstein. It really shows the progression of scientific thinking. Maybe you are right that we need to embrace statistical significance and think of better applications for it. Instead of complaining about and discouraging its use, we should move forward and find ways to make better use of the p-value.
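The point above about preliminary experiments never being "powered up enough" can be illustrated with a quick simulation (a sketch with made-up numbers, not data from the thread): assume a real but modest effect (Cohen's d = 0.5) and compare how often a two-sample t-test reaches p < 0.05 with small versus adequate group sizes.

```python
import numpy as np
from scipy import stats

# Hypothetical scenario: a real effect exists (Cohen's d = 0.5),
# but each "preliminary" experiment uses only n = 8 per group.
rng = np.random.default_rng(0)
n_sims, n_small, n_large, d = 2000, 8, 64, 0.5

def power(n):
    """Fraction of simulated experiments reaching p < 0.05."""
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n)  # control group
        b = rng.normal(d, 1.0, n)    # treated group, true effect = d
        if stats.ttest_ind(a, b).pvalue < 0.05:
            hits += 1
    return hits / n_sims

print(f"power at n={n_small}:  {power(n_small):.2f}")  # low: most runs miss a real effect
print(f"power at n={n_large}: {power(n_large):.2f}")   # roughly 0.8 in this scenario
```

With n = 8 per group, the great majority of simulated experiments fail to detect an effect that genuinely exists, which is exactly why a string of small pilot studies leaves the nagging doubt described above.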
julia (2016-04-05 09:07):
Has statistics always been a part of the scientific method? Einstein produced the theory of relativity at the beginning of the last century, but it took 100 years for it to actually be confirmed by experiment. Maybe in the past, before the ease of fancy calculators or software, the field didn't require the checkmark of statistical significance. Perhaps applying common sense, or a broad understanding of the particular field, was sometimes enough. Or maybe scientific knowledge has been growing exponentially as we obtain new tools and education, and the only way to keep up is to apply rules, such as passing a statistical-significance threshold. Maybe instead of shifting back toward scientific significance, the move should be to create better applications and rules for statistical significance, so that we are all analyzing data in a fair and uniform way.

Anonymous (2016-04-05 08:23):
I agree that a shift toward focusing on scientific, rather than statistical, significance is needed. The question, however, remains how to achieve that shift in the field overall. Certainly, the emergence of "negative data" journals is a move in that direction, but those journals still lack the prestige of traditional journals.
In fact, I think separating "negative data" into separate journals is an unfortunate dismissal of the importance of that data. It seems to me that we need traditional journals (and those on our thesis committees!) to accept the importance of negative data.
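One commenter's dose-response point earlier in the thread can also be sketched numerically (hypothetical data, invented for illustration): with small groups, an omnibus one-way ANOVA can miss a monotone trend that a direct regression of response on dose picks up, because the trend test asks the sharper question the design implies.

```python
import numpy as np
from scipy import stats

# Hypothetical small-n dose-response data: the response drifts upward
# with dose, but noise and tiny groups blunt the omnibus test.
doses = np.array([0, 1, 2, 3])
groups = [
    np.array([4.8, 5.3, 4.6, 5.1]),  # dose 0
    np.array([5.0, 5.6, 4.9, 5.4]),  # dose 1
    np.array([5.2, 5.9, 5.0, 5.7]),  # dose 2
    np.array([5.5, 6.1, 5.2, 6.0]),  # dose 3
]

# Omnibus one-way ANOVA asks only "do any group means differ?"
f, p_anova = stats.f_oneway(*groups)

# A linear trend test regresses response on dose across all observations.
x = np.repeat(doses, [len(g) for g in groups])
y = np.concatenate(groups)
res = stats.linregress(x, y)

print(f"ANOVA p = {p_anova:.3f}")                       # non-significant here
print(f"trend slope = {res.slope:.2f}, p = {res.pvalue:.4f}")  # clear positive trend
```

For these numbers the ANOVA p-value sits above 0.05 while the trend regression is comfortably below it, which is the commenter's point: "not significant by ANOVA" is not the same as "no dose-response relationship worth reporting".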