Friday, February 23, 2018

Simulating correlated variables

Measures collected in paired/related-measures experimental designs are...correlated!

There is a lot of utility in simulating experimental outcomes...for example, when doing a priori power analysis by Monte Carlo, when generating bootstrap distributions, etc.

To simulate paired observations appropriately, it's important to account for their paired correlations.

Here's a simple R script that accomplishes that, including a check for accuracy, illustrated for a case where 80% correlation is expected.
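The script itself isn't reproduced here, so below is a minimal sketch of the idea, assuming a bivariate normal model with hypothetical means and standard deviations: MASS::mvrnorm draws the correlated pairs, and cor() serves as the accuracy check.

```r
# Minimal sketch (not the original script): simulate paired measures
# with an expected correlation of 0.80 from a bivariate normal distribution.
library(MASS)

set.seed(1234)          # reproducibility
n     <- 1000           # number of simulated pairs
rho   <- 0.8            # expected correlation between the paired measures
mu    <- c(100, 110)    # hypothetical means of the two related measures
sigma <- c(15, 15)      # hypothetical standard deviations

# Build the covariance matrix from the SDs and the desired correlation
Sigma <- matrix(c(sigma[1]^2,                rho * sigma[1] * sigma[2],
                  rho * sigma[1] * sigma[2], sigma[2]^2),
                nrow = 2)

# Draw n correlated pairs
pairs <- mvrnorm(n, mu = mu, Sigma = Sigma)

# Check: the realized correlation should land close to 0.8
cor(pairs[, 1], pairs[, 2])
```

With 1000 simulated pairs the realized correlation will typically land within a few hundredths of 0.8; smaller simulated samples will scatter more widely around it.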

Monday, February 12, 2018

Accuracy v Precision

This week we're going to touch a bit on the concepts of accuracy and precision.

In biomedical science, we're usually measuring outcomes that nobody has ever measured before in quite the same way (and probably never will measure again!)

As a consequence, we rarely have any sense of how accurate our results are. To know accuracy, we'd have to know the "true" value of the outcome variable in the population we've sampled. That truth is usually elusive.

So most of the time the best we can do is estimate precision. As we move into the statistics of measured data, the standard error of the mean (sem) is the statistic we use for assessing precision. The lower the sem, the greater the precision.
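For concreteness (this bit of arithmetic isn't spelled out in the post), the sem is just the sample standard deviation divided by the square root of the sample size, which in R is a one-liner on a hypothetical sample:

```r
# sem = sd(x) / sqrt(n): larger samples give a smaller sem, i.e., greater precision.
set.seed(42)
x <- rnorm(25, mean = 100, sd = 15)   # 25 hypothetical measurements

sem <- sd(x) / sqrt(length(x))
sem
```

Because the sem shrinks with the square root of n, halving it requires roughly quadrupling the sample size.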

I'm omitting from the slide deck this year one of my favorite pics that illustrates wonderfully how the battle for greater accuracy and improved precision plays out over time.

This graph is from the Particle Data Group at the Lawrence Berkeley Lab. The PDF it comes from has 11 other graphs much like it. This one shows the evolution of the mass of the eta particle over time.

For over 20 years the accepted mass was a single value. Since then, the estimate has bounced around quite a bit. The most current estimate of the mass appears very precise.

But is it accurate?

The true mass of the eta has remained the same, while the accuracy and precision with which it is measured continue to fluctuate.

Thursday, February 8, 2018

"Hype" in Scientific Research

There are many problems with the current state of published research, many of which I have noticed in my four short years being involved in this world, and many of which appear not to be recent phenomena but rather problems that have plagued science since its inception. Of all these issues, the one that I think is often most harmful to the progress of scientific research and the proper application of the scientific method is the tendency for researchers, research institutions and the press to overhype and often completely misrepresent the significance of their findings.

As the October 2015 Vox article states, “the media has a penchant for hailing new medical ‘breakthroughs’ and ‘miracles’ – even when there’s no evidence to back up those claims.” But the media are not solely to blame. As the author points out, the researchers, doctors, institutions and companies involved in the production of these “miraculous” studies often overstate the significance of their research in the first place. 

One example that comes to mind is a recent study showing the effect of retooled diabetes drugs on the progression of Alzheimer’s in transgenic mice. I was surprised to find headlines in national news sources, such as “Report: Scientists Find Alzheimer’s Treatment While Trying To Cure Diabetes,” that fail to acknowledge that the study was conducted only in mice and is light-years away from the clinical trial data required to make such a claim. Even to suggest it was a valuable mouse study, careful review of potential biases and of the statistical and experimental methods would be required.

I think overhyping and overstating the significance of research often leads to cutting corners in the proper application of the scientific method. Hypotheses quickly become regarded as theory, and a lot of time and money are invested in ideas that are flawed to begin with. I think increasing criticism and analysis of publications via comment sections and forums such as PubPeer is an important step in reducing scientific hype. Most importantly, I agree with the Economist article, which suggests that graduate student education in statistical methods, as well as encouraging a skeptical outlook on scientific research, are key to combating this issue.

Monday, February 5, 2018

Prospective Solutions to the Lack of Peer Review Rigor and Reproducibility in Science

The Vox article about PubPeer discussed how the platform provides a simple, yet interesting, solution to the lack of rigor in peer review. We as media consumers observe every single day how the court of public opinion makes for an important swing vote when judging the actions of the most elite within society. PubPeer capitalizes on this idea, though its court is filled with a much more niche audience of scientists. I wholeheartedly support this platform, as bias and unreliable data are so easily missed during the peer review process. PubPeer provides additional support to ensure that articles that slipped through the cracks of a broken peer review process are retracted. I also see PubPeer as a mechanism to provide a forum for collaboration. Its electronic platform could be a nidus for the formation of collaborations between research groups. In my opinion, these types of collaborative research projects would foster independent replication of experiments and ultimately promote scientific rigor.

However, I believe the systematic problem with peer review that cannot be addressed by PubPeer is the issue surrounding publication bias and the fact that unpublished articles are enriched with null results. Changing this requires a paradigm shift in which scientists and editors embrace the knowledge that a null result provides whilst simultaneously railing against the mentality that “impact factors” reign supreme. As was mentioned in the Economist article about unreliable research, “researchers ought to be judged on the basis of the quality, not the quantity, of their work.” However, the scientific community needs to begin to realize that quality science does not always lead to hypothesis confirmation.

The Economist article about unreliable science also sparked thoughts on additional ways in which the peer review process could be improved on the front end. For example, better training should be provided to editors. In addition, editor performance should be incentivized. Lastly, journals should foster a closer working relationship with statisticians so that raw data of prospective journal articles can be cross-checked by a journal-affiliated statistician.

Overall, I believe that PubPeer and replication initiatives by PLoS ONE are good starting points when attempting to fix the peer review and replication problems that run rampant within the scientific community. However, as mentioned above, the most difficult change to make relates to the pervasive mentality about science that propagates these issues upstream of where our current initiatives are working.