Tuesday, April 12, 2016

Meta-Analysis of Meta-Analyses

Reading through Motulsky’s Intuitive Biostatistics, my attention was caught by the section on meta-analyses in Chapter 43. Given the reproducibility problem in scientific research, the meta-analysis seems to be one of the great tools for addressing it: confirming or refuting prevailing scientific theories and advancing humanity’s body of scientific knowledge.

Looking at the “Assumptions of Meta-Analysis”, I was intrigued by the idea that meta-analyses fall into two general categories, each based on a different assumption. Either

(A) all subjects are sampled from one large population, so every study estimates the same effect, and error comes only from the random selection of subjects, or
(B) each study population is unique, and both the differences between populations and the random selection of subjects contribute to the error.

Sound familiar? To me, these two assumptions seemed derived from the two philosophies of scientific realism we discussed at the beginning of class: one holds that the truth (the large population) exists somewhere and that science’s job is to uncover it; the other holds that unobservable truth is irrelevant, and that the utility of knowledge is paramount as it relates to the advancement of medicine or technology. The idea that a population’s true response to a therapeutic exists is reflected in assumption (A), whereas (B) describes the anti-realist philosophy that there is no “global” population response, only the individual subset responses described in each sub-study of the meta-analysis.
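
In statistical terms, assumption (A) is the fixed-effect model and assumption (B) the random-effects model of the meta-analysis literature. As a minimal numerical sketch of the difference (the effect sizes and variances below are invented purely for illustration; the tau-squared estimator is the standard DerSimonian-Laird one), note how folding the between-study variance into the weights changes the pooled estimate and widens its standard error:

    import numpy as np

    # Hypothetical per-study effect sizes (say, log odds ratios) and their
    # within-study variances; the numbers are invented for illustration.
    effects = np.array([0.30, 0.15, 0.45, 0.20, 0.35])
    variances = np.array([0.02, 0.05, 0.04, 0.03, 0.06])

    # Assumption (A): fixed-effect model. Every study estimates the same
    # true effect, so studies are pooled with inverse-variance weights.
    w = 1.0 / variances
    mu_fixed = np.sum(w * effects) / np.sum(w)
    se_fixed = np.sqrt(1.0 / np.sum(w))

    # Assumption (B): random-effects model. True effects differ between
    # study populations; estimate the between-study variance tau^2 with
    # the DerSimonian-Laird moment estimator and add it to each study's
    # variance before weighting.
    Q = np.sum(w * (effects - mu_fixed) ** 2)        # Cochran's Q
    C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(effects) - 1)) / C)

    w_star = 1.0 / (variances + tau2)
    mu_random = np.sum(w_star * effects) / np.sum(w_star)
    se_random = np.sqrt(1.0 / np.sum(w_star))

    print(f"fixed-effect:   {mu_fixed:.3f} (SE {se_fixed:.3f})")
    print(f"random-effects: {mu_random:.3f} (SE {se_random:.3f}), tau^2 = {tau2:.3f}")

When the studies agree, tau-squared estimates to zero and the two models coincide; the more the studies disagree, the flatter the random-effects weights become and the wider the pooled standard error grows.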


The value of the meta-analysis under framework (B), then, would be to predict the efficacy of a given therapeutic in the next population of patients, given all those who have been tested before. Motulsky goes on to describe this second model as the more commonly used one, underlying most meta-analyses. The anti-realist philosophy is commonly associated with being applied and utilitarian, though I wonder whether there is a more fundamental implication of the anti-realist paradigm here. Specifically, what does the idea that each sub-population in a meta-analysis is inherently disconnected from the others mean for drawing scientific conclusions from a meta-analysis? Is there a connection to the scientific philosophies of confirmation and falsification inherent in the above assumptions? Do the answers to these questions even affect the conclusions we can draw from meta-analyses, or are they irrelevant exercises in navel-gazing?
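
That predictive reading has a concrete statistical counterpart: under the random-effects model one can compute an approximate prediction interval for the effect in a new, as-yet-unstudied population, which is wider than the usual confidence interval because it carries the between-study variance along with the uncertainty in the pooled mean. A rough sketch, continuing from the snippet above (it reuses effects, tau2, mu_random, and se_random, and uses the common t-approximation with k - 2 degrees of freedom):

    from scipy import stats

    # Approximate 95% prediction interval for the effect in a *new* study
    # population under the random-effects model; reuses effects, tau2,
    # mu_random, and se_random from the sketch above.
    k = len(effects)
    t_crit = stats.t.ppf(0.975, df=k - 2)
    half_width = t_crit * np.sqrt(tau2 + se_random ** 2)
    print(f"approximate 95% prediction interval: "
          f"({mu_random - half_width:.3f}, {mu_random + half_width:.3f})")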

2 comments:

  1. I really enjoyed reading this post, because I also wrote about meta-analysis for this assignment and found it really fascinating and challenging. It was also interesting hearing about it in class on Thursday.

    When I read Motulsky's chapter, I approached it from point (A) of your blog post: that a meta-analysis assumes one large identical population, and that it is the responsibility of the researcher to exhaust every option of data collection, including unpublished data (to avoid excluding potentially important "negative" data) AND published data in other languages. It seemed HONESTLY impossible, and when I wrote my blog post, that's basically what I said. How can anyone perform a meta-analysis properly?

    But I wish I had listened to Dr. Conneely's lecture before writing my blog post, because she taught meta-analysis from a very 'point (B)' frame of mind. She specifically mentioned that each group of pooled data (say, from Emory, Michigan, and Duke) represents a different population of patients. From this perspective, you are given statistical data from each hospital and have to work with what you've got.


    With all of these things in mind, it seems like two things are true of the meta-analysis: 1) they can definitely make OBVIOUS trends true when it comes to the efficacy of therapeutics, and 2) they can definitely make non-significantly significant data LOOK important when they're not.

    1. Sorry, a clarification:
      1) they can bring to light OBVIOUS trends when they're actually there, and 2) they can make non-significant data look significant.

      I didn't write it very clearly.
