I’m going to confess that I had
never given meta-analysis much thought before reading this section of Motulsky.
I can also confess that I’m really, really glad this has never applied to my
research so far.
The subject itself leaves me conflicted.
Motulsky introduces meta-analysis as a way to combine evidence from multiple
studies, usually clinical trials that are used to determine the effectiveness
of some therapeutic. At first read, it doesn’t sound like the worst idea: the fact
is, not every study has the resources to follow, and collect samples from, the thousands
of patients necessary to determine the efficacy of a treatment. Pooling many
well-designed, smaller trials offers a solution for researchers interested in
meta-analysis. It also offers a giant problem.
A quick Google search of “meta-analysis
and research bias” yields over one million results. It seems, from reading this
chapter, that there’s a good reason for this. Meta-analysis lends itself to
publication bias, and not necessarily because people are bloodthirsty, competitive,
publish-or-perish nightmare monsters. Not that this doesn’t happen, but
honestly, performing an unbiased meta-analysis seems incredibly difficult. There’s a huge
table of challenges in Motulsky’s chapter on it (seriously, check it out: p.
412 table 43.1). Some of the challenges involved in performing a meta-analysis include
needing to seek out ALL relevant data, including unpublished studies and
studies published in other languages.
One survey published in The BMJ
states that, of the 31 most recent meta-analyses they examined, only 9 included participant
data from unpublished studies. Many of the meta-analyses included in this survey didn’t
list any limitations to their analysis, which leads the authors of this
particular survey to strongly caution reviewers when reading meta-analyses.
It seems like there are some ways to detect bias in your meta-analysis, such as generating funnel plots,
where small-study bias shows up as asymmetry around the pooled effect estimate.
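A funnel plot pairs each study’s effect estimate with its standard error; when small studies with weak effects go unpublished, the cloud of points turns lopsided. Here’s a minimal sketch of that idea using simulated data and a made-up “publication” rule (nothing here comes from a real trial), with Egger’s regression as a common numeric stand-in for eyeballing the asymmetry:

```python
# Hypothetical sketch of funnel-plot asymmetry, entirely on simulated data.
import numpy as np

rng = np.random.default_rng(42)

n_studies = 40
true_effect = 0.3
# Small studies have large standard errors, so their estimates scatter widely.
se = rng.uniform(0.05, 0.5, n_studies)
effects = rng.normal(true_effect, se)

# Invented publication-bias rule: big studies always get published,
# but small studies only make it out if their effect looks impressive.
published = (se < 0.2) | (effects > true_effect)
eff_pub, se_pub = effects[published], se[published]

# Egger's regression: regress standardized effect on precision.
# An intercept far from zero suggests funnel-plot asymmetry.
precision = 1.0 / se_pub
standardized = eff_pub / se_pub
slope, intercept = np.polyfit(precision, standardized, 1)
print(f"Egger intercept: {intercept:.2f} (far from 0 suggests asymmetry)")
```

This simplified version just reports the intercept; a real analysis would also test whether it differs significantly from zero, and even then the method is debated for small numbers of studies.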
But even this isn’t uncontentious.
At this point in my research into
meta-analysis, I’m genuinely glad that I have no interest in the clinical efficacy
of therapeutics. Honestly, I think I would stick to well-written narrative review
articles.
I recently encountered this problem in an epidemiological meta-analysis. The paper was discussing the incidence of dystonia, a rare movement disorder with some identified causal mutations, though most cases are idiopathic. The paper used about 8 clinical studies to estimate the incidence of dystonia. However, in the discussion the authors admitted that their estimate could be off by as much as 100-fold, because incidence data derived from clinical studies are notoriously unreliable for rare, idiopathic diseases. I imagine some authors might be much less willing to acknowledge the limitations of their data, especially when the margin of error might be so high.