Tuesday, April 19, 2016

Homeopathic Treatment of Migraine in Children: Results of a Prospective, Multicenter, Observational Study

The paper, “Homeopathic Treatment of Migraine in Children: Results of a Prospective, Multicenter, Observational Study,” published in 2012, examines the use of different homeopathic treatments for migraines in children under the age of 15. The researchers evaluated the effectiveness of the treatments by comparing the number and severity of migraine attacks before and after homeopathic treatment. In addition, they evaluated the effect of these treatments on the children’s education by counting the number of missed school days before and after treatment. The study concluded that the treatments decreased the number and severity of migraine attacks and benefited the children’s education by reducing the number of sick days. However, the paper is built on a badly designed experimental plan and uses incorrect statistical tests to “verify” its statistically significant results.

The main problem with this paper is the overall design of the study. The subjects were children under the age of 15 who had been diagnosed with migraines by the doctors involved in the study. Subjects were enrolled on a “first-come, first-served” basis, so the researchers imposed no additional criteria for inclusion in the study. This may not have been a major problem, but stricter inclusion criteria would have reduced the heterogeneity of the subject pool and made any findings easier to interpret. The study was also neither randomized nor blinded, which allows for the introduction of bias.
Another major problem was that the doctors involved in the study participated voluntarily. This suggests that the doctors who willingly became involved already held the belief that homeopathic treatments could be beneficial in treating migraines, and that bias could have played a role in the results they reported back to the researchers. In addition, each doctor prescribed different homeopathic treatments to their patients, so the study was noncomparative: the researchers could not say whether one treatment worked better than another. The dosage and timeframe of treatment also varied between individuals, which means the patients within the study really can’t be compared to each other. Finally, the researchers included only one follow-up time point, three months post-treatment, so they could not test the long-term effect of the treatments.

A large problem with this paper is that the majority of the data collected was subjective, determined by the individual doctors or by the guardians of the patients. Different doctors may have interpreted the suggested rating scale differently, and guardians may have exaggerated the severity of migraine attacks. Worse, most of the criteria the doctors and guardians used were not included in the paper. The few examples given are very vague, such as the presence of an “aura” in the patient; the characteristics of this aura are never explained in the paper.
One of my main problems with the paper is that no control or placebo group was included in the study. The researchers even admit that previous studies have shown the placebo effect can play a big role in studies like this. This means that any significant results could be due entirely to the placebo effect.
When examining the effect of the treatment on the subjects’ education, the researchers chose to use the number of school days each patient missed. However, this is not the best way to determine whether the treatment had a beneficial effect on the patient’s education. I believe that examining the change in grade performance before and after treatment, in addition to days absent, would give a better indication of whether the treatment helped to improve the patient’s performance in school.

When determining whether the results of their study were significant, the researchers chose to use the Chi-square test and the Student’s t-test. However, based on the design of the experiment, the researchers should have used a paired t-test, since they are comparing before-and-after results from the same individuals. The researchers did not include their raw data in the paper, so I could not reanalyze it with the paired t-test to re-evaluate the study’s significance. In the small amount of data and figures provided, the researchers failed to include any error bars in their graphs. Upon closer examination, I discovered this was because values such as a mean of 10 reported migraine attacks had a standard deviation of 14.1. As a result, instead of a graph with decisive results, you get a graph like the one you see below.

The graph above shows a great amount of variability in the study’s results, indicating that the overall findings aren’t as significant as the researchers would like us to believe. None of the figures included in the study had standard-deviation error bars, presumably because the standard deviation was always greater than or equal to the calculated mean.
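The scale of the problem is easy to see from the one mean/SD pair the paper does report (a mean of 10 attacks with a standard deviation of 14.1). A quick sketch in Python:

```python
# Quick check on the one summary statistic the paper reports:
# a mean of 10 migraine attacks with a standard deviation of 14.1.
mean, sd = 10.0, 14.1

# A one-standard-deviation error bar around that mean:
low, high = mean - sd, mean + sd

# Coefficient of variation > 1 means the spread exceeds the mean itself.
cv = sd / mean

print(f"error bar spans [{low:.1f}, {high:.1f}]")  # dips below zero,
# which is impossible for a count of attacks -- a sign of heavily
# skewed data that a mean +/- SD summary (and a t-test) fits poorly
print(f"coefficient of variation = {cv:.2f}")
```

An error bar running from -4.1 to 24.1 attacks would make the figure look as inconclusive as it actually is, which is presumably why none were drawn.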
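Since the raw data were never published, the difference between the test the authors ran and the one they should have run can only be illustrated on invented numbers. Here is a minimal sketch, assuming hypothetical before/after attack counts for eight children, of how a paired t-test differs from an unpaired Student's t-test:

```python
import math
import statistics as st

# Hypothetical before/after monthly attack counts for 8 children
# (the paper's raw data were not published; these are invented).
before = [10, 8, 12, 6, 9, 11, 7, 10]
after = [7, 8, 9, 5, 8, 10, 6, 9]
n = len(before)

# Paired t statistic: works on the within-subject differences,
# which is appropriate when the same children are measured twice.
diffs = [b - a for b, a in zip(before, after)]
t_paired = st.mean(diffs) / (st.stdev(diffs) / math.sqrt(n))

# Unpaired (Student's) t statistic with pooled variance: wrongly
# treats the before and after measurements as two independent groups.
pooled_var = (st.variance(before) + st.variance(after)) / 2
t_unpaired = (st.mean(before) - st.mean(after)) / math.sqrt(pooled_var * 2 / n)

print(f"paired t   = {t_paired:.2f}")    # ~3.67
print(f"unpaired t = {t_unpaired:.2f}")  # ~1.48
```

On these made-up numbers, the paired statistic clears the two-sided 5% critical value (about 2.36 at 7 degrees of freedom) while the unpaired one does not: pairing removes the child-to-child variance, which is exactly why test choice matters in a before/after design.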

Overall, the findings in the paper cannot be taken as significant due to a terrible experimental design and a misuse of statistics in analyzing the data.


  1. Generating good statistical data on variables that must be self-reported has the potential to introduce a lot of bias. One interesting point is that the researchers were using children as their subject pool. The extent to which each individual feels pain differs between people, and this difference is only exacerbated when the subjects are young teens or children. Children are also more likely to be influenced by authority figures, and a doctor who believes in their homeopathic treatment is more likely to alter a child's perception of the treatment's effectiveness. More thought should have been given to the approach this study took so that valid data could have been generated.

  2. Though I generally default to trusting journal authors, I'm always wary when you can tell that they really believe in whatever subject they're investigating and want the results to end up a certain way to support some cause. The same thing happened in the paper I reviewed. When someone's support for a drug or treatment is strong enough to come out in their writing, it's a lot harder to think of them as an impartial, empirical investigator, and that casts a shadow on their results. All of the problems you pointed out with study design and statistics, and the ways the first commenter mentioned it could be skewed in favor of significance, are just more reasons to distrust a paper.

  3. When you look at the origin of the paper, the Journal of Alternative and Complementary Medicine, you can pretty much stop there. With featured content including articles on anti-malarial tea and how acupuncture helps dermatologists, we might be ready to write it off. I feel like if there were some well-done and interesting work published about homeopathy, it wouldn't be in this journal. I don't want to use a broad brush to paint the journal or anyone who publishes in it, but a lot of alternative medicine requires a certain suspension of disbelief. You have to look past mountains of data to actually begin asking a question which has already been put to bed by modern science. Inb4 "but what about people who challenged current models and were right". They didn't publish in J Altern Complement Med. Their findings were likely groundbreaking, hotly contested, and the matter was likely not "put to bed" in the first place, just widely accepted. Semantics.

    What might be more useful is to look at papers published in more respected journals, since statistical and experimental issues are sure to be rampant there as well, and that's where we should be worried about them. This study will likely excite a few stay-at-home moms who don't vaccinate their kids, but then it will sputter out and never be heard of again. On the other hand, papers published in mainstream scientific journals will often be the basis for further work and potentially even treatments, which makes the risk of statistical errors having a broader and more deleterious effect that much higher.