Right (rolls up sleeves), I said I would try to track down the reference that the Mail Online used in their comment-averse and misleading article by Jenny Hope, so that I could comment further. It has been tracked down – not by me, I am ashamed to admit, but by EoR, who commented on a blog about the same article at Thinking is Real.
Here it is in all its glory in the BMJ (19 August 2000) 321:471–476. Notice the date? It’s nine years old, which explains why I couldn’t find it – after all, it was supposed to be news, so I foolishly expected it to be new. Silly me.
So what does the paper actually try to find out?
To test the hypothesis that homoeopathy is a placebo by examining…
So the experiment is only trying to assess whether homeopathy is a placebo. Notice that this is in no way about whether homeopathy “works”, it’s about whether homeopathy differs from placebo. This may sound like nit-picking, but it means that a negative reaction counts as a difference from placebo. One reported outcome that suggested homeopathy might not be placebo was:
Initial aggravations of rhinitis symptoms were provoked more by homoeopathy than by placebo. By 48 hours after randomisation seven (29%) patients in the homoeopathy group reported a worsening of rhinitis, two with wheeze, compared with two (7%) patients in the placebo group, neither of whom had wheezing (P=0.04, Fisher’s exact test). By 14 days, 11 (46%) patients in the homoeopathy group had reported adverse events, 10 of whom had rhinitis related aggravations, compared with seven (26%) in the placebo group, five of whom had rhinitis related aggravations (χ²=3.28, P=0.07).
This may suggest that homeopathy is not just placebo, but it hardly supports the “Homeopathy works” title given to the Mail article – unless ‘working’ is equated with making a symptom worse. I stress that this result does not offer proof that homeopathy is counterproductive, because the study has some weaknesses that render its outcomes rather tenuous. Most immediately, I have suspicions arising from the statistics – why use Fisher’s exact test and a χ²? Because the available sample was too small for a parametric test, requiring non-parametric tests suited to small samples. There’s nothing wrong with using methods appropriate to the data, and to be fair to the authors, they clearly state:
With a choice of 5% significance and 80% power, we estimated that 60 patients would be required in each group to avoid false negative results.
What they got was:
Fifty patients completed the study.
Of whom 23 were treated with homeopathy and 27 with placebo. I would point out that this is less than half the number in each group required to avoid false negatives. Unfortunately there isn’t a figure for the number required to avoid a false positive, but consider that the absolute bare minimum sample for 5% confidence is 20 (the 5% derives from a one-in-20 chance), and bigger is better when it comes to sample size (as a way of reducing the impact of a few spurious results that might give a false positive). This study just scraped in with 23 and 27, which leaves a sample that is incredibly sensitive to small variations. I am not a statistician, so if there is any glaring error in my reasoning please let me know.
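As an illustration of just how fragile these numbers are, the 48-hour comparison quoted above can be recomputed from the published counts. This is only a sketch: I am assuming the worsening counts apply to the group sizes given in the paper (7 of 23 on homeopathy, 2 of 27 on placebo), and using SciPy’s implementation of Fisher’s exact test.

```python
# Sketch: recomputing the 48-hour Fisher's exact test from the quoted counts.
# Assumption: 7 of 23 homeopathy patients and 2 of 27 placebo patients
# reported worsening rhinitis (group sizes as stated in the paper).
from scipy.stats import fisher_exact

#        worsened   did not worsen
table = [[7, 23 - 7],   # homeopathy group
         [2, 27 - 2]]   # placebo group

_, p_two_sided = fisher_exact(table, alternative="two-sided")
_, p_one_sided = fisher_exact(table, alternative="greater")

print(f"two-sided P = {p_two_sided:.3f}")  # about 0.06
print(f"one-sided P = {p_one_sided:.3f}")  # about 0.04
```

If those counts are right, the reported P=0.04 appears to correspond to the one-sided test; the two-sided version comes out above 0.05. A result that flips across the significance threshold depending on the choice of test is exactly the kind of knife-edge outcome you expect from a sample this small.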
My final thought about the paper is that there is no proper control group. Normally the placebo group acts as the control and hypotheses centre on the effects of treatments. Here the hypothesis is that homeopathy is placebo. To my mind this requires a non-placebo control group (people with symptoms who receive no treatment) in order to assess the magnitude of effects within the placebo–homeopathy comparison. After all, it is the scale of response rather than the difference in response that should be considered, given the amount of variability in a given sample – particularly when the sample is so small.
Once again we see that the press have managed to over-interpret and misrepresent the results of a limited (old) trial of mediocre quality, low statistical validity and equivocal outcome. Not only does this reporting misrepresent a controversial area of study which is in desperate need of honest and clear communication, it also misleads the public about appropriate treatments for an affliction that is debilitating for millions and may cost the UK £7 billion a year. Way to go, Jenny Hope, you’re an inspiration to us all.