Field of Science

Fake acupuncture works as well as "real" acupuncture

I haven't looked at acupuncture in this space yet, but a recent study in the Archives of Internal Medicine provides some entertainment - and food for thought. The authors, Brinkhaus et al., compared the treatment of lower back pain with "real" acupuncture to two alternatives: "sham" acupuncture, in which needles are placed in random locations and inserted only superficially, and no acupuncture at all.

The study found that both acupuncture groups reported reduced lower back pain as compared to the "no treatment" group; however, there was no difference between the real and sham treatment groups. In other words, if you thought you were getting acupuncture, you reported less pain. The authors look at this as a victory of sorts (and they clearly believe in acupuncture themselves, raising questions of bias in the study). It's also worth noting that one of the authors has a clear financial interest in a positive outcome - he receives fees for teaching acupuncture to professional societies - and several of the authors work at centers for complementary medicine.

I'm not surprised that the sham treatment worked as well as the "real" acupuncture. However, I don't believe that either treatment really works. As with many studies of pseudoscience, the condition (here, lower back pain) is subjective and very difficult to measure. The placebo effect is obviously at work: the patients know they had a treatment, a painful one at that, and they want to believe it was worth it. A better study would have included controls who received another treatment - massage, for example - rather than nothing.

Worse yet, this study is riddled with other methodological problems. For one, the physicians weren't "blinded" to the treatments: they knew whether they were giving real or sham acupuncture, introducing another source of bias. For another, a questionnaire given to patients revealed that at least some of them figured out that they had received the sham treatment.

Acupuncture has been studied countless times before (sometimes with funding from our friends at NCCAM), and the best studies show, unsurprisingly to me, that it just doesn't work. (Not surprising because there is no known physiological mechanism that would allow it to work. It's just magical thinking.) Now, although meta-analyses are highly problematic themselves, the best meta-analyses are usually the Cochrane studies, so I'll mention here that there was a Cochrane study of precisely this: "The effectiveness of acupuncture in the management of acute and chronic low back pain." You can read the abstract here. They found, after reviewing many other studies, that "there was no evidence showing acupuncture to be more effective than no treatment"; in other words, it just doesn't work.

I was interested to see that Brinkhaus et al. mentioned the Cochrane study above, but dismissed it and said that more recent studies supported their opinion that acupuncture (sham or real) is better than no treatment at all. They cited a more recent (2005) Cochrane study as one of these. But ironically (or should I say "blindly" on the part of Brinkhaus et al.?) this new study did not find evidence that acupuncture works at all. The newer Cochrane study found that almost all the studies were of low quality, and that "the data do not allow firm conclusions about the effectiveness of acupuncture for acute low-back pain." But that didn't stop Brinkhaus et al. from asserting that there is an effect. As long as they study hard-to-measure phenomena like pain, with poorly designed experiments, they'll continue to be able to find weak effects and then claim that "more research is needed."

So, if you want to enrich your local alternative healer, by all means let him stick you full of needles. But if you want to feel better, there are much more pleasant choices that won't cost you a thing.

NIH and Ayurveda, part 2

Another in my continuing series on some astonishingly stupid research funded by NCCAM, the NIH center for sham medicine (oops, I meant "alternative" medicine). This post looks at grant R21AT001969, to Cathryn Booth-LaForce at the University of Washington in Seattle, titled "Ayurvedic Center for Collaborative Research."

If you thought I'd picked out a couple of oddities in my previous posts on the pseudoscience funded by NCCAM, I'm afraid - sadly - that you're mistaken. There are plenty more examples, of which this project is just one. This project will spend thousands of your hard-earned tax dollars to set up a center for collaboration between the PI's university and a place called The Ayurvedic Trust in India, supposedly to conduct research projects. However, this nice-sounding goal ignores the fact that Ayurveda is little more than a set of ancient superstitions, founded on ignorance and magical thinking, with no scientific merit to any of them. I recommend the article on Ayurvedic mumbo-jumbo at quackwatch.com for those who are interested.

A quick primer: in Ayurveda, all of the body's functions, including health, sickness, and so on, are regulated by three "doshas", which are really quite meaningless from a scientific point of view. For example, the dosha called vata "governs all bodily functions concerning movement" and accumulates during cold, dry, windy weather. Is there any basis for this? No. What's worse is that Ayurvedic "medicines" (I have to put that word in quotes here) contain well-known toxins such as mercury, arsenic, and lead. In fact, a scientific study in the Journal of the American Medical Association (Saper et al., JAMA (2004)292:2868-2873) found:
One of 5 Ayurvedic HMPs [herbal medicine products] produced in South Asia and available in Boston South Asian grocery stores contains potentially harmful levels of lead, mercury, and/or arsenic. Users of Ayurvedic medicine may be at risk for heavy metal toxicity, and testing of Ayurvedic HMPs for toxic heavy metals should be mandatory.
What is a bit unusual about this grant is that the PI, Dr. Booth-LaForce, is a professor of nursing at a highly regarded university. Her bio reveals the source of her interest in Ayurveda: "she is studying yoga and Ayurveda-based meditation as possible therapies for menopausal symptoms." Well, I'm not exactly sure what Ayurveda-based meditation is, but I'm pretty certain it isn't science.

Ayurveda isn't medicine, and it isn't "complementary" to medicine either. NIH shouldn't fund it, no matter who the PI is. Research dollars are too precious to waste, and NCCAM should be shut down.

PLoS Biology, vanity publisher?

I'm a big fan of open access, and I'm rooting for PLoS Biology to succeed in their goal to publish papers that have the same quality and impact as Nature and Science. Today, though, I find myself both surprised and disappointed, as PLoS Biology shows that its hunger for publicity threatens to turn it into a vanity publisher.

You see, in an article appearing tomorrow morning (4 Sept 2007), PLoS Biology is publishing a paper by Craig Venter and colleagues about....(pause)... Craig Venter's genome! So when I get enough money to sequence my genome, will PLoS Biology publish that too? Is this the Donald Trump-izing of science?

But wait, you might object - maybe there's some science in the paper. Well, first you might want to recall that the original human genome paper published by Celera, in early 2001, was predominantly based on Venter's DNA, as he later admitted in a letter to Science. (Disclosure: I worked with Craig and was a co-author on the 2001 paper, and I fully stand behind it.) So most of the science was published over six years ago.

Still, I have the paper here and I've read it, and I can reassure you that there isn't anything new, unless you are interested in how many differences there are between Craig Venter's genome and the reference human genome (the one in GenBank, which is very nearly complete). If anything, it's kind of a weirdly voyeuristic read, in which you can learn details of Craig's family history (English ancestry, three siblings, etc.), and you can see a full-page picture of his karyotype. Then there's a lengthy description of how they compared the genomes and found all the differences, but I confess I couldn't go on beyond page 10. In what I read, there wasn't much new science, certainly not of the caliber I've come to expect in PLoS Biology.

PLoS Biology will probably get plenty of attention for this paper. In politics there's a saying: any publicity is good publicity. In science, though, we think otherwise. PLoS: why'd you have to do this? I guess I shouldn't have expected so much of you.