Dr. House Was Right

I’m a big fan of the show House, which ran on the Fox TV network for 8 years and is now on Netflix. Gregory House, masterfully played by actor Hugh Laurie, is a brilliant doctor who diagnoses patients whose bewildering symptoms have stymied other doctors. He’s also unbelievably rude: he insults and harasses his patients and his fellow doctors alike, but they tolerate him - usually - because he’s almost always right.

I doubt that any real doctor is as rude as House. But like any group of professionals, doctors vary widely in their people skills. Recently, though, Medicare has started using patient satisfaction as a component of how it pays hospitals. In response, some doctors now try harder to give patients what they want, rather than what they need, as described last year by Kai Falkenberg at Forbes.  On Medscape recently, William Sonnenberg wrote that “patient satisfaction is overrated” and said of Press Ganey, a company that runs patient satisfaction surveys:
"Press Ganey has become a bigger threat to the practice of good medicine than trial lawyers."
You can find Medicare's hospital survey online. Dr. House would fail, big time.

What’s wrong with giving patients what they want? It turns out that patient satisfaction is tied to higher costs and, even worse, a higher death rate. A large survey covering 52,000 patients, published by a team led by Joshua Fenton at the University of California-Davis, found that the most satisfied patients not only spent about 9% more than average, but had a 26% higher death rate. From the study: “The most satisfied patients had statistically significantly greater mortality risk compared with the least satisfied patients.”

For patients who think a nice doctor is a good doctor, this might come as very disappointing news. Was the effect due to patients already in poor health, who might be more inclined to like their doctors? No: when the researchers excluded the sickest patients and looked only at the healthier ones, the risk of dying was even higher. (It’s important to note here that this is relative risk; only 3.8% of patients died during the six-year follow-up to the study.)
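To make the relative-vs-absolute distinction concrete, here is a minimal back-of-the-envelope sketch. The 3.8% overall death rate and the 26% relative increase come from the study; using the overall rate as the baseline for the least-satisfied group is my simplifying assumption, for illustration only:

```python
# Relative vs. absolute risk, using the study's headline numbers.
# Assumption (illustrative only): the least-satisfied group dies at
# roughly the overall six-year rate of 3.8%.
baseline_rate = 0.038       # ~overall death rate over six years
relative_increase = 0.26    # most-satisfied vs. least-satisfied

most_satisfied_rate = baseline_rate * (1 + relative_increase)
print(f"least satisfied (assumed): {baseline_rate:.1%}")        # 3.8%
print(f"most satisfied:            {most_satisfied_rate:.1%}")  # ~4.8%
```

A 26% relative increase moves the six-year death rate from roughly 3.8% to roughly 4.8%: a real and worrying difference, but a much smaller absolute one than the headline number might suggest.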

Over at The Daily Beast last week, Daniela Drake summarized this trend as “You can’t Yelp your doctor.” (Not that Yelp isn’t useful for finding a good pizza place.) And yet, as Scott Hensley reported on the NPR blog Shots, online ratings of doctors are becoming very popular, even though they don’t measure how good a doctor is at diagnosing and treating illness.

This study has implications for so-called “alternative medicine” as well. Patients who frequent alternative providers such as acupuncturists, homeopaths, and naturopaths often report high levels of satisfaction, as if satisfaction correlated with better care. Now we have a large study showing that this is simply not the case.

It makes sense that patient satisfaction is related to cost: patients often demand treatments that they don’t need, and “Patient requests have also been shown to have a powerful influence on physician prescribing behavior”, as Fenton and colleagues reported. It is less clear why the most satisfied patients died at higher rates.

Obviously, doctors don’t need to act like Dr. House to be effective. But doctors need to be able to tell patients things they don’t want to hear. Just because you want an antibiotic for your sore throat or your child’s earache doesn’t mean you should get one. The UC Davis study demonstrates that using patient satisfaction surveys to adjust reimbursement rates, as Medicare is currently doing, is a recipe for higher costs and lower quality of care.

Given a choice between a friendly doctor who gives me what I want and a brilliant doctor who gives me what I need, I’ll take Dr. House.

The PSA test for prostate cancer: more harm than good?

Millions of men are tested each year for high levels of prostate-specific antigen, or PSA, a test designed to detect early signs of prostate cancer. The test is covered by insurance, so most men readily agree to it. After all, what's the harm?

Well, plenty. PSA screening, we now know, "leads to a substantial overdiagnosis of prostate tumors." Many of these cancers grow very slowly, and men with slow-growing prostate tumors may never have symptoms. However, once a man is told he has cancer, there is a strong tendency to treat it, and treatment has serious, often harmful side effects: 20-30% of men treated with surgery and radiation will have long-term incontinence and erectile dysfunction.

There is a furious debate going on right now over the evidence for and against PSA screening. The debate started with a large-scale US study called PLCO, which found no benefit from annual PSA screening. Soon after that, the US Preventive Services Task Force recommended that most men should not get regular PSA tests. They concluded:
"Many more men in a screened population will experience the harms of treatment than will experience the benefit.... The USPSTF concludes that there is moderate certainty that hte benefits of PSA-based screening for prostate cancer do not outweigh the harms."
The American Academy of Family Physicians agrees, and has adopted a clear recommendation:
"Don't routinely screen for prostate cancer using a prostate-specific antigen (PSA) test or digital rectal exam."
which I wrote about last November.

In contrast, the American Urological Association responded to the USPSTF report by issuing a statement that it was "outraged and believes that the Task Force is doing men a great disservice." Prostate surgery is big business for urologists, which may have biased their reaction. However, to its credit, the AUA modified its guidelines on PSA screening, which now state
"The [AUA] Panel does not recommend routine screening in men between ages 40 to 54 years at average risk."
For men ages 55 to 69, they recommend "shared decision-making," but they still insist that there is a benefit for men in this group. Their 2013 press release says "the highest quality evidence for screening benefit was in men ages 55 to 69 screened at two- to four-year intervals."

Why does the controversy continue? One reason is that a large European study, called ERSPC, reported a small benefit from PSA screening. The European study is actually a combined analysis of 7 studies in 7 countries, each of which was run a bit differently.  Five of the studies reported no benefit, and just two, from Sweden and the Netherlands, showed a benefit.

So what was going on in those two countries? Did they do screening differently, or treatment differently? Well, it seems they did. In a letter published in Uro Today on May 6, Ian Haines and George Miklos lay out an explanation: in the Swedish study, many more patients in the control group (the group that did not receive PSA screening) were treated with androgen deprivation therapy, ADT, which recent evidence indicates may increase the risk of death. Haines and Miklos published a more detailed analysis last October, accompanied by an editorial by Otis Brawley, the Chief Medical Officer of the American Cancer Society.

Brawley pointed out that
"the harms of screening have been consistently demonstrated in all screening trials to date."
He called for "an objective panel of experts with access to all of the data" to address the controversy over the possible bias in some of the European trials. Carlsson et al. responded last month in the Journal of the National Cancer Institute, defending their methods, but Haines and Miklos fired back in the same issue, arguing that the benefit found in the European study "rests entirely on the ... Goteborg trial from a single city."

Regardless of the results from that one city, though, the evidence today is strong that until we have much better treatments for prostate cancer, routine screening with PSA tests causes more harm than good. The side effects of surgery can be life-altering and devastating. Guys: unless you have a special reason to be concerned about prostate cancer, tell your doctor "no thanks" if he offers you a PSA test at your next checkup. That's what I did.

Stem cell therapy offers hope for “irreversible” heart damage

In December 2011, I reported on one of the first attempts to inject stem cells into damaged hearts. In that study, published in The Lancet, scientists grew stem cells from patients’ own hearts after the patients had suffered serious heart attacks. These were patients with severe, irreversible heart damage. As the study leader, Dr. Roberto Bolli, said at the time:
“Once you reach this stage of heart disease, you don’t get better. You can go down slowly, or go down quickly, but you’re going to go down.”
Amazingly, in that study, the patients got better. Fourteen of the 16 patients had improved heart function after 4 months, and the results were even better after one year. The stem cells grew into new, functioning heart cells.

That was just one study. Now there have been more, and the results continue to be very encouraging. Just last week, the Cochrane Collaboration published a review of 23 trials, all of them attempting stem cell therapy for heart disease. These trials looked at the use of bone marrow stem cells in patients whose hearts were failing. Unlike the 2011 study, which looked at heart attack patients, these studies looked at patients with advanced heart disease who had not suffered a heart attack. The results: overall, stem cell treatments reduced the risk of death and improved heart function, though the benefits were not as dramatic as in the patients with heart attacks. 

What is most exciting in the newest studies is the long-term reduction in the risk of death. Six of the studies reported long-term results (more than one year) on mortality. In these studies, 8 patients died out of 241 who received stem cell therapy (3.3%). In contrast, 30 patients died out of 162 (18.5%) who did not receive stem cells. The numbers are small, but this is a huge benefit: patients were about 5 times less likely to die. The Cochrane review concluded that
“The risk of mortality over long-term follow-up was significantly lower for those who received BMSC [bone marrow stem cell] therapy.”
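For the curious, the arithmetic behind those percentages is simple; here is a quick sketch using the counts reported above:

```python
# Long-term mortality in the six trials reporting >1 year of follow-up.
deaths_treated, n_treated = 8, 241     # received bone marrow stem cells
deaths_control, n_control = 30, 162    # did not receive stem cells

risk_treated = deaths_treated / n_treated    # ~3.3%
risk_control = deaths_control / n_control    # ~18.5%

print(f"risk with stem cells:    {risk_treated:.1%}")
print(f"risk without stem cells: {risk_control:.1%}")
print(f"risk ratio: {risk_control / risk_treated:.1f}x")   # ~5.6x
```

That ratio is where the "about 5 times less likely to die" figure comes from.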
An important caveat is that this is still “low quality” evidence, meaning that we need to see more data, on many more patients, before we can have confidence in the results. But it is still very encouraging, especially when no other treatment offers anything remotely this promising for advanced heart disease.

The evidence continues to build that stem cells can repair heart tissue damaged by heart attacks. Just a couple of months ago, Britain launched the largest study yet of stem cell treatments for heart attacks, involving 3,000 patients in Europe. This new review shows that they can help repair some of the damage from other types of heart disease as well.


Heart disease is the leading cause of death in the United States, and we should be pursuing every plausible treatment, though very few exist. Stem cells offer the hope that, for the first time ever, we might be able to reverse heart damage that was previously thought to be irreversible. Stem cell treatments are a true breakthrough, and rather than cutting medical research, as we have been doing for the past five years, we should be pouring resources into this remarkable new medical technology and the therapies that it makes possible.

Medicare data reveals that the U.S. wastes half a billion dollars per year on chiropractic

[Photo caption: Hold on, you're about to get "adjusted."]
Ten days ago, the federal government released a huge data set detailing how it paid $77 billion in Medicare funds to over 880,000 health care providers in 2012. The release of this data is part of a new transparency effort by the government, which many of us applaud.

The data reveal some troubling things.

Most news organizations focused on who the biggest beneficiaries are: the New York Times described how just 100 doctors received $610 million. A Washington Post story focused on the top 10 Medicare billers, including one ophthalmologist in Florida who was paid $20 million by Medicare, mostly to cover Lucentis, a drug for macular degeneration. The Post pointed out that Medicare would have saved over $10 million if that doctor used Avastin, which is equally effective. “Medicare pays a doctor more for injecting the more expensive drug,” the Post pointed out.

But until now, no one has pointed out another, far more egregious waste revealed by the Medicare data: we are spending a huge amount of money on the highly dubious practice known as chiropractic.

To be precise, the newly released data reveals that in 2012, Medicare paid $496 million for chiropractic.

This is a stunning amount. It dwarfs the funding that NIH wastes on alternative medicine through NCCAM, which is itself an egregious waste of money.

Chiropractors are not medical doctors. They primarily treat back pain, but they claim to treat a wide range of other conditions, which some of them believe are related to misalignments of the spine, called subluxations. This belief has no scientific basis. Nevertheless, chiropractors have succeeded in convincing the government to cover their treatments through Medicare.

Now we know how successful they have been: half a billion dollars a year spent “adjusting” the spines of patients, all funded by Medicare.

(And they have recently been lobbying furiously, as I wrote last summer, to force private health care providers to cover chiropractic and other alternative practices under Obamacare.)

But wait, you might ask, don’t chiropractors provide pain relief? And don’t they have medical degrees? Well, on the second question, the answer is that they have special Doctor of Chiropractic (D.C.) degrees, which are given out by just 15 chiropractic colleges in the U.S. The entire field was invented out of thin air by D.D. Palmer in 1895, and later popularized by his son. In his book-length exposé Chiropractic Abuse: An Insider's Lament, chiropractor Preston Long lists "20 things most chiropractors won't tell you," including
  1. "Chiropractic theory and practice are not based on ... knowledge related to health, disease, and health care. 
  2. Many chiropractors promise too much.
  3. Our education is vastly inferior to that of medical doctors."
These are just the first three: you can see the full list in a review of Long's book by physician Harriet Hall, or read the book itself.

Sam Homola, a retired chiropractor, summarized his concerns about chiropractic in an article at Science-Based Medicine. Homola concluded that 
“There is no credible evidence to support the use of spinal manipulation for anything other than uncomplicated mechanical-type back pain and … no evidence at all to support chiropractic subluxation theory.”
Perhaps most alarming, especially given that Medicare is paying for millions of treatments per year, is that chiropractic manipulation can cause a stroke, by causing a tear in a major artery running through the neck. As reported recently in the Journal of Neurosurgery:
“Chiropractic manipulation of the cervical spine can produce dissections ... of the vertebral and carotid arteries. These injuries can be severe, requiring endovascular stenting and cranial surgery. In this patient series, a significant percentage (31%, 4/13) of patients were left permanently disabled or died as a result of their arterial injuries.” [Albuquerque et al., J. Neurosurg 2011 115(6):1197-1205]
If this weren’t enough to cause concern, many chiropractors are also anti-vaccine, a problem that is apparently serious enough that some chiropractors themselves have spoken out against the anti-vaxxers in their own ranks.

Over a century ago, D.D. Palmer believed, mistakenly, that he cured a man’s deafness by manipulating his neck. His son built Palmer’s beliefs into a profitable business, but neither of them would have dreamed that the U.S. government would one day spend half a billion dollars per year on chiropractic manipulations. 

If we want to start controlling the cost of Medicare, here’s an easy first step: stop covering chiropractic. We will save $496 million a year, and people’s health will improve.

Raw milk enthusiasts want you to drink a bacterial stew. Yum.

Sometimes it is astonishing how ignorant people can be. Now it's the turn of fans of "raw milk," a new fad that is sweeping the U.S.

I still remember reading milk cartons as a kid, and asking my parents what "pasteurized" meant. While I don't remember exactly what they said, I'm sure they told me that it made the milk safe by killing bacteria. Even as a kid, I understood that bacteria in my milk were probably a bad thing.

Louis Pasteur is one of the most famous scientists in history, and rightly so. In 1862, he invented the process of heating milk to kill the bacteria in it. Pasteurization, as we now call it, has saved millions of lives in the 150 years since. Pasteur also created the first vaccine for rabies. He was a true giant.

Fans of raw milk appear to be stunningly unaware of Pasteur's achievements, and equally ignorant of the dangers of bacterial infections. Many of the health and safety claims for raw milk can be found on sites such as RealMilk.com, which is chock full of

  • Conspiracy theories: the Government is hiding the truth from you.
  • Denialism: raw milk never hurt anyone, and even protects you against bacteria.
  • Shifting the blame: infections are caused by other contaminated foods, not raw milk.

One of the more chilling facts about RealMilk.com is its emphasis on feeding raw milk to infants, who are at the greatest risk of dying from infections. Its "Campaign for Real Milk" advocates
"universal access to clean raw whole milk from pasture-fed cows, especially access for pregnant and nursing mothers and for babies and growing children."
We have plenty of good science about raw milk. A CDC review of infectious disease outbreaks across the U.S. from 1993-2006 (Langer et al., Emerging Infectious Diseases 18:3, 2012) found, among other things, that
"The rate of outbreaks caused by unpasteurized milk (often called raw milk) and products made from it was 150 times greater than outbreaks linked to pasteurized milk."
Dr. Robert Tauxe, an infectious disease specialist at the CDC, debunks some of the myths about raw milk in an article freely available on Medscape. John Snyder and Mark Crislip have both written compellingly about the dangers of raw milk at sciencebasedmedicine.org.

If I were being cynical, I might say that the wild-eyed proponents of raw milk deserve whatever infections they get. But their children don't. Three-year-old Kylee Young, whom the Washington Post wrote about this past Sunday, didn't deserve to suffer kidney failure and a stroke after her mother fed her raw milk that was infected with E. coli O157:H7. Kylee can no longer walk or talk and needs constant care. Her mother now says
"If I had known what I know now, I would never have fed [raw milk] to my daughter." 
Louis Pasteur and his wife had five children, three of whom died of childhood infections. These tragedies were Pasteur's motivation for studying infectious disease. Thanks to his work 150 years ago, no one today needs to die from drinking unpasteurized milk. The raw milk movement insists, despite the evidence, that they know better.

So go ahead, drink your raw milk and eat a paleo diet too, while you're at it. But don't ask our modern medical system to pay for your treatment when you get sick. And most of all, don't subject innocent children to the unnecessary risks of raw milk.

Why Google Flu is a failure: the hubris of big data

It seemed like such a good idea at the time.

People with the flu (the influenza virus, that is) will probably go online to find out how to treat it, or to search for other information about the flu. So Google decided to track such behavior, hoping it might be able to predict flu outbreaks even faster than traditional health authorities such as the Centers for Disease Control (CDC).

Instead, as a new article in Science explains, we got "big data hubris." David Lazer and colleagues write:
“Big data hubris” is the often implicit assumption that big data are a substitute for, rather than a supplement to, traditional data collection and analysis.
The folks at Google figured that, with all their massive data, they could outsmart anyone.

The problem is that most people don't know what "the flu" is, and relying on Google searches by people who may be utterly ignorant about the flu does not produce useful information. Or to put it another way, a huge collection of misinformation cannot produce a small gem of true information. Like it or not, a big pile of dreck can only produce more dreck. GIGO, as they say.

Google's scientists first announced Google Flu in a Nature article in 2009. With what now seems to be a textbook definition of hubris, they wrote:
"...we can accurately estimate the current level of weekly influenza activity in each region of the United States, with a reporting lag of about one day."
They obtained this remarkable accuracy entirely from analyzing Google searches. Impressive - if true.

Ironically, just a few months after announcing Google Flu, the world was hit with the 2009 swine flu pandemic, caused by a novel strain of H1N1 influenza. Google Flu missed it.

The failures have continued. As Lazer et al. show in their Science study, Google Flu was wrong for 100 out of 108 weeks since August 2011.

One problem is that Google's scientists have never revealed what search terms they actually use to track the flu. A paper they published in 2011 declares that Google Flu does a great job. The official Google blog last October makes it appear that they do an almost perfect job predicting the flu for previous years.

Haven't these guys been paying attention? It's easy to predict the past. Does anyone remember the University of Colorado professors who had a model that correctly predicted every election since 1980? In August 2012, they confidently announced that their model showed Mitt Romney winning in a landslide. Hmm.

[Chart caption: Flu cases this year, which are dominated by H1N1.]
A bigger problem with Google Flu, though, is that most people who think they have "the flu" do not. The vast majority of doctors' office visits for flu-like symptoms turn out to be other viruses. CDC tracks these visits under "influenza-like illness" because so many turn out to be something else. To illustrate, the CDC reports that in the most recent week for which data is available, only 8.8% of specimens tested positive for influenza.

When 80-90% of people visiting the doctor for "flu" don't really have it, you can hardly expect their internet searches to be a reliable source of information.
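To see how badly the noise can swamp the signal, here is a minimal simulation. Every number in it is hypothetical; the only idea taken from the CDC figures above is that true influenza is a small fraction of flu-like illness:

```python
# Sketch: searches driven by flu-like symptoms track total illness,
# not true influenza. All numbers below are hypothetical.
import random

random.seed(0)
weeks = 52

# True flu: a modest seasonal peak around mid-winter (hypothetical shape).
true_flu = [max(0.0, 100 - 8 * abs(w - 26)) for w in range(weeks)]

# Other "flu-like" illness: a much larger, noisy background.
other_ili = [random.gauss(800, 120) for _ in range(weeks)]

# Search volume responds to symptoms, so it follows the sum of both.
searches = [f + o for f, o in zip(true_flu, other_ili)]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

print(f"correlation of searches with true flu: {pearson(searches, true_flu):.2f}")
```

With the flu signal an order of magnitude smaller than the background, the search series mostly tracks everything that isn't influenza, and the correlation with true flu comes out weak. This is far simpler than the failure mode Lazer and colleagues document, but it is the same basic problem.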

Google Flu is still there, and you can still look at its predictions, even though we know they are wrong. I recommend the CDC website instead, which is based on actual data about the influenza virus collected from actual patients. Big data can be great, but not when it's bad data.

A DNA Sequencing Breakthrough for Pregnant Women

DNA sequencing has made its way to the clinic in a dramatic new way: detecting chromosomal defects very early in pregnancy. We've known for 25 years that traces of fetal DNA can be detected in a pregnant woman's blood. But these traces are very small, and until now, we just didn't have the technology to detect an extra copy of a chromosome, where the DNA itself is otherwise normal.

Last week, in a study published in The New England Journal of Medicine, Diana Bianchi and colleagues showed how DNA sequencing can detect an extra copy of a chromosome with remarkable accuracy. This report heralds a new era in prenatal DNA testing.

First, some background: three copies of chromosome 21 cause Down syndrome, a genetic disease that causes intellectual disability and growth delays. Down syndrome is also called trisomy 21, where trisomy = 3 copies of a chromosome instead of the normal 2 copies. Much less common is Edwards syndrome, caused by three copies of chromosome 18. Edwards syndrome, or trisomy 18, has much more severe effects, with the vast majority of pregnancies not making it to full term. Having an extra copy of any other chromosome almost always causes an early miscarriage. For many reasons, prospective parents want to know if a fetus carries any of these abnormalities.

The accuracy of the new test is remarkable. Out of 1914 young, healthy pregnant women, there were just 8 pregnancies where the fetus had an extra chromosome, and the test detected all 8. What was most impressive was its low false positive rate: in total, the new DNA-based test had just 9 false positives (for either chromosome 21 or chromosome 18 trisomy).  By contrast, the conventional screening test, which also identified all 8 true cases, produced 80 false positives, nearly 9 times as many as DNA sequencing.

Why does this matter? In most cases, women with a positive result on one of these tests will opt for amniocentesis ("amnio"), an invasive procedure where a doctor inserts a long needle directly into the womb and collects a sample of amniotic fluid. Amnio almost always gives a definitive answer about Down syndrome. The conventional method's false positive rate is so high that even after a positive test, over 95% of amnios will be negative, versus about 55% with the new DNA sequencing test. Or to put it another way, as Bianchi et al. wrote:
"if all women with positive results had .. decided to undergo an invasive procedure, there would have been a relative reduction of 89% in the number of diagnostic invasive procedures."
89% fewer invasive procedures is a huge reduction, not only in costs but in stress for the parents and risk to the baby (because amnio carries a small risk of miscarriage).
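Here is a rough back-of-the-envelope version of that calculation, pooling the trisomy 21 and trisomy 18 counts given above. The paper's per-condition figures differ a little, which is why these pooled percentages land near, but not exactly on, the 55% and 95% quoted earlier:

```python
# Back-of-the-envelope: how many follow-up amnios would be negative,
# using the pooled counts above (8 true positives among 1,914 women).
true_positives = 8

for test, false_positives in [("DNA sequencing", 9), ("conventional", 80)]:
    total_positives = true_positives + false_positives
    needless = false_positives / total_positives
    print(f"{test}: {needless:.0%} of positive tests lead to a negative amnio")
    # DNA sequencing: ~53%; conventional: ~91%

# Going from 80 false positives down to 9 eliminates most of the
# unnecessary procedures, roughly matching the 89% figure quoted above.
print(f"reduction in false positives: {(80 - 9) / 80:.0%}")   # ~89%
```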

With DNA sequencing getting faster and cheaper every year, it might be surprising that we are only now seeing it used to detect trisomy. The difficulty with detecting an extra copy of a chromosome is that the DNA sequence itself is normal. If you sequence the genome, you won't find any mutations that indicate that the fetus has an extra chromosome copy. This is where the remarkable efficiency of next-generation sequencing comes in.

In a matter of hours, modern sequencing machines can sample millions of small fragments of DNA. We can use computational analysis to determine which fragments come from the fetus, and how many came from each chromosome. If any chromosome has three copies, we'll see a 50% increase in DNA from that chromosome. The power of sequencing lies in large numbers: because we can sequence many fragments from each chromosome, a 50% increase is easy to detect.
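Here is a deliberately simplified simulation of that idea. It is not the pipeline from the paper: I assume equal-sized chromosomes and reads that are already known to be fetal, neither of which is true in practice, but it shows how a 1.5x signal jumps out of simple counting statistics:

```python
# Toy model of trisomy detection by counting sequencing reads.
# Simplifications (mine, not the paper's): all 22 autosomes are the
# same size, and every read is known to come from the fetus.
import math
import random

random.seed(1)
N_CHROM = 22
N_READS = 220_000

# Chromosome 21 is present in 3 copies instead of 2: sampling weight 1.5.
weights = [1.5 if chrom == 21 else 1.0 for chrom in range(1, N_CHROM + 1)]

counts = [0] * N_CHROM
for i in random.choices(range(N_CHROM), weights=weights, k=N_READS):
    counts[i] += 1

# Under the null (no trisomy), each chromosome gets 1/22 of the reads.
p = 1 / N_CHROM
expected = N_READS * p
sd = math.sqrt(N_READS * p * (1 - p))

for chrom, n in enumerate(counts, start=1):
    z = (n - expected) / sd
    if z > 3:   # simple threshold; real pipelines normalize more carefully
        print(f"chromosome {chrom}: {n} reads, z = {z:.1f}  <-- extra copy?")
```

In this toy version, chromosome 21's count sits dozens of standard deviations above expectation, while ordinary statistical noise stays within two or three. The real test is harder, because most cell-free DNA in the mother's blood is the mother's own, so the excess from a fetal trisomy is far smaller than 50%; that is exactly why deep sequencing and careful normalization are needed.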

The method that Bianchi used to detect trisomy was published in 2011 by Amy Sehnert and colleagues, some of whom are contributors to the new NEJM study. [Side note: they use a software program called Bowtie, developed by my former student Ben Langmead, to do the analysis.] The method is likely to get even better over time, further reducing the false positive rate.

The American College of Obstetricians and Gynecologists has already recommended DNA testing for pregnant women at high risk of fetal aneuploidy (an extra chromosome). To be precise, they recommend that high-risk pregnant women be offered fetal DNA testing as an option, after they get genetic counseling. This new study, which was conducted in a low-risk population, shows that the benefits of prenatal DNA testing should be offered to all pregnant women.