The US will try treating opioid addiction with fake medicine

If you can't afford to offer real medical care, why not offer fake medicine? The U.S. Medicare system is about to give this strategy a try, for treating back pain.

Last week, Medicare announced that it wants to start paying for studies of acupuncture as a treatment for low back pain, as reported by the Washington Post and Stat. The government's reason, according to Secretary of Health and Human Services Alex Azar, was that we need this option to help solve opioid addiction:
“Defeating our country’s epidemic of opioid addiction requires identifying all possible ways to treat the very real problem of chronic pain, and this proposal would provide patients with new options while expanding our scientific understanding of alternative approaches to pain.”
If you break down HHS Secretary Azar's statement, it's mostly correct. Yes, (1) defeating opioid addiction requires identifying all possible ways to treat the very real problem of chronic pain. And yes, (2) this program will provide "new" options, even though the option in question is nonsense.

But (3) no, studying acupuncture will not expand our scientific understanding of "alternative approaches" to pain. Why not? Because thousands of studies have already been done, and the verdict was in, long ago, that acupuncture is nothing more than an elaborate placebo.

The problem is, acupuncture proponents never give up. Every time a study shows that acupuncture fails (and this has happened, repeatedly), they claim it wasn't done properly or make another excuse. I've even seen proponents argue that studies in which acupuncture failed were in fact successes, because acupuncture and placebo treatments both outperformed the "no treatment" option.

(Aside: we use placebo treatments because we've known for decades that any treatment, even a sugar pill, may show a benefit as compared to no treatment at all. Acupuncture research has created placebos by using fake needles that don't actually pierce the skin, or by placing needles in random places rather than the so-called acupuncture points. Scientifically speaking, if a treatment doesn't outperform a placebo, then the treatment is a failure.)
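To make that logic concrete, here is a toy simulation–entirely my own sketch, with invented effect sizes–of the three trial arms. Both the "real" and sham acupuncture arms get the same placebo response, yet both appear to beat doing nothing:

```python
import random

random.seed(42)

def simulate_arm(n, mean_improvement):
    """Simulate pain-score improvements (0-10 scale) for one trial arm."""
    return [random.gauss(mean_improvement, 1.0) for _ in range(n)]

# Invented effect sizes: real and sham acupuncture share the same
# placebo response; doing nothing at all yields less improvement.
no_treatment = simulate_arm(100, 0.5)
sham_acupuncture = simulate_arm(100, 1.5)   # fake needles, same ritual
real_acupuncture = simulate_arm(100, 1.5)   # needles at "true" points

mean = lambda xs: sum(xs) / len(xs)
print(f"no treatment:     {mean(no_treatment):.2f}")
print(f"sham acupuncture: {mean(sham_acupuncture):.2f}")
print(f"real acupuncture: {mean(real_acupuncture):.2f}")
# Real acupuncture "beats" no treatment, but so does the sham. The only
# comparison that matters–real versus sham–shows no difference.
```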

To make matters worse, the new HHS program will fund "pragmatic" clinical trials rather than the usual gold-standard randomized controlled trials (RCTs). Without going into details, let's just say that pragmatic trials are much less well-controlled than RCTs, allowing more room for mistakes and misinterpretation. This is a bad idea even when the intervention being studied is legitimate. It's an even worse idea here, where trials have shown, over and over, that acupuncture doesn't work.

Secretary Azar might be confused because the acupuncture industry has managed to get hundreds of studies published, many of them positive–but most of them are poorly designed, and who has time to read all that bad science? (The rare well-designed studies always show that acupuncture doesn't work.) Acupuncturists have even created pseudoscientific journals devoted entirely to acupuncture, as I wrote about in 2017. Some of these journals are published by respected scientific publishers, but they are still little more than fake journals.

Not surprisingly, with entire journals trying to fill each issue with acupuncture articles, last week's Medicare announcement noted that
"the agency [CMS] recognizes that the evidence base for acupuncture has grown in recent years". 
No, it hasn't. What has grown is the number of articles. Adding more garbage to a pile doesn't make it smell better.

For those who aren't familiar with the claims of acupuncture, let's do a very quick summary: acupuncturists stick needles in a person's body at specific points in order to manipulate a mystical life force that they call "qi" (pronounced "chee"). This idea is "a pre-scientific superstition" that has no basis in medicine, physiology or biology, and has never had any good scientific evidence to support it. Acupuncturists don't even agree on where the acupuncture points are, which should make it impossible to do a scientific study. It's not at all surprising that acupuncture doesn't work; indeed, if it did work, modern medicine would have to seriously examine what mechanism could possibly explain it.

But wait, argue proponents, what about all the wise traditional doctors in China who developed acupuncture over thousands of years? Well, it turns out that acupuncture wasn't popular in China until the mid-20th century, when Chairman Mao pulled a fast one on his population because he couldn't supply enough real medical care. Mao didn't use acupuncture himself and apparently didn't believe in it. I highly recommend this exposé of Mao's scam, by Alan Levinovitz in Slate.

So rather than spend millions of dollars on yet another study of acupuncture for pain, I have a better suggestion for HHS: invest the funds in basic biomedical research, which has had a flat budget for more than a decade now. As long as it goes through proper peer review, almost any research will be far better than wasting the money on acupuncture.

Now, I'm not naive enough to think that Medicare will take my advice, but I can tell them right now what their new "pragmatic" trials will reveal. Acupuncturists will happily take the money, treat people suffering from back pain, and report that some of them experienced reductions in pain. Some of the patients will invariably agree, because back pain comes and goes, and it's hard to know why it went away.

Then the acupuncturists will say, "look, it works! Now please cover acupuncture for all Medicare patients." Then we'll spend more tax dollars on pseudoscience, and patients will be in just as much pain as ever. If Medicare falls for this (and I fear they will), then Chairman Mao will have fooled the U.S. government, just as he fooled many of his own people half a century ago.

The loneliest word, and the extinction crisis

We're in the midst of an extinction crisis. Just two months ago, an international committee known as IPBES released a report, compiled over 3 years by 145 experts from 50 countries, that said 1,000,000 plant and animal species are threatened with extinction, many within the next few decades.

Martha, the very last passenger pigeon, shown when she was still alive.
Before getting to that report, I want to introduce a word that I only just learned: endling. An endling (the word was coined in 1996) is the last surviving member of a species. One example was Martha, the very last passenger pigeon, who died in the Cincinnati Zoo in 1914. Passenger pigeons numbered in the billions in the 19th century, but humans wiped them out.

In 2012 we lost another endling, Lonesome George–the very last Pinta Island tortoise from the Galapagos Islands–who died at around age 100.

If you want to see a particularly poignant example of an endling, watch this rare and heartbreaking video of Benjamin, the very last Tasmanian tiger (or thylacine), pacing around his cramped enclosure in Hobart, Tasmania. This film from 1933 is the last known motion picture of a living thylacine. Benjamin died in 1936.
Two Tasmanian tigers in the Washington, D.C. zoo, in a photo taken around 1904. Photo credit: Baker; E.J. Keller, from the Smithsonian Institution archives.

We have records of other endlings too: the last Caspian tiger was killed in the 1950s in Uzbekistan, and the last great auks were killed for specimen collectors in 1844.

Unfortunately, we're likely to see more and more endlings in the years to come. The causes of extinction are varied, and many of them are related to human activities. The IPBES ranked the culprits, in descending order, as:

  1. changes in land and sea use,
  2. direct exploitation of organisms,
  3. climate change,
  4. pollution, and
  5. invasive alien species.

In response to the IPBES report, the House of Representatives held a hearing in May to discuss the findings. Republicans on the committee took the opportunity to display a new form of denialism: extinction denialism. As reported in The Guardian, Representatives Tom McClintock and Rob Bishop used their time to attack the reputations of the report's authors, rather than addressing the very serious consequences of large-scale extinction. They called two climate-change deniers as witnesses, who also used their time to attack the authors.

This is a classic strategy used by deniers: attack the messenger, rather than dealing with the substance of the report. Let's consider just a few of the report's main findings (see much more here):

  • Across the planet, 75% of the land and about 66% of the marine environments have been significantly altered by human actions.
  • Up to $577 billion in annual global crops are at risk from pollinator loss (bees and other insects).
  • In 2015, 33% of marine fish stocks were being harvested at unsustainable levels; 60% were maximally sustainably fished.
  • Plastic pollution has increased tenfold since 1980; 300-400 million tons of heavy metals, solvents, toxic sludge and other wastes from industrial facilities are dumped annually into the world’s waters; and fertilizers entering coastal ecosystems have produced more than 400 ocean ‘dead zones’, covering a combined area greater than that of the United Kingdom.

The report is a call to action. It explains that transformative change is needed to protect and restore nature, and collective action is needed to overcome special interests such as the fossil fuel industry, which donates heavily to politicians. The Congressional hearing was a vivid demonstration of how effective the anti-environmental lobbyists have been.

Endling is the saddest word in any language. If we humans continue to treat nature as we've done in the past, we're going to see many more videos like the one of Benjamin, the last Tasmanian tiger. Let's hope we can do better.

Does the length of your fingers predict sexual orientation?

Imagine my surprise last week when I saw an article in Science that claimed "finger lengths can predict personality and health."* Huh?

The author, science writer Mitch Leslie, gives us the rather startling number that over the past 20 years, more than 1400 papers have been published linking finger lengths to personality, sexual orientation, cardiovascular disease, cancer, and more.

What is this magical finger length ratio? Simple: it's the ratio between the lengths of your index (2nd) and ring (4th) fingers, also called the 2D:4D ratio. Take a look: is your index finger longer than your ring finger?

It turns out that most people have slightly longer ring fingers than index fingers, and in men the difference is a bit larger. If the ring finger is longer, then the 2D:4D ratio is less than one. One recent study reported that this ratio was 0.947 in men and 0.965 in women. Another study found average values of 0.984 and 0.994 for men and women. Not only is this a tiny difference, but in every study, the 2D:4D ratios of men and women overlapped, meaning the number alone doesn't tell you very much.
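Computing the ratio takes one division. Here's a trivial sketch; the finger lengths are made-up numbers chosen to land near the averages quoted above:

```python
def digit_ratio(index_mm: float, ring_mm: float) -> float:
    """2D:4D ratio: index (2nd digit) length divided by ring (4th digit) length."""
    return index_mm / ring_mm

# Made-up measurements in millimetres, for illustration only.
print(f"{digit_ratio(72.0, 76.0):.3f}")    # ~0.947, near one reported male average
print(f"{digit_ratio(73.0, 75.65):.3f}")   # ~0.965, near one reported female average
# The group averages differ by ~0.02 while individual values overlap
# heavily, which is why the ratio says so little about any one person.
```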

Nonetheless, some researchers have taken this tiny physiological difference and run with it. Nearly 20 years ago, Berkeley psychologist Marc Breedlove (now at Michigan State) published a study in Nature where he and his colleagues measured finger-length ratios in 720 adults in San Francisco. Based on this data, they concluded that finger-length ratios show
"evidence that homosexual women are exposed to more prenatal androgen than heterosexual women are; also, men with more than one older brother, who are more likely than first-born males to be homosexual in adulthood, are exposed to more prenatal androgen than eldest sons."
Whoa! They are not only claiming that the 2D:4D ratio is predictive of homosexuality, but also that exposure to prenatal androgen is the root cause of both finger lengths and sexual orientation. (Confusing correlation with causation, perhaps?) Not surprisingly, this claim is not widely accepted.

There are many, many more claims out there. In 2010, the BBC boldly reported that 
“The length of a man's fingers can provide clues to his risk of prostate cancer, according to new research.”
based on this study in the British Journal of Cancer. That study found that men whose index fingers were longer than their ring fingers had a reduced risk of cancer. (I don't believe it for a second, but if it makes you feel better, go ahead.) And a 2016 report found that both men and women with a low 2D:4D ratio (longer ring fingers) had better athletic abilities. 

The Science article goes on to explain, though, that "the results often can't be replicated." Most of these studies are small, the measurement techniques vary widely, and efforts to reproduce them (when others have tried, which isn't that often) usually fail. It didn't take me long to find a few, such as this study from 2012, which was the 2nd failure to replicate a result claiming a link between sex hormone exposure and the 2D:4D ratio.
The author's left hand

After reading the whole Science article, one comes away with the impression that finger ratio science is almost certainly bogus. The presentation, though, gives far more space to the claims of those who believe in it, and one gets the strong impression that the journalist (Mitch Leslie) is on their side. A hint is in his last sentence, where, after saying that the two sides are "talking past one another," he writes "more than 20 papers using the digit ratio have already come out last year."

And since the last sentence is often a giveaway for what the writer really thinks, let me conclude by saying that both my ring fingers are longer than my index fingers.

[*The print version of Science contains precisely this claim in the subheading to the article: "Some researchers say a simple ratio of finger lengths can predict personality and health." Interestingly, the online version of the same article does not have this headline. Instead, it reads "Scientists try to debunk idea that finger length can reveal personality and health." It appears as if the online editors were more skeptical than the print editors.]

Google ran a secret experiment to search for cold fusion. Did they find it?

A non-working cold fusion apparatus at the San Diego Naval Warfare Center. Source: Wikipedia
The journal Nature last week revealed the results of a 4-year, $10 million experiment to test cold fusion. The experiments were kept secret in order to avoid the negative publicity that cold fusion attracted when it burst upon the scene 30 years ago.

I've been talking to a few non-scientists about this, and it appears that many people don't know about the cold fusion saga, so here's a quick recap: back in 1989, two chemists at the University of Utah, Stanley Pons and Martin Fleischmann, held a press conference to announce a startling discovery: they had generated fusion energy at room temperature. If true, this would have been a profound, civilization-changing discovery: cold fusion had the potential to provide nearly free energy to the entire world, eliminating our dependence on fossil fuels and promising unheard-of economic and environmental benefits.

[A physics aside for those who might be curious: fusion energy is produced when two atoms are smashed together to form a new, heavier atom. Four hydrogen atoms can be fused to form one helium atom, for example. A tiny bit of mass is converted to energy in the process, and that tiny amount produces enormous amounts of energy, as given by Einstein's famous equation, E = mc². Fusion is the process that powers the sun and other stars, but humans have never been able to control it. It's also the source of the energy released by a thermonuclear bomb. The only nuclear energy we humans can control is fission, which is what nuclear power plants use. And the only fusion we know about requires crazily high temperatures, which is why room temperature would be "cold."]
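To see why that "tiny bit of mass" matters so much, here is a back-of-the-envelope calculation–my own sketch, using standard atomic masses–of the energy released when four hydrogen atoms fuse into one helium atom:

```python
# E = mc^2 for fusing four hydrogen atoms into one helium-4 atom.
U_TO_KG = 1.66054e-27    # kilograms per unified atomic mass unit
C = 2.998e8              # speed of light, m/s
J_PER_MEV = 1.602e-13    # joules per mega-electron-volt

m_hydrogen = 1.007825    # atomic mass of hydrogen-1, in u
m_helium4 = 4.002602     # atomic mass of helium-4, in u

mass_defect = 4 * m_hydrogen - m_helium4   # ~0.0287 u, about 0.7% of the mass
energy = mass_defect * U_TO_KG * C**2      # joules, via E = mc^2

print(f"mass converted:  {mass_defect:.4f} u")
print(f"energy released: {energy / J_PER_MEV:.1f} MeV per helium atom")
# Roughly 26.7 MeV per helium atom: millions of times the energy of a
# typical chemical reaction, from a fraction of a percent of the mass.
```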

Unfortunately for Pons and Fleischmann, whose reputations were forever tarnished, the 1989 experiments were fatally flawed. Many scientists tried to reproduce the results, but they all failed, and the criticism mounted quickly. Pons and Fleischmann never substantiated their findings, and cold fusion later became a meme for flawed or impossible scientific results. Even today, calling something "cold fusion" is a form of ridicule.

Despite the dramatic failure 30 years ago, cold fusion isn't fundamentally impossible, unlike homeopathy, acupuncture, reiki, or other forms of pseudoscience. Fusion is a very real phenomenon, and no one really knows if it might be possible to sustain a fusion reaction at low temperatures, or what those temperature limits might be. This is what led Google and the scientific team they funded to give cold fusion another serious look.

The new Google-funded experiments were run by a team of about 30 graduate students, postdoctoral fellows, and professors. The seven leaders of the team, who include scientists from UBC, MIT, the University of Maryland, LBL, and Google, described their findings in a paper just published in Nature. After four years of careful experiments, they conclude:
"So far, we have found no evidence of anomalous effects claimed by proponents of cold fusion."
In other words, they couldn't get cold fusion to work. They tried 3 different experimental setups that have been proposed by others, but despite their best efforts, nothing produced any signs of fusion energy.

The news isn't all negative. The scientists emphasized that in the course of trying to produce cold fusion, they had to design new instrumentation and study new types of materials that have received little attention before now. They wrote:
"... evaluating cold fusion led our programme to study materials and phenomena that we otherwise might not have considered. We set out looking for cold fusion, and instead benefited contemporary research topics in unexpected ways."
They go on to say:
"Finding breakthroughs requires risk taking, and we contend that revisiting cold fusion is a risk worth taking."
I have to agree with them here. As the scientists themselves pointed out, even though their experiments didn't produce cold fusion, "this exploration of matter far from equilibrium is likely to have a substantial impact on future energy technologies." In other words, if we keep trying, who knows what we might find?


Does ginkgo biloba enhance memory? I forgot.

I recently saw an ad that claimed ginkgo biloba can treat the signs of dementia. A quick search on Amazon.com yielded hundreds of products, many claiming that ginkgo is a "brain sharpener" or that it "supports focus, memory, brain function and mental performance," or making other similar claims.

Ginkgo biloba is a supplement made from the leaves of the ginkgo biloba tree, which is native to China. The supplements industry claims that ginkgo has been used for thousands of years to improve memory and stave off dementia. While that may be true (though I doubt it), the argument that a medical treatment was used by pre-scientific cultures is not exactly compelling. After all, people died very young in ancient times, and medical knowledge was little more than superstition, for the most part. I don't know about you, but when I'm looking for medicine, I want the latest stuff.

"But wait!" say ginkgo biloba's advocates: maybe those ancient folk doctors were onto something. Maybe so–and it didn't take me long to find multiple studies testing what those ancients supposedly believed about gingko biloba:

  1. Here's a review from 2009 that looked at ginkgo biloba for dementia and milder cognitive impairment. It reported that "the evidence that Ginkgo biloba has clinically significant benefit for people with dementia or cognitive impairment is inconsistent and unreliable." Not exactly a ringing endorsement.
  2. Here's another study, from 2012, looking at the effect of ginkgo biloba on memory in healthy individuals. Is it a "brain sharpener"? Well, no. This study found that ginkgo "had no ascertainable positive effects on a range of targeted cognitive functions in healthy individuals." In other words, a total dud.
  3. And here's an even more recent study, from 2015. The result: "no convincing evidence ... that demonstrated Ginkgo biloba in late-life can prevent the development of dementia. Using it for this indication is not suggested."

Given that the science says this doesn't work, you might wonder how it is that hundreds of ginkgo biloba products are still on the market, all of them with claims about memory. Simple: it's a dietary supplement, not a drug, which means that it is essentially unregulated (in the U.S.). The FDA won't step in unless the marketing claims get so outrageous that they cross the line into medicine–and even then, the FDA rarely does more than send a sternly worded letter.

As I've written before, supplement marketing is like the wild west. You generally can't trust anything you read from the manufacturers, except perhaps the ingredients list. And even the ingredients are sometimes inaccurate and contaminated.

(By the way, I find it especially amusing when a pill that has no effect is advertised as "double strength," as Walgreens does for one of their ginkgo products, here.)

So be skeptical about the marketing claims for ginkgo biloba. Even NCCIH, the NIH institute whose mission is to promote "alternative" medicine, is remarkably clear about this, stating that:

  • "There’s no conclusive evidence that ginkgo is helpful for any health condition.
  • Ginkgo doesn’t help prevent or slow dementia or cognitive decline.
  • There’s no strong evidence that ginkgo helps with memory enhancement in healthy people, blood pressure, intermittent claudication, tinnitus, age-related macular degeneration, the risk of having a heart attack or stroke, or with other conditions."

I must say, I'm feeling a bit better about NCCIH these days. They got this one right. The bottom line: don't waste your money on ginkgo biloba.

Why does anyone believe this works? The dangers of cupping.

Cupping therapy. If this looks painful and possibly damaging to the skin, that's because it is.
People are easily fooled. Even smart people.

I'm not talking about voters in the U.S. and the UK, although both groups have recently demonstrated how easily they can be conned into voting against their own interests. You can read plenty of articles about that elsewhere.

No, I'm talking about the wide variety of health treatments that call themselves alternative medicine, integrative medicine, traditional Chinese medicine, energy medicine, and other names. These are all just marketing terms, but many people, including some physicians and scientists, seem captivated by them.

This week I'm going to look at "cupping," a rather bizarre treatment that, for reasons that escape me, seems to be growing in popularity.

I just returned from a scientific conference, where I happened to speak with an editor for a major scientific journal who also follows this blog. She remarked that she liked some of my articles, but she disagreed with me about cupping, which I wrote about during the 2016 Olympics, where swimmer Michael Phelps was observed to have the circular welts that are after-effects of cupping. This editor's argument boiled down to "it works for me," which left me somewhat flabbergasted.

And just two weeks ago, when I was at my physical therapist's office getting treatment for a shoulder injury, I heard her discussing cupping with another therapist. I then noticed a large box containing cupping equipment on one of the counters. Thankfully, my therapist didn't suggest cupping for me; I'm not sure how I would have replied.

What is cupping? It's a technique in which you take glass cups, heat the air inside them, and then place them on the skin. As the trapped air cools, its pressure drops, creating suction that pulls your skin up into the glass. (Some cupping sets use pumps rather than heat to create this effect.) Imagine someone giving you a massive hickey, and then doing another dozen or so all over your back, or legs, or wherever the cupping therapist thinks you need it. If that sounds kind of gross, it is.
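How strong is that suction? Here's a rough ideal-gas estimate; the starting temperature is my own assumption (say the trapped air is at about 80°C when the cup seals), and the cup is treated as a fixed, sealed volume:

```python
# At constant volume, pressure scales with absolute temperature
# (Gay-Lussac's law): P_final = P_initial * T_final / T_initial.
P_ATM_KPA = 101.3          # atmospheric pressure, kPa

t_sealed_k = 80 + 273.15   # assumed air temperature when the cup seals
t_skin_k = 35 + 273.15     # roughly skin temperature

p_final = P_ATM_KPA * (t_skin_k / t_sealed_k)
suction = P_ATM_KPA - p_final
print(f"final pressure: {p_final:.1f} kPa")
print(f"suction: {suction:.1f} kPa, ~{100 * suction / P_ATM_KPA:.0f}% below atmospheric")
# About 13 kPa of suction under these assumptions: plenty to pull skin
# into the cup and burst capillaries, no "qi" required.
```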

Quacktitioners–er, practitioners–of cupping think that it somehow corrects your "qi," a mysterious life force that simply doesn't exist. When pressed, they often remark that it "improves blood flow," a catch-all explanation that has no scientific basis and that is more or less meaningless. What really happens, as the physician and blogger Orac noted, is this:
"The suction from cupping breaks capillaries, which is why not infrequently there are bruises left in the shape of the cups afterward.... If you repeatedly injure the same area of skin over time ... by placing the cups in exactly the same place over and over again, the skin there can actually die."
So maybe cupping isn't so good for you.

Cupping is ridiculous. There's no scientific or medical evidence that it provides any benefit, and it clearly carries some risk of harm. A recent review in a journal dedicated to alternative medicine–one of the friendliest possible venues for this kind of pseudoscience–concluded that
"No explicit recommendation for or against the use of cupping for athletes can be made. More studies are necessary."
Right. That's what proponents of pseudoscience always say when the evidence fails to support their bogus claims. Let us do more studies, they argue, and eventually we'll prove what we already believe. That's a recipe for bad science.

Even NCCIH, the arm of NIH dedicated to studying complementary and integrative nonsense–er, medicine–can't bring itself to endorse cupping. Their summary states:

  • There’s been some research on cupping, but most of it is of low quality.
  • Cupping may help reduce pain, but the evidence for this isn’t very strong.
  • There’s not enough high-quality research to allow conclusions to be reached about whether cupping is helpful for other conditions.

In other words, some bad scientists have conducted a few studies but haven't proven anything. But wait, it gets worse. NCCIH goes on to warn that:

  • Cupping can cause side effects such as persistent skin discoloration, scars, burns, and infections, and may worsen eczema or psoriasis. 
  • Rare cases of severe side effects have been reported, such as bleeding inside the skull (after cupping on the scalp) and anemia from blood loss (after repeated wet cupping). 

And still, otherwise intelligent people say "it works for me." I'm left speechless.

The bottom line: save your money and your skin. Don't let anyone suck it into those cups.

Measles is back. Blame the anti-vaxxers.

In the year 2000, the CDC announced that measles had been eliminated from the U.S. This was a fantastic public health achievement, made possible by the measles vaccine, which is 99% effective and which has virtually no side effects.

Unfortunately, measles is back. Just last week, the CDC announced that we've had at least 695 cases this year, the most since 2000, primarily from 3 large outbreaks, one in the state of Washington and two in New York. Because the CDC's surveillance is far from perfect, the true number of measles cases is likely much higher. And we're only four months into the year.

Also this week, UCLA and CalState-LA had to quarantine over 700 students and staff members who were exposed to measles from an outbreak in the Los Angeles area. At UCLA, one student who had measles attended multiple classes while still contagious, exposing hundreds of others to the highly contagious virus, according to a message from the university's chancellor.

No one has died as of yet, but if we don't quash these outbreaks, it's only a matter of time before someone does. Measles has a fatality rate of 0.2%, or 2 deaths per thousand cases. That may sound small, but do the arithmetic: before the vaccine was introduced in 1963, the U.S. had an estimated 500,000 cases per year, which works out to roughly 1,000 deaths every year.

Given the risks of measles, and given the remarkable effectiveness and safety of the vaccine, why don't people vaccinate their children? The primary reason is simple: it's the highly vocal, supremely confident, and utterly misinformed anti-vaccine movement. Anti-vaxxers spread their message daily on Facebook, Twitter, websites, and other media outlets. (I will not link to any of them here because I don't want to increase their influence.) They have launched systematic efforts throughout the U.S. and in other countries to convince parents not to vaccinate their children, claiming that vaccines cause a variety of harms, none of which are correct. (I won't list those here either, because even mentioning them gives the claims credibility.)

In one of the two outbreaks in New York, anti-vaxxers distributed highly misleading pamphlets in an effort to convince parents in an ultra-religious Jewish community not to vaccinate their kids. The anonymously-published pamphlet was "filled with wild conspiracy theories and inaccurate data," but it seems to have worked, at least among some of the parents.

The anti-vax movement is also behind the state-by-state effort to allow parents to opt out of vaccinations for their children in public schools. We're finally seeing some states roll this back, but it is still far too easy for parents to claim an "ethical" or "religious" exemption, allowing them to put their unvaccinated kids in school and thereby expose countless other children to measles and other preventable diseases. (I put those words in quotes because there is no valid ethical or religious objection to vaccines. All major religions strongly support vaccination.) Anti-vax websites provide how-to instructions telling parents how to get exemptions for their kids, and a small number of anti-vax doctors (I'm looking at you, Bob Sears) readily dispense large numbers of anti-vax exemptions. This needs to end.

The modern anti-vaccine movement started in 1998, with a fraudulent paper about the measles, mumps, and rubella vaccine, published by a former doctor who lost his medical license after the fraud was revealed. The lead author was eventually revealed to have taken large sums of money (unbeknownst to his co-authors) from lawyers who were trying to build a case to sue vaccine makers. That same ex-doctor, who I also won't name here (his initials are AW), is now a hero to the anti-vax movement, and he travels the world spreading his toxic message. He's even made an anti-vax movie.

I sincerely hope we won't see any children die before the anti-vaccine movement finally goes away. For any parents who are thinking that they won't vaccinate their kids, I urge them to read the heartbreaking words of Roald Dahl (author of Charlie and the Chocolate Factory, The BFG, and many other wonderful books), whose oldest daughter Olivia died of measles in 1962, at the age of seven:
"...one morning, when [Olivia] was well on the road to recovery, I was sitting on her bed showing her how to fashion little animals out of coloured pipe-cleaners, and when it came to her turn to make one herself, I noticed that her fingers and her mind were not working together and she couldn't do anything.
'Are you feeling all right?' I asked her.
'I feel all sleepy,' she said. In an hour, she was unconscious. In twelve hours she was dead. The measles had turned into a terrible thing called measles encephalitis and there was nothing the doctors could do to save her."
The measles vaccine was a miracle of modern medicine, and it's been administered safely to hundreds of millions of people. Measles is a dangerous illness, but we can prevent it. No parent should have to go through what Roald Dahl went through.

Climate change is making us sneeze

Allergy sufferers are having a rough time of it this spring. If you're among them, and if you think it's getting worse, you're right–and climate change is at least partly to blame.

Admittedly, warming climate has far more severe consequences, such as the eventual flooding of entire coastal cities. On a personal level, though, pollen allergies make people pretty miserable. (I write this as a lifelong sufferer myself.) When springtime comes and trees burst into buds, some of us shut all the windows and huddle inside.

I hadn't thought that climate change would affect the pollen season until I read a newly published study in a journal called The Lancet Planetary Health. (Aside: yes, there really is a journal with that name, a specialty journal created two years ago by the venerable publishers of The Lancet.)

The new study, by USDA scientist Lewis Ziska and colleagues from 15 other countries, looked at airborne pollen data from 17 locations, spanning the entire globe, and stretching back an average of 26 years. The news isn't good for allergy sufferers:
"Overall, the long-term data indicate significant increases in both pollen loads and pollen season duration over time."
In other words, it's a double whammy: we're getting more pollen than ever before, and the allergy season lasts longer. Okay, not that much longer, only an average of one day. But if you have hay fever, every day is one too many.

To be fair, not every location experienced a significant increase in pollen. Here are the 12 (of 17) that did:
  • Amiens, France
  • Brussels, Belgium
  • Geneva, Switzerland
  • Kevo, Finland
  • Krakow, Poland
  • Minneapolis, USA
  • Moscow, Russia
  • Papillion, USA
  • Reykjavik, Iceland
  • Thessaloniki, Greece
  • Turku, Finland
  • Winnipeg, Canada
Perhaps not coincidentally, the pollen season this spring is making headlines in the U.S. As the NY Times reported this week, "extreme" pollen has blanketed the middle of North Carolina. It's so bad that the air has taken on a yellowish tinge, as shown in this unaltered photo, one of several taken by photographer Jeremy Gilchrist and shared last week on social media.
A yellow haze caused by pollen over Durham, North Carolina in April 2019. Photo credit: Jeremy Gilchrist via Facebook.

According to Ziska et al.'s study, more pollen-filled springs are the new normal. Their projections indicate that pollen seasons will continue to get longer in the future, and that the amount of pollen in the air will also increase during the spring and again in the fall, when ragweed pollen is at its peak.

What can you do about spring allergies? I wrote about this last year: for some people, over-the-counter antihistamines help, although they only treat the symptoms. Allergy shots can provide long-term relief, if you have the time to go through the months-long regimen. Other than these options, the best you can do is stay inside and wait for pollen season to end. You can always catch up on your reading of The Lancet Planetary Health.

NEJM says open access is unnecessary. Right.

Surprise: the New England Journal of Medicine thinks open access is a bad idea. Open access is the model of scientific publishing in which all results are freely available for anyone, anywhere, to read.

This week NEJM published an editorial by one of their correspondents, Charlotte Haug, that purports to present an objective look at open access publishing. It finds that the "experiment" has failed and that free access to scientific publications hasn't delivered on its promises.

What is NEJM worried about? Their expensive, exclusive model of publishing–where everyone has to pay high subscription fees, or else pay exorbitant fees for each article they read–is threatened by scientists who want all science to be free. Pesky scientists!

NEJM is especially worried about "Plan S", a proposal in Europe to require that all scientists whose work is funded by the public be required to publish their results in open-access venues. Plan S is due to take effect very soon, in 2020 for 11 research funders in Europe.

The NEJM article is a clever but deeply flawed effort to prove that open access isn't working. It's full of fallacies and straw men, so much so that it's hard to know where to begin. Since they're not playing fair, though, I won't either: I'll cherry-pick three of Haug's arguments and explain why she's wrong about each one.

But first, to set the stage, let's remind everyone of what we're talking about. Scientific papers are written by scientists (like me), who are largely funded to do their work by governments, non-profit organizations, and occasionally by commercial companies. The writing is done by the scientists themselves, who submit papers to journals for peer review. The peer reviewing is also done by scientists (again, like me) who do this work for free. The journals pay nothing for all this work.

In other words, we do all the work for free, using funding provided by the public, and the journals then take that work and sell it for a very tidy profit. (Richard Smith estimates that NEJM itself has an income of $100 million with a 30% profit.) The vast majority of scientific and medical journals are owned by five for-profit corporations, as the NEJM points out:
"The five largest publishing houses (SAGE, Elsevier, Springer Nature, Wiley-Blackwell, and Taylor & Francis) continue to grow, with high profit margins."
For the past two decades, scientists have spoken out more and more over the outrageous practices of for-profit publishers, whose subscription fees and profits have grown while the costs of distribution have plummeted. Virtually everyone reads scientific papers online these days. Why sign over copyright when we can distribute our work so cheaply? The open access movement was founded to provide an alternative: open access journals allow everyone to read all the content for free, and the authors retain their copyrights.

Now let's look at the NEJM article. Haug starts by pretending to agree that open access is a good thing, writing:
"The idea — that the results of research should be available to be read, discuss, and examine... — has few, if any, opponents in either the scientific community or the public."
Reading this, you might think that Haug (and NEJM, by extension) are fans of open access. They are not.

Haug then proceeds (she thinks) to dismantle the arguments in favor of open access. First, she states that publishing costs have not dropped, but have increased. As evidence, she asserts that "Electronic production and maintenance of high-quality content are at least as expensive as print production and maintenance." This claim is, frankly, nonsense, but since Haug doesn't cite any evidence to back it up, there's nothing really to refute. It's obviously much cheaper to post a PDF on a website than to print thousands of hardcopies and physically ship them to libraries around the world. If costs are going up (and again, Haug cites no evidence), that could be simply because publishers are paying themselves higher salaries (NEJM reported compensation of  $703,324 for its chief editor in 2017), or hiring large staffs, or renting luxurious offices–who knows? Haug doesn't explain.

In any case, the costs of publishing at NEJM, a closed-access, subscription-based journal, have little to do with whether or not scientific and medical research should be freely available.

Her next argument against open access is that the most highly-cited journals are subscription-based, like (ahem) NEJM. My response: so what? Everyone within academia knows that it takes a very long time to establish a reputation as a "top" journal, and young scientists will always want to publish in those journals, regardless of how expensive they are. This has given closed-access journals like NEJM (and Nature, Science, JAMA, and Cell, to name a few more) tremendous power, which they have wielded to fight against open access at every opportunity. This editorial represents another example of that fight. The fact that many scientists still want to publish in these journals doesn't mean they should keep the results locked behind a paywall.

Setting aside this tiny number of "prestige" journals, open access papers do get cited more, as was demonstrated by this study from 2016. The evidence shows that open access does lead to higher impact: papers that are freely available are read more and cited more.

Finally, let's turn to Haug's coup de grâce, which she wields near the end of her piece, as a sort of "proof" that open access is really unnecessary. Here she argues that NEJM is already open, mostly:
"About 98% of the research published in the Journal since 2000 is free and open to the public. Research of immediate importance to global health is made freely accessible upon publication; other research articles become freely accessible after 6 months."
First, let's acknowledge that merely by pointing this out, Haug is admitting that the main arguments for open access are legitimate; i.e., that it's a huge benefit to society to make research freely available. I'm going to agree with her here.

What Haug doesn't mention here is that there is one reason (and only one, I would argue) that NEJM makes all of its articles freely available after some time has passed: the NIH requires it. This dates back to 2009, when Congress passed a law, after intense pressure from citizens who were demanding access to the research results that they'd paid for, requiring all NIH-funded results to be deposited in a free, public repository (now called PubMed Central) within 12 months of publication.

Scientific publishers fought furiously against this policy. I know, because I was there, and I talked to many people involved in the fight at the time. The open-access advocates (mostly patient groups) wanted articles to be made freely available immediately, and they worked out a compromise where the journals could have 6 months of exclusivity. At the last minute, the NIH Director at the time, Elias Zerhouni, extended this to 12 months, for reasons that remain shrouded in secrecy, but thankfully, the public (and science) won the main battle. For NEJM to turn around now and boast that they are releasing articles after an embargo period, without mentioning this requirement, is hypocritical, to say the least. Believe me, if the NIH requirement disappeared (and publishers are still lobbying to get rid of it!), NEJM would happily go back to keeping all access restricted to subscribers.

The battle is far from over. Open access advocates still want to see research released immediately, not after a 6-month or 12-month embargo, and that's precisely what the European Plan S will do.

With Plan S looming, I've no doubt we'll see more arguments against open access in the coming months, but we scientists have at least one ace up our sleeves: we're the ones who do all the work. We do the experiments, we write the papers, and we review the papers. Without us, the journals would cease to exist. The journals will have no choice but to go along with Plan S, because without the scientists, they'll have nothing to publish. Let's hope the U.S. will follow suit in the very near future. It's long past time to change the archaic, closed-access policies that have kept medical and scientific results–results that were funded by the public–locked behind the paywalls of for-profit publishers.

Salty and saltier: fast food has more sodium than ever before

High blood pressure is one of the biggest health problems in the U.S. today. The CDC estimates that 75 million American adults, about one-third of the adult population, have high blood pressure. Even more alarming is that high blood pressure "was a primary or contributing cause of death for more than 410,000 Americans in 2014," the last year for which the CDC reports data.

One of the main causes of high blood pressure (a.k.a. hypertension) is too much salt in the diet. As Americans have eaten out more and more, they've grown less aware of how much salt goes into their foods. Salt is tasty but invisible: you can't know exactly how much salt is in your food if you didn't prepare it yourself.

Everyone knows that fast foods can be salty, especially those (like French fries) that have salt sprinkled all over them. What they don't know, though, is that over the past 30 years, the amount of salt in fast foods has increased dramatically, as revealed in a new study just published by Megan McCrory and colleagues at Boston University.

The new study looked at how portion sizes, calories, sodium (salt), calcium, and iron changed in major fast food chains between 1986 and 2016. They analyzed data from these 10 restaurants:
Arby’s, Burger King, Carl’s Jr, Dairy Queen, Hardee’s, Jack in the Box, KFC, Long John Silver’s, McDonald’s, and Wendy’s
They would have looked at more, but others either didn't have data available or didn't have foods in all the categories under study.

Portions and calories all increased over the past 30 years, but I want to focus on the salt.

Back in 1986, sodium content in entrees averaged 36% of the recommended daily allowance–which is pretty high for a single entree (a burger, say). Bad as that is, though, by 2016 this had increased to 47%. Thus a single fast-food entree has nearly half of an entire day's allowance of salt. Sides increased from 14% of the RDA to 26%, which means that if you have an entree and a side (fries!), you're getting nearly 75% of your daily salt allowance. On average, they're adding 50% more salt today than in 1986.

And that's just the average: if you order larger sizes, or one of the saltier choices (though you may not be able to tell what those are), or more than one side dish, you can easily exceed 100% of your recommended salt intake for the day. (And by the way, for the vast majority of people, having less than the "allowance" of salt is just fine.)

I remember having fast food burgers and fries back in the 1980s, and they were quite tasty. I haven't noticed that they taste better today, and it's not clear why the chains increased the amount of salt so much. Presumably they did consumer testing and found that people like more salt, but it could also be simply that adding salt, which is a preservative, allows them to store the food supplies longer and save money.

Now, most people don't think fast food is healthy. It's popular because it tastes good and it's convenient. Nonetheless, for the large numbers of people who have high blood pressure (or pre-hypertension), the fact that salt has increased should be worrisome.

For those who want to do a little homework, you can easily find detailed nutrition facts for all the major chains online now. It took me only a few seconds to find downloadable lists for McDonald's, KFC, Wendy's, Subway, Burger King, and others, so you can compare all their items before your next visit.

For example, a McDonald's quarter pounder with cheese has 1110 mg of sodium, or 46% of your daily allowance. Their Bacon Smokehouse Artisan Grilled Chicken sandwich has far more, 1940 mg (81%), while their Filet-O-Fish, in contrast, has only 560 mg of sodium (23%). Side dishes can be surprisingly bad (or good) too. KFC's corn on the cob is a gem, with no sodium at all, but their BBQ baked beans weigh in with 820 mg of sodium.
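Those percentages are easy to check yourself. Here's a sketch assuming the 2,400 mg sodium daily value that U.S. nutrition labels used at the time (the menu numbers above line up with it):

```python
# Convert milligrams of sodium to a percentage of the daily value.
DAILY_VALUE_MG = 2400   # the sodium daily value on U.S. labels at the time

def pct_daily_value(sodium_mg: float) -> float:
    return 100 * sodium_mg / DAILY_VALUE_MG

menu_items = [
    ("Quarter Pounder with cheese", 1110),
    ("Bacon Smokehouse grilled chicken", 1940),
    ("Filet-O-Fish", 560),
    ("KFC BBQ baked beans", 820),
]
for item, mg in menu_items:
    print(f"{item}: {mg} mg = {pct_daily_value(mg):.0f}% of the daily value")
```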

The bottom line, though, is that if you want to eat less salt, the best way is to prepare your own food.  It's more trouble, but it's well worth the effort.

Scientists restart bioweapons research, with NIH's blessing

For more than a decade now, two scientists–one in the U.S. and one in the Netherlands–have been trying to create a deadly human pathogen from avian influenza. That's right: they are trying to turn "bird flu," which does not normally infect people, into a human flu.

Not surprisingly, many scientists are vehemently opposed to this. In mid-2014, a group of them formed the Cambridge Working Group and issued a statement warning of the dangers of this research. The statement was signed by hundreds of scientists at virtually every major U.S. and European university. (Full disclosure: I am one of the signatories.)

In response to these and other concerns, in October 2014 the U.S. government called for a "pause" in this dangerous research. NIH Director Francis Collins said that his agency would study the risks and benefits before proceeding further.

Well, four years later, the risks and benefits haven't changed, but the NIH has (quietly) just allowed the research to start again, as we learned last week in an exclusive report from Science's Jocelyn Kaiser.

I can't allow this to go unchallenged. This research is so potentially harmful, and offers such little benefit to society, that I fear that NIH is endangering the trust that Congress places in it. And don't misinterpret me: I'm a huge supporter of NIH, and I've argued before that it's one of the best investments the American public can make. But they got this one really, really wrong.

For those who might not know, the 1918 influenza pandemic, which killed between 50 and 100 million people worldwide (3% of the entire world population at the time), was caused by a strain of avian influenza that made the jump into humans. The 1918 flu was so deadly that it "killed more American soldiers and sailors during World War I than did enemy weapons."

Not surprisingly, then, when other scientists (including me) learned about the efforts to turn bird flu into a human flu, we asked: why the heck would anyone do that? The answers were and still are unsatisfactory: claims such as "we'll learn more about the pandemic potential of the flu" and "we'll be better prepared for an avian flu pandemic if one occurs." These are hand-waving arguments that may sound reasonable, but they promise only vague benefits while ignoring the dangers of this research. If the research succeeds, and one of the newly-designed, highly virulent flu strains escapes, the damage could be horrific.

One of the deadliest strains of avian flu circulating today is H5N1. This strain has occasionally jumped from birds to humans, with a mortality rate approaching 50%, far more deadly than any human flu. Fortunately, the virus has never gained the ability to be transmitted directly between humans.

That is, it didn't have this ability until two scientists, Ron Fouchier in the Netherlands and Yoshihiro Kawaoka at the University of Wisconsin, engineered it to gain this ability. (Actually, their work showed that the virus could be transmitted between ferrets, not humans, for the obvious reason that you can't ethically test this on humans.)

Well, Fouchier and Kawaoka are back at it again. NIH actually lifted the "pause" in December 2017, and invited scientists to submit proposals for this type of research. Fouchier confidently stated at the time that all he had to do was "find and replace" a few terms in his previous proposal and it would likely sail through peer review. It appears he was correct, although according to the Science article, his study has been approved but not yet actually funded. Kawaoka's project is already under way, as anyone can learn by checking the NIH grants database.

And by the way: why the heck is a U.S. funding agency supporting research in the Netherlands anyway? If Fouchier's work is so great (and it isn't), let the Netherlands fund it.

I've said it before, more than once: engineering the flu to be more virulent is a terrible idea. It appears the review process at NIH simply failed, as multiple scientists stated to Vox last week. This research has the potential to cause millions of deaths.

Fouchier, Kawaoka, and their defenders (usually other flu scientists who also benefit from the same funding) like to claim that their project to engineer a deadlier bird flu will somehow help prevent a future pandemic. This argument is, frankly, nonsense: influenza mutates while circulating among millions of birds, and no one has any idea how to predict or control that process. (I should mention that I know a little bit about the flu, having published multiple papers on it, including this paper in Nature and this paper on H5N1 avian flu.)

Fouchier and Kawaoka have also argued that we can use their work to create stockpiles of vaccines in advance. Yeah, right. We don't even stockpile vaccines for the normal seasonal flu, because it mutates too fast, so we have to produce new vaccines each year. And the notion that anyone can predict a future pandemic strain so precisely that we could design a vaccine based on their prediction is laughable.

I can't quite fathom why NIH seems to be so enraptured with the work of these two labs that, rather than simply deny them funding, it has ignored the warnings of hundreds of scientists and now risks creating a new influenza pandemic. Much as I hate to say this, maybe it's time for Congress to intervene.

Does RoundUp cause cancer?

(Quick answer: probably not. See my update at the bottom of this post.)

For many years, environmental activists have been concerned about the herbicide glyphosate, which is the main ingredient in RoundUp®, the world's most widely-used weed killer. Since 1996, global usage of glyphosate has increased 15-fold, in part due to the widespread cultivation of "RoundUp Ready" crops, which are genetically modified to be resistant to RoundUp®. This allows farmers to use the herbicide freely, killing undesirable weeds without harming their crops.

RoundUp®'s manufacturer, Monsanto, has long claimed that glyphosate is safe, and they point to hundreds of studies that support their argument.

Nonetheless, a new study raises the question again.

First let's look briefly at another recent study. A bit more than a year ago, in November 2017, a large study in the Journal of the National Cancer Institute looked at nearly 45,000 glyphosate users (farmers and other agricultural workers who apply glyphosate to crops). These "users" have a much higher exposure to RoundUp® than ordinary people. That study concluded:
"no association was apparent between glyphosate and any solid tumors or lymphoid malignancies overall, including NHL [non-Hodgkin lymphoma]."
They did find, though, that there was a trend–not quite significant–towards an increased risk for one type of leukemia, AML. This trend appeared in users who had the highest exposure to RoundUp®.

In the new study, by a group of scientists from UC Berkeley, Mount Sinai School of Medicine, and the University of Washington, the authors (L. Zhang et al.) decided to focus exclusively on people with the highest exposures to glyphosate. They point out that including people with low exposure, who might have no increased risk of cancer, tends to dilute risk estimates. Statistically speaking, this is undeniably correct, but it also means that their results may only apply to people with high exposures, and not to ordinary consumers.
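To see the dilution effect, consider a toy example with invented numbers, using a crude person-weighted average rather than a proper meta-analytic pooling:

```python
# Suppose the true relative risk (RR) is 1.4 in heavily exposed workers
# and 1.0 (no effect) in everyone else. Mixing the groups drags the
# estimate toward 1.0. (Invented numbers; real meta-analyses pool on a
# log scale with variance weights, but the dilution works the same way.)
def pooled_rr(rr_high: float, rr_low: float, share_high: float) -> float:
    """Crude person-weighted average of relative risks, for illustration."""
    return share_high * rr_high + (1 - share_high) * rr_low

print(pooled_rr(1.4, 1.0, share_high=0.2))   # 1.08 -- looks nearly null
print(pooled_rr(1.4, 1.0, share_high=1.0))   # 1.40 -- the undiluted signal
```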

The punchline from the new study: people with the highest exposure to glyphosate had a 41% higher risk of non-Hodgkin lymphoma.

One caveat to this finding is that it's a meta-analysis, meaning the authors did not collect any new data. Instead, they merged the results from six earlier studies including over 65,000 people, and they focused on those with the highest exposure levels.

Meta-analyses can be prone to cherry-picking; that is, picking the studies that tend to support your hypothesis. However, I couldn't find any sign of that here. The authors include a frank assessment of all the limitations of their study, and they also point out that multiple previous studies had similar findings, although most found smaller increases in relative risk. In the end, they conclude:
"The overall evidence from human, animal, and mechanistic studies presented here supports a compelling link between exposures to GBHs [glyphosate-based herbicides] and increased risk for NHL."
A couple more caveats are important. First, this finding is all about relative risk. Non-Hodgkin lymphoma is one of the most common cancers in the U.S. and Europe, but the lifetime risk for most people, according to the American Cancer Society, is just 1 in 42 (2.4%) for men and 1 in 54 (1.9%) for women. A 41% increase in relative risk increases those numbers to 3.4% (men) and 2.6% (women).
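Here's that arithmetic spelled out, using the American Cancer Society baselines quoted above:

```python
# Convert a relative-risk increase into absolute lifetime risk.
def absolute_risk(baseline: float, relative_increase: float) -> float:
    return baseline * (1 + relative_increase)

baseline_men = 1 / 42     # ~2.4% lifetime risk of non-Hodgkin lymphoma
baseline_women = 1 / 54   # ~1.9%

print(f"men:   {100 * absolute_risk(baseline_men, 0.41):.1f}%")    # 3.4%
print(f"women: {100 * absolute_risk(baseline_women, 0.41):.1f}%")  # 2.6%
# A 41% relative increase moves lifetime risk by about one percentage point.
```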

Second, this higher risk only applies to people with very high exposure to glyphosate: primarily people who work in agriculture and apply RoundUp® to crops. Ordinary consumers (including people who eat "Roundup Ready" crops) have a far, far lower exposure, and dozens of studies have failed to show any increased risk of cancer for consumers. For most of us, then, this new study should not cause much concern, but for agricultural workers, it does raise a warning flag.

[Update 18 Feb 7:45pm] After I posted this article, the scientists at the Genetic Literacy Project pointed me to Geoffrey Kabat's piece about the Zhang et al. study. Kabat did a deep dive into the studies that Zhang et al.'s work is based on and uncovered a critical flaw in the study, one that I hadn't found. More than half of the "weight" of the meta-analysis by Zhang, and by far the largest number of cancer cases, come from a single study by Andreotti et al. published in 2018. That study reported risks for 4 different time points: 5, 10, 15, and 20 years. It turns out, as Kabat reports, that only the 20-year period showed any increase in risk of cancer. The relative risks of cancer at 5, 10, and 15 years were actually lower in the group exposed to glyphosate, and yet Zhang et al. didn't mention this fact.

Now, no one thinks that glyphosate lowers the risk of cancer, but Zhang et al. did not report that they had cherry-picked in this way. At a minimum, they should have reported what their findings would be if they used the other time periods. I suspect that they'd have found no increased risk of cancer–but this wouldn't make for such a catchy headline. This omission on their part is a serious flaw, indicating that they (and their results) might have been unscientifically biased.

The bottom line: even in those with very high exposures to glyphosate, the evidence that it causes any type of cancer is very weak. And for ordinary consumers, there's nothing to worry about.

The NY Times is far too worried about 23andMe's genetic test

The New York Times decided to publish an editorial this weekend warning people to "be careful about 23andMe's Health Test." What are they worried about?

Although the NY Times article is accurate, the warning suggests that 23andMe is somehow misleading its customers. Is it? I decided to take a look.

I'm a customer of 23andMe, and I'm also a researcher in genetics and genomics, so I know quite a bit about how their technology works and about what it can reveal. I've looked at 23andMe's latest genetic health reports, and they are remarkably clear and accurate.

Let me illustrate by revealing part of my own genetic test results. First I looked at my results for BRCA1 and BRCA2, the two genes that the NY Times article discusses.

I don't have any of the harmful mutations in the BRCA genes, and 23andMe's report states this plainly.
The report immediately provides the caveat that more than 1,000 other variants in these genes have been linked to cancer risk, and it makes abundantly clear that 23andMe didn't test any of those. Here's what the NY Times article said:
"The 23andMe test can miss other possible mutations.... there are more than 1,000 other BRCA mutations that contribute to your breast cancer risk. The 23andMe test doesn’t look for any of them."

A reader of the Times might think, upon reading this, that 23andMe somehow hides this fact. But the Times article's warning is little more than a paraphrase of what 23andMe's website states.

The Times editors also caution that
"Just because you test negative for the few mutations that 23andMe screens for doesn’t mean that you won’t get breast cancer."
Duh. 23andMe explains this as well, and much more, such as:
"This test does not take into account other risk factors for breast, ovarian, prostate, and other cancers, such as personal and family health history." [from 23andMe]
23andMe also provides a wealth of information about the BRCA genes, including links to the scientific papers describing the genes and their link to cancer. I was very impressed by how thorough they are.

The Times editors focused only on the BRCA genes, but 23andMe also tests a handful of other genes (9, in my case). I looked at my report for the APOE gene, where one variant, called ε4, has been linked to Alzheimer's disease. Fortunately, I don't have it.

Once again, the 23andMe site was very clear about what this means, providing a detailed table showing the risks for people with and without the mutation. In my case, they tell me that:
"Studies estimate that, on average, a man of European descent has a 3% chance of developing late-onset Alzheimer's disease by age 75 and an 11% chance by age 85."
Looking at the detailed table, one learns that if you have one copy of the APOE ε4 variant, your risk of developing Alzheimer's by age 75 is 4-7%, and if you have two ε4 copies–the worst case–your risk jumps to 28%. The website provides links to 10 scientific papers with far more detail, for those who want to know the basis of these numbers. This is far more than most people will want to know, and I couldn't find any flaws in 23andMe's description of the science.
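
To make the dose-response pattern explicit, here's the same information as a tiny Python lookup table. The numbers are simply those quoted above; note that the ~3% figure for zero copies is the population average 23andMe cites, which is an approximation since that average includes some ε4 carriers.

    # Risk of late-onset Alzheimer's by age 75 for men of European descent,
    # by number of APOE ε4 copies (figures as quoted in the text above).
    risk_by_e4_copies = {
        0: "about 3% (population average)",
        1: "4-7%",
        2: "28% (the worst case)",
    }

    for copies, risk in risk_by_e4_copies.items():
        print(f"{copies} cop{'y' if copies == 1 else 'ies'} of ε4: {risk}")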

The Times editorial concludes with this:
"23andMe has said that its health tests can raise awareness about various medical conditions and empower consumers to take charge of their health information. But doctors and geneticists say that the tests are still more parlor trick than medicine."
That last statement is the most egregious misrepresentation by the Times editors. Who are these geneticists who call DNA testing a "parlor trick"? The genetic tests run by 23andMe, which use technology that is run daily at thousands of labs around the world, are nothing of the sort. They are a highly accurate assay that has been repeated millions of times and validated by hundreds of peer-reviewed studies. Usually the NY Times is one of the most thorough and accurate sources in the media, but they really dropped the ball this time.

The fact is, genetics is not fate. Identical twins, who share identical DNA, rarely die of the same causes. So even if you knew your genetic risks perfectly–every mutation in your DNA–you might not find anything to change in your behavior. At most, you might learn that you should get mammograms or colonoscopies slightly more often. It's legitimate to argue that you won't learn anything useful from the 23andMe tests, but you will learn something about genetics.

As far as changing your behavior to reduce health risks, you don't need a sophisticated genetic test for that. Just eat more leafy greens.

[Note: although I'm a customer of 23andMe, I have no financial relationship with the company and I'm neither an advisor to nor an investor in them.]

Relish that coffee now. It might be extinct in 20 years.

Most people think there are two major kinds of coffee, arabica and robusta. (No doubt some people think the two kinds of coffee are regular and decaf, but I digress.) And it's true that almost all the coffee that you can find in the market, or at your local coffee shop, is made from one of these beans or from a blend of both.

Actually, there are 124 species of coffee. Unfortunately, as we learned in a new paper published last week in the journal Science, 60% of them are currently in danger of going extinct. The primary threats are habitat loss (caused by humans) and climate change (also caused by humans).

Even though the term "endangered species" is more often used to refer to animals, we humans have already wiped out countless plant species, primarily through deforestation, and many more are going extinct each year. We will never know how many species have already been lost as we've chopped down rich rainforests to create grazing lands for cattle or monoculture plantations, but we do know that it's still going on.

Of the two major beans that we use for coffee, the better-tasting bean, arabica, is already endangered, according to the new study. Robusta coffees aren't bad, but as the new paper explains:
"Although robusta has some negative sensory qualities (e.g., tasting notes of wood and tobacco), it is favored in some instances for its taste, high caffeine content, and ability to add body to espresso and espresso-based coffees; it is now the species of choice for instant coffee."
If we don't do something to protect wild coffee species, we might soon be drinking nothing but robusta coffee.

If that seems implausible, recall that this already happened to the banana. In the mid-1900s, the entire worldwide production of bananas was basically wiped out by Panama disease, caused by a fungus. Before then, a tasty variety called Gros Michel was dominant, but thanks to the fungus:
"By 1960, the Gros Michel was essentially extinct and the banana industry nearly bankrupt. It was saved at the last minute by the Cavendish, a Chinese variety that had been considered something close to junk." (Source: NYTimes)
Now, the robusta bean is far better than "junk," but I for one prefer my arabica coffee.

The main threats to arabica (and robusta) are outbreaks of mold and fungal infections, not unlike the disease that wiped out bananas. Those 122 wild species–of which 60% are now endangered–are often resistant to these pathogens, allowing plant scientists to interbreed the wild and domesticated varieties to create new strains that resist disease and taste just as good as the original. This wild "reservoir" of coffee is critical to saving coffee as we know and love it today.

You're probably accustomed to seeing coffee labelled by the region it's grown in, rather than the type of bean. This is similar to how we label wines as being from France, California, Australia, etc. Coffee is grown in many tropical regions, including Central and South America, Indonesia, central Africa, and Hawaii. Just as with wine, the climate makes a difference, but the bean itself is an even bigger factor. Consider the difference among cabernet, pinot noir, and sauvignon grapes for wine–in the same way, arabica coffee tastes quite different from robusta coffee.

If we don't pay attention to the threat to coffee, we might all be drinking a less-tasty brew in the years to come.

(Aside: I've been working for several years on a project to sequence the tetraploid genome of Coffea arabica, the arabica coffee plant, and our results will likely be published soon. We're hoping that the genome will assist coffee scientists who are trying to breed new, disease-resistant varieties.)

The flu vaccine is working well this year. It's not too late to get it.

[Figure: current flu trends for 2018-19. Brown shows H1N1 strains, red shows H3N2, and yellow indicates strains that were not genotyped.]
The flu is widespread and increasing right now, according to the CDC.  At least 42 states were reporting high levels of flu activity as of the end of December 2018, and the rates are still climbing. In other words, we're in the midst of flu season.

Other than that, though, the news is relatively good. Here's why.

First, the dominant strain of flu this year is H1N1, which is the "swine flu" that first appeared as a pandemic in 2009. But pandemics don't have to come with high mortality rates, and as it turned out–luckily for humankind–the 2009 flu was milder than the previous dominant strain, H3N2, which first appeared way back in 1968.

This season, nearly 90% of the flu cases tested by the CDC are turning out to be H1N1, the milder variety. Although 10% of people are still getting the much-nastier H3N2 flu, it's good news compared to last year, when H3N2 dominated.

Back to the bad news (although this is old news): the 2009 swine flu (H1N1) didn't completely displace the older strain. Instead, we now have both types of influenza A circulating, along with two strains of the even milder influenza B virus. Since 2009, the flu vaccine has had to combat all 4 of these flu viruses, which is why you might see the term "quadrivalent" associated with the vaccine. That just means it targets all 4 different strains.

Back to the good news again: the vaccine this year contains just the right strains! This doesn't always happen; actually it happens much less frequently than anyone would like. But now that the flu season is under way, the CDC can test the circulating flu viruses and compare them to the strains that are targeted by this year's vaccine. This year, both the H1N1 and the H3N2 viruses match the vaccine strains really well, which means that if you got the shot, you are likely to be very well protected.

(Keep in mind that even in a good year, the vaccine isn't 100% effective, and you can still get the flu. But you are much less likely to get it than anyone who is unvaccinated.)

While I've got your attention, let me answer one of the top 10 health questions of the year: "how long is the flu contagious?" According to the CDC,

  • the flu is most contagious in the first 3-4 days after becoming sick.

The flu remains contagious for up to a week, so if you have it, stay home! And make sure those around you avoid physical contact as much as possible and wash their hands frequently.

And while I'm at it, let's debunk a common myth:

  • No, you can't get the flu from the vaccine.

So if you've put off getting the flu vaccine, it's not too late! The season is in full swing, but if you get the vaccine today, you'll likely have excellent protection for the rest of the season. Go get it.