Does RoundUp cause cancer?

(Quick answer: probably not. See my update at the bottom of this post.)

For many years, environmental activists have been concerned about the herbicide glyphosate, which is the main ingredient in RoundUp®, the world's most widely-used weed killer. Since 1996, global usage of glyphosate has increased 15-fold, in part due to the widespread cultivation of "RoundUp Ready" crops, which are genetically modified to be resistant to RoundUp®. This allows farmers to use the herbicide freely, killing undesirable weeds without harming their crops.

RoundUp®'s manufacturer, Monsanto, has long claimed that glyphosate is safe, and they point to hundreds of studies that support their argument.

Nonetheless, a new study raises the question again.

First let's look briefly at another recent study. A bit more than a year ago, in November 2017, a large study in the Journal of the National Cancer Institute looked at nearly 45,000 glyphosate users (farmers and other agricultural workers who apply glyphosate to crops). These "users" have a much higher exposure to RoundUp® than ordinary people. That study concluded:
"no association was apparent between glyphosate and any solid tumors or lymphoid malignancies overall, including NHL [non-Hodgkin lymphoma]."
They did find, though, that there was a trend–not quite significant–towards an increased risk for one type of leukemia, AML. This trend appeared in users who had the highest exposure to RoundUp®.

In the new study, by a group of scientists from UC Berkeley, Mount Sinai School of Medicine, and the University of Washington, the authors (L. Zhang et al.) decided to focus exclusively on people with the highest exposures to glyphosate. They point out that including people with low exposure, who might have no increased risk of cancer, tends to dilute risk estimates. Statistically speaking, this is undeniably correct, but it also means that their results may only apply to people with high exposures, and not to ordinary consumers.

The punchline from the new study: people with the highest exposure to glyphosate had a 41% higher risk of non-Hodgkin lymphoma.

One caveat to this finding is that it's a meta-analysis, meaning the authors did not collect any new data. Instead, they merged the results from six earlier studies including over 65,000 people, and they focused on those with the highest exposure levels.

Meta-analyses can be prone to cherry-picking; that is, picking the studies that tend to support your hypothesis. However, I couldn't find any sign of that here. The authors include a frank assessment of all the limitations of their study, and they also point out that multiple previous studies had similar findings, although most found smaller increases in relative risk. In the end, they conclude:
"The overall evidence from human, animal, and mechanistic studies presented here supports a compelling link between exposures to GBHs [glyphosate-based herbicides] and increased risk for NHL."
A couple more caveats are important. First, this finding is all about relative risk. Non-Hodgkin lymphoma is one of the most common cancers in the U.S. and Europe, but the lifetime risk for most people, according to the American Cancer Society, is just 1 in 42 (2.4%) for men and 1 in 54 (1.9%) for women. A 41% increase in relative risk increases those numbers to 3.4% (men) and 2.6% (women).
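To make that arithmetic concrete, here's a quick sketch using the ACS lifetime-risk figures quoted above (illustrative only, not code from the study):

```python
# Lifetime risk of non-Hodgkin lymphoma (American Cancer Society figures)
baseline = {"men": 1 / 42, "women": 1 / 54}   # ~2.4% and ~1.9%

relative_risk = 1.41  # the 41% increase reported for the highest-exposure group

for group, risk in baseline.items():
    elevated = risk * relative_risk
    print(f"{group}: {risk:.1%} -> {elevated:.1%}")
# men: 2.4% -> 3.4%
# women: 1.9% -> 2.6%
```

In absolute terms, then, even the highest-exposure group gains about one extra percentage point of lifetime risk.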

Second, this higher risk only applies to people with very high exposure to glyphosate: primarily people who work in agriculture and apply RoundUp® to crops. Ordinary consumers (including people who eat "Roundup Ready" crops) have a far, far lower exposure, and dozens of studies have failed to show any increased risk of cancer for consumers. For most of us, then, this new study should not cause much concern, but for agricultural workers, it does raise a warning flag.

[Update 18 Feb 7:45pm] After I posted this article, the scientists at the Genetic Literacy Project pointed me to Geoffrey Kabat's piece about the Zhang et al. study. Kabat did a deep dive into the studies that Zhang et al.'s work is based on and uncovered a critical flaw in the study, one that I hadn't found. More than half of the "weight" of the meta-analysis by Zhang, and by far the largest number of cancer cases, come from a single study by Andreotti et al. published in 2018. That study reported risks for 4 different time points: 5, 10, 15, and 20 years. It turns out, as Kabat reports, that only the 20-year period showed any increase in risk of cancer. The relative risks of cancer at 5, 10, and 15 years were actually lower in the group exposed to glyphosate, and yet Zhang et al. didn't mention this fact.


Now, no one thinks that glyphosate lowers the risk of cancer, but Zhang et al. did not report that they had cherry-picked in this way. At a minimum, they should have reported what their findings would be if they used the other time periods. I suspect that they'd have found no increased risk of cancer–but this wouldn't make for such a catchy headline. This omission on their part is a serious flaw, indicating that they (and their results) might have been unscientifically biased.

The bottom line: even in those with very high exposures to glyphosate, the evidence that it causes any type of cancer is very weak. And for ordinary consumers, there's nothing to worry about.

The NY Times is far too worried about 23andMe's genetic test

The New York Times decided to publish an editorial this weekend warning people to "be careful about 23andMe's Health Test." What are they worried about?

Although the NY Times article is accurate, its warning suggests that 23andMe is somehow misleading its customers. Is it? I decided to take a look.

I'm a customer of 23andMe, and I'm also a researcher in genetics and genomics, so I know quite a bit about how their technology works and about what it can reveal. I've looked at 23andMe's latest genetic health reports, and they are remarkably clear and accurate.

Let me illustrate by revealing part of my own genetic test results. First I looked at my results for BRCA1 and BRCA2, the two genes that the NY Times article discusses.

I don't have any of the harmful mutations in the BRCA genes. In its report, 23andMe immediately provides the caveat that more than 1,000 other variants in these genes have been linked to cancer risk, and they make it abundantly clear that they didn't test any of those. Here's what the NY Times article said:
"The 23andMe test can miss other possible mutations.... there are more than 1,000 other BRCA mutations that contribute to your breast cancer risk. The 23andMe test doesn’t look for any of them."

A reader of the Times might think, upon reading this, that 23andMe somehow hides this fact. But the Times article's warning is little more than a paraphrase of what 23andMe's website states.

The Times editors also caution that
"Just because you test negative for the few mutations that 23andMe screens for doesn’t mean that you won’t get breast cancer."
Duh. 23andMe explains this as well, and much more, such as:
"This test does not take into account other risk factors for breast, ovarian, prostate, and other cancers, such as personal and family health history." [from 23andMe]
23andMe also provides a wealth of information about the BRCA genes, including links to the scientific papers describing the genes and their link to cancer. I was very impressed by how thorough they are.

The Times editors focused only on the BRCA genes, but 23andMe also tests a handful of others (9, in my case). I looked at the APOE gene report, which has a mutation that has been linked to Alzheimer's disease. The bad variant is called ε4, and fortunately I don't have it.

Once again, the 23andMe site was very clear about what this means, providing a detailed table showing the risks for people with and without the mutation. In my case, they tell me that:
"Studies estimate that, on average, a man of European descent has a 3% chance of developing late-onset Alzheimer's disease by age 75 and an 11% chance by age 85."
Looking at the detailed table, one learns that if you have one copy of the APOE ε4 variant, your risk of developing Alzheimer's by age 75 is 4-7%, and if you have 2 copies–the worst case–your risk jumps to 28%. The website provides links to 10 scientific papers with far more detail, for those who want to know the basis of these numbers. This is far more than most people will want to know, and I couldn't find any flaws in 23andMe's description of the science.

The Times editorial concludes with this:
"23andMe has said that its health tests can raise awareness about various medical conditions and empower consumers to take charge of their health information. But doctors and geneticists say that the tests are still more parlor trick than medicine."
That last statement is the most egregious misrepresentation by the Times editors. Who are these geneticists who call DNA testing a "parlor trick"? The genetic tests run by 23andMe, which use technology that is run daily at thousands of labs around the world, are nothing of the sort. They are a highly accurate assay that has been repeated millions of times and validated by hundreds of peer-reviewed studies. Usually the NY Times is one of the most thorough and accurate sources in the media, but they really dropped the ball this time.

The fact is, genetics is not fate. Identical twins, who share identical DNA, rarely die of the same causes. Thus even if you knew the risk conferred by every mutation in your DNA, you might not find anything worth changing in your behavior. At most, you might learn that you should get mammograms or colonoscopies slightly more often. It's legitimate to argue that you won't learn anything useful from the 23andMe tests, but you will learn something about genetics.

As far as changing your behavior to reduce health risks, you don't need a sophisticated genetic test for that. Just eat more leafy greens.

[Note: although I'm a customer of 23andMe, I have no financial relationship with the company and I'm neither an advisor to nor an investor in them.]

Relish that coffee now. It might be extinct in 20 years.

Most people think there are two major kinds of coffee, arabica and robusta. (No doubt some people think the two kinds of coffee are regular and decaf, but I digress.) And it's true that almost all the coffee that you can find in the market, or at your local coffee shop, is made from one of these beans or from a blend of both.

Actually, there are 124 species of coffee. Unfortunately, as we learned in a new paper published last week in the journal Science, 60% of them are currently in danger of going extinct. The primary threats are habitat loss (caused by humans) and climate change (also caused by humans).

Even though the term "endangered species" is more often used to refer to animals, we humans have already wiped out countless plant species, primarily through deforestation, and many more are going extinct each year. We will never know how many species have already been lost as we've chopped down rich rainforests to create grazing lands for cattle or monoculture plantations, but we do know that it's still going on.

Of the two major beans that we use for coffee, the better-tasting bean, arabica, is already endangered, according to the new study. Robusta coffees aren't bad, but as the new paper explains:
"Although robusta has some negative sensory qualities (e.g., tasting notes of wood and tobacco), it is favored in some instances for its taste, high caffeine content, and ability to add body to espresso and espresso-based coffees; it is now the species of choice for instant coffee."
If we don't do something to protect wild coffee species, we might soon be drinking nothing but robusta coffee.

If that seems implausible, recall that this already happened to the banana. In the mid-1900s, the entire worldwide production of bananas was basically wiped out by Panama disease, caused by a fungus. Before then, a tasty variety called Gros Michel dominated the market, but thanks to the fungus:
"By 1960, the Gros Michel was essentially extinct and the banana industry nearly bankrupt. It was saved at the last minute by the Cavendish, a Chinese variety that had been considered something close to junk." (Source: NYTimes)
Now, the robusta bean is far better than "junk," but I for one prefer my arabica coffee.

The main threats to arabica (and robusta) are outbreaks of mold and fungal infections, not unlike the disease that wiped out bananas. Those 122 wild species–of which 60% are now endangered–are often resistant, allowing plant scientists to inter-breed the wild and domesticated varieties to create new strains that resist disease and taste just as good as the original. This wild "reservoir" of coffee is critical to saving coffee as we know and love it today.

You're probably accustomed to seeing coffee labelled by the region it's grown in, rather than the type of bean. This is similar to how we label wines as being from France, California, Australia, etc. Coffee is grown in many tropical regions, including Central and South America, Indonesia, central Africa, and Hawaii. Just as with wine, the climate makes a difference, but the bean itself is an even bigger factor. Consider the difference between cabernet, pinot noir, and sauvignon grapes for wine–in the same way, arabica coffee tastes quite different from robusta coffee.

If we don't pay attention to the threat to coffee, we might all be drinking a less-tasty brew in the years to come.

(Aside: I've been working for several years on a project to sequence Coffea arabica, the tetraploid genome of arabica coffee, and our results will likely be published soon. We're hoping that the genome will assist coffee scientists who are trying to breed new, disease-resistant varieties.)

The flu vaccine is working well this year. It's not too late to get it.

[Figure: Current flu trends for 2018-19. Brown shows H1N1 strains, red shows H3N2, and yellow indicates the strain was not genotyped.]
The flu is widespread and increasing right now, according to the CDC. At least 42 states were reporting high levels of flu activity as of the end of December 2018, and the rates are still climbing. In other words, we're in the midst of flu season.

Other than that, though, the news is relatively good. Here's why.

First, the dominant strain of flu this year is H1N1, which is the "swine flu" that first appeared as a pandemic in 2009. But pandemics don't have to come with high mortality rates, and as it turned out–luckily for humankind–the 2009 flu was milder than the previous dominant strain, H3N2, which first appeared way back in 1968.

This season, nearly 90% of the flu cases tested by the CDC are turning out to be H1N1, the milder variety. Although 10% of people are still getting the much-nastier H3N2 flu, it's good news compared to last year, when H3N2 dominated.

Back to the bad news (although this is old news): the 2009 swine flu (H1N1) didn't completely displace the older flu strain. Instead, we now have both types of influenza A circulating, along with two strains of the even milder influenza B virus. Since 2009, the flu vaccine has had to combat all 4 of these flu viruses, which is why you might see the term "quadrivalent" associated with the vaccine. That just means it targets all 4 different strains.

Back to the good news again: the vaccine this year contains just the right strains! This doesn't always happen; actually it happens much less frequently than anyone would like. But now that the flu season is under way, the CDC can test the circulating flu viruses and compare them to the strains that are targeted by this year's vaccine. This year, both the H1N1 and the H3N2 viruses match the vaccine strains really well, which means that if you got the shot, you are likely to be very well protected.

(Keep in mind that even in a good year, the vaccine isn't 100% effective, and you can still get the flu. But you are much less likely to get it than anyone who is unvaccinated.)

While I've got your attention, let me answer one of the top 10 health questions of the year: "how long is the flu contagious?" According to the CDC,

  • the flu is most contagious in the first 3-4 days after becoming sick.

It continues to be contagious for up to a week, so if you have the flu, stay home! And make sure those around you avoid physical contact, as much as possible, and wash their hands frequently.

And while I'm at it, let's debunk a common myth:

  • No, you can't get the flu from the vaccine.

So if you've put off getting the flu vaccine, it's not too late! The season is in full swing, but if you get the vaccine today, you'll likely have excellent protection for the rest of the season. Go get it.




What's the limit of the human lifespan? And what do World War I veterans have to do with it?

[Figure: Lower rate of mortality (blue) in people aged 90-95 versus the rate in people aged 50-55 (orange). From S.J. Newman (2018), "Errors as a primary cause of late-life mortality deceleration and plateaus," PLoS Biol 16(12): e2006776. https://doi.org/10.1371/journal.pbio.2006776]
An intriguing phenomenon has emerged in recent years: among very old people, the rate at which people die appears to decline once they get past a certain age. In other words, as the authors of one 2011 book claimed, aging slows down and maybe even stops. Or at least the mortality rate levels off past the age of 100, according to another study published earlier this year. This has led some scientists to speculate that the upper limit on human lifespan may be much higher than the age of anyone alive today.

Not so fast, says a new study by Saul Newman in PLoS Biology. Newman looked at the data and found something quite different: it's all just a mistake. Well, perhaps not a mistake exactly, but a consequence of many small errors. Let me explain.

In almost all species, mortality rates increase with age. In other words, as you get older, your likelihood of dying in a given year slowly but inexorably increases. Intuitively, we all know this: if young people die, it's tragic because we don't expect it. When people in their eighties and nineties die, it's sad, but no one is really surprised.

The evidence for decreasing mortality among very old humans comes from a number of studies reporting, with seemingly solid data, that people over 100 die at the same or even lower rates than people between 80 and 90, or between 90 and 100.

Not surprisingly, many people would like to believe that human lifespan is unlimited. Indeed, it's one of the hottest topics in Silicon Valley these days. And perhaps someone will invent some true life-extension technology someday. But Newman's analysis pours cold water on the notion that our natural longevity is unlimited.

One difficulty with studying very old people is that there simply aren't that many of them, so the studies tend to be small. Another problem–and this is what Newman zeroes in on–is that we don't have very good birth records for people over 100 years old. They were born a long time ago, when record keeping wasn't always so good. What if there are a few errors?

It might seem that this shouldn't matter, as long as the errors are random–in other words, as long as people's ages are both under- and over-estimated at the same rates. The problem is that even if the errors are random, they don't play out that way. Here's why.

For the sake of argument, let's imagine a set of people whose recorded ages are off by 5 years in either direction. (I know that's a lot, but bear with me.) By a recorded age of 100, as Newman points out, virtually no one is alive from the cohort that underestimated their age; these are people whose true age is 105. But many more will be alive from those who overestimated their age; these are the 95-year-olds who think they're 100.

Newman's paper points out that if only a few people are overestimating their age, this can cause mortality rates to flatten or decelerate–or at least to appear to decelerate, because these people aren't really as old as we (or they) think they are. He then shows, in considerable detail, that even a very small error rate is more than enough to explain all of the apparent decline in mortality rates from recent studies. In other words, the decline in mortality is simply an illusion.
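The effect is easy to demonstrate with a toy model. This is my own sketch, not Newman's actual analysis: the Gompertz mortality parameters and the 1% age-inflation rate are assumptions chosen purely for illustration.

```python
import math

def hazard(age, a=1e-5, b=0.1):
    """Gompertz law: annual probability of death rises exponentially with age."""
    return min(1.0, a * math.exp(b * age))

def survival(max_age=115):
    """Fraction of a birth cohort still alive at each exact age."""
    s, out = 1.0, [1.0]
    for age in range(max_age):
        s *= 1.0 - hazard(age)
        out.append(s)
    return out

S = survival()
ERROR = 0.01  # hypothetical: 1% of records overstate age by 5 years

def apparent_rate(recorded_age):
    """Observed death rate at a recorded age: a mix of honest records
    and a few genuinely younger people whose ages were inflated."""
    honest = (1 - ERROR) * S[recorded_age]
    inflated = ERROR * S[recorded_age - 5]
    deaths = honest * hazard(recorded_age) + inflated * hazard(recorded_age - 5)
    return deaths / (honest + inflated)

for age in (90, 100, 105, 110):
    print(f"recorded age {age}: true hazard {hazard(age):.3f}, "
          f"apparent rate {apparent_rate(age):.3f}")
```

Running this shows the apparent death rate sinking further below the true hazard as recorded age increases, because the age-inflators make up an ever-larger share of the survivors–exactly the spurious "deceleration" Newman describes.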

What does World War I have to do with any of this? Newman explains:
"approximately 250,000 youths inflated their ages to enter the 1894–1902 birth cohorts and fight for the United Kingdom in World War I."
The same thing happened in the U.S. and other countries: 16- and 17-year-old boys said they were 18 so they could sign up. Coincidentally, these men would have been around 100 years old when many of the recent studies of centenarians were conducted, and it's very likely that some of these men were included in those studies. It wouldn't take many to distort the apparent mortality rates.

Who could have imagined that these brave young men who signed up to fight for their country (my grandfather was one of them), so many years ago, would have this completely unexpected effect on the science of aging, almost exactly 100 years after the war ended? It seems somehow appropriate that today, as the last veterans of the Great War leave us forever, they can still remind us of their sacrifice.

Russian homeopathy, hiding in plain sight

It turns out that Russia has its very own brand of bogus medicine: "release-active drugs," or RADs. Dozens of scientific articles have been published claiming that these substances can be used to treat or cure a remarkably broad range of illnesses, including:
"... influenza, hemorrhagic fever, meningococcal meningitis, herpes, HIV, diabetes, erectile dysfunction, sleep disorders, obesity, chronic inflammatory joint diseases, attention deficit hyperactivity disorder, ..., alcoholism, allergies, and many other health problems." 
If this sounds too good to be true, that's because it is.

Thanks to a new report published in the journal BMJ Evidence-Based Medicine (provocatively titled "Drug discovery today: no molecules required") we now know that RADs aren't drugs at all, because they don't actually contain anything. As the new study reveals,
"The problem [with RADs] is that typical dilutions of the active ingredient are so high (from 1:10^24 to 1:10^1991) that no molecules of the initial antibodies should be present in the ‘drug’ itself."
In other words, RADs are simply homeopathy by another name. As I've written many times before, homeopathy is one of the most patently absurd forms of pseudoscience, and although it's been debunked countless times, it remains popular due in part to commercial interests that profit handsomely from selling ineffective but expensive sugar pills.
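You can check the "no molecules" claim with one line of arithmetic using Avogadro's number. (A sketch; starting with a full mole of antibody is a deliberately generous assumption.)

```python
AVOGADRO = 6.022e23  # molecules per mole

# Start, generously, with a full mole of antibody (~6 x 10^23 molecules)
# and apply the *smallest* dilution the BMJ paper cites: 1 part in 10^24.
starting_molecules = AVOGADRO
dilution = 1e24

remaining = starting_molecules / dilution
print(f"expected molecules left in the dose: {remaining:.2f}")  # less than 1
```

Even at the mildest dilution, fewer than one molecule of the "active" ingredient is expected to survive; at 1 in 10^1991, the product is, for all practical purposes, pure excipient.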

RADs are produced by a single Russian company, with the odd name "OOO NPF Materia Medica Holding." The papers promoting the benefits of RADs are authored by a variety of Russian authors, but they are nearly all co-authored by or associated with one person, Oleg Epstein, who is also the company's founder.

Epstein and his Russian compatriots have been very clever about disguising the fact that their "release-active drugs" aren't drugs at all. Their papers are full of scientific jargon, which has no doubt helped them get their work past reviewers who (as every scientist who has published papers knows) can sometimes be a bit lazy.

They've also taken advantage of–one might say abused–the U.S. clinical trials system, ClinicalTrials.gov, by registering 22 studies of RADs there, such as this one.

The authors of the BMJ report (Alexander Panchin, Nikita Khromov-Borisov, and Evgenia Dueva, all Russian, though Dueva is based in Canada) are unsparingly blunt in revealing how Epstein has manipulated the scientific system in Russia to gain approval (and lucrative sales) of his so-called drugs. For example, Epstein has published 90 papers on RADs in a single journal, including 48 in a special issue that he edited himself.

Panchin and colleagues also took a closer look at 6 papers about RADs that were published in English-language journals, all co-authored by Epstein. They report that
"the articles contained misleading descriptions of active substance concentrations, severe flaws in study design and methodology, as well as concealed conflict of interests.... the authors did not mention that MMH manufactures and sells RADs and holds the corresponding patents. Epstein was not mentioned as the founder and CEO of MMH."
They contacted all of these journals, and only one journal, PLoS ONE, went so far as to retract the bogus science on RADs. Kudos to PLoS ONE for doing so. The other journals, some of them published by reputable scientific publishers including Elsevier, Wiley, and Springer, either didn't respond or refused to take action.

Fortunately for consumers, RADs haven't yet spread into U.S. and European markets, although their manufacturers are trying. In a recent letter published in the Journal of Medical Virology, Epstein and his co-authors write that
"Currently we are in the process of approving evaluation requirements for our products, taking into account their peculiarities and allowing their potential authorization in the USA and Europe."
Buyer beware. The Russian quacks are coming.

Don't smack your kids!

Spanking doesn't work. What's more, it may cause long-term harm to children.

With the holidays approaching, families will be spending more time together, and much of that time will be rich and rewarding. But kids being kids, many of them will misbehave. Parents, in their turn, may grow exasperated and frustrated.

There are many ways to discipline kids, but hitting them is a bad idea. The American Academy of Pediatrics just issued a new policy statement, its strongest ever, telling parents that spanking or hitting their kids is not only ineffective, it is harmful. They don't allow for any exceptions, stating that
"corporal punishment is invariably degrading."
This idea has been gaining traction for decades. In 1989, the UN Committee on the Rights of the Child called for all nations to ban corporal punishment of children, and over the years, many countries have heeded that call. Just two months ago, the Global Initiative to End All Corporal Punishment of Children reported that 54 countries around the world have banned spanking in all settings, including the home.

Nonetheless, the average American still thinks spanking is a good idea. A few years ago, FiveThirtyEight reported that 70% of U.S. adults still agreed that "it is sometimes necessary to discipline a child with a good, hard spanking." This was an improvement over 1986, when 84% of adults agreed with that sentiment, but we obviously have a long way to go.

Even worse, corporal punishment in school is still permitted in 19 states, as described in a study by Elizabeth Gershoff and Sarah Font earlier this year. These states are primarily in the southern U.S., with the top offenders being Arkansas, Mississippi, and Alabama, where over 50% of the schools still use corporal punishment. In these schools, as Gershoff and Font write,
“a teacher or administrator typically administers corporal punishment by using a large wooden board or 'paddle' to strike the buttocks of a child.”
Perhaps it's no coincidence that the schools in those states rank among the 10 worst in the nation: Arkansas is 42nd, Alabama is 43rd, and Mississippi is 48th.

The American Academy of Pediatrics (AAP) report warns of multiple harms that come from corporal punishment, including:

  • "spanking ... is associated with outcomes similar to those in children who experience physical abuse;
  • experiencing corporal punishment makes it more, not less, likely that children will be defiant and aggressive in the future;
  • corporal punishment is associated with an increased risk of mental health disorders and cognition problems."

The AAP report also warns that verbal abuse, including any language "that causes shame or humiliation," can be as harmful to children as physical punishment.

Why would violence against a child ever be a good idea? The answer is simple: it never is. Children, especially younger children, cannot understand that a thrashing is intended to send a message about their behavior. Instead, they often think that their parent is angry at them (and sometimes they're right), which in turn makes them fear their own parents.

Parents do need to discipline their children at times, as anyone who's had kids will know. Parents who want guidance can visit the AAP's parent-focused website, healthychildren.org, where they offer alternatives to spanking, for both younger and older children.

Many parents think that spanking works, because they were spanked and turned out okay. That might be true for some, but it's not true for everyone. The AAP report cites nearly 100 studies showing that spanking doesn't work.

So don't hit your kids–and if you're in one of the 19 states that allow schools to hit them, contact your local school district and ask them to change this outdated and abusive practice.