A new study out of Denmark tried to measure the benefits of masks. It didn't go so well.

Social media has been abuzz this week over a new study out of Denmark on how well masks protect against Covid-19. Depending on which news source you looked at, you might have heard that masks might not protect the people wearing them, or that wearing masks doesn’t prevent the spread of the virus. You might also have read the near-immediate backlash, in which scientists pointed out all the evidence that masks really do work.

I read the study. It doesn’t prove anything, as even its own authors admit.

Let’s dig into the actual results just a bit to see what all the fuss is about.

The study was conducted in Denmark in April and May of this year, and what it tried to do (not very effectively) was to measure the effect of a recommendation to wear masks. That’s right, they weren’t really measuring the benefits of masks directly at all!

The study enrolled about 6000 volunteers. Half of them were advised to wear masks whenever they went outdoors, and those volunteers were also given masks. For the other half, they didn’t do anything. At the time (April and May), Denmark was recommending social distancing, but not universal mask wearing.

Quite a few people dropped out, so in the end they only had 4862 people in the two groups, about 2400 per group.

What did they measure? Well, they did an antibody test (far from perfect, but let’s not digress) at the beginning of June to see whether or not people were infected with SARS-CoV-2, the virus that causes Covid-19.

(Note that they DID NOT measure how well the masks might have protected anyone else in the community. They were only measuring whether a mask might protect the wearer.)

And the results? 42 people in the mask-recommendation group were infected, and 51 people in the no-recommendation group were infected. (That’s 1.8% versus 2.1% of each group.) So there was a small reduction, but it was not statistically significant, which means we really can’t say if the mask recommendation helped prevent infection.

The study authors admitted this themselves, writing: “the findings are inconclusive ... compatible with a 46% decrease to a 23% increase in infection.” In other words, the results are consistent with anything from masks cutting the wearer’s risk of infection by 46% to raising it (bizarre as that sounds) by 23%.
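If you want to see where those numbers come from, here’s a quick back-of-the-envelope calculation. I’m assuming the 4862 participants split evenly into two groups of 2,431 each (the paper’s exact group sizes differ slightly), so treat this as an approximation rather than a re-analysis:

```python
# Back-of-the-envelope check of why the trial is "inconclusive".
# Assumes two equal groups of 2,431 each (4862 / 2); the published
# paper's exact group sizes differ slightly, so this is only an
# approximation of the authors' own analysis.
from math import exp, log, sqrt

infected_mask, n_mask = 42, 2431   # mask-recommendation group
infected_ctrl, n_ctrl = 51, 2431   # no-recommendation group

# Risk (proportion infected) in each group
risk_mask = infected_mask / n_mask   # ~1.7%
risk_ctrl = infected_ctrl / n_ctrl   # ~2.1%

# Risk ratio with an approximate 95% confidence interval on the log scale
rr = risk_mask / risk_ctrl
se_log_rr = sqrt(1/infected_mask - 1/n_mask + 1/infected_ctrl - 1/n_ctrl)
ci_low = exp(log(rr) - 1.96 * se_log_rr)
ci_high = exp(log(rr) + 1.96 * se_log_rr)

print(f"risk ratio = {rr:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f}")
# -> roughly 0.82, with a confidence interval from about 0.55 to 1.23:
#    anywhere from a ~45% decrease to a ~23% increase in risk, which is
#    why the authors call the result inconclusive.
```

The interval comfortably straddles “no effect at all,” and that is exactly what “inconclusive” means.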

In other words, this experiment doesn’t tell us much. If it weren’t about Covid-19, I doubt that the Annals of Internal Medicine would have published it. (A commentary by my colleagues at Hopkins and Stanford suggested that Annals was right to publish it, as long as scientists “carefully highlight the questions that the trial does and does not answer.”)

Now some big caveats. First, in the mask-recommendation group, only 46% of the participants wore masks as recommended. Second, the study didn’t ask whether anyone in the no-recommendation group wore masks. Third, the study relied on self-reporting to determine who was actually wearing their masks consistently; that is, the researchers simply asked the participants to report how often they wore their masks.

What? Imagine if you were studying the use of seat belts to reduce injuries, and only 46% of the people you told to wear seat belts wore them properly. On top of that, imagine that you relied entirely on self-reporting to determine who was actually wearing the seat belts. This study design is nearly worthless if you want to know the true benefits of seat belts.
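To put rough numbers on the compliance problem: the 46% figure comes from the study itself, but the per-wearer benefit in the sketch below is purely hypothetical, chosen only to show how much a real effect gets diluted when half the group ignores the advice.

```python
# Rough illustration of how low compliance dilutes a trial's signal.
# The 46% compliance figure is from the Danish study; the assumed
# per-wearer benefit (2.1% risk cut to 1.5%) is hypothetical, used
# only to illustrate the dilution effect.
baseline_risk = 0.021   # assumed risk of infection with no mask
masked_risk   = 0.015   # assumed risk for people who consistently wear one
compliance    = 0.46    # fraction of the group that actually followed the advice

# The group-level risk is a mix of compliers and non-compliers
group_risk = compliance * masked_risk + (1 - compliance) * baseline_risk
apparent_reduction = 1 - group_risk / baseline_risk

print(f"group-level risk: {group_risk:.2%}, apparent reduction: {apparent_reduction:.0%}")
# -> about 1.8% risk and only a ~13% apparent reduction: a much weaker
#    signal than the assumed per-wearer benefit, and far too small for a
#    trial of ~4800 people to detect reliably.
```

Notice that those diluted numbers (about 1.8% versus 2.1%) look a lot like what the trial actually reported, which is another way of saying the study couldn’t distinguish a modest real benefit from no benefit at all.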

So the Danish mask study was inconclusive, as its own authors report. Therefore it would be a huge mistake, scientifically speaking, to take this non-result and conclude that masks do not protect you. It would be an even bigger mistake to conclude that the study showed that masks don’t benefit the community. Unfortunately, that didn’t stop two Oxford University scientists from jumping to exactly that conclusion. They claimed, in an article in The Spectator, that the study showed that “wearing masks in the community does not significantly reduce the rates of infection.” This is dead wrong. The study wasn’t even measuring community rates of infection.

(For an excellent Twitter-take on what the study does not prove, see this thread from The Health Nerd.)

And as the CDC has documented, multiple studies have already shown that masks are highly effective at limiting the spread of Covid-19. And a study last summer pointed out that increasing the use of masks by just 15% “could prevent the need for lockdowns and reduce associated losses of up to $1 trillion” in the U.S. alone. So yes, it’s a good idea to wear a mask.

It’s also simply common sense that if you wear a mask, the amount of virus that you breathe in or out will be reduced. You’re not just protecting yourself, you’re also protecting everyone around you.

Until we can get this pandemic under control, we all need to wear masks in public. It’s utterly ridiculous that this has become a political issue, as it has in the U.S. To those who think mask wearing somehow limits their personal freedom: get over it. When you see a red light at a busy intersection, do you race through it because you need the “freedom” to drive like a crazy person? No. Civilized society requires everyone to follow some basic rules to protect each other, and during a pandemic, wearing a mask is one of them. And to those who think that wearing a mask somehow shows their civic virtue? No, that’s wrong too. To use the same example, stopping at a red light doesn’t prove that you’re virtuous.

It’s just a mask. It’s not a political statement. Get over it.

New Alzheimer's disease treatment fails, then works, then fails again


Alzheimer’s disease is one of the most devastating conditions of old age. By recent estimates, more than 5 million people in the U.S. have Alzheimer’s, and managing the disease will cost over $300 billion in 2020. As the population ages, this problem is growing worse, and yet we still have no effective treatment.

You might have seen rosy-looking ads for Alzheimer’s treatments, but nothing really works, not yet at least. That’s why many people were excited about the possibilities of a new drug, aducanumab, that showed early signs of being able to reduce the accumulations of “plaques” in the brain.

Background: in people with Alzheimer’s, a protein called beta-amyloid accumulates in the brain, forming plaques that seem to disrupt brain function. (This hypothesis is not fully proven, but it is widely considered credible.) Thus one way that we might treat Alzheimer’s would be to reduce or eliminate beta-amyloid plaques. That’s what Biogen’s new drug, aducanumab, is intended to do.

Biogen has run two separate trials, called “EMERGE” and “ENGAGE,” to test whether or not aducanumab (ADU for short) worked.

Here’s where things get murky. Back in March of 2019, both trials were halted due to “futility,” because ENGAGE was showing no benefits for the new drug. In EMERGE, the high-dose patients seemed to be getting some benefit, but Biogen had specified ahead of time that if either trial was failing, that would mean the drug wasn’t working. Thus they halted the trials, disappointing as it was.

Fast forward to October of 2019, though, and Biogen had a new story. They went back and looked at a subset of the patients in ENGAGE (the study that had failed), and said that there was a benefit after all, if they looked only at the high-dose patients. This past July, Biogen went to the FDA and applied for approval for ADU.

Just this past week, two sharply conflicting announcements about ADU appeared. First, on Wednesday, the FDA’s internal scientists released a very rosy report, saying that the data from one of the trials were “robust and exceptionally persuasive.” For the second trial, the scientists said that even though the drug initially seemed to fail, a closer review led them to conclude that overall, ADU did provide a benefit.

Biogen shares rose 45% that day, adding $17 billion to the company’s value.

Then on Friday, a panel of independent, external scientists released their conclusions, which resoundingly rejected the drug. The external panel said that the data from the two trials were unconvincing, and they pointed out “multiple red flags” in the analysis.

(Trading in Biogen stock was halted during the Friday meeting, but at the end of the day it was close to the high it reached on Wednesday.)

So what happened? It appears to be a classic case of cherry-picking: when the data from the ENGAGE study didn’t pan out, the company re-analyzed a subset of the data and found a more-positive picture.

That’s not really kosher, as explained by a separate group of scientists in a paper published just a few days ago in the journal Alzheimer’s & Dementia. In this paper, David Knopman and colleagues, from the Mayo Clinic and Stanford Medical School, analyzed the data that Biogen has released from its two trials. (The trials haven’t been published, but some of the findings were released in a publicly available slide presentation that the authors relied upon.)

Knopman and colleagues explain that after two conflicting trials, there simply isn’t enough evidence that ADU works, and they offer alternative explanations for the positive findings. They argue that the best Biogen can do is to “perform another trial of high-dose ADU of at least 78-weeks duration,” which could determine whether the positive results were real or just a coincidence.
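To see why post-hoc subgroup findings deserve that kind of skepticism, here’s a small simulation of my own, using entirely made-up data with no connection to Biogen’s trials. Even when a drug does nothing at all, slicing a trial into enough subgroups will frequently turn up at least one “significant” result purely by chance.

```python
# Illustration (made-up data, not Biogen's): why post-hoc subgroup
# analyses are treacherous.  We simulate trials of a drug with NO true
# effect, slice each trial into 10 subgroups, and count how often at
# least one subgroup looks "statistically significant" by chance alone.
import random
from math import sqrt
from statistics import mean, stdev

random.seed(0)

def has_spurious_subgroup(n_per_arm=500, n_subgroups=10):
    """Return True if any subgroup shows |z| > 1.96 despite no real effect."""
    m = n_per_arm // n_subgroups          # patients per arm in each subgroup
    for _ in range(n_subgroups):
        drug    = [random.gauss(0, 1) for _ in range(m)]
        placebo = [random.gauss(0, 1) for _ in range(m)]
        se = sqrt(stdev(drug)**2 / m + stdev(placebo)**2 / m)
        z = (mean(drug) - mean(placebo)) / se
        if abs(z) > 1.96:                 # nominal p < 0.05
            return True
    return False

trials = 2000
hits = sum(has_spurious_subgroup() for _ in range(trials))
print(f"{hits / trials:.0%} of no-effect trials have a 'significant' subgroup")
# -> roughly 40%: cherry-picking subgroups after the fact can easily
#    manufacture an apparent benefit where none exists.
```

That’s why a fresh, pre-specified trial, not another round of re-slicing the old data, is the only way to settle the question.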

It’s somewhat mysterious that the FDA’s internal panel released their rosy report about ADU on Wednesday, only to be slapped down just two days later by an independent outside panel of scientists. After reading the negative views of the external panel and the analysis in the paper by Knopman and colleagues, I’m very skeptical that ADU has any clinically significant effect. If it had a truly robust effect, it simply wouldn’t be so hard to tease it out.

So we still don’t have a good treatment for Alzheimer’s, but the world still needs one.