Should we all be on statins? (reprise)

Should you be on statins? New guidelines and an online calculator may allow you to answer this question yourself.

Back in 2011, I asked whether we should all be on statins. At the time, it was clear that statins offered benefits for people who had already suffered heart attacks or other serious cardiovascular problems. But for the rest of us, it wasn't clear at all. A number of studies had been published suggesting that millions more people (in the U.S. alone) might benefit from statin therapy, but most of those studies were published by drug companies that made statins. As I wrote at the time, "we need more data from completely unbiased studies."

So has anything changed? Actually, it has. Last year, the U.S. Preventive Services Task Force (USPSTF) reviewed all of the evidence and updated its 2008 recommendations. The evidence now suggests that some people–even those who have never suffered a heart attack–would benefit from statins.

Here's what the current USPSTF recommendations suggest. If you've never had a heart attack and have no history of heart disease, you still might benefit from statins if:

  • you're 40-75 years old,
  • you have one or more "risk factors" for cardiovascular disease (more about this below), AND
  • you have a 10-year risk of cardiovascular disease (CVD) of 7.5%-10%, using a "risk calculator" that I'll link to below.

Now let's look at those risk factors for CVD. There are four of these, and any one of them puts you in the category of people who might benefit from statins: diabetes, high blood pressure (hypertension), smoking, or dyslipidemia.

Most people already know their status for the first three, but "dyslipidemia" needs a bit more explanation. This is simply an unhealthy level of blood cholesterol, defined by the USPSTF as either "an LDL-C level greater than 130 mg/dL or a high-density lipoprotein cholesterol (HDL-C) level less than 40 mg/dL." You can ask your doctor about these numbers, or just look at your cholesterol tests yourself, where they should be clearly marked.
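To make the criteria above concrete, here's a minimal sketch of the eligibility check as described in this article. This is purely illustrative–not medical advice and not an official USPSTF tool; the function names are my own, and the thresholds are the ones quoted above.

```python
# Hypothetical sketch of the USPSTF criteria described above.
# Not medical advice; thresholds are as quoted in the article.

def has_dyslipidemia(ldl_mg_dl, hdl_mg_dl):
    """USPSTF definition: LDL-C > 130 mg/dL or HDL-C < 40 mg/dL."""
    return ldl_mg_dl > 130 or hdl_mg_dl < 40

def might_benefit_from_statins(age, risk_factors, ten_year_cvd_risk):
    """risk_factors: set of strings (e.g. {"smoking", "diabetes"});
    ten_year_cvd_risk: a fraction from the risk calculator, e.g. 0.08 for 8%."""
    return (
        40 <= age <= 75                       # age range in the guideline
        and len(risk_factors) >= 1            # at least one CVD risk factor
        and 0.075 <= ten_year_cvd_risk <= 0.10  # 7.5%-10% ten-year risk
    )

# Example: a 55-year-old smoker whose calculator result is 8%
print(might_benefit_from_statins(55, {"smoking"}, 0.08))  # True
```

Note that this only encodes the decision rule; the 10-year risk itself still has to come from the ACC calculator discussed below.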

For that last item, how do you calculate your 10-year risk of CVD? Most people should ask their doctor, but if you want to see how it's done, the calculator is on the American College of Cardiology site here. It's quite simple, and you can fill it in yourself to see your risk.

A big caveat here, as the USPSTF explains, is that the "risk calculator has been the source of some controversy, as several investigators not involved with its development have found that it overestimates risk when applied to more contemporary US cohorts."

Another problem that I noticed with the risk calculator is that using it for the statin recommendation involves some serious double counting. That's because the risk calculator relies in part on your cholesterol levels and blood pressure, but those same measurements are considered to be separate risk factors for CVD. This puts a lot of weight on cholesterol levels–but on the other hand, statins' biggest effect is to reduce those levels.

The USPSTF is a much more honest broker of statin recommendations than industry-funded drug studies, so we can probably trust these new guidelines. Note that if the risk calculator puts you in the 7.5%-10% range, you will only get a very small benefit from statins–as the USPSTF puts it, "Fewer persons in this population will benefit from the intervention."

Don't rush to go on statins without giving it some serious thought. As Dr. Malcolm Kendrick put it last year (quoted by Dr. Luisa Dillner in The Guardian),
“If I was taking a tablet every day for the rest of my life, I would want to know how long I would have extra to live. If you take statins for five years and you are at higher risk, then you reduce the risk of a heart attack by 36%. But if you rephrase the data, this means on average you will have an extra 4.1 days of life.” 
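Kendrick's point is about the gap between relative and absolute risk, and a little arithmetic makes it vivid. The numbers here are hypothetical (the quote doesn't state the baseline risk behind the 4.1-day figure); this just shows the general shape of the calculation.

```python
# Toy arithmetic with hypothetical numbers: a 36% *relative* risk
# reduction can translate into a modest *absolute* benefit.

baseline_risk = 0.10        # assume a 10% five-year heart-attack risk
relative_reduction = 0.36   # the 36% figure quoted above

treated_risk = baseline_risk * (1 - relative_reduction)  # 6.4%
absolute_reduction = baseline_risk - treated_risk         # 3.6 points
number_needed_to_treat = 1 / absolute_reduction           # ~28 people

print(f"Risk drops from {baseline_risk:.1%} to {treated_risk:.1%}")
print(f"About {number_needed_to_treat:.0f} people would need to take "
      "statins for five years to prevent one heart attack")
```

In other words, most of the people taking the pills get no benefit from them, which is exactly why the per-person payoff can shrink to a few days of extra life on average.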
So no, we shouldn't all be on statins. But until something better comes along (and I hope it will), they are worth considering for anyone who is in a higher-risk group for cardiovascular disease.

Clever food hacks from Cornell Food Lab might all be fake

Have you heard that serving your food on smaller plates will make you eat less? I know I have. I even bought smaller plates for our kitchen when I first heard about that study, which was published in 2011.

And did you know that men eat more when other people are watching? Women, though, do exactly the opposite: they eat about one-third less when spectators are present. Perhaps guys should eat alone if they're trying to lose weight.

Or how about this nifty idea: kids will eat more fruits and vegetables at school if the cafeteria labels them with cool-sounding names, like "x-ray vision carrots." Sounds like a great way to get kids to eat healthier foods.

Or this: you'll eat less if you serve food on plates that are different colors from the food. If the plate is the same color, the food blends in and it looks like you've got less on your plate.

And be sure to keep a bowl of fruit on your counter, because people who do that have lower BMIs.

Hang on a minute. All of the tips I just described might be wrong. The studies that support these clever-sounding food hacks all come from Cornell scientist Brian Wansink, whose research has come under withering criticism over the past year.

Wansink is a professor at Cornell University's College of Business, where he runs the Food and Brand Lab. Wansink has become famous for his "kitchen hacks" and healthy-eating tips, which have been featured by numerous media outlets, including the Rachael Ray show, Buzzfeed, USA Today, Mother Jones, and more.

Last week, Stephanie Lee at Buzzfeed wrote a lengthy exposé of Wansink's work, based on published critiques as well as internal emails that Buzzfeed obtained through a FOIA request. She called his work "bogus food science" and pointed out that
"a $22 million federally funded program that pushes healthy-eating strategies in almost 30,000 schools, is partly based on studies that contained flawed — or even missing — data."
Let's look at some of the clever food hacks I described at the top of this article. That study about labeling food with attractive names like "x-ray vision carrots"? Just last week, it was retracted and replaced by JAMA Pediatrics because of multiple serious problems with the data reporting and the statistical analysis.

The replacement supposedly fixes the problems. But wait a second: just a few days after it appeared, scientist Nick Brown went through it and found even more problems, including duplicated data and data that don't match what the (revised) methods describe.

How about the studies that showed people eat more food when others are watching? One of them, which found that men ate more pizza when women were watching, came under scrutiny after Wansink himself wrote a blog post describing his methods. Basically, when the data didn't support his initial hypothesis, he told his student to go back and try another idea, and then another, and another–until something came up positive.

This is a classic example of p-hacking, or HARKing (hypothesizing after results are known), and it's a big no-no. Statistician Andrew Gelman took notice of this, and after looking at four of Wansink's papers, concluded:
"Brian Wansink refuses to let failure be an option. If he has cool data, he keeps going at it until he finds something, then he publishes, publishes, publishes."
Ouch. That is not a compliment.
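Why is "keep going until something comes up positive" such a problem? A toy simulation (mine, not from any of the articles discussed here) shows what happens to the false-positive rate when you test many hypotheses on pure noise and report whichever one "works." For simplicity the tries are modeled as independent tests; real p-hacking on a single dataset is messier, but the inflation is the same in spirit.

```python
# Toy simulation: with pure-noise data, retrying hypotheses until one is
# "significant" pushes the false-positive rate far above the nominal 5%.
import math
import random
import statistics

def noise_test(n=30):
    """Compare two pure-noise groups; return True if roughly p < 0.05."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(z) > 1.96  # approximate 5% significance threshold

random.seed(0)
trials = 2000
tries_per_study = 10  # hypotheses tested before giving up
hits = sum(
    any(noise_test() for _ in range(tries_per_study))
    for _ in range(trials)
)
print(f"False-positive rate after {tries_per_study} tries: {hits / trials:.0%}")
```

With ten independent tries the chance of at least one spurious "finding" is about 1 − 0.95^10 ≈ 40%, even though every single dataset is noise. That is the statistical machinery behind Gelman's complaint.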

Soon after Gelman's piece, scientists Jordan Anaya, Tim van der Zee, and Nick Brown examined four of Wansink's papers and found 150 inconsistencies, which they published in July in a paper titled "Statistical Heartburn: An attempt to digest four pizza publications from the Cornell Food and Brand Lab." Anaya subsequently found errors in 6 more of Wansink's papers.

It doesn't stop there. In a new preprint called "Statistical infarction," Anaya, van der Zee and Brown say they've now found problems with 45 papers from Wansink's lab. Their preprint gives all the details.

New York Magazine's Jesse Singal, who called Wansink's work "really shoddy research," concluded that
"Until Wansink can explain exactly what happened, no one should trust anything that comes out of his lab."
In response to these and other stories, Cornell University issued a statement in April about Wansink's work, saying they had investigated and concluded this was "not scientific misconduct," but that Cornell had "established a process in which Professor Wansink would engage external statistical experts" to review many of the papers that appeared to have flaws.

And there's more. Retraction Watch lists 14 papers of Wansink's that were either retracted or had other notices of concern. Most scientists spend their entire careers without a single retraction. One retraction can be explained, and maybe two or even three, but 14? That's a huge credibility problem: I wouldn't trust any paper coming out of a lab with a record like that.

But how about those clever-seeming food ideas I listed at the top of this article? They all sound plausible–and they might all be true. The problem is that the science supporting them is deeply flawed, so we just don't know.

Finally, an important note: Brian Wansink is a Professor of Marketing (not science) in Cornell's College of Business. He is not associated with Cornell's outstanding Food Science Department, and I don't think his sloppy methods should reflect upon their work. I can only imagine what the faculty in that department think about all this.