Field of Science

Why are we growing corn to fuel our cars? Three reasons why ethanol is a bad idea.

Most of us are driving around right now in cars powered by a combination of gasoline and ethanol. Ethanol is a fuel alternative produced mostly from corn, and it has been touted for years as a cleaner, carbon-neutral alternative to gas.

The problem is that ethanol’s benefits have been greatly exaggerated, leading to Congressional regulations that required ever-increasing amounts of ethanol in our gasoline supply. The government requirement goes back to 2005, when gas prices were much higher and the U.S. was in the midst of the Iraq war. Ethanol was supposed to be a clean way to reduce our dependence on foreign oil. The growing mandate for ethanol has instead created an enormous, artificial demand that has had unintended consequences, many of them bad.

Some background: Congress has long required automakers to meet fuel economy (“CAFE”) standards for their cars and light trucks. To encourage ever-greater use of ethanol, Congress went further and created the Renewable Fuel Standard in 2005, then expanded it in 2007. As a result of that law, this year the EPA will require refiners to blend 18.1 billion gallons of renewable fuel, most of it corn ethanol, into our gasoline.

Politicians still love ethanol. In the 2016 presidential campaign, several candidates came out in support of continuing the corn-based fuel program, hoping this position would win them votes in the Iowa caucuses. Iowa is a big corn state.

Unfortunately for the rest of us, mandating the use of ethanol is a terrible policy. Here are three reasons why.

1. Ethanol lowers your gas mileage–a lot. Ethanol has only about two-thirds the energy content of gasoline, meaning it simply cannot provide the same amount of power per gallon (or liter) as gas. E85 fuel, a blend of 85% ethanol and 15% gasoline, is widely available, and some gas stations now offer no alternative. Consumer Reports put E85 to the test and found that highway mileage decreased by 29% and city mileage by 22%. Car and Driver ran their own tests and found a 30% drop in mileage on E85. According to the Union of Concerned Scientists, nearly all gas sold in the US today contains 10% ethanol–far less than E85, but still enough to lower fuel efficiency compared with straight gasoline.
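
A rough back-of-the-envelope calculation, using that two-thirds figure, shows why the measured numbers make sense (actual energy content varies a bit by blend, so treat these as estimates):

  E85: 0.85 × (2/3) + 0.15 × 1.0 ≈ 0.72, or about 28% less energy per gallon
  E10: 0.10 × (2/3) + 0.90 × 1.0 ≈ 0.97, or about 3% less energy per gallon

The 28% figure lines up with the 22% to 30% mileage drops in the road tests above, and while the penalty for ordinary E10 is far smaller, you pay it on nearly every gallon you buy.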

Making things worse, ethanol attracts water and is more corrosive to some metals and rubber than gasoline. So it's bad for your car.

2. Using ethanol doesn’t reduce carbon emissions. The main argument for ethanol is that the carbon it contains was pulled out of the atmosphere only recently, by the growing corn, so burning ethanol simply returns that carbon and is therefore roughly carbon neutral. Oil, by contrast, has lain in the ground for millions of years, so burning it adds carbon to the atmosphere that had long been locked away.

This argument makes sense, but only in a very narrow context. In an article published in Science in 2008, Timothy Searchinger and colleagues pointed out that previous analyses
“failed to count the carbon emissions that occur as farmers worldwide respond to higher prices and convert forest and grassland to new cropland to replace the grain (or cropland) diverted to biofuels.” 
When the scientists accounted for these land-use changes, they found that using corn to produce ethanol would roughly double greenhouse emissions over a 30-year period. Switchgrass is only slightly better, increasing emissions by 50%. The Union of Concerned Scientists explains that “sustainable production is possible” only if we stop making ethanol from corn.

Admittedly this is a complex topic, but it seems that ethanol-from-corn simply doesn't reduce carbon emissions. Thus the entire justification for using ethanol to fuel our cars is unsound.

3. Increasing fuel efficiency means we’ll never be able to meet Congress’s mandated levels of ethanol usage, not unless we sacrifice even more gas mileage. Automakers have made great progress in producing more fuel-efficient cars, and the growing electric car market (Tesla!) means that we’re using less and less fuel each year. This is terrific for reducing carbon emissions, but it makes Congress’s original mandate to use ever more ethanol far harder to satisfy.

What happened was that back in 2005, Congress told us how to solve a problem (carbon emissions from our cars), instead of just encouraging us to solve it using innovative new ideas. Corn producers and their government representatives—governors, Senators, Representatives—all got behind the ethanol “solution” because they saw increased profits in it. Now we are stuck with a non-solution that, as the NY Times recently put it, is “a boon for Iowa and a boondoggle to the rest of the country.” It’s long past time to end the ethanol mandate.

Long term antibiotic use for Lyme disease doesn't work, according to new study

Fifteen years. That's how long scientists have been trying to prove a negative: that "chronic Lyme disease" does not exist, and that long-term antibiotic usage does not help with symptoms that include joint pain and fatigue. This past week, a study in the New England Journal of Medicine showed that long-term antibiotic usage–the favorite treatment prescribed by doctors who call themselves "Lyme literate"–is a failure. Will it convince any of those doctors to change their practice? I doubt it.

Lyme disease is a serious illness, caused by bacteria that are transmitted through tick bites. Although the disease has been infecting humans for centuries, it got its name relatively recently, in 1975, when two Yale doctors, Stephen Malawista and Allen Steere, investigated a cluster of illnesses near the town of Lyme, Connecticut. The symptoms included joint pain that resembles arthritis, so much so that Malawista initially called it "Lyme arthritis." A few years later, biologist Willy Burgdorfer discovered the true cause, and the bacterium Borrelia burgdorferi was named after him. As deer have spread through modern suburbs, the ticks traveling with them have spread Lyme disease far and wide; the CDC reported over 36,000 cases in the U.S. in 2013, and it estimates the true number may be closer to 300,000, ten times the number actually reported to health authorities. Lyme is also common in Europe.

Luckily for most of us, a brief course of antibiotics usually cures people completely of Lyme disease. For some people, though, symptoms including fatigue and joint pain can linger for up to six months. The cause of these longer-term symptoms remains a mystery, and is an active area of current research. One possible explanation from the CDC is that
"the lingering symptoms are the result of residual damage to tissues and the immune system that occurred during the infection."
Unfortunately, the lack of a scientific explanation has opened the door to wildly speculative treatments, based on little or no evidence, offered by doctors who think they know the answer. These doctors, some of whom have adopted the label "Lyme literate," insist that their patients suffer from what they call Chronic Lyme Disease. Among other things, they've formed an association, the International Lyme and Associated Diseases Society (ILADS), which claims that:
"Most cases of chronic Lyme disease require an extended course of antibiotic therapy to achieve symptomatic relief" and "many patients with chronic Lyme disease require prolonged treatment until the patient is symptom-free."
Not only do these treatments cost far more than the two-week course recommended by the Infectious Diseases Society of America, but they also have the potential to cause harm: it's simply not good for you to be on antibiotics for months or years. Indeed, one of the first studies to look at the efficacy of long-term antibiotic use (published in 2001 by Mark Klempner and colleagues) explained its motivation as follows:
"Case reports and uncontrolled trials describe success with prolonged antibiotic therapy, often with a recurrence of the symptoms after the discontinuation of therapy. In view of the substantial morbidity and even death associated with prolonged parenteral antibiotic treatment of Lyme disease, it is important to determine the efficacy of such therapy."
"Substantial morbitity and even death": clearly, long-term antibiotic treatment is not something to be prescribed lightly. The 2001 study found no benefit from long-term versus short-term antibiotic usage for the treatment of Lyme disease.

Despite the science, some doctors persist in prescribing–against the advice of professional societies–long-term courses of antibiotics. Because the practice continues, a European group conducted a new study of long-term antibiotic use in patients who claim to have persistent symptoms from Lyme disease. Anneleen Berende and colleagues reported the results of this well-designed double-blind, placebo-controlled study in the New England Journal of Medicine just last week. Their findings matched those of the 2001 study: long term antibiotic use has no benefits for patients.

The study had three arms: a placebo group and two treatment groups, each given a different antibiotic. Everyone started with a two-week course of real antibiotics, followed by 12 more weeks of either antibiotics or placebo. The pills were created just for this study to look identical, so that neither the doctors nor the patients knew who was getting the placebo.

In an editorial accompanying the article, my Hopkins colleagues Michael Melia and Paul Auwaerter explain that the take-home message is that
"Patients with subjective, vexing symptoms attributed to Lyme disease should not anticipate that even longer courses of antibiotics will produce relief, a finding that is in concert with results from previous trials."
I doubt that "Lyme literate" doctors will accept the latest results and stop prescribing long-term antibiotic use; their websites indicate that they already know that they are right. I hope, though, that patients will start to question doctors who put them on long-term, possibly harmful antibiotic regimens that don't provide any benefit.

(For more information, see this excellent video by Dr. Auwaerter about Lyme disease and its treatment.)

This could be the end of youth football

The end of a lawsuit this month might also signal the end of youth football. The plaintiff in the suit, Debra Pyke, claimed that repeated head injuries her son suffered in his youth football league caused chronic traumatic encephalopathy, or CTE. Just a couple of weeks ago, the youth football league Pop Warner settled the $5 million case, reportedly for under $2 million.

Debra Pyke’s son, Joseph Chernach, committed suicide in 2012 at the age of 25. He had played youth football for four years, from the ages of 11 to 14, and her lawsuit claims that he suffered from CTE as a result. CTE has been in the news a great deal of late, after it was discovered by Dr. Bennet Omalu to be the cause of early dementia in an alarming number of NFL football players. (Actor Will Smith plays Omalu in the recently released movie, Concussion.)

Pyke’s lawsuit holds nothing back in its claims about the damage of football. She argues that tackle football is a “war game,” and provides copious examples to back it up, including these:
“Football is not a contact sport. It’s a *collision* sport. Dancing is a good example of a contact sport.” (from former Michigan State coach Duffy Daugherty)
“Pro football is like nuclear warfare. There are no winners, only survivors.” (from Frank Gifford, former NFL player and long-time football broadcaster)
The lawsuit includes multiple arguments about why children are more vulnerable to head injuries. Their brains are still developing, and have less myelin to protect their brain cells from damage. Until about the age of 14, children's heads are disproportionately large and their necks are weaker than adults’ necks, making them prone to greater rotational forces when hit in the head.

Just looking at pictures of young boys wearing football helmets makes many of these points obvious. The fact that young boys are intentionally crashing into one another, often leading with their heads, should give any parent cause for concern. Slate writer John Culhane described this as a “bobblehead effect” in which “their brains crash back and forth in their skulls.”

When Pyke’s lawsuit was filed last year, Culhane called it “The Lawsuit That Could Threaten Kids’ Football.” Despite the title of his article, he concluded that
“even in the unlikely event that she [Pyke] wins her case, youth football isn’t going anywhere. But a finding that Pop Warner is carrying on an abnormally dangerous activity would surely drive up the cost of insurance, and therefore make the sport more expensive for participants. It might also make more parents question whether they want to risk their children’s safety.” 
The “unlikely event” didn’t happen, but only because the league settled the suit.

Pop Warner football has a dedicated safety section on its website. It includes pages devoted to its concussion policy, along with articles arguing that “rewards outweigh risks” and that “the relationship between developing CTE and playing football remains unclear. The link between the number of concussions a person sustains and the risk of developing CTE is also uncertain.”

Using words such as “unclear” and “uncertain” strikes me as a classic example of sowing confusion rather than admitting the obvious: youth football can be dangerous. Consider the 2012 “pee wee” league game in which five boys suffered concussions. The coaches were suspended afterwards, but the damage had already been done. At the time, the NY Times pointed out that “Pop Warner has done more than perhaps any other organization to try to protect young players from head injuries,” but the fact is that football is a violent contact sport.

(Pop Warner did not respond to my request for comment.)

As the New York Post reported when the settlement was announced, Pop Warner had $2 million in liability coverage for players in Wisconsin at the time of the lawsuit, in 2012. They now carry $1 million per player and allow individual chapters to add another $1 million.

Think about that for a second. Why would you want your son to play in a youth sports league where the league feels the need to carry a $2 million insurance policy on every player?

Parents: you and your kids have other options besides football. Youth sports are a great way for kids to exercise, have fun, make new friends, and learn the value of teamwork. Your kids can choose soccer, tennis, baseball, or basketball, all of which have large networks of youth leagues. Many regions of the country have other sports as well. There’s no reason to suit up a child with a helmet on his still-growing head and send him out on a field to be knocked around and possibly concussed.

Joseph Chernach's death was a tragedy, and his family's suffering will never go away. We're now learning that even powerfully built grown men suffer permanent injuries on the football field. There's no reason to expose children or teenagers to similar risks.

Bad news flash: scientists did not cure autism, cancer, or Alzheimer's

This week I'm calling out some recent headlines about medical "breakthroughs" that were wildly misleading. Even when the science itself is good, bad reporting raises false hopes and eventually undermines the public's confidence. At some point, people will just no longer believe the headlines claiming that someone has once again cured cancer.

My first example of bad news is from a couple of weeks ago. I was struck by a headline that showed up in one of my news feeds that read
"Neuroscientists reverse autism symptoms"
Wow, I thought. This would be a real breakthrough if it were true. I traced the headline back to the MIT press office, where I then saw the subheading: "turning on a gene later in life can restore typical behavior in mice." Uh oh: extrapolating any treatment from mice to humans is fraught with problems, and studying a complex behavioral disorder like autism is even more difficult.

The HuffPo fell for it, though. Their headline read "Some Autism Symptoms May Be Reversed By Gene Editing, Scientists Suggest". So did the Daily Mail, which went with this headline:
"Reversing autism 'at the flick of a switch': 'Turning on' a single gene in mice has been found to reduce autistic behaviours"
At least they mentioned mice in the headline. But then they wrote that “scientists have announced a major breakthrough in treating the genetic cause of the spectral condition." Sorry, but there's no new treatment available. (Never mind the poor writing that used "spectral condition" to describe autism spectrum disorder.)

What did the researchers actually do? They studied a gene (see the paper here) that is already known to be associated with autism in humans–though only about 1% of cases–and that has already been shown to affect the behavior of mice as well. They created a means of "fixing" this gene in mice, and showed that it can restore some of the mouse behaviors to normal. My assessment: this is nice incremental work on a gene that seems to affect behavior in both mice and humans. I don't see it leading to any advances in the treatment of human autism for at least a decade, if ever.

So where are we on reversing autism? Probably no closer than we were before this report appeared. I give the science a B, but the science news gets an F.

A second, more recent bit of "bad news" appeared just a few days ago, when CNN reported that
"Breakthrough in cancer research could spawn new treatments"
Sigh. I can't count the number of times I've read that cancer is about to be cured, only to learn that no new treatment exists, and nothing is even close. So what is this new breakthrough?

CNN reported that
"Researchers discovered that even though cancer cells mutate wildly within a person's body, the cancer cells within each patient also have common mutations–ones that could be isolated and fought off by certain immune cells."
This didn't sound like news to me. Genome scientists have sequenced the DNA of cancer cells in exquisite detail in recent years, and we already know that cancer cells share common mutations. The paper itself, which appeared in Science on March 3, revealed a far less dramatic story.

First of all, the new research applies only to lung cancer. It's a highly technical result that showed that certain immune cells in the body could–just maybe–fight off a particular type of lung cancer. There's no new treatment here, and there won't be for many years, if ever. The science here is pretty good, so I'm giving it an A-, but the reporting over-hyped its impact. Because they included the cautionary "could spawn new treatments", I'll give CNN a C.

The third bit of news is older, but it's still making the rounds on social media. The headline is a real attention grabber:
"New Alzheimer’s treatment fully restores memory function"
If this is true, we're talking Nobel Prize material. Alzheimer's is a devastating condition that affects a large percentage of elderly people, and no existing treatment can stop or reverse it. But when I looked up the paper itself, from March 2015, I discovered that it's a study in mice, not humans. The scientists used a mouse that's been genetically modified to have brain defects that resemble some signs of Alzheimer's. They showed that they could use ultrasound–actually this part is pretty cool and quite exciting–to reduce some of the brain defects in the mice.

Will this lead to any human treatments? Maybe, but there are numerous problems and caveats, as neuroscientist Matthew Zabel pointed out soon after the study appeared. And the effect, even in mice, was rather small: it's wildly inaccurate to claim that it "fully restores memory function." I give the science an A, but sciencealert.com gets an F for that headline.

In all of these cases, the scientists involved are at least a little bit (if not a lot) complicit in the over-exuberant headlines. I understand their eagerness to call attention to their work, but by promising too much, they risk disappointing the public when no cures emerge one, two, five, or even ten years later. Journalists and scientists need to work harder to come up with headlines that excite people about the potential of science without making it seem that we've already cured humankind's most devastating diseases.

Raw food is rich in bacteria, not nutrients

The raw food crazies are making themselves sick.

There's a thing called the Raw Food Movement that has been growing in popularity in recent years. Proponents argue that it's far healthier than our usual (human) diet of cooked foods, claiming that cooking removes many of the natural enzymes that make food nutritious. They also believe that cooking creates harmful toxins.

What's really happening, though, is that raw foodies are putting themselves at risk of serious bacterial infections. Just last week, we learned that a Salmonella outbreak tied to raw food has sickened people in 15 states so far. The CDC reports that the outbreak is linked to Garden of Life's RAW Meal Organic Shakes, which come in chocolate, vanilla, and vanilla chai flavors. The FDA issued a recall and warned that "persons infected with Salmonella often experience fever, diarrhea (which may be bloody), nausea, vomiting and abdominal pain."

According to the CDC, no one has died from any of these Salmonella infections, although 4 people have been hospitalized.

It's not clear why raw food is so trendy, other than the obsession of some people with everything "natural."

Natural or not, cooking is one of mankind's greatest inventions. It allows us to spend far less time eating, because cooked food is much easier to chew and digest. We extract more nutrients from cooked food–not fewer, despite what the raw foodies claim. Chimpanzees, our closest relatives, spend up to 50% of their waking hours eating, because they subsist entirely on raw food. A 2011 study by Chris Organ and colleagues at Harvard University pointed out that
"The ancestors of modern humans who invented food processing (including cooking) gained critical advantages in survival and fitness through increased caloric intake."
Cooking our food has another huge advantage as well: it kills harmful bacteria and viruses. The current Salmonella outbreak could easily have been avoided if people had simply cooked their food instead of consuming raw shakes.

Raw foodies, though, seem to live in Opposite Land, where food science gets turned on its head. The website RawFoodLife.com claims that
"Science now proves that cooking not only destroys nutrition and enzymes but chemically changes foods from the substances needed for health into acid-forming toxins, free-radicals and poisons that destroy our health!"
Er, no. Science proves nothing of the sort (nor does that website provide any citations to scientific articles to back up its claim). As Christopher Wanjek explained ten years ago at LiveScience,
"plant enzymes, which raw dieters wish to preserve, are largely mashed up with other proteins and rendered useless by acids in the stomach. Not cooking them doesn't save them from this fate. Anyway, the plant enzymes were for the plants ... they are not needed for human digestion. Human digestive enzymes are used for human digestion."
Raw foodies also love raw milk, another dangerous trend, which has sickened thousands of people in the U.S. over the past decade. That topic deserves another column all to itself, but for now, suffice it to say that one of the greatest scientists of the 19th century, Louis Pasteur, discovered that heating milk briefly can kill a host of dangerous pathogens. Pasteurization, which is named for him, has been rightly credited with saving millions of lives. A few years ago, the Royal Society named pasteurized milk the 2nd greatest invention in the history of food (after refrigeration).

Obviously some foods, fruits in particular, are generally eaten raw, and fruits are indeed very healthy. But don't be fooled into thinking that cooking somehow makes food bad for you: it doesn't. Cooked food is easier to digest, more nutritious, and to most of us, pretty darned tasty.

Fish really is brain food

Everyone has heard that eating fish is good for the brain. This notion goes back at least a century; the famous humorist and novelist P.G. Wodehouse often mentioned it in his books. In one scene, after Jeeves (the valet) describes a clever scheme to escape a ticklish problem, Bertie Wooster reacts:
I stared at the man.
'How many tins of sardines did you eat, Jeeves?'
'None, sir. I am not fond of sardines.'
'You mean, you thought of this great, this ripe, this amazing scheme entirely without the impetus given to the brain by fish?'
'Yes, sir.' [From Very Good, Jeeves, (c) 1930 by P.G. Wodehouse]
A century ago, the evidence that fish is brain food was virtually nonexistent. Researchers have been looking at this question ever since, and the evidence has been mixed. Even if fish is good for the brain, the mercury content in some fish might have the opposite effect.

A new study that appeared last week in JAMA answers this question: fish is indeed good for the brain. More precisely, eating fish regularly was associated with a reduced risk of Alzheimer's disease. This benefit occurred despite the fact that people who ate more fish did have higher levels of mercury in the brain. Apparently, the levels of mercury were too low to cause harm, and the benefits of eating fish easily outweighed any risks.

The new study, by Martha Clare Morris and colleagues at Rush University Medical Center in Chicago, looked at 554 Chicago residents, all part of a long-term aging study, who died over a ten-year period. The scientists conducted autopsies, looking directly at the brain for physiological changes such as neuritic plaques, which are signs of Alzheimer's or other diseases. They used questionnaire data, collected for everyone in the study, to measure how much fish people had been eating–an imperfect way to measure eating habits, but often the only realistic way to gather this information.

Here's what they found: people who ate seafood at least once per week had lower levels of three different physiological signs of Alzheimer's, but only if they carried a genetic risk factor known as APOE ε4, which itself increases the risk of Alzheimer's. (None of the people in the study had dementia when they first enrolled, starting in 1997.) Somehow, then, eating fish seems to counteract the effects of this harmful gene variant.

Bad news for fish oil supplement makers: the study found that "fish oil supplementation had no statistically significant correlation with any neuropathologic marker." In other words, people who took supplements got no benefit. You just have to eat fish.

This isn't the first study to show a benefit to brain health from eating fish. The authors cite 13 previous studies that reported "protective relations between seafood consumption and n-3 fatty acids with cognitive decline and incident dementia." The JAMA study, though, is the first to examine mercury levels, diet, and the brain changes associated with Alzheimer's all at once. It's reassuring to learn that despite the increased mercury caused (probably) by eating more fish, the overall effect is beneficial.

The authors also pointed out that as we age, our brains lose DHA, a critical lipid in the brain, and that therefore "fish consumption may be more beneficial with older age." The FDA already recommends that pregnant women and young children eat more fish for its nutritional benefits. Now there's evidence that older people should eat fish regularly too.

So it seems that Wodehouse's character Bertie Wooster might have been right: fish really is brain food.

NEJM editorial calls data scientists "research parasites." Can Joe Biden fix this?

Vice President Joe Biden recently called for a "moonshot" to cure cancer, which President Obama announced in his State of the Union address last week. Motivated by the tragic death of his son Beau, who died last year of brain cancer, Biden says he will devote his remaining time in office, and many years after, to helping fight cancer. On his VP blog, he writes that he wants to do two things:

  1. Increase resources — both private and public — to fight cancer.
  2. Break down silos and bring all the cancer fighters together — to work together, share information, and end cancer as we know it.

I'm 100% behind the Vice President on these efforts, and I hope he succeeds beyond his wildest ambitions. But he might discover, paradoxically, that raising money–his first goal–is easy compared to the challenge of getting scientists to share data.

Exhibit A is an editorial titled "Data Sharing" that appeared in last week's New England Journal of Medicine, written by Dan Longo and Jeffrey Drazen, the deputy editor and editor-in-chief of the journal. Drazen and Longo wrote that scientists who wish to use other people's data to make new discoveries are "research parasites." Or, to be more precise, they wrote that "some front-line researchers" (none of whom are named) have this view. They also argued that "someone not involved in the generation and collection of the data may not understand the choices made in defining the parameters" and thus has no business re-analyzing the data.

The condescension implicit in this statement is deeply troubling. Drazen and Longo are saying, essentially, that only the people who originally collect a data set can truly understand it, and anyone else who wants to take a look is a parasite.

The editorial has led to a firestorm on social media. For example, Nobel Laureate Barry Marshall tweeted that
"Plenty of Nobel prizes came from a new look at other people’s data."
UC Davis professor Jonathan Eisen tweeted that the "editorial by @nejm is simply deranged," and a new Twitter account under the name ResearchParasite quickly drew many followers.

I asked Dr. Drazen if he really meant to imply that scientists who use other people's data are parasites. He and I spoke on the phone, and he emphasized that he's a strong supporter of data sharing, and that he's been traveling the country promoting a new policy to share the information from clinical trials (something that rarely happens). Just a few days ago, he and other medical journal editors proposed a new policy on clinical trial data sharing, a policy that (while not perfect) would be a big step forward.

So why, I asked him, did he use the harshly negative phrase "research parasites"? Dr. Drazen pointed out that he had heard this term from others, which is why he enclosed the phrase in quotation marks in his editorial (true). He shared with me an update that will appear in NEJM this week, in which he and Longo will explain further; however, the journal asked that I not quote from it.

I was relieved to hear that Dr. Drazen and his NEJM colleagues are supportive of data sharing, and that they are implementing new, more open policies on clinical trial data sharing for the journal. I asked him if he would also state directly that he did not believe the phrase "research parasites" was accurate or appropriate. He declined to comment, though he reiterated the point that this phrase came from others, not from him or Dr. Longo.

So the attitude is clearly out there. Indeed, it's not that unusual: I have encountered similar attitudes many times in my own career, although I should quickly add that it is far from universal.

It's a simple fact today that biomedical researchers (take note, Mr. Vice President) rarely share their data with others. Unless a funding agency or a journal in which they wish to publish requires them to share, they will sit on their data forever. I've personally been involved in projects where the various participants–funded by NIH or other federal agencies–refuse to share data even with other groups in the same consortium. For example (and this is just one of thousands I could point to), the raw data behind this clinical exome sequencing study, led by Baylor College of Medicine and published in 2013 in NEJM, is not available. The data collected by the famous Framingham Heart Study, running since 1948, has been locked up by Boston University scientists for half a century, and only recently (after considerable pressure from their funders) have they agreed to let others take a look at small pieces of the data, if they beg hard enough.

Let's go back to Vice President Biden's blog, where he wrote:
"We’ll encourage leading cancer centers to reach unprecedented levels of cooperation, so we can learn more about this terrible disease and how to stop it in its tracks.... Data and technology innovators can play a role in revolutionizing how medical and research data is shared and used to reach new breakthroughs."

Again, I'm 100% behind the VP here. Biden is already meeting with cancer researchers to see what he can do to accomplish these goals, and I'm sure they will tell him what he wants to hear. In contrast, let's see what Drazen and Longo wrote in their NEJM editorial:
"...a new class of research person will emerge — people who use another group’s data for their own ends, possibly stealing from the research productivity planned by the data gatherers, or even use the data to try to disprove what the original investigators had posited. There is concern among some front-line researchers that the system will be taken over by what some researchers have characterized as “research parasites.”"
Shocking! If you share your data, someone might try to disprove your results! Could it be that a published result relies on misinterpreted data and is wrong? It took me less than a minute on Retraction Watch to find multiple articles retracted by the NEJM itself, including some that were retracted because the original data could not be found.

Disproving a claim using the same data is what reproducibility is all about, and this is one of the most important reasons that data needs to be shared. After all, if someone has distorted their data in order to reach a conclusion that isn't really justified, we need someone else–someone not invested in proving the same result–to re-analyze the data using independent methods. This is how science corrects itself.

These sentiments of the unnamed "front-line researchers" quoted by Drazen and Longo reveal the dangerously arrogant assumption that only they understand the data, and that no one should question their findings. And then there's the concern that another scientist might discover something the original group missed. In what view of reality is that "stealing from the research productivity" of that group?

The phrase "research parasites" also reflects the view of some scientists that the data they collect is their property, despite the fact that their research is (frequently) funded by the public. It's time for the funding agencies to set some new ground rules: if the government funds a study, then we all own the data. Scientists who don't like the rule can find another source of funding (and believe me, they might grumble and complain, but they will do what their funders demand).

One final note: a quick scan of recent articles in the NEJM reveals that, not surprisingly, many of them rely on the human genome sequence. Did any of those authors contact the "data gatherers" to get permission to use the genome in their work? Did they offer to include the human genome sequencers as co-authors on their papers, a step that Drazen and Longo recommend? Of course not–and they shouldn't. When we publish papers, we cite the sources of our data, but we don't ask their permission nor do we include them as co-authors. Citations are the currency of modern science.

So here's some advice to Vice President Biden: don't just talk to scientists and urge them to collaborate. They'll all agree, and tell you wonderful things about their numerous collaborations, but once you leave the room, they'll go back to business as usual. If you really want to change the culture, Mr. Vice President, change the rules.