T. rex protein degrades further

An article in Science this week casts fresh doubt on the Tyrannosaurus rex "proteins" that were extracted from a 68-million-year-old fossil. The new report by Pavel Pevzner and colleagues argues that the T. rex peptides (protein fragments) represent statistical artifacts rather than genuine T. rex proteins. The original study by Mary Schweitzer, John Asara, and colleagues, which appeared in Science in 2007, claimed that the authors had discovered 7 peptides from T. rex, all from the collagen protein, which is the most common protein in bone. The story, which received tremendous publicity at the time, has been slowly falling apart ever since. Asara and Schweitzer continue to defend it, including a response in Science this week, but let's look at how the story is collapsing:
  1. After the original report, Pevzner (privately) pointed out statistical problems, and the authors revised their findings, admitting in a letter to Science in September 2007 that one of the peptide fragments was a statistical artifact. One down, six remaining.
  2. In January 2008, Science published a new Technical Comment in which 27 authors (Buckley et al.) used a standard set of authentication tests developed for ancient DNA, and reported that the T. rex sample failed those tests. (Another sample from mastodon, also reported by Asara but 100 times younger, passed the same tests.)
  3. In July 2008, Thomas Kaye and colleagues published a report that re-examined the microscopic evidence for the T. rex "soft tissue". The original findings by Schweitzer depended on this soft "tissue" being original T. rex organic material. Kaye et al. report that the soft material was a bacterial biofilm - not original material at all. They also report carbon dating of the biofilm showing it to be modern, not ancient.
  4. Pevzner et al. report this week that the peptide mass spectrometry evidence - which Asara and Schweitzer repeatedly used to defend their results against the earlier criticism - is also flawed. One way to resolve this, Pevzner points out, is to release the mass spec data, which is a common practice in that community. This would allow others to re-interpret the data and test more rigorously for statistical artifacts. However, Asara and Schweitzer refuse to release their data. Instead, they wrote another response that simply gives more details about how they ran the software to search their spectra against a peptide database, but doesn't really answer Pevzner's questions.
This ongoing controversy reveals some of the worst behavior I've seen on the part of Science. (Disclaimer: I've published repeatedly in Science myself.) They published two articles by Schweitzer and Asara, grabbing all the publicity they could, although they knew about the problems. The second article appeared this past spring - long after they received Pevzner's article (which was received on January 7, 2008, according to the Science website) and after they received and published the criticism by Buckley et al. What I find most reprehensible on their part is that they published both the Buckley et al. and the Pevzner et al. critiques as "Technical Comments" - which means they appear online only, not in the print edition. Both the Asara and Schweitzer articles, by contrast, appeared in the print edition, which means they will be read more widely.

If Science truly cared about getting this story right, they would publish the critiques just as prominently as the original article. It seems that Science is eager to get publicity for a "discovery", but not so eager for publicity when it turns out the discovery is false. Yes, it's true that they did publish the critiques, but they should have done better.

Finally, I recommend Rex Dalton's story in Nature on this controversy, which does a good job of summing it up, with links to all the articles. We'll see what happens next, but it appears that Schweitzer and Asara will keep defending their claims. The mounting evidence seems to show that they were wrong - wrong about the soft tissue, wrong about the mass spec identifications, wrong about the age of the sample, and wrong to continue to refuse to release their data.

Associated Press turns a story on critical care into commentary on religion

A news story from the Associated Press (and carried by CNN) illustrates how the media can focus on whatever they want to in a scientific or medical article, and create a headline that doesn't seem to match the article. Here's the CNN headline: "Survey: Many believe in divine intervention". The URL is even more telling: god.vs.doctors. The first sentence reads: "When it comes to saving lives, God trumps doctors for many Americans."

Well, this isn't too surprising - Americans tend to be religious, and the results were gathered by a survey, which might have biases. But the surprising element of this story is what the original article in the journal Archives of Surgery really contains. It is a survey by three doctors titled "Trauma Death: Views of the public and trauma professionals on death and dying from injuries." The main result is that a majority of the public prefer palliative care (i.e., making the patient as comfortable as possible) when "aggressive critical care would not be beneficial in saving their lives." The study also found that most people trust a doctor's recommendation to withdraw treatment when there is no hope for improvement. These findings are presented first.

However, the authors also asked a question about religion in their survey - they asked both the public and trauma professionals if "divine intervention" could save a person after medical professionals determine there is nothing left to do. Not too surprisingly, over half (57.4%) of the respondents from the public said yes, while only 19.5% of professionals did.

I'm not sure why the authors of the study included this question - maybe they knew it would be a "hot button" that would get them headlines. But it seems the point of their study was to educate doctors about what patients think. CNN turned this into a story that claimed "God trumps doctors," although the article didn't say any such thing. The CNN headline suggests that people would prefer "God" to doctors, but in fact the story says that most people trust a doctor's recommendations for treatment.

This is another illustration of how careful scientists and doctors need to be in presenting their work to the public. The reporter in this case (an unnamed AP reporter) gathered quotes from a number of medical professionals, none of whom supported the article's lead sentence. Reporters sometimes decide for themselves what the story is - which clearly seems to have happened here - and they are only too happy to "shape" a story from the scientific literature to support their pre-determined conclusions.