Field of Science

Why Can't We Find The Best Surgeons?

Someone finally figured out how to rate surgeons. Don't expect to see a ranking list any time soon, though: the technique was tested on only a small group of specialized surgeons. But boy, does it work well.

The New England Journal of Medicine reported last week on a remarkable new study, by John Birkmeyer and colleagues.  As they wrote:
"Few studies have directly assessed the technical skill of practicing surgeons, and to our knowledge none have linked the level of surgical skill to clinical outcomes."
They then proceeded to do exactly this.  Here's how the study worked: a group of surgeons filmed themselves performing gastric bypass surgery, and each submitted one video for review.  A team of other surgeons then viewed the tapes and rated the skill of the surgeons.  At least 10 evaluators rated each video.

After the ratings were done, they followed up by looking at the actual results of gastric bypass surgery performed on 10,343 patients by these same 20 surgeons between 2006 and 2012.

The differences between the most skilled and least skilled surgeons were remarkable.  Comparing the top 25% to the bottom 25%, Birkmeyer and colleagues found:

  • The least-skilled surgeons had nearly triple the rate of complications, 14.5% versus 5.2%.
  • The least-skilled surgeons required longer operations, 137 minutes versus 98 minutes.
  • Although death is a very rare outcome for gastric bypass surgery, patients had a higher risk of dying if their operation was done by the least-skilled doctors, 0.26% versus 0.05%.
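A quick back-of-the-envelope check of those figures (a sketch using only the percentages quoted above, not data from the study itself) shows just how large the gaps are:

```python
# Rough relative-risk arithmetic from the rates quoted in the post.
# Bottom-quartile (least-skilled) vs. top-quartile (most-skilled) surgeons.
complication_top, complication_bottom = 0.052, 0.145   # 5.2% vs 14.5%
death_top, death_bottom = 0.0005, 0.0026               # 0.05% vs 0.26%

rr_complications = complication_bottom / complication_top
rr_death = death_bottom / death_top

print(f"Complication risk ratio: {rr_complications:.2f}")  # ~2.79, i.e. "nearly triple"
print(f"Death risk ratio: {rr_death:.2f}")                 # ~5.2x
```

So a patient of a bottom-quartile surgeon faced roughly 2.8 times the complication risk and about 5 times the (still very small) risk of death.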

Across the board, the most skilled surgeons had better results.  This shouldn't be surprising, as Birkmeyer and colleagues wrote:
"Few surgeons would be surprised that technical skill is an important determinant of outcomes in patients who have undergone laparoscopic gastric bypass. The procedure is technically complex and performed in patients with morbid obesity, for whom surgical exposure is often challenging."
We know how to rate physical abilities in sports - the top teams and players compete directly against one another. We know that Rafael Nadal and Novak Djokovic are two of the world's best tennis players, even if some people argue about which one deserves to be number 1 (right now it's Nadal).  We can argue about the best soccer team, or American football team, or golfer, but eventually we can see for ourselves by watching them perform.

But in the world of medicine, finding out who is best - or even good - is nearly impossible.  You can find the best hospitals: U.S. News ranks hospitals based on a published list of criteria (Johns Hopkins Hospital is No. 1).  But not the best doctors.

Not surprisingly, people and institutions have tried to create rankings of top doctors.  The only ones I'm able to find are based on surveys, such as Castle Connolly.  A survey, though, is more a popularity contest than a real measure of how good a doctor is. Many publications offer lists such as the "Top Doctors" lists in New York Magazine or Washingtonian magazine, but these too are just surveys.

One could use outcome measures: shouldn't the best surgeons (and other doctors) have the best results? Yes, of course they should - but we don't know how to measure this, because results depend on how sick the patient was to begin with, whether the patient follows his/her treatment program, etc.  Outcome measures tend to make doctors in educated, affluent areas look better, simply because the patients are healthier.

And just imagine: if we had accurate rankings, then the best surgeons could charge more. The worst surgeons (and someone has to be worst) would face pressure to improve their techniques or find another specialty, which would be good for patients.  I imagine that plenty of patients would be happy to pay more for a better surgeon. Hospitals do charge dramatically different rates, but not necessarily based on the skill of their doctors.

I don't know about you, but if I need surgery, I want one of those surgeons in the top 25% operating on me.  I just don't know how to figure out who they are.  Birkmeyer's study shows how we might change that.

1 comment:

  1. If the end result is with respect to the number of complications, surgical times, etc., why not just use those as the endpoints to determine surgical skill? Why even bother with a subjective rating of a videotape of one surgery? And what makes this panel so fit to rate surgical skill if, by the authors' own admission, there exists no measure of technical skill presently available? And couldn't you argue, simply by virtue of the study design, that perhaps their "surgical skill" was negatively impacted by experiencing 6 years of higher complication rates, etc., rather than the thought that it was their ineptitude that led to these complications? This study has a lot of outlying problems.

