
Good news for "Research Parasites": NEJM takes it back, 8 years later

After years of debate, the National Institutes of Health finally rolled out a data sharing policy early this year, one that should greatly increase the amount of data that biomedical researchers share with the public. This week, three prominent scientists from Yale described, in an op-ed in the New England Journal of Medicine, how “the potential effects of this shift ... toward data sharing are profound.”

For some of us, it’s deliciously ironic that this op-ed appeared in NEJM, which just a few years ago coined the term “research parasites” to describe anyone who wants to make discoveries from someone else’s data. That earlier piece, written in 2016 by the NEJM’s chief editors, was simply dripping with disdain. It caused a huge outcry, including a response from me in these pages and a sharply worded response from the Retraction Watch team, published in Stat News. The editors backed down (slightly) in a follow-up letter just a few days later, but the damage was done.

One interesting consequence was that a group of scientists created a Research Parasite Award, now awarded each year (entirely seriously, despite the tongue-in-cheek name) at a major biomedical conference, for “rigorous secondary data analysis.”

The 2016 op-ed in NEJM was itself a response to a call for greater data sharing published in the New York Times by cardiologists Eric Topol and Harlan Krumholz–and Krumholz, we should note, is a co-author of the latest piece in NEJM. Meanwhile, the former editor of NEJM retired years ago, and it appears that the journal is now ready to join the 21st century, even if it’s a few decades late.

What is all this fuss about? Well, many people outside of the scientific research community probably don’t realize that vast amounts of data generated by publicly-funded research–work that is paid for by government grants–are not usually released to the public or to any other scientists.

On the contrary: in much of biomedical research, data sets collected with government funding are zealously kept private, often forever. The usual reasons for this are simple (although rarely admitted openly): the scientists who collected the data want to keep mining it for more discoveries, so why share it? Sometimes, too, researchers package up the data and sell it, which is completely legal, even though the government paid for the work.

(It’s not just medical research data, either: once I tried to get some data from a paleontologist, only to learn that he treated every fossil he ever collected as his personal property. But that’s a blog for another day.)

Many scientists have been fighting this culture of secrecy for a long time. Our argument is that all data should be set free, at least if it’s the subject of a scientific publication. It’s not just scientists making this argument: in the early 2000s, patient groups began to realize they couldn’t even read the studies about their own diseases unless they paid a for-profit journal for access to the paper. Those groups lobbied–successfully, after a years-long fight–for a requirement that any publicly-funded research be published on a free website, not locked behind the doors of private publishers. Their effort led to an NIH database called PubMed Central, which contains the full text of millions of articles.

The new NIH data sharing policy is one consequence of the Open Science movement (which I’m a part of), which argues that science moves much faster when it’s done in the open. This means sharing data, software, methods, and everything else. There’s now a U.S. government website dedicated to Open Science, open.science.gov, which includes more than a dozen federal agencies including NIH, NSF, and the CDC.

A bit more history: as far as I can tell, the earliest voices for data sharing emerged during the Human Genome Project, an international effort beginning in 1990 that produced the first draft of the human genome in 2001. When a private company (Celera Genomics) entered the fray in 1998, a dramatic race ensued, and as one strategy for competing, the public groups announced that, in contrast to the private group, they would release all their data openly on a weekly basis, long before publication. That wasn’t how things had worked before.

Very soon after that, scientists in genomics (my own field) realized that all genome data, whether from bacteria, viruses, animals, or plants, ought to be released freely. The publicly-funded sequencing centers received millions of dollars to generate the data, but they weren’t the only places that could analyze it. NIH and NSF agreed, and before long they required all sequencing data to be released promptly.

This same spirit didn’t touch most medical research, though. Even though far more money–billions of dollars a year in NIH funds–is spent on disease-focused research, data from those studies remained locked up in the labs that got the funds. This is now changing.

As the Yale scientists (Joseph Ross, Joanne Waldstreicher, and Harlan Krumholz) point out in their NEJM editorial, open data sharing has already yielded tremendous benefits. For example, they point out that hundreds of papers have been published using public data from the NIH’s National Heart, Lung, and Blood Institute, including studies that revealed new findings about the efficacy of digoxin, a common drug used to treat heart failure.

The new NIH policy covers all of NIH, not just one institute, and we can hope it will unlock new discoveries by allowing many more scientists to look at the valuable data currently locked behind firewalls.

But simply requiring scientists to have a “data management and sharing policy,” as the NIH is now doing, might not be enough. Many thousands of scientific papers already say they share data and materials–but as it turns out, the authors don’t always want to share.

A study published last year illustrated how toothless some current policies are. That study identified nearly 1800 recent papers in which the authors said they would share their data “upon request.” The study’s authors wrote to all of them, only to find that 93% either didn’t respond at all or else declined to share their data. That’s right: only 7% of authors shared their data, despite publishing a statement that they would.

The NEJM editorial proposes a different solution, one that could be far more effective: putting scientific data into a government repository. This is something the government itself can enforce (because they control the funding), and once the data is in a public repository, the authors won’t be able to sit on it as (some of them) now do.

It’s good to see NEJM joining the open science movement. Science that is shared openly will inevitably move faster, and everyone–except, perhaps a few data hoarders–will benefit.

Gain of function research needs to be banned, but we need to define it properly


I’ve been writing about dangerous gain-of-function research on viruses for years, originally on the flu virus and more recently on the Covid-19 virus. Many people are deeply concerned about this research, which might have caused the Covid-19 pandemic, and yet there are still no real regulations controlling it, neither in the U.S. nor anywhere else.

I can already hear the objections: oh, but what about the new rules that NIH put in place in 2017, after a 3-year “pause” in some gain-of-function (GoF) research? Those rules were utterly ineffective, but I’ll get to that in a minute.

Despite my arguments, and the concerns of many other scientists, which have been expressed in various forums and articles for at least a decade now, the virology community continues to insist that any limits on GoF are unnecessary, and that GoF is wonderfully beneficial. A group of 156 virologists even wrote an opinion piece, published in the Journal of Virology, making this very point.

I’ve tried to convince some of my colleagues in the infectious disease world that GoF should be banned, and I’ve discovered that many of them–even some non-virologists–are opposed to any government regulation of GoF research.

They are wrong. However, they do raise one important concern that I think is valid, and that I will address in this column. Their concern is that any government regulation will be ham-handed, and will likely end up limiting or preventing a range of very useful experiments that have the potential to lead to beneficial new drugs and vaccines.

I get it. When the government tries to regulate science, it can write rules that are far too broad, or rules that get misinterpreted even when well-written, and unintended consequences follow. So let’s not do that: below I’ll explain what I think needs to be banned.

But let’s not forget why we are having this debate right now: there is a very real possibility that the Covid-19 pandemic started in a lab that was doing GoF research on coronaviruses. We know that the Wuhan Institute of Virology (WIV) was doing this kind of research–that fact is not under dispute. We don’t know (and we may never know) if the original Covid-19 virus first appeared as a result of a lab leak, but it might have. That’s why we’re asking whether such research is worth the risk.

Before I explain what I think the rules should be, let’s look at the current NIH rules, which I mentioned above. First, though, let’s remember that NIH rules only apply to research funded by NIH. Research that is funded privately, or by any other part of the government, is unaffected by these rules and remains entirely unregulated.

So: back in 2017, when the NIH lifted the 3-year funding pause, they put in place some rules (detailed here) for work on “potential pandemic pathogens,” or PPPs. (The government loves acronyms.)

The pause itself was prompted by work on avian influenza, led by virologists Ron Fouchier and Yoshihiro Kawaoka, that was designed to turn some deadly bird flu viruses into human flu viruses. The work was successful: the researchers did indeed create viruses that had the potential to infect humans. These results were really alarming to many scientists: I wrote about it at the time, and other scientists also raised the alarm. Those concerns are what led to the funding pause.

So since 2017, the NIH has regulated (but not banned) research on PPPs that are both:

  1. “likely highly transmissible and likely capable of wide and uncontrollable spread in human populations, and
  2. likely highly virulent and likely to cause significant morbidity and/or mortality in humans.”

One of the first things to notice about this definition is that avian influenza–the very work that prompted the new rules–isn’t really covered.

Another thing to notice is that work on coronaviruses in bats–the GoF work that was apparently going on in the Wuhan Institute of Virology, and that may have caused the Covid-19 pandemic–wouldn’t have been covered either. Those bat viruses would not have been considered “likely highly transmissible in humans,” not before the pandemic.

Of course, we all know differently now.

In any case, the rules that NIH introduced in 2017 applied only to a very narrow class of work, and as far as I can tell, they didn’t restrict anything. On the contrary: the NIH resumed funding the avian influenza GoF work of Fouchier and Kawaoka soon after lifting the funding pause. And let’s not forget that NIH rules aren’t a ban: it remains perfectly legal to do any kind of GoF work.

So how can we put in place intelligent restrictions that will prevent dangerous GoF research in the future?

First, rather than rejecting any restrictions whatsoever, as some virologists have done, scientists should work with the government to craft a thoughtful set of limitations. For starters:

  1. Research that creates new strains of the Covid virus (SARS-CoV-2) that might have greater virulence or transmissibility should be entirely banned.
  2. Research that takes non-human viruses, including avian flu and bat coronaviruses, and gives them the ability to infect any new animals, should be banned.

To scientists who can’t even agree on these restrictions, I would say that it appears you oppose any restrictions whatsoever. If that’s your position, then the government might step in and impose far broader bans, which are not likely to be good. If you’ll agree to these two restrictions, perhaps we can broaden them slightly to cover other types of highly risky GoF work.

Finally, let me return to a point I’ve made before, but that bears repeating: the supposed benefits of GoF research are essentially zero. The claims that GoF research that makes a virus more deadly will help us “understand pathogenicity” or “be prepared for the next pandemic” are just hand-waving. I wrote a whole column just last month explaining why these claims are fundamentally wrong, so I won’t repeat that here.

If we do ban some GoF research, with carefully-crafted rules, we won’t lose anything. Instead, we’ll gain at least two things: first, virologists can apply their expertise to truly beneficial virology work, and second, the scientific community will regain some of the trust it has lost during the pandemic. That would seem like a good thing.

Panel recommends new controls on deadly gain-of-function research. Will the government listen?

Illustration by Erik English

This past week, a government-appointed panel of scientists released a new report recommending 13 actions the U.S. government should take to control “gain-of-function” research that has the potential to create deadly new pathogens.

This has been a long time coming, but the first thing I want to point out is that this is just an advisory panel. The government hasn’t done anything yet. Let’s unpack what happened, shall we?

First, the panel is called the NSABB: the National Science Advisory Board for Biosecurity. The new report, which was at least 3 years in the making, was created in response to a decade’s worth of concerns, raised by many scientists (including me - see my previous articles here and here and here, among others), about the dangers of a specific kind of research known as gain-of-function.

What is gain-of-function (GoF) research? Well, it can include many scientific experiments, including some that are perfectly reasonable. But the term has been used most often to refer to experiments that are designed to take a virus such as influenza or SARS-CoV-2 and alter it intentionally to make it more deadly.

This seems crazy, right? Yet it’s been going on in the influenza virus research world for at least a decade, which is why many scientists have raised alarms.

The Covid-19 pandemic gave this issue much greater urgency, after suspicions arose that the Covid-19 virus, SARS-CoV-2, might have emerged (accidentally) from gain-of-function experiments at a major virology lab in Wuhan, China. (It probably didn’t, but we still don’t know for sure, as I’ve explained in previous columns.)

So back to the topic at hand: the new NSABB report. What do they recommend, and will it matter? I don’t want to go through all 13 recommendations, but overall it’s a very good start, if (and only if) the U.S. government takes them seriously and implements them all.

And the virology community is already pushing back - but first let me go into just three of the recommendations.

First, the panel recommends that the government require that all GoF research undergo federal-level review if the work is

“reasonably anticipated to enhance the transmissibility and/or virulence of any pathogen.”

Believe it or not, GoF research that does this kind of thing is going on right now, and there’s no rule saying it must be reviewed first.

Second, the panel recommends that the government only allow such research if there’s simply no better, lower-risk way to gain the same scientific insights. As they put it, scientists who want to do GoF work would have to prove that

“there are no feasible alternative methods ... that poses less risk ... and the risks are justified by the potential benefits.”

That’s a high bar to clear, but it seems eminently reasonable to insist upon it before allowing dangerous GoF research to proceed.

The panel also recommends that the new restrictions on gain-of-function research apply to all research in the U.S., regardless of whether it’s funded by the government. This is an important addition, as illustrated recently when Boston University, after being called out for dangerous gain-of-function experiments on the Covid-19 virus, claimed that they didn’t use NIH funds for this, so (they argued) they didn’t break any rules. Technically, they were correct. This recommendation will close that giant loophole.

There’s much more in the NSABB report, and my primary reaction is that (1) it’s a good start and (2) it’s not nearly enough. I’d like to see the government make a blanket statement that research that will make deadly viruses even more deadly is simply forbidden, at least for now. If someone wants an exception, they could make the case, but I’ve yet to see a good argument for these experiments.

What about that pushback from the virologists that I mentioned above? Well, in a lengthy commentary just published in the Journal of Virology, 156 virologists argue that gain-of-function research is wonderful! And it’s brought so many benefits! Just let us handle this, and don’t worry, they seem to be saying.

To make the benefits explicit, the 156 virologists include a table listing dozens of “useful examples” of gain-of-function research. Let’s look at just two of them.

Example 1: the virologists assert that experiments on a virus called M13 led to faster computers, citing a 2018 article. First, this is nonsense: no one has built a faster computer using a modified M13 virus. Second, the M13 virus is harmless to humans (it infects only bacteria), so it wouldn’t be subject to any regulations on GoF research in human pathogens.

Example 2: this one is even more outrageous. The table lists as a “benefit” an experiment that “established that H5N1 has capacity for mammalian transmissibility.” They then cite a notorious experiment from 2012 in which scientists intentionally modified a deadly bird flu virus (H5N1) to make it transmissible directly between mammals. This was one of the key experiments that led to the widespread alarm about GoF research in the first place. (I wrote about it back in 2013.)

So no, creating a more-deadly virus and then saying “see? look how dangerous this virus is?” is not what I’d call useful.

Clearly, the virologists who wrote this commentary do not want to see any restrictions at all on the kind of research they do. They just don’t see the need for it. Obviously, I disagree, as do many others, including many virologists who support the NSABB recommendations.

As I wrote at the beginning of this piece, the NSABB report is just a set of recommendations, and the government might not do anything. I hope that the government will implement all of them, and then go even further, and put a stop to the dangerous, sometimes reckless experiments that a very small minority of scientists are engaging in.

We need to study viruses, and we need to control infectious diseases, but we can do this without making pathogens more deadly.

Gain-of-function experiments at Boston University have created a deadly new COVID virus. Who authorized this?


Schematic of the Covid-19 virus, SARS-CoV-2

After all the controversy over the past few years about gain-of-function research on viruses, especially the Covid-19 virus, I thought this kind of work was on hold, at least in the U.S. Indeed, the controversy grew so hot that NIH issued a statement in May of 2021 declaring that it wouldn’t support such work.

Nonetheless, some scientists continue to pursue gain-of-function work. In a new study, just released on the preprint server bioRxiv, a group of virologists at Boston University did the following. They took the Spike protein from the Omicron BA.1 strain of SARS-CoV-2 (that’s the strain that spread throughout the world last winter, often slipping past the protection offered by vaccines) and combined it with an early 2020 strain of the Covid-19 virus.

This experiment gave them a brand-new, never-before-seen strain of Covid-19. Was it more deadly? You bet!

In their experiments, the BU scientists infected laboratory mice with the original Omicron virus, which caused “mild, non-fatal infection.” But when they infected mice with their new, recombinant virus, which they called Omi-S, 80% of the mice died. To quote from their article:


“the Omicron S-carrying virus inflicts severe disease with a mortality rate of 80%.”


Well, that’s just great. Making matters worse, the researchers found that the new recombinant virus also replicated much faster in mice: “Omi-S-infected mice produced 30-fold more infectious virus particles compared with Omicron-infected mice.” Yes, you read that right: Omi-S produced 30 times as many infectious virus particles as the garden-variety Omicron strain.

This, dear readers, is what we mean by “gain of function” research. The scientists took sequences from two different strains of the Covid-19 virus, one of which was relatively mild, and created a new strain that is far more infectious and far more deadly. As many scientists (and others) have pointed out, research like this carries great risks, foremost among them the chance that an accidental lab leak could create a new pandemic, killing millions of people.

And the benefits? There must be some pretty major benefits to offset this risk, right? Well, not exactly. The researchers say that these experiments show that the pathogenicity of the Covid virus is determined primarily by something other than the Spike protein. That’s a pretty narrow finding, and the authors don’t seem to consider that they might have learned this without creating an entirely new, more-lethal virus.

Does this work violate NIH policies? The NIH director has stated that

“neither NIH nor NIAID have ever approved any grant that would have supported ‘gain-of-function’ research on coronaviruses that would have increased their transmissibility or lethality for humans.”

First, let me point out that this is a very narrow statement: the NIH doesn’t deny that it funds gain-of-function work on viruses, because it does. They even put a “pause” on such work for 3 years, but they lifted it (regrettably) in 2017. I wrote about that at the time (“NIH Re-opens the Door to Creation of Super-Viruses,” December 2017).

Second, the NIH policy carefully says they don’t support work that would make viruses more deadly for humans. The BU study only looked at mice, so one might argue that it wasn’t making the viruses more deadly in humans–but there’s simply no way we can tell that, not unless we intentionally infect someone. Having read the paper, this work seems to me to be a clear violation of NIH rules.

Boston University and the researchers who led the study disagree. In a statement issued last week, BU officials wrote:

“First, this research is not gain-of-function research, meaning it did not amplify the Washington state SARS-CoV-2 virus strain or make it more dangerous.”

Let’s take a look at this denial, shall we? First, let me reiterate that the new experiments combined two strains of the Covid-19 virus: the Omicron strain, which has been the main strain infecting humans since last winter, and an earlier strain that was collected from a patient in Washington state in 2020. The Omicron strain causes only mild infections in mice, but the new Omi-S strain–the one that Boston University scientists created in their lab–kills 80% of them. The Washington state strain, which is no longer circulating in people and thus isn’t a current threat, kills 100% of mice.

So that is the BU argument: because Omi-S is less deadly than one of its parental strains, the research doesn’t meet the definition of gain-of-function.

Sorry, but this argument is just nonsense. You don’t get to redefine gain-of-function in the same sentence where you’re denying you’ve done it. These experiments created a brand-new, recombinant strain of Covid-19, and that strain was much more infectious and much more deadly than Omicron, which is one of the strains it was created from. This is precisely what most scientists mean when they describe gain-of-function research and the risks that it carries.

Furthermore, we have no idea how this virus will behave in humans. It might be far more deadly than Omicron in people. Let’s hope we never find out.

And what about that 80% mortality rate? According to Prof. Ronald Corley, Director of BU’s National Emerging Infectious Diseases Laboratories (NEIDL), “This was a statement taken out of context for the purposes of sensationalism, and it totally misrepresents not only the findings, but [also] the purpose of the study.”

Out of context? Well, here’s what the scientists themselves wrote in the very first paragraph (the abstract) of their paper: “We generated chimeric recombinant SARS-CoV-2 encoding the S gene of Omicron in the backbone of an ancestral SARS-CoV-2 isolate and compared this virus with the naturally circulating Omicron variant.... In K18-hACE2 mice, while Omicron causes mild, non-fatal infection, the Omicron S-carrying virus inflicts severe disease with a mortality rate of 80%.”

That’s the scientists’ own statement, and it’s not out of context. The authors themselves were emphasizing this dramatic mortality rate.

The experiments also present another problem for BU. Despite being funded by multiple NIH grants, neither the scientists themselves nor Boston University appears to have informed NIH about this work, which is a requirement for gain-of-function research.

BU officials addressed this problem by stating, first, that the NIH funds only supported some of the underlying “tools and platforms,” and that NIH funds did not directly support the research. Really, BU? How stupid do you think we are? Money, as we all know, is fungible.

Second, according to BU, “there was no gain of function with this research. If at any point there was evidence that the research was gaining function, under both NIAID and our own protocols we would immediately stop and report.” (Read the full BU statement here.)

Well, I would say that when those mice started dying, you had some pretty good evidence that “the research was gaining function.”

I’ve been in touch with multiple virologists who take a similar view. Simon Wain-Hobson, an Emeritus Professor at the Pasteur Institute, wrote to tell me that the BU research “is a GOF outcome in that the recovered virus is more pathogenic than the parental (backbone) virus, albeit in a transgenic mouse setting.” Prof. Wain-Hobson also pointed out that this work “provides a road map to [creating] a virus that might be dangerous to man. By posting this, these authors are making life easier for the next person or copycat.”

Another virologist, Dr. Valentin Bruttel of the University of Würzburg, pointed out the same problems and more, writing that:

• [the experiments] could have produced a virus that is “way more lethal” than the original SARS-CoV-2 strain
• “the study is useless for the general population, because the chance that exactly this Omi-Spike [would] recombine with an extinct variant [the Washington state strain] are zero,”
• “the chimeric virus could cause more severe disease in humans than estimated from mouse data.”

Like Prof. Wain-Hobson, Dr. Bruttel also pointed out that “any terrorist group could copy the BU group’s protocols.”

What does NIH think? They don’t appear convinced by the BU denials. According to an article in The Hill, “NIH is examining the matter to determine whether the research” fits the definition of gain-of-function. And as reported by Helen Branswell in Stat last week, an NIAID official said that NIH should have been informed, at a minimum so that they could determine whether or not the research was permitted under NIH’s gain-of-function rules.

I contacted the lead author of the study to get his response, but he did not reply.

The bottom line here is that some virologists (by no means a majority) believe that conducting gain-of-function research on the Covid-19 virus is just fine. Many other scientists disagree, and strongly. Some have pointed out that this work is qualitatively no different from biowarfare research. I’ve been warning about the risks for years, and I’m certainly not the only one.

Merely requiring scientists to inform the government, which is the current NIH policy, is not enough. We need to shut this research down and take a long, hard look at it before any such experiments can go forward again.

Covid patent fight is about greed, not human health

I suppose I shouldn’t be surprised: Moderna, one of the companies that developed mRNA vaccines for Covid-19, has just sued the companies that make the competing mRNA vaccine, Pfizer and BioNTech, claiming that Pfizer/BioNTech is violating its patents.

Pfizer was surprised though, according to news reports. (Or at least they said they were.)

Moderna had announced, back in 2020, that they wouldn’t enforce patents on their vaccine during the pandemic, but they seem to have changed their mind. Apparently, billions of dollars in profits isn’t enough: they’ve decided the time is right to try to grab even more money.

Let’s make no mistake here: this is purely about greed. Apparently Moderna understands this, since they proudly advertised their earlier plans not to enforce patents on the Covid-19 vaccine. They realized that the public goodwill generated by such an announcement was valuable.

Not that valuable, apparently.

It’s not even clear that Moderna should have been given the patents it holds. According to a recent story in Science, the key technology behind one of Moderna’s patents was invented, and patented, years earlier by two scientists, Drew Weissman and Katalin Karikó, at the University of Pennsylvania. Their work discovered a way to modify the RNA in the vaccine that would make it much more effective. (Here’s a link to the patent.)

As I wrote last year, patenting the Covid-19 vaccine is unethical. At the time, the US had announced support for a “vaccine waiver” that would allow any country to develop vaccines against Covid-19 without licensing the technology from one of the companies that currently holds a patent. That policy sounded too good to be true–and apparently it was, because no such waiver is in effect now.

The patent system is a creation of modern governments, and they don’t have to let companies get away with this. The profits of a few companies are far, far less important than the lives of millions of people. Allowing companies to restrict Covid-19 vaccine development is, crudely put, defending money over human lives. Maybe it’s time for the international community to institute the vaccine waiver, at least until the pandemic is truly over. 

And make no mistake: even though we have vaccines now, they still need improving, and better vaccines will save lives. Patent disputes will slow down or even prevent work on better vaccines, since the patent holders will have a monopoly. Even the threat of a lawsuit can stymie progress; after all, why would someone invest time and effort on a vaccine that they might never be able to deploy?

Moderna is far from the first company or institution to let greed guide their actions: way back in 2010, I wrote about how MIT and Harvard had filed a patent that was, as I wrote at the time, both inappropriate and harmful. In that case, MIT and Harvard had an incredibly broad patent on a human gene, NF-kB, which plays a key role in our immune system’s response to infections. Granting a patent on NF-kB, as the US Patent Office did, was akin to granting a patent on all drugs that affect nearly any human gene. The universities licensed the patent to Ariad Pharmaceuticals, who filed a lawsuit the day the patent was granted. Neither Ariad nor MIT developed any treatments, but they initially won $65.2 million just because of the patent. (Need I point out that the Harvard and MIT work was mostly funded by the public?)

Fortunately, in the 2010 case, an appeals court threw out the patent, ruling that people and companies cannot patent human genes, because genes are products of nature, not inventions. The mRNA patents, though, don’t fall in this category.

Should I also mention that much of the basic research behind mRNA vaccines was also funded by the public? Or that NIH (and therefore the US government) has patent rights to some of the technology behind the Moderna vaccine?

Moderna, Pfizer, and BioNTech looked like heroes when they first announced their vaccine results–and in some ways, they were. The world was desperate for vaccines against Covid-19, and the mRNA vaccines have saved millions of lives.

But my message to Moderna is simpler: you’re already making billions in profits on the Covid-19 vaccine, so don’t be such greedy assholes. And don’t be evil: drop the lawsuit.