Should we allow AI to control the battlefield? AI itself thinks not.

Artificial Intelligence, or AI, seems to have finally arrived. With the introduction of ChatGPT last November, millions of people suddenly discovered that AI was far, far more than just a research activity. The range and sophistication of ChatGPT’s answers to questions across a wide range of disciplines is, frankly, pretty stunning.

AI is already in lots of places where you might not even realize it. Google Translate has been using AI for years, and it’s remarkably good, although nowhere near as good as a human translator. The technology that Pandora uses to customize your music is a type of AI, as is the technology behind Tesla’s self-driving cars. Within my own field, the program AlphaFold2 was a true breakthrough in scientists’ ability to predict the structure of proteins.

Along with these apparently beneficial developments, though, comes a great deal of concern. As AI gets better and better, can we continue to trust it to make decisions for us? It’s one thing if an AI recommends a song that we don’t like, which we can just ignore, but if AI is driving our cars, then mistakes might be much more costly.

And there are far riskier tasks that AI might be doing soon: what if the military starts using AI on the battlefield? One might be tempted to say no, of course they won’t do that, but what if another country starts using AI this way? Can we simply ignore that?

To be honest, I haven’t been worried, at least not yet. I have a little secret to confess: I did my Ph.D. thesis in AI, in the area called machine learning. I haven’t been doing AI research for the past 25 years, but I think I still understand at least a little of the basics behind the technology.

Thanks mostly to the attention that ChatGPT is getting, though, there’s suddenly a huge concern that AI represents a danger to society, although no one is exactly sure why. President Biden had a meeting just this week with the CEOs of Google, Microsoft, and OpenAI (the creators of ChatGPT) to talk about these concerns, where he told the execs that “What you’re doing has enormous potential and enormous danger.”

Of course, politicians have no idea what AI technology is–although I had to laugh at this quote from the White House press secretary: “The president has been extensively briefed on ChatGPT and knows how it works.” Um, no he doesn’t. I’m skeptical that the CEOs of Google and Microsoft know how it works either. However, you don’t need to understand the technology to understand the dangers.

Just recently, one of AI’s early pioneers, Geoffrey Hinton, has been speaking out to warn the public that AI has reached a point where “there’s a danger of something really bad happening.” Not surprisingly, other experts, such as German scientist Jürgen Schmidhuber, disagree.

The chair of the FTC, Lina Khan, says we can regulate AI effectively, and perhaps we can. But I’m not so sure.

Much of the concern in the press seems to be about the use of AI to spread misinformation. For example, AI can help create “deep fakes” that can be used to convince people that, say, an election was stolen even when it wasn’t. That’s a legitimate concern, but it’s happening even without AI.

A better question, which a family member just asked me, is should we be worried about AI making decisions on the battlefield? I thought I should take this question directly to the source, so I asked ChatGPT itself. The answer I got was remarkably clear and direct; I’ll quote it here in full:

“Yes, people should be concerned about AI making decisions on the battlefield. There are several reasons for this.

Firstly, AI lacks human empathy, judgment, and intuition. In a complex and dynamic environment such as the battlefield, human decision-making is often guided by experience, intuition, and an understanding of the broader context. AI lacks the ability to consider these intangible factors and may make decisions solely based on pre-programmed algorithms or data inputs, which could lead to unintended consequences or even catastrophic errors.

Secondly, the use of AI in military decision-making raises significant ethical concerns. For instance, an AI system may be programmed to prioritize the protection of friendly forces at the expense of civilian lives, leading to potentially devastating consequences for non-combatants caught in the crossfire. Additionally, AI systems may be susceptible to hacking or manipulation, which could result in unintended consequences or a loss of control over the decision-making process.

Finally, there is the risk that reliance on AI systems may lead to the erosion of human decision-making capabilities, potentially diminishing the overall quality of military decision-making and increasing the likelihood of errors or unintended consequences.”

So there you have it: ChatGPT itself gives us several reasons why it’s a terrible idea to let AI make decisions that might cost human lives. We should listen to what it’s telling us, because as it gets more intelligent (and it will), it might decide not to be so honest in the future. And I have to say that its reasoning here is both well-written and compelling.
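(If you want to repeat this little experiment yourself, you don’t need the chat window; you can ask the same question through OpenAI’s API. Here is a minimal sketch, with the caveat that the package, model name, and prompt wording are my own choices for illustration, not a record of the original exchange.)

```python
# A minimal sketch of asking ChatGPT the same question via OpenAI's Python
# client instead of the web interface. Assumes the `openai` package is
# installed and that OPENAI_API_KEY is set in your environment; the model
# name here is just one reasonable choice.
from openai import OpenAI

client = OpenAI()  # picks up the API key from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "Should people be concerned about AI making decisions on the battlefield?",
        }
    ],
)

print(response.choices[0].message.content)
```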

One thing that worries me as well is that the very term “artificial intelligence” is much too broad. We should be concerned about giving any technology the power to control our lives, whatever someone calls it. But when an AI itself says “don’t trust me,” then I think we might want to listen.

Gain of function research needs to be banned, but we need to define it properly


I’ve been writing about dangerous gain-of-function research on viruses for years, originally on the flu virus and more recently on the Covid-19 virus. Many people are deeply concerned about this research, which might have caused the Covid-19 pandemic, and yet there are still no real regulations controlling it, either in the U.S. or anywhere else.

I can already hear the objections: oh, but what about the new rules that NIH put in place in 2017, after a 3-year “pause” in some gain-of-function (GoF) research? Those rules were utterly ineffective, but I’ll get to that in a minute.

Despite my arguments, and the concerns of many other scientists, which have been expressed in various forums and articles for at least a decade now, the virology community continues to insist that any limits on GoF are unnecessary, and that GoF is wonderfully beneficial. A group of 156 virologists even wrote an opinion piece, published in the Journal of Virology, making this very point.

I’ve tried to convince some of my colleagues in the infectious disease world that GoF should be banned, and I’ve discovered that many of them–even some non-virologists–are opposed to any government regulation of GoF research.

They are wrong. However, they do raise one important concern that I think is valid, and that I will address in this column. Their concern is that any government regulation will be ham-handed, and will likely end up limiting or preventing a range of very useful experiments that have the potential to lead to beneficial new drugs and vaccines.

I get it. When the government tries to regulate science, it can write rules that are far too broad, or that get misinterpreted even if well-written, and unintended consequences follow. So let’s not do that: below I’ll explain what I think needs to be banned.

But let’s not forget why we are having this debate right now: there is a very real possibility that the Covid-19 pandemic started in a lab that was doing GoF research on coronaviruses. We know that the Wuhan Institute of Virology (WIV) was doing this kind of research–that fact is not under dispute. We don’t know (and we may never know) if the original Covid-19 virus first appeared as a result of a lab leak, but it might have. That’s why we’re asking whether such research is worth the risk.

Before I explain what I think the rules should be, let’s look at the current NIH rules, which I mentioned above. First, though, let’s remember that NIH rules only apply to research funded by NIH. Research that is funded privately, or by any other part of the government, is unaffected by these rules and remains entirely unregulated.

So: back in 2017, when the NIH lifted the 3-year funding pause, they put in place some rules (detailed here) for work on “potential pandemic pathogens,” or PPPs. (The government loves acronyms.)

The pause itself was prompted by work on avian influenza, led by virologists Ron Fouchier and Yoshihiro Kawaoka, that was designed to turn some deadly bird flu viruses into human flu viruses. The work was successful: the researchers did indeed create viruses that had the potential to infect humans. These results were really alarming to many scientists: I wrote about it at the time, and other scientists also raised the alarm. Those concerns are what led to the funding pause.

Since 2017, then, the NIH has regulated (but not banned) research on PPPs that are both:

  1. “likely highly transmissible and likely capable of wide and uncontrollable spread in human populations, and
  2. likely highly virulent and likely to cause significant morbidity and/or mortality in humans.”

One of the first things to notice about this definition is that avian influenza–the very work that prompted the new rules–isn’t really covered.

Another thing to notice is that work on coronaviruses in bats–the GoF work that was apparently going on in the Wuhan Institute of Virology, and that may have caused the Covid-19 pandemic–wouldn’t have been covered either. Those bat viruses would not have been considered “likely highly transmissible in humans,” not before the pandemic.

Of course, we all know differently now.

In any case, the rules that NIH introduced in 2017 only applied to a very narrow class of work, and as far as I can tell, they didn’t restrict anything. On the contrary: the NIH resumed funding the avian influenza GoF work by Fouchier and Kawaoka soon after lifting the funding pause. And let’s not forget that NIH rules aren’t a ban: it remains perfectly legal to do any kind of GoF work.

So how can we put in place intelligent restrictions that will prevent dangerous GoF research in the future?

First, rather than rejecting any restrictions whatsoever, as some virologists have done, scientists should work with the government to craft a thoughtful set of limitations. For starters:

  1. Research that creates new strains of the Covid virus (SARS-CoV-2) that might have greater virulence or transmissibility should be entirely banned.
  2. Research that takes non-human viruses, including avian flu and bat coronaviruses, and gives them the ability to infect any new animals, should be banned.

To scientists who can’t even agree on these restrictions, I would say that it appears you oppose any restrictions whatsoever. If that’s your position, then the government might step in and impose far broader bans, which are not likely to be good. If you’ll agree to these two restrictions, perhaps we can broaden them slightly to cover other types of highly risky GoF work.

Finally, let me return to a point I’ve made before, but that bears repeating: the supposed benefits of GoF research are essentially zero. The claims that GoF research that makes a virus more deadly will help us “understand pathogenicity” or “be prepared for the next pandemic” are just hand-waving arguments. I wrote a whole column just last month explaining why these claims are fundamentally wrong, so I won’t repeat that here.

If we do ban some GoF research, with carefully-crafted rules, we won’t lose anything. Instead, we’ll gain at least two things: first, virologists can apply their expertise to truly beneficial virology work, and second, the scientific community will regain some of the trust it has lost during the pandemic. That would seem like a good thing.

Herbal extracts that cure an enlarged prostate? Not likely.

Saw palmetto, which is NOT effective for treating enlarged prostates.

(Note: see the brief update at the bottom of this post for a response from the manufacturer.)

I haven’t looked at medical scams recently, and I thought I’d venture back into that world just a little bit this week, to see what is happening.

As always, the scams are everywhere, with products claiming to cure just about everything. What surprised me, though, is how blatant some of them have become. Some sites have no caveats or disclaimers at all, despite the fact that their claims are utterly false. They don’t even pretend that they are worried about a regulatory agency objecting to their false claims. The boldness can be startling–or, if you’re not sufficiently skeptical, convincing.

Let’s look at one site that strikes me as particularly egregious, which sells a dietary supplement called Prostoxalen as a cure for prostate problems. I was directed to this site by another site, ShopBodyVibes, that sells an even wider range of bogus cures (more on that below).

The marketers of Prostoxalen, which sells for $40 for a bottle of 60 pills, are nothing if not direct. At the top of their website, they promise that Prostoxalen will “get rid of the constant pressure on the bladder, unpleasant pain and all other ailments related to prostate enlargement! Once and for all!”

Nowhere do they provide even a shred of evidence for this claim.

And there’s more: they also claim that Prostoxalen will cure erectile dysfunction: “if you've noticed erection problems, our capsules will fix that issue as well,” the site states.

Again, no evidence at all.

I was expecting at least a citation to a poorly-done study published in a low-quality journal - after all, even homeopathic treatments, which are laughably ineffective, can find some bad science to support their claims.

But no, not for Prostoxalen. Maybe its marketers think that the testimonials alone (which appear to be fake) are sufficient.

So what on earth is in these pills? Well, it turns out that they’re just plant extracts and vitamins. The main ingredients are extracts of saw palmetto, pumpkin seeds, cranberries, tomatoes, nettles, and willowherb, along with a couple of common vitamins.

Great! So all you need is cranberries, tomatoes, and pumpkin seeds, and your prostate problems will go away. I’m surprised that anyone has an enlarged prostate, if this is all it takes to cure it.

But here’s the problem: there is no good scientific evidence that any of these ingredients will cure or relieve the symptoms of enlarged prostate. Not even a tiny bit.

(If you want to dig deeper, you can find multiple scientific studies of saw palmetto, which is widely marketed as a treatment for enlarged prostates. A carefully-done randomized trial out of Washington University, back in 2013, showed that it simply doesn’t work, even at high doses.)

The name Prostoxalen sounds just a little bit like they might want you to think it’s a drug, doesn’t it? Maybe something to do with the prostate? Fortunately, the website answers this question in a FAQ list, which says: “No, Prostoxalen is not a drug. It is a food supplement in the form of capsules.”

Aha, that explains it. Dietary supplements aren’t regulated by the FDA, unless you claim that they’re a drug or that they can treat a medical condition.

What I expected to see on the website, but didn’t, was this disclaimer: “these statements have not been evaluated by the Food and Drug Administration. This product is not intended to diagnose, treat, cure, or prevent any disease.” That’s the small-print language that appears on thousands of websites and products, and that allows supplement makers to make all kinds of hints and suggestions while avoiding regulation. Typically they use phrases like “supports prostate health,” as one saw palmetto product puts it.

It appears that Prostoxalen is manufactured and sold by a company in Poland, identified on the website only as PLT Group. So I guess they just don’t care what the FDA thinks–even though they are marketing this in the U.S. (I contacted them through their website, but they didn’t respond.)

So no, there is no magic pill that cures or relieves the symptoms of enlarged prostates, and any such cure is almost certain to require more than a few plant extracts.

Finally, about that site that directed me to Prostoxalen: that was ShopBodyVibes, a site that sells products to “make the penis longer” (Eroxel), “cleanse the body of toxins” (BurnBooster), “reduce varicose veins” (Variforce), “eliminate knee pain” (Ortezan), and a “breast enlargement serum” called BooUp. I’m not making this up. Needless to say (but I’ll say it), none of these products works–and yet the site has no disclaimers, nor does it provide any evidence for the claims.

The ShopBodyVibes site repeats all the claims from the Prostoxalen website (see here), again with no disclaimers. If you wonder what is in this product (as I did), ShopBodyVibes suggests that “Everyone who is interested in learning the detailed composition of the product can read the list of ingredients, which is available on the manufacturer’s official website.”

Perhaps unsurprisingly, the ShopBodyVibes site has no indication of where it is located. It appears to be outside the U.S., which explains its stunning lack of any attempt to qualify its many bogus claims. It also has enough similarity to the Prostoxalen site to suggest that both sites might be owned by the same group in Poland. They didn’t respond to my inquiries.

And if you’re wondering how I stumbled upon BodyVibes: they were promoted by an article on Goop (yes, that’s Gwyneth Paltrow’s lifestyle company) about “wearable stickers that promote healing.” Yes, BodyVibes sells those too. I wrote about these magic stickers back in 2017, and my advice then still applies: they still don’t work, but if you like stickers, you can get a sheet of 50 for a couple of bucks.

Update: the manufacturer of Prostoxalen replied to my inquiry after this post appeared. Their message, in full, said: “our product has passed all the required tests before being launched on the market. It has proven to be highly effective, safe and legally introduced. We are a Polish company - the product is on the list of the National Sanitary Inspectorate at the Ministry of Health in Poland.” They did not provide any evidence or citations to support these claims, particularly the “highly effective” claim.

Scientists can now create mice with two fathers

Scientists at Osaka University in Japan have just created baby mice with two dads. That’s right: these mice have two parents, and both parents are males.

How did they do it, and what might this mean for humans?

Well, as reported recently in the journal Nature, it wasn’t easy. The scientists fertilized 630 eggs to get just seven mouse pups, but all seven mouse pups appeared normal and grew into fertile adults.

Let’s dig into the process just a little bit. The research team, led by biologist Katsuhiko Hayashi, first took cells from male mice and then had to somehow reprogram those cells into egg cells.

One thing about egg cells in mammals: they are always female. Or to be more precise, they have two copies of the X chromosome. Males have one X and one Y chromosome, and the male mouse cells in this experiment started out that way too.

Hayashi’s team first took cells from male mice and turned them into pluripotent stem cells–a special type of cell that can then be turned into many other types of cells, including eggs. Then they grew these cells in Petri dishes until some of them spontaneously lost their Y chromosomes. Now the cells had 1 copy of the X chromosome, but no Y.

That only got the scientists part of the way to where they needed to be. The team then used another genetic trick that induced some of these cells to pick up an extra X chromosome while they were replicating. At that point, they had created mouse cells with two X chromosomes: in other words, the cells were genetically female.

The next step was to convince these XX cells to turn into egg cells. They did that using additional genetic techniques to coax the pluripotent cells to divide and form egg cells, each of which had just one copy of every chromosome (as egg cells do), including the X chromosome.

Those were the hard parts. Once they had the egg cells, the scientists fertilized them with sperm from other males, and then implanted 630 fertilized eggs in female mice. It wasn’t a very efficient process, but it worked: seven of the embryos successfully matured into baby mice, which grew into normal, fertile adults. (Note that mice only take 3-6 months to reach maturity.)

You might be wondering if all mice (or other mammals) with two male parents would have to be males. Well no, not at all. Sperm cells, which come from males, have either an X or a Y chromosome. After fertilizing the eggs, which all have X, the result is either XX (female) or XY (male), depending on which chromosome the sperm carried.
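Here’s a toy illustration of that logic (my own sketch, not anything from the paper): the egg always contributes an X, and the sperm decides the rest.

```python
# Toy sketch: eggs always carry an X, so the sex of the offspring is decided
# entirely by whether the fertilizing sperm carries an X or a Y.
from itertools import product

eggs = ["X"]            # every egg cell carries one X chromosome
sperm = ["X", "Y"]      # sperm cells carry either an X or a Y

for e, s in product(eggs, sperm):
    offspring = "".join(sorted(e + s))            # "XX" or "XY"
    sex = "female" if offspring == "XX" else "male"
    print(f"egg {e} + sperm {s} -> {offspring} ({sex})")
```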

The scientists who did this work emphasized that we’re still a long way from making it work in humans. Among other things, we’d have to be sure that all of the steps involved in turning the male cell into an egg didn’t create harmful mutations elsewhere.

You might also ask if this means that we can also create babies using two female parents. Well, probably yes, but not using the process described here: to create a baby from two females, we’d need to take a female cell (any cell would do) and then turn it into a sperm cell. This is possible too! As it happens, a 2021 paper from Emory University described how scientists have recently created sperm cells from pluripotent cells in rhesus macaques. If viable sperm cells can be created, then they can be used to fertilize eggs, which would give us offspring with two female parents. (In this case, all of the babies would be female.)

But at least in principle, it may soon be possible for two men to have a child where both of them are the child’s genetic father.

Gain-of-function research on viruses justifies itself with a scientific error

We still don’t know where Covid-19 started, although we’re pretty sure it started in or near the city of Wuhan, China. The leading theories are that it started either in the Huanan Wholesale Seafood Market (in Wuhan), a live animal food market, or in the Wuhan Institute of Virology (WIV), a large virus research center in another part of the city.

We might never know, because we’d need access to all of the viruses being studied at WIV in late 2019, and those viruses might not even exist any longer.

I’ve been on the fence about this question since the pandemic started (as I wrote here and here and here), in part because we just don’t have enough data. However, I’m now starting to lean more strongly towards the hypothesis that the Covid-19 virus started in the Wuhan Institute of Virology. I just listened to the interview that Sam Harris did with science journalist Matt Ridley and biologist Alina Chan, who together wrote an entire book on the origins of Covid-19, and the evidence they compiled is compelling.

Let’s look at a few key points.

First, the virus itself, SARS-CoV-2, almost certainly originated in bats, and those bats almost certainly came from caves in southern China, over 1000 kilometers away from Wuhan. The bats didn’t get to Wuhan on their own.

So either someone transported bats to the Huanan Seafood Market, or they transported viruses from the bats to WIV. These are our choices.

Second, WIV was doing research on coronaviruses for years. Their scientists traveled regularly to caves in southern China to find novel viruses, and they’ve acknowledged that WIV’s labs had bat viruses, including viruses related to SARS-CoV-2, before the pandemic began.

Third–and this point is under some dispute–many scientists have argued that the virus was a naturally-occurring one. However, this doesn’t make it more likely that the virus originated in the seafood market. It’s just as likely that a scientist or technician working at WIV was accidentally infected, and then went home (maybe stopping by the seafood market on the way) and started a worldwide pandemic.

Fourth, it’s hard to believe that it’s merely a coincidence that one of the top virology labs in China just happened to be located in the city where the pandemic began. WIV was not only the foremost lab in China doing work on SARS-like viruses, but they had also previously stated that they intended to do gain-of-function work to make these viruses more pathogenic.

This startling fact emerged when a 2018 grant proposal by EcoHealth Alliance, a US-based nonprofit that was working with WIV, was leaked to the press in 2021. Even though that proposal was never funded, the text describes how EcoHealth would genetically engineer new viruses, taking the spike protein from one bat coronavirus and inserting it into a different one, and then infecting mice to see what happens.

But wait, some will say: we now have peer-reviewed studies pointing to the seafood market as the epicenter of the pandemic. (I wrote about those studies back in March 2022.) However, as Alina Chan and Matt Ridley explained to Sam Harris (and in their book), the Chinese authorities in early 2020 focused all their attention on the seafood market, to the exclusion of anywhere else. They collected loads of samples from people who had been in or near the market, and very little from anywhere else. Thus we seem to have a classic case of confirmation bias: when you only look at the place where you’re convinced the virus originated, and you find some evidence, then you stop looking. We simply don’t know if the virus was anywhere else.

Now, to the main topic for today: the scientific error used to justify gain-of-function research on dangerous viruses, the error that might have led to the Covid-19 pandemic. Let me explain.

Why, one might ask, were scientists from the Wuhan Institute of Virology going out into the wild, to places where humans would not otherwise go, and bringing back deadly viruses?

This doesn’t just happen in China. The US is funding a large effort to do exactly the same thing: through a program called DEEP VZN (”deep vision,” get it?), USAID is funding scientists in the US and in Africa, Asia, and Latin America to venture into unpopulated areas of the jungle, and to find animals carrying viruses that might infect humans. They’re hoping (!) to discover 8,000 to 12,000 new viruses, and they’re particularly interested in viruses that could cause the next pandemic.

Why does anyone do this? Virus hunters believe that through these efforts, they can predict which of these viruses are destined to become the next pandemic. Furthermore, the argument goes, through gain-of-function research, virologists will be able to determine just what the new pandemic strains will look like. Then, armed with this knowledge, they can convince governments and private companies to design, manufacture, and stockpile vaccines against these viruses. This way (the argument goes) when the pandemic emerges, we’ll be ready.

At the center of this scientific strategy is an obvious error about evolution.

I’ll have to get a bit technical to explain here, so bear with me: the genome of the SARS-CoV-2 virus contains about 30,000 bases of RNA. The key protein that lets it infect human cells is called the Spike protein, which is about 1300 amino acids long and is encoded by about 3900 RNA bases. RNA has an alphabet of 4 letters (A, C, G, and U), which means that each of those positions can mutate into one of the other 3 letters. So we have almost 12,000 possible mutations that affect just one base in the Spike protein.

But 2 or more mutations can happen at once, quite easily, and this too could make the virus more infectious. How many double mutants are possible? Choosing any 2 of the roughly 3,900 positions, each changing to one of 3 alternative letters, gives roughly 68 million combinations (see the quick calculation below).
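If you’d like to check that arithmetic yourself, here is the back-of-the-envelope calculation; the ~3,900-base length of the Spike-coding region is the same approximation I used above.

```python
# Back-of-the-envelope count of possible Spike mutations, using the rough
# numbers from the text: ~3,900 RNA bases encode Spike, and each base can
# mutate to one of 3 alternative letters.
from math import comb

positions = 3900       # approximate number of RNA bases encoding Spike
alternatives = 3       # each of A, C, G, U can change to one of 3 others

single_mutants = positions * alternatives                  # one-base changes
double_mutants = comb(positions, 2) * alternatives ** 2    # two-base changes

print(f"single-base mutants: {single_mutants:,}")   # ~11,700
print(f"two-base mutants:    {double_mutants:,}")   # ~68 million
```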

And these aren’t the only mutations that might create a pandemic virus. So we’re supposed to believe that:

  1. gain-of-function experiments in the lab will create precisely the same mutations that would happen in the wild, and
  2. virologists can then predict, based on their experiments, that a virus is likely to cause a pandemic, and
  3. this evidence is so convincing that governments will manufacture and stockpile vaccines based on these experiments, and
  4. that this in turn will allow us to prevent the next pandemic.

Yeah, right. The evolutionary mistake is in the first point above, by the way.

Something happened in Wuhan. You might think that virologists, upon hearing about the gain-of-function research at WIV, would pause and think, oh no, we hope our colleagues’ research didn’t cause the pandemic! But instead, they closed ranks and doubled down.

In case you think I’m exaggerating, consider this: just a month ago, 156 virologists co-authored an article in the Journal of Virology that declared:

“gain-of-function research-of-concern can very clearly advance pandemic preparedness and the development of vaccines and antivirals. These tangible benefits often far outweigh the theoretical risks posed by modified viruses.”

In case that wasn’t clear enough, they assert twice more in the article that gain-of-function research will help us prepare for pandemics.

Virologists have been making this argument for years, and yet their experiments had no benefit at all–none, zero, zip–when we were finally faced with a true pandemic. Why should we believe this claim now?

Instead, it’s possible that gain-of-function research, along with the search for novel viruses in the wild, might have accidentally caused the pandemic.

Let me conclude by emphasizing that the vast majority of research on viruses and infectious diseases is incredibly important. Vaccines, antibiotics, antivirals, and other treatments have saved millions of lives, and the scientists doing this work should be applauded. Shutting down dangerous gain-of-function research–by which I mean research designed to take a virus or bacterium and make it more deadly in humans or in other animals–would only affect a tiny percentage of virologists worldwide. Let’s tell them to stop. If they can’t find something better to do, other scientists can.

Panel recommends new controls on deadly gain-of-function research. Will the government listen?

Illustration by Erik English

This past week, a government-appointed panel of scientists released a new report recommending 13 actions the U.S. government should take to control “gain-of-function” research that has the potential to create deadly new pathogens.

This has been a long time coming, but the first thing I want to point out is that this is just an advisory panel. The government hasn’t done anything yet. Let’s unpack what happened, shall we?

First, the panel is called the NSABB: the National Science Advisory Board for Biosecurity. The new report, which was at least 3 years in the making, was created in response to a decade’s worth of concerns, raised by many scientists (including me - see my previous articles here and here and here, among others), about the dangers of a specific kind of research known as gain-of-function.

What is gain-of-function (GoF) research? Well, it can include many scientific experiments, including some that are perfectly reasonable. But the term has been used most often to refer to experiments that are designed to take a virus such as influenza or SARS-CoV-2 and alter it intentionally to make it more deadly.

This seems crazy, right? Yet it’s been going on in the influenza virus research world for at least a decade, which is why many scientists have raised alarms.

The Covid-19 pandemic gave this issue much greater urgency, after suspicions arose that the Covid-19 virus, SARS-CoV-2, might have emerged (accidentally) from gain-of-function experiments at a major virology lab in Wuhan, China. (It probably didn’t, but we still don’t know for sure, as I’ve explained in previous columns.)

So back to the topic at hand: the new NSABB report. What do they recommend, and will it matter? I don’t want to go through all 13 recommendations, but overall it’s a very good start, if (and only if) the U.S. government takes them seriously and implements them all.

And the virology community is already pushing back - but first let me go into just three of the recommendations.

First, the panel recommends that the government require that all GoF research undergo federal-level review if the work is

“reasonably anticipated to enhance the transmissibility and/or virulence of any pathogen.”

Believe it or not, GoF research that does this kind of thing is going on right now, and there’s no rule saying it must be reviewed first.

Second, the panel recommends that the government only allow such research if there’s simply no better, lower-risk way to gain the same scientific insights. As they put it, scientists who want to do GoF work would have to prove that

“there are no feasible alternative methods ... that poses less risk ... and the risks are justified by the potential benefits.”

That’s a high bar to clear, but it seems eminently reasonable to insist upon it before allowing dangerous GoF research to proceed.

The panel also recommends that the new restrictions on gain-of-function research apply to all research in the U.S., regardless of whether it’s funded by the government. This is an important addition, as illustrated recently when Boston University, after being called out for dangerous gain-of-function experiments on the Covid-19 virus, claimed that they didn’t use NIH funds for this, so (they argued) they didn’t break any rules. Technically, they were correct. This recommendation will close that giant loophole.

There’s much more in the NSABB report, and my primary reaction is that (1) it’s a good start and (2) it’s not nearly enough. I’d like to see the government make a blanket statement that research that will make deadly viruses even more deadly is simply forbidden, at least for now. If someone wants an exception, they could make the case, but I’ve yet to see a good argument for these experiments.

What about that pushback from the virologists that I mentioned above? Well, in a lengthy commentary just published in the Journal of Virology, 156 virologists argue that gain-of-function research is wonderful! And it’s brought so many benefits! Just let us handle this, and don’t worry, they seem to be saying.

To make the benefits explicit, the 156 virologists include a table listing dozens of “useful examples” of gain-of-function research. Let’s look at just two of them.

Example 1: the virologists assert that experiments on a virus called M13 led to faster computers, citing a 2018 article. First, this is nonsense: no one has built a faster computer using a modified M13 virus. Second, the M13 virus is harmless to humans (it only infects bacteria), so it wouldn’t be subject to any regulations on GoF research in human pathogens.

Example 2: this one is even more outrageous. The table lists as a “benefit” an experiment that “established that H5N1 has capacity for mammalian transmissibility.” They then cite a notorious experiment from 2012 in which scientists intentionally modified a deadly bird flu virus (H5N1) in order to make it possible for the virus to be transmitted directly between mammals. This was one of the key experiments that led to the widespread alarm about GoF research in the first place. (I wrote about it back in 2013.)

So no, creating a more-deadly virus and then saying “see? look how dangerous this virus is?” is not what I’d call useful.

Clearly, the virologists who wrote this commentary do not want to see any restrictions at all on the kind of research they do. They just don’t see the need for it. Obviously, I disagree, as do many others, including many virologists who support the NSABB recommendations.

As I wrote at the beginning of this piece, the NSABB report is just a set of recommendations, and the government might not do anything. I hope that the government will implement all of them, and then go even further, and put a stop to the dangerous, sometimes reckless experiments that a very small minority of scientists are engaging in.

We need to study viruses, and we need to control infectious diseases, but we can do this without making pathogens more deadly.