Over the past few months, prominent tech leaders have been raising alarms about the dangers of AI, and politicians are following suit. Just last week, the Senate held hearings on how to regulate AI. The tech industry itself is calling for regulation: only days ago, Microsoft’s CEO testified before Congress and asked the federal government “to govern AI at every part of its lifecycle.”
One of the pioneers of AI, Geoffrey Hinton, recently left his high-level position at Google so that he could criticize AI without any constraints from his employer. And a couple of weeks ago, I asked the AI program ChatGPT if we should trust AI. No way, it told me.
This is all kind of surprising. AI experts seem to be saying “stop us before we do any harm.” It’s also kind of refreshing: usually the private sector wants the government to stay out of its affairs.
Now contrast all this with the behavior of virologists on a completely different technology: gain-of-function research on deadly pathogens. As I’ve explained before, gain-of-function (GoF) research takes a deadly pathogen, such as the influenza virus or the Covid-19 virus, and modifies it to make it even more deadly. Many scientists, including me, find this work both alarming and of little benefit, and we’ve been calling for it to be regulated for a decade now.
However, unlike AI experts, many virologists are opposed to any hint of regulation of their GoF work. On the contrary: just recently, 156 leading virologists jointly authored an opinion piece that lauded the many wonderful benefits of GoF, and pooh-poohed any risks.
Don’t worry your pretty little heads, these virologists seem to be saying to the rest of the world. We know what we’re doing, and it’s not that risky. Plus it’s great! Not to put too fine a point on it, but I disagree.
What’s caught my attention this week is not just the contrast in their willingness to be regulated, but the question of how regulation might actually work in each case.
Simply defining what we mean by “AI” today is probably impossible. The programs that incorporate some form of artificial intelligence are vast in number and variety, and they already affect our lives in many ways. The recent alarm bells were set off by one particular type of AI, known as large language models (LLMs), which can fool people in a new way. For several years now, alarm bells have also been sounding (justifiably so) over “deep fakes,” images or videos that appear real but are completely made up, and those rely on entirely different technology.
So even if we agree that AI needs to be reined in, no one can really say with any precision what that would mean.
Now let’s look at gain-of-function research on pathogens. One of the biggest objections that some virologists have made, on many occasions, is that there’s no way to define just the harmful research, so we really should just leave it all alone.
For example, the recent commentary by 156 virologists said that “gain-of-function approaches incorporate a large proportion of all research because they are a powerful genetic tool in the laboratory.” This is nonsense. It’s equivalent to saying “hey, this is science, and you don’t want to ban all science, do you?”
They also defend GoF by trotting out examples of research that were beneficial, such as the recent rapid development of Covid-19 vaccines. As was pointed out recently in the biology journal mBio, this is a red herring: it’s just not that difficult to define GoF “research of concern” and distinguish it from other, much more mundane virology and bacteriology research.
In fact, biologists have already done this, in a recent set of proposed new guidelines for regulating GoF research. As Johns Hopkins researcher Tom Inglesby put it, “if you are going to make a more transmissible strain of Ebola, then you need to have the work reviewed by the U.S. government.”
So why do the AI scientists say “please regulate us” while many virologists say “leave our gain-of-function work alone”? It’s not because one is harder to define than the other; if it were, the AI experts wouldn’t even consider regulation as a possibility.
No, it seems that it’s all about money. AI is thriving in both academia and industry, with tremendous growth ahead. The people calling for regulation just aren’t worried about money. They know that AI will continue to thrive, and they are calling for regulation because they seem to have genuine concerns about the threat that AI poses to society.
On the other hand, the world of gain-of-function research is very small, and almost entirely dependent on government funding. Although I’m sure they’ll deny it, these scientists are worried that they’ll lose their grants if even a small portion of GoF research is shut down. They may also be worried about more direct threats to their finances: the conflict-of-interest statement on that recent article by 156 virologists goes on for 731 words. (That is one of the longest conflict-of-interest statements I’ve ever seen on a scientific article.)
I decided to put both of these questions to an AI (ChatGPT). When asked about regulating GoF, it replied with a long answer that concluded:
“Ultimately, the decision to regulate gain-of-function research involves weighing the potential risks and benefits. Striking the right balance requires collaboration between scientists, policymakers, and relevant stakeholders to establish guidelines, promote responsible research practices, and implement appropriate oversight mechanisms.”
ChatGPT’s answer about regulating AI was similar, concluding:
“Regulation can play a crucial role in ensuring that AI systems are developed and deployed responsibly... The specific nature and extent of regulation will likely depend on the application and level of risk associated with AI systems. Striking the right balance between regulation and fostering innovation is essential to ensure that AI technology benefits society while safeguarding against potential risks and ethical concerns.”
Overall, not bad advice. Now if only those virologists would listen.