Sadly, the Washington Post once again falls for acupuncture pseudoscience

It’s like playing whac-a-mole. No matter how many times I write a column showing that some wildly implausible practice is nonsense, new articles pop up claiming “Hey, look at this! It really works!”

So I’m going to try to whack another mole, because people can be harmed by bad information, especially when it comes in the form of medical advice.

Recently the Washington Post ran a column under the headline, “Does acupuncture work for chronic pain? Here’s what the science says.” (The column first appeared back in July, but the Post’s website promoted it again just last week.)

Before giving you the Post’s answer, let me give you the correct answer. No! Not “maybe” or “sometimes” or “we’re not sure.” Acupuncture doesn’t treat anything, and it carries a real risk of harm, particularly from infections. I’ll get to that below.

I’ve written on this topic many times before (in 2013, in 2012, in 2010, and more), and I’ve even called out the Washington Post for their pro-acupuncture pseudoscience (see this column, which I wrote in 2016). The physicians over at Science-Based Medicine have debunked more acupuncture studies than I can count; they’ve even created a special webpage (which I highly recommend) dedicated to explaining the bogus claims that acupuncture proponents make.

Acupuncture, in case you don’t know this, is a practice where people who call themselves acupuncturists (they are not doctors) stick needles into your body to “treat” various conditions. The claim is that these needles can manipulate your vital life force, or “qi”, which runs along supposed acupuncture lines throughout your body.

That’s just wrong. Modern biology has taught us a whole lot about human physiology, and there just aren’t any lines with mystical forces flowing through them. There are nerve fibers, true, but acupuncturists don’t use those. (And if their needles were piercing nerves, it would hurt like heck.)

Acupuncture and qi are part of Traditional Chinese Medicine, or TCM, a collection of largely ineffective and sometimes very harmful folk beliefs. TCM’s popularity started to grow in the mid-20th century when Chairman Mao launched a propaganda campaign pushing it. Mao himself never used TCM, but his government couldn’t afford real medicine, so they convinced people that inexpensive folk medicine was just as good. It wasn’t.

But I digress.

Acupuncturists claim to treat many conditions, but they especially like to claim that they can treat chronic pain, for at least a couple of reasons. First, pain is inherently subjective, so the only way to measure if a treatment is working is to ask the patient. This makes it hard to study objectively. And second, pain symptoms usually wax and wane, even without any treatment. Patients usually want treatment when the pain is at its worst, which means once the pain subsides, the patients will give credit to whatever they were doing at that time. So pain is fertile ground for people selling quack treatments.

Now let’s get to that column in the Washington Post. The column promises to tell you “what the science says,” and it quickly gets to the point, saying yes, it does! First it puts forward the logically flawed (and non-scientific) claim that hey, the U.S. Medicare system now covers acupuncture for back pain, so it must be effective.

Ugh, where do I start? Well, like it or not, Medicare approval of a treatment doesn’t mean the treatment works. (And conversely, some treatments that do work aren’t covered by Medicare.) So that’s just a logical fallacy. I wish it were true that Medicare was purely science-driven, but both the federal and state governments have been lobbied for years by acupuncturists (and other purveyors of dubious therapies) to provide public tax dollars to cover their practices. For a deeper dive into these lobbying efforts, I recommend the lengthy takedown by Jann Bellamy explaining that acupuncture is “legalized quackery.”

The Post article then goes on to discuss the science, for which it relies primarily on a single study, a meta-analysis published in 2019 by Andrew Vickers. (The column was written by Dr. Trisha Pasricha, who has sterling credentials, including training at Johns Hopkins Medicine where I also work. Alas, good credentials don’t always mean that you can trust the holder of those credentials, and this is one of those instances.)

Vickers has published multiple meta-analyses, and if he’s shown anything, it’s how easy it is to cherry-pick from the (extensive) acupuncture literature and find studies that prove whatever point you want to make. The Post column asserts that Vickers used 39 “high-quality” studies, but that is debatable. Many of the studies were done in China, which (as Science-Based Medicine physicians David Gorski and Steven Novella have pointed out) virtually never publishes a negative study of acupuncture.

I’ve done a deep dive into one of Vickers’s meta-analyses of acupuncture–an earlier one–for one of my medical school classes, where I use it to illustrate how bad studies can be misreported by the scientists themselves and by the media. I don’t have time to go through it here, but among other problems, Vickers doesn’t seem to understand how placebo controls work.

Here’s what I mean by cherry-picking. Vickers went through hundreds of studies to pick the 39 that he included. One of those supposedly high-quality studies looked at acupuncture for knee arthritis. That study found that acupuncture and sham acupuncture (the placebo arm) had the same small effect on knee pain, and that patients who received no treatment at all reported more pain than patients in either group. The authors of the study (and Vickers) concluded–wrongly–that because acupuncture was better than nothing, it must be working. Wrong! If you don’t beat the placebo, then your treatment fails.

For a drug trial, failing to beat the placebo means the game is over. But with acupuncture, it means “more studies are needed,” and the whac-a-mole game continues.

Oh, and I should add that as far as knee arthritis goes, the reduction in pain in both the acupuncture and placebo groups was much less than has been reported in studies that use ibuprofen.

That’s right, ibuprofen is far better than acupuncture. Not to mention cheaper and more convenient.

As if this weren’t enough, a more recent study has already contradicted the Vickers study, as physician-blogger Steven Novella pointed out in a recent column. Novella wrote that “the evidence is too low quality to conclude that acupuncture works, as desperate as proponents are to say we can reach that conclusion.” So no, Dr. Pasricha, the latest science does not say that acupuncture works. Quite the opposite.

I’m still understating how badly acupuncture has failed every well-designed study of its effectiveness. Studies have shown that placing the needles in random locations works just as well as using so-called acupuncture points. Other studies showed that sham acupuncture, where the needles don’t pierce the skin but subjects believe they do, also works just as well. And “expert” acupuncturists can’t even agree on the locations of acupuncture points.

And don’t get me started on acupuncture and the risk of infection. Acupuncturists aren’t trained in real medicine, and they don’t use proper sterile procedures. This means that they don’t necessarily sterilize their hands, or your skin, at all of those points where they’re plunging needles into you. There have been thousands of reports of infections due to acupuncture (dating back decades), some of them fatal. And because acupuncturists aren’t part of the medical system, we can be virtually certain that infections are under-reported.

Acupuncture isn’t going away any time soon, because people are making money from it, and no matter how many studies show that it’s nothing more than a fiction, those people will keep insisting on more studies. Plus they can point to hundreds of poorly-done studies that claim to show benefits, and argue–as the Post column does too–that “more research is needed.” I’m not making this up: that precise phrase appears in Dr. Pasricha’s article.

There are even scientific journals entirely devoted to acupuncture (here and here, for example), and they make money too, for the for-profit publishers that produce them. So you can be sure that more studies are coming, and some of them will be positive, even though acupuncture is utterly ineffective.

Even so, the Washington Post can and should do better. Here’s my (free) advice for those considering acupuncture: save your money, and just take some ibuprofen.

Good news for "Research Parasites": NEJM takes it back, 8 years later

After years of debate, the National Institutes of Health finally rolled out a data sharing policy early this year, one that should greatly increase the amount of data that biomedical researchers share with the public. This week, three prominent scientists from Yale described, in an op-ed in the New England Journal of Medicine, how “the potential effects of this shift ... toward data sharing are profound.”

For some of us, it’s deliciously ironic that this op-ed appeared in NEJM, which just a few years ago coined the term “research parasites” to describe anyone who wants to make discoveries from someone else’s data. That earlier piece, written in 2016 by the NEJM’s chief editors, was simply dripping with disdain. It caused a huge outcry, including a response from me in these pages and a sharply worded response from the Retraction Watch team, published in Statnews. The editor backed down (slightly) in a follow-up letter just a few days later, but the damage was done.

One interesting consequence was that a group of scientists created a Research Parasite Award, now awarded each year (entirely seriously, despite the tongue-in-cheek name) at a major biomedical conference, for “rigorous secondary data analysis.”

The 2016 op-ed in NEJM was itself a response to a call for greater data sharing published in the New York Times by cardiologists Eric Topol and Harlan Krumholz–and Krumholz, we should note, is a co-author of the latest piece in NEJM. Meanwhile, the former editor of NEJM retired years ago, and it appears that the journal is now ready to join the 21st century, even if it’s a few decades late.

What is all this fuss about? Well, many people outside of the scientific research community probably don’t realize that vast amounts of data generated by publicly-funded research–work that is paid for by government grants–are not usually released to the public or to any other scientists.

On the contrary: in much of biomedical research, data sets collected with government funding are zealously kept private, often forever. The usual reasons for this are simple (although rarely admitted openly): the scientists who collected the data want to keep mining it for more discoveries, so why share it? Sometimes, too, researchers package up the data and sell it, which is completely legal, even though the government paid for the work.

(It’s not just medical research data, either: once I tried to get some data from a paleontologist, only to learn that he treated every fossil he ever collected as his personal property. But that’s a blog for another day.)

Many scientists have been fighting this culture of secrecy for a long time. Our argument is that all data should be set free, at least if it’s the subject of a scientific publication. It’s not just scientists making this argument: starting in the early 2000s, patient groups began to realize they couldn’t even read the studies about their own diseases unless they paid a for-profit journal for access to the paper. Those groups lobbied–successfully, after a years-long fight–for a requirement that any publicly-funded research be published on a free website, not locked behind the doors of private publishers. Their effort led to an NIH database called PubMed Central, which contains the full text of millions of articles.

The new NIH data sharing policy is one consequence of the Open Science movement (which I’m a part of), which argues that science moves much faster when it’s done in the open. This means sharing data, software, methods, and everything else. There’s now a U.S. government website dedicated to Open Science, open.science.gov, which covers more than a dozen federal agencies, including NIH, NSF, and the CDC.

A bit more history: as far as I can tell, the earliest voices for data sharing emerged during the Human Genome Project, an international effort beginning in 1989 that produced the first draft of the human genome in 2001. When a private company (Celera Genomics) emerged in 1998, a dramatic race ensued, and as one strategy for competing, the public groups announced that, in contrast to the private group, they would release all their data openly on a weekly basis, long before publication. That wasn’t how things had worked before.

Very soon after that, scientists in genomics (my own field) realized that all genome data, whether from bacteria, viruses, animals, or plants, ought to be released freely. The publicly-funded sequencing centers received millions of dollars to generate the data, but they weren’t the only ones who could analyze it. NIH and NSF agreed, and pretty soon they required all sequencing data to be released promptly.

This same spirit didn’t touch most medical research, though. Even though far more money–billions of dollars a year in NIH funds–is spent on disease-focused research, data from those studies remained locked up in the labs that got the funds. This is now changing.

As the Yale scientists (Joseph Ross, Joanne Waldstreicher, and Harlan Krumholz) point out in their NEJM editorial, open data sharing has already yielded tremendous benefits. For example, they point out that hundreds of papers have been published using public data from the NIH’s National Heart, Lung, and Blood Institute, including studies that revealed new findings about the efficacy of digoxin, a common drug used to treat heart failure.

The new NIH policy covers all of NIH, not just one institute, and we can hope it will unlock new discoveries by allowing many more scientists to look at the valuable data currently kept behind closed firewalls.

But simply requiring scientists to have a “data management and sharing policy,” as the NIH is now doing, might not be enough. Many thousands of scientific papers already say they share data and materials–but as it turns out, the authors don’t always want to share.

A study published last year illustrated how toothless some current policies are. That study identified nearly 1800 recent papers in which the authors said they would share their data “upon request.” The study’s researchers wrote to all of those authors, only to find that 93% either didn’t respond at all or declined to share their data. That’s right: only 7% of authors shared their data, despite having published a statement that they would.

The NEJM editorial proposes a different solution, one that could be far more effective: putting scientific data into a government repository. This is something the government itself can enforce (because it controls the funding), and once the data is in a public repository, the authors won’t be able to sit on it, as some of them now do.

It’s good to see NEJM joining the open science movement. Science that is shared openly will inevitably move faster, and everyone–except, perhaps, a few data hoarders–will benefit.

A simple trick to make better coffee: cut the static!

 

You’d think that coffee aficionados had tried everything by now, and that few if any tricks remained undiscovered. Well, you could be right–but there’s one trick that most ordinary coffee drinkers probably don’t know, and it’s remarkably easy to do.

I’ll jump right to the punchline, and then I’ll explain the (new) science that explains it. To make richer coffee in the morning, simply spritz a little water on your beans before grinding them. That’s it!

So what happens when you do this, and why does it make better coffee? Well, as explained in this new paper in the journal Matter, by Christopher Hendon and colleagues at the University of Oregon, it’s all about reducing the static electricity that the grinding process creates.

Grinding coffee causes triboelectrification. If you’ve never heard of that, not to worry–neither had I, until I read the paper. Basically, when the beans rub together, they create static, and that makes the ground coffee clump together (and sometimes fly into the air).

Then when you make the coffee, the clumping means that the water flows through the grounds unevenly, extracting less from the coffee than it could. Ideally, all the coffee grounds should be evenly and densely packed, and static electricity prevents that.

Water reduces triboelectrification quite a bit, it turns out.

So what happens? Well, after extensive experimentation–and I do mean extensive–the scientists found that the amount of coffee solids in a cup of espresso increased from 8.2% to 8.9% when they added a bit of water to the beans before grinding. That’s a relative increase of about 8.5%. Richer coffee!
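For anyone who wants to check that last figure, the arithmetic is simple enough to spell out. Here’s a quick sketch in Python (mine, not the authors’), using only the two numbers reported for the espresso experiments and nothing else:

    # Coffee-solids concentration in espresso, as reported above
    solids_dry = 8.2   # percent, beans ground dry
    solids_wet = 8.9   # percent, beans spritzed with water before grinding

    absolute_gain = solids_wet - solids_dry             # 0.7 percentage points
    relative_gain = absolute_gain / solids_dry * 100    # roughly 8.5 percent

    print(f"Absolute gain: {absolute_gain:.1f} percentage points")
    print(f"Relative gain: {relative_gain:.1f}%")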

Reading the paper, I realized these scientists had a lot of fun doing these experiments. They measured the water content in 31 types of coffee, and tried a wide range of settings for their grinders, for the water temperature, and more.


They also roasted their own beans to varying degrees of darkness. They tried dozens of combinations of beans and roasting strategies, measuring the water content after roasting and the amount of static electricity generated upon grinding. They observed that darker roast coffees usually generate finer ground particles, and finer particles in turn generate more static electricity.

They drank a lot of coffee to get this right! But hey, sometimes science requires sacrifices, right?

I should mention that the trick of adding a little water to the beans is already known to some experts, although the precise science behind it was unknown until now. It even has a name (as the paper points out): the “Ross Droplet Technique.”

As the paper concludes, “a few simple squirts of water [may] have solved the problems of clumping, channeling, and poor extractions while aiding in the pursuit of attaining the tastiest espresso.” You only need a few drops of water–give it a try.

One important caveat is that if you use the French press method to make coffee, where the grounds are immersed in water, then this trick won’t make any difference.

What’s next? Well, I should point out that this study focused entirely on espresso. Does it work for regular coffee as well? Probably so, but more research is needed.

Does Taurine Really Extend Life? Maybe.


Readers of this column will know that I’m highly skeptical of dietary supplements. So you might imagine my reaction when I saw headlines a few days ago about “Taurine, the elixir of life?” (at CNN) and “Supplement slows aging in mice and monkeys” (NY Times).

Unlikely, I thought. But I read the scientific article behind these reports, and now I’m intrigued.

What is taurine? And could it really slow down aging? Well, it seems like it could, just maybe. A new study published last week in Science (one of the top journals in all of science) seems to show, for the first time, that taking large doses of taurine, an amino acid, might provide a host of benefits, including slowing down the aging process.

First question first: what is taurine? It’s an amino acid, but it’s not one of the 20 amino acids that comprise all the proteins in your body. It’s a slightly different one, and our bodies naturally produce it in small amounts. We need more than our bodies produce when we’re very young, but we get it from breast milk, and it’s added as a supplement to infant formula.

We also get extra taurine from our diet: the best foods for taurine are meats, especially shrimp and other shellfish, but also beef and the dark meat in chicken and turkey.

What did the new Science paper show? Well, first the authors (from Columbia University, India’s National Institute of Immunology, and the Sanger Institute in the UK) describe how taurine levels clearly decline with age in humans and other mammals. Now, just because taurine declines doesn’t mean that replacing it will reverse the aging process, but at least it establishes plausibility.

They then describe a series of experiments, mostly in mice but also in monkeys, where they fed the animals relatively large amounts of taurine each day, and the results were pretty darned impressive:

  1. Life span in the mice increased by 10-12%.
  2. In mice that started taurine supplements in middle age, life span increased by 18-25%.
  3. Bone density increased in female mice and osteoporosis seemed to be cured.
  4. Muscle strength increased in both males and females compared to mice who didn’t get taurine.
  5. The number of senescent cells–cells that don’t do much except emit damaging inflammatory signals–seemed to be reduced.

Of course, there’s always a big caveat with results in mice: they’re mice, not humans! And many, many times we’ve seen results in mice that just don’t carry over into humans. So the scientists also did a study (a smaller one) in monkeys, which are much closer to humans genetically. This also had some very good results:

  1. Bone density increased in the spine and legs.
  2. Body fat was lower than it was in monkeys that didn’t get taurine.
  3. Several measures of inflammation decreased.

Monkeys live a lot longer than mice, so the scientists don’t yet know if taurine increases the monkeys’ life span, but all the signs are promising. I was skeptical going into this article, but I couldn’t find any obvious flaws.

In an accompanying article in Science, U. Penn’s Joseph McGaunn and Joseph Baur point out that we don’t know for sure what the risks of long-term supplementation with taurine would be, but taurine is already widely consumed in baby formula and in energy drinks, with no known ill effects.

However, the amounts used in the Columbia study were very high, much higher than you’d get from energy drinks or even from standard taurine supplements. I looked up a few, and typical formulations offer 1000 or 2000 mg (1-2 grams) per day. The dose given to the monkeys in the study, converted to a 150-pound person, is equivalent to about 5500 mg (5.5 grams) per day. That’s not very much by weight, and it would be easy enough to take this much taurine, but no one knows the effects in humans of such high doses.
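To put that number in perspective, here’s a little back-of-the-envelope arithmetic, again as a Python sketch of my own. It uses only the figures above–a 150-pound person, a roughly 5.5-gram daily dose, and a typical 1-gram supplement serving–so treat it as illustration, not dosing advice:

    # Rough perspective on the human-equivalent dose quoted above
    LB_PER_KG = 2.20462

    body_weight_lb = 150
    body_weight_kg = body_weight_lb / LB_PER_KG                # about 68 kg

    daily_dose_mg = 5500                                        # ~5.5 grams per day
    dose_per_kg = daily_dose_mg / body_weight_kg                # about 81 mg per kg per day

    supplement_serving_mg = 1000                                # a common 1-gram serving
    servings_per_day = daily_dose_mg / supplement_serving_mg    # 5.5 servings

    print(f"Implied dose: ~{dose_per_kg:.0f} mg per kg of body weight per day")
    print(f"That's {servings_per_day:.1f} typical 1-gram servings every day")

In other words, the study dose works out to several times the amount in an ordinary supplement serving, which is why the unknowns about long-term high-dose use matter.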

The bottom line: this study is really intriguing. More studies are needed, especially to measure the effects of taurine on humans, but all the signs are positive. I’ll be watching closely to see if the effects in mice and monkeys carry over, and if they do, we may all be taking taurine supplements one day. And I just ordered some taurine powder for myself–why not?

AI is crying out for regulation, while virologists doing gain-of-function research take the opposite tack. Why?

Over the past few months, prominent tech leaders have been raising alarms about the dangers of AI, and politicians are following suit. Just last week, the Senate held hearings on how to regulate AI. The tech industry itself is calling for regulation: just a few days ago, Microsoft’s CEO testified before Congress and asked the federal government “to govern AI at every part of its lifecycle.”

One of the founders of AI, Geoffrey Hinton, just left his high-level position at Google so that he could criticize AI without any constraints from his employer. And a couple of weeks ago, I asked the AI program ChatGPT if we should trust AI. No way, it told me.

This is all kind of surprising. AI experts seem to be saying “stop us before we do any harm.” It’s also kind of refreshing: usually the private sector wants the government to stay out of its affairs.

Now contrast all this with the behavior of virologists on a completely different technology: gain-of-function research on deadly pathogens. As I’ve explained before, gain-of-function (GoF) research takes a deadly pathogen, such as the influenza virus or the Covid-19 virus, and modifies it to make it even more deadly. Many scientists, including me, find this work both alarming and of little benefit, and we’ve been calling for it to be regulated for a decade now.

However, unlike AI experts, many virologists are opposed to any hint of regulation of their GoF work. On the contrary: just recently, 156 leading virologists jointly authored an opinion piece that lauded the many wonderful benefits of GoF, and pooh-poohed any risks.

Don’t worry your pretty little heads, these virologists seem to be saying to the rest of the world. We know what we’re doing, and it’s not that risky. Plus it’s great! Not to put too fine a point on it, but I disagree.

What’s caught my attention this week is not just the contrast in their willingness to be regulated, but the question of how regulation might actually work in each case.

Simply defining what we mean by “AI” today is probably impossible. The number and variety of programs that incorporate some form of artificial intelligence are vast, and they already affect our lives in many ways. The recent alarm bells were set off by one particular type of AI, known as large language models (LLMs), which can fool people in a new way. For several years now, alarm bells have also been sounding (justifiably so) over “deep fakes,” images or videos that appear real but are completely made up. These use a completely different technology.

So even if we agree that AI needs to be reined in, no one can really say with any precision what that would mean.

Now let’s look at gain-of-function research on pathogens. One of the biggest objections that some virologists have made, on many occasions, is that there’s no way to define just the harmful research, so we really should just leave it all alone.

For example, the recent commentary by 156 virologists said that “gain-of-function approaches incorporate a large proportion of all research because they are a powerful genetic tool in the laboratory.” This is nonsense. It’s equivalent to saying “hey, this is science, and you don’t want to ban all science, do you?”

They also defend GoF by trotting out examples of research that were beneficial, such as the recent rapid development of Covid-19 vaccines. As was pointed out recently in the biology journal mBio, this is a red herring: it’s just not that difficult to define GoF “research of concern” and distinguish it from other, much more mundane virology and bacteriology research.

In fact, biologists have already done this, in a recent set of proposed new guidelines for regulating GoF research. As Hopkins researcher Tom Inglesby put it, “if you are going to make a more transmissible strain of Ebola, then you need to have the work reviewed by the U.S. government.”

So why do the AI scientists say “please regulate us” while many virologists say “leave our gain-of-function work alone”? It’s not because it’s too hard to define one or the other–if it were, the AI experts wouldn’t even consider regulation as a possibility.

No, it seems that it’s all about money. AI is thriving in both academia and industry, with tremendous growth ahead. The people calling for regulation just aren’t worried about money. They know that AI will continue to thrive, and they are calling for regulation because they seem to have genuine concerns about the threat that AI poses to society.

On the other hand, the world of gain-of-function research is very small, and almost entirely dependent on government funding. Although I’m sure they’ll deny it, these scientists are worried that they’ll lose their grants if even a small portion of GoF research is shut down. They may also be worried about more direct threats to their finances: the conflict-of-interest statement on that recent article by 156 virologists goes on for 731 words. (That is one of the longest conflict-of-interest statements I’ve ever seen on a scientific article.)

I decided to ask an AI (ChatGPT) these questions. When asked about regulating GoF, it replied with a long answer that concluded,

“Ultimately, the decision to regulate gain-of-function research involves weighing the potential risks and benefits. Striking the right balance requires collaboration between scientists, policymakers, and relevant stakeholders to establish guidelines, promote responsible research practices, and implement appropriate oversight mechanisms.”

ChatGPT’s answer about regulating AI was similar, concluding:

“Regulation can play a crucial role in ensuring that AI systems are developed and deployed responsibly... The specific nature and extent of regulation will likely depend on the application and level of risk associated with AI systems. Striking the right balance between regulation and fostering innovation is essential to ensure that AI technology benefits society while safeguarding against potential risks and ethical concerns.”

Overall, not bad advice. Now if only those virologists would listen.

Should we allow AI to control the battlefield? AI itself thinks not.

Artificial Intelligence, or AI, seems to have finally arrived. With the introduction of ChatGPT last November, millions of people suddenly discovered that AI was far, far more than just a research activity. The range and sophistication of ChatGPT’s answers to questions across a wide range of disciplines is, frankly, pretty stunning.

AI is already in lots of places where you might not even realize it. Google Translate has been using AI for years, and it’s remarkably good, although nowhere near as good as a human translator. The technology that Pandora uses to customize your music is a type of AI, as is the technology behind Tesla’s self-driving cars. Within my own field, the program AlphaFold2 was a true breakthrough in scientists’ ability to predict the structure of proteins.

Along with these apparently beneficial developments, though, comes a great deal of concern. As AI gets better and better, can we continue to trust it to make decisions for us? It’s one thing if an AI recommends a song that we don’t like, which we can just ignore, but if AI is driving our cars, then mistakes might be much more costly.

And there are far riskier tasks that AI might be doing soon: what if the military starts using AI on the battlefield? One might be tempted to say no, of course they won’t do that, but what if another country starts using AI this way? Can we simply ignore that?

To be honest, I haven’t been worried, at least not yet. I have a little secret to confess: I did my Ph.D. thesis in AI, in the area called machine learning. I haven’t been doing AI research for the past 25 years, but I think I still understand at least a little of the basics behind the technology.

Thanks mostly to the attention that ChatGPT is getting, though, there’s suddenly a huge concern that AI represents a danger to society, although no one is exactly sure why. President Biden had a meeting just this week with the CEOs of Google, Microsoft, and OpenAI (the creators of ChatGPT) to talk about these concerns, where he told the execs that “What you’re doing has enormous potential and enormous danger.”

Of course, politicians have no idea what AI technology is–although I had to laugh at this quote from the White House press secretary: “The president has been extensively briefed on ChatGPT and knows how it works.” Um, no he doesn’t. I’m skeptical that the CEOs of Google and Microsoft know how it works either. However, you don’t need to understand the technology to understand the dangers.

Just recently, one of AI’s early pioneers, Geoffrey Hinton, has been speaking out to warn the public that AI has reached a point where “there’s a danger of something really bad happening.” Not surprisingly, other experts, such as German scientist Jürgen Schmidhuber, disagree.

The chair of the FTC, Lina Khan, says we can regulate AI effectively, and perhaps we can. But I’m not so sure.

Much of the concern in the press seems to be about the use of AI to spread misinformation. For example, AI can help create “deep fakes” that can be used to convince people that, say, an election was stolen even when it wasn’t. That’s a legitimate concern, but it’s happening even without AI.

A better question, which a family member just asked me, is should we be worried about AI making decisions on the battlefield? I thought I should take this question directly to the source, so I asked ChatGPT itself. The answer I got was remarkably clear and direct; I’ll quote it here in full:

“Yes, people should be concerned about AI making decisions on the battlefield. There are several reasons for this.

Firstly, AI lacks human empathy, judgment, and intuition. In a complex and dynamic environment such as the battlefield, human decision-making is often guided by experience, intuition, and an understanding of the broader context. AI lacks the ability to consider these intangible factors and may make decisions solely based on pre-programmed algorithms or data inputs, which could lead to unintended consequences or even catastrophic errors.

Secondly, the use of AI in military decision-making raises significant ethical concerns. For instance, an AI system may be programmed to prioritize the protection of friendly forces at the expense of civilian lives, leading to potentially devastating consequences for non-combatants caught in the crossfire. Additionally, AI systems may be susceptible to hacking or manipulation, which could result in unintended consequences or a loss of control over the decision-making process.

Finally, there is the risk that reliance on AI systems may lead to the erosion of human decision-making capabilities, potentially diminishing the overall quality of military decision-making and increasing the likelihood of errors or unintended consequences.”

So there you have it: ChatGPT itself gives us several reasons why it’s a terrible idea to let AI make decisions that might cost human lives. We should listen to what it’s telling us, because as it gets more intelligent (and it will), it might decide not to be so honest in the future. And I have to say that its reasoning here is both well-written and compelling.

One thing that worries me as well is that the very term “artificial intelligence” is much too broad. We should be concerned about giving any technology the power to control our lives, whatever someone calls it. But when an AI itself says “don’t trust me,” then I think we might want to listen.