A simple trick to make better coffee: cut the static!

You’d think that coffee aficionados had tried everything by now, and that few if any tricks remained undiscovered. Well, you could be right–but there’s one trick that most ordinary coffee drinkers probably don’t know, and it’s remarkably easy to do.

I’ll jump right to the punchline, and then I’ll explain the (new) science behind it. To make richer coffee in the morning, simply spritz a little water on your beans before grinding them. That’s it!

So what happens when you do this, and why does it make better coffee? Well, as explained in this new paper in the journal Matter, by Christopher Hendon and colleagues at the University of Oregon, it’s all about reducing the static electricity that the grinding process creates.

Grinding coffee causes triboelectrification. If you’ve never heard of that, not to worry–neither had I, until I read the paper. Basically, when the beans rub together, they create static, and that makes the ground coffee clump together (and sometimes fly into the air).

Then when you brew the coffee, the clumping means that the water flows through the grounds unevenly, extracting less of the coffee than it could. Ideally, all the coffee grounds should be evenly and densely packed, and static electricity prevents that.

Water reduces triboelectrification quite a bit, it turns out.

So what happens? Well, after extensive experimentation–and I do mean extensive–the scientists found that the amount of coffee solids in a cup of espresso increased from 8.2% to 8.9% when a bit of water was added to the beans before grinding. That’s a relative increase of 8.5%. Richer coffee!
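
If you want to check that 8.5% figure yourself, the arithmetic is simple. Here’s a minimal sketch in Python, using the two extraction percentages quoted above:

```python
# Relative increase in extracted coffee solids, using the
# two percentages reported in the paper.
before, after = 8.2, 8.9  # coffee solids, as percent of the espresso

relative_increase = (after - before) / before * 100
print(f"Relative increase: {relative_increase:.1f}%")  # -> 8.5%
```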

Reading the paper, I realized these scientists had a lot of fun doing these experiments. They measured the water content of 31 types of coffee, and tried a wide range of settings for their grinders, for the water temperature, and more.

They also roasted their own beans to varying degrees of darkness. They tried dozens of combinations of beans and roasting strategies, measuring the water content after roasting and the amount of static electricity generated upon grinding. They observed that darker roast coffees usually generate finer ground particles, and finer particles in turn generate more static electricity.

They drank a lot of coffee to get this right! But hey, sometimes science requires sacrifices, right?

I should mention that the trick of adding a little water to the beans is already known to some experts, although the precise science behind it was unknown until now. It even has a name (as the paper points out): the “Ross Droplet Technique.”

As the paper concludes, “a few simple squirts of water [may] have solved the problems of clumping, channeling, and poor extractions while aiding in the pursuit of attaining the tastiest espresso.” You only need a few drops of water–give it a try.

One important caveat is that if you use the French press method to make coffee, where the grounds are immersed in water, then this trick won’t make any difference.

What’s next? Well, I should point out that this study focused entirely on espresso. Does it work for regular coffee as well? Probably so, but more research is needed.

Vaccines Show Surprising Link To Reduced Risk Of Alzheimer’s

Well, the anti-vaccine community is not going to like this one.

A growing body of evidence, buttressed by a new report out last month, shows that people who are vaccinated have a significantly lower risk of Alzheimer’s disease.

Yes, that’s right: in addition to the obvious benefit of protecting you from infections, multiple different vaccines have now been shown, in two large studies, to be associated with fewer cases of Alzheimer’s.

How big is the effect? Well, the earlier study, published about a year ago, looked at the flu vaccine. Scientists from the University of Texas Health Science Center, led by neurologist Paul Schulz, collected data on more than 1.8 million older patients from around the U.S., of whom half had received the flu vaccine and half had not. Their average age was 74 years, and the scientists examined health records for four subsequent years to see how many developed signs of Alzheimer’s in that time.

In the vaccinated group, nearly 48,000 patients (out of 936,000) developed Alzheimer’s during the four-year follow-up period, about 5.1% of all patients. That might sound like a lot, but in the unvaccinated patients, nearly 80,000 developed Alzheimer’s. To put it another way, patients who got the flu vaccine had a 40% lower risk of Alzheimer’s, 5.1% versus 8.5%.

It appears this benefit isn’t specific to the flu vaccine, though. In the newer study, published this September, the same group from the University of Texas looked at three more vaccines: the Tdap vaccine (tetanus, diphtheria, and pertussis), the shingles vaccine, and the pneumococcal vaccine. They were able to collect records on 1.65 million people, among whom more than 500,000 had received one or more of the vaccines. All of the patients were over 65 years old.

Remarkably, all three of the vaccines showed similar and quite significant benefits against Alzheimer’s disease. In an eight-year follow-up period, the risk of Alzheimer’s was 30% lower (7.2% versus 10.2%) in patients who had the Tdap vaccine versus those who hadn’t. For the shingles vaccine, the reduction in risk was 25%, 8.1% versus 10.7%. And for the pneumococcal vaccine, the risk was 27% lower, 7.9% versus 10.9%.
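
For anyone who wants to check the arithmetic, here’s a minimal sketch of how those relative risk reductions fall out of the reported rates. (The small differences from the published 30%, 25%, and 27% figures are just rounding, since the papers report the underlying rates to more digits than the percentages quoted above.)

```python
# Relative risk reduction computed from the rounded rates quoted above;
# the papers' exact rates carry more digits, hence small rounding gaps.
def risk_reduction(vaccinated: float, unvaccinated: float) -> float:
    """Percent lower risk in the vaccinated group."""
    return (unvaccinated - vaccinated) / unvaccinated * 100

rates = {
    "flu":          (5.1, 8.5),
    "Tdap":         (7.2, 10.2),
    "shingles":     (8.1, 10.7),
    "pneumococcal": (7.9, 10.9),
}

for vaccine, (v, u) in rates.items():
    print(f"{vaccine}: {risk_reduction(v, u):.0f}% lower risk")
# flu: 40%, Tdap: 29%, shingles: 24%, pneumococcal: 28%
```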

(Note that this study looked at the older shingles vaccine. The newer one, Shingrix, has only been widely available since 2017. The scientists hypothesize that the newer vaccine, because it’s more effective against shingles, may provide even greater protection against Alzheimer’s.)

So how might this work? Is Alzheimer’s caused by an infection, and the vaccines prevent Alzheimer’s by preventing the infection? Well, the short answer is no, probably not. Although we don’t know the cause of Alzheimer’s, we don’t have any strong evidence that it’s the direct result of an infection.

Furthermore, with data showing that at least four different vaccines offer similar levels of protection against Alzheimer’s, the evidence indicates that the effect cannot be specific to the flu vaccine, the Tdap vaccine, or any of the other vaccines or their ingredients.

The University of Texas group that published the study speculated that vaccines might work to protect against Alzheimer’s by “long-term reprogramming of innate immune cells,” also called “trained immunity.” This hypothesis is far too complex to try to explain here–and it involves immunology, which I don’t really understand myself and which is “absurdly intricate” as science writer Ed Yong put it. So let’s just say that this is an intriguing idea that needs more research.

And for anyone who thinks that getting an infection and recovering (instead of getting vaccinated) might offer the same benefit, that doesn’t seem to work: another study looked at precisely this question and found no connection between the number of influenza infections and the risk of Alzheimer’s. So the vaccine seems to confer a benefit that the disease itself does not. (I mention this because a popular–but wrong–claim of anti-vaccine activists is that you get better immunity from infection by catching the disease and then recovering. Not a good idea.)

Also, I have to add that these studies report correlation, not causation. So even though the effects are large and significant, we don’t know for certain that vaccines do something to directly prevent Alzheimer’s. Maybe people who get vaccinated also have other behaviors that help them avoid Alzheimer’s–although the study design tried to minimize this possibility.

Vaccines have enormous, extensively documented benefits: they prevent suffering and disease, and they’ve saved countless millions of lives. And now it appears that they offer something else: a lower risk of Alzheimer’s. So if you haven’t received your boosters, now you have one more reason to get them.

Most of our common cold medicines don’t work

We still don’t have a cure for the common cold. In your local pharmacy, though, you can find many shelves filled with products that claim to treat the symptoms.

Well, it turns out that one of the most widely used ingredients, long believed to be effective at treating congestion, doesn’t work at all. The ingredient, phenylephrine, has been in use for decades, and it’s in many common cold medicines that are taken by millions of people each year, including Nyquil Severe, Sudafed PE, Robitussin CF, Tylenol Cold & Flu and others.

It turns out that phenylephrine was never properly studied for effectiveness. How can this be? It’s FDA approved, which usually means it has to be safe and effective, right? Not exactly, as I explain below.

As for phenylephrine: well, the studies have finally been done, and last month an FDA panel unanimously concluded, after reviewing the results, that phenylephrine is “useless and no better than a placebo.” It probably won’t cause you any harm, but it won’t have any effect on your stuffy nose.

To understand how this happened—and why it might be true of many other FDA-approved remedies that you can buy at the pharmacy—you have to know how the FDA approval process has changed over the years. Pharmacists Randy Hatton and Leslie Hendeles, who worked for years trying to get phenylephrine properly reviewed, explained in a recent New York Times opinion piece that when the FDA was originally created, way back in 1938, it was only required to ensure that drugs were safe.

At the time, that was tremendous progress. Prior to 1938, drug manufacturers could claim pretty much whatever they wanted to.

But it wasn’t until 24 years later, in 1962, that Congress required the FDA to show that drugs were also effective. As a result, the thousands of drugs approved prior to 1962 were mostly safe, but they might not actually treat the diseases they were intended to treat. After 1962, the FDA created a process to re-review those previously approved drugs, but it has never had enough staff or funding to check most of them. So phenylephrine was never properly reviewed, until now.

What’s next for phenylephrine? The FDA might ban it from the market, but that will take time, and it might not happen, because the FDA doesn’t have to follow the advice of its panels, although it usually does. Meanwhile, you can still buy cold remedies with phenylephrine, and they still claim to treat congestion.

And don’t even get me started on other treatments that are not only ineffective but that aren’t even subject to review by the FDA, such as homeopathic remedies. These include Zicam, which claims in large print on the front of its packaging that it “shortens colds.” It doesn’t, and Zicam’s manufacturer doesn’t even have to prove it, because the product is homeopathic. If you zoom way in on one of the labels on the Zicam website, you’ll find the disclaimer that “Claims [are] based on traditional homeopathic practice, not medical evidence. Not FDA evaluated.” On some of the packages, I couldn’t even find the small print.

(Aside: Congress protected homeopathic preparations from FDA scrutiny way back in 1938, thanks to a homeopath who was also a U.S. Senator, and who helped write the original FDA legislation.)

I wrote about Zicam and other ineffective cold remedies in 2014 (“The Top Five Cold Remedies that Do Not Work”), and that advice still holds. Now we can add another one to the list.

We simply don’t have any drugs that work particularly well for the common cold, despite the many claims you can find online and on the labels of so-called cold remedies. The best thing you can do is just drink warm liquids such as tea or lemon-infused water, stay home and get plenty of rest.

West Virginia Says It’s Too Poor To Support Its Only Major University

As students and faculty prepared for the start of the new academic year this month, the president of West Virginia University, Gordon Gee, made a startling announcement: he’s eliminating 169 faculty jobs, about 16% of the full-time professors, and dropping 32 undergraduate and graduate degree programs, including all of its foreign language programs.

No foreign language classes? No French, Spanish, Chinese, Russian, anything other than English? Apparently not–and in response to questions, the university suggested that students might be offered an online app instead. It’s not just humanities, either: WVU is also getting rid of its graduate program in mathematics.

These cuts came as a shock to the students and faculty of WVU, who are understandably dismayed. The university administration says it’s simply about money: the school faces a $45 million deficit that it must somehow close.

I’m not buying it. What really seems to be going on (and this is not unique to West Virginia) is that the state has steadily cut its support for WVU over the years, creating ever-larger deficits. The problem now is that the current president, Gordon Gee, seems to be saying that’s all fine with him. He didn’t even ask the state if it could help before he imposed these drastic cuts: as quoted in the Washington Post, Gee said “If I had gone down and asked for $45 million from the state legislature, they would have thrown me out.” Perhaps, but did you even try?

In the same article, State Senator Eric Tarr (R) also said he “wholeheartedly” believes that WVU president Gee is doing the right thing. And yet in the same interview, Sen. Tarr said “We have never not supported WVU.”

Hmm, really? Over the past decade, the state has cut its support for WVU by 36%, or nearly $100 million. So it’s pretty clear that the legislature is not supporting WVU, at least not like they did in the past.

With support like that, who needs detractors?

According to one analysis, if West Virginia’s legislature had simply kept WVU’s funding flat for the past decade, WVU’s deficit right now would be far smaller, just $7.6 million rather than $45 million.

President Gee, did you ask the legislators about that?

I can’t help noting that WVU just renewed Gee’s contract for another year, at $800K per year. So they do seem to be able to come up with money for administrators. Gee’s only the 4th highest-paid WVU employee, with the football and basketball coaches each making over $4 million. I wasn’t able to find out how much WVU paid the consulting company rpk Group, whom Gee hired to come up with these severe cuts, but I’ll bet they weren’t cheap.

The state of West Virginia has only one R1-class research university, West Virginia University, with some 25,000 students on its main campus in Morgantown. If this trend continues, the state might no longer have even one major research university. That’s too bad.

And despite what WVU’s Gee and the consultants he hired might claim, this is not just about money; it’s about priorities. Currently, only five states don’t have an R1-level university: Alaska, Idaho, South Dakota, Vermont, and Wyoming. Does West Virginia want to join that club?

It’s true that West Virginia is one of the poorest states in the U.S., with a poverty rate of 16.8%. But can it really not afford even one major university? For West Virginia residents, WVU is the only R1 university that most of them can afford, because in-state tuition is far cheaper than out-of-state tuition at any other state university.

The way to fix this problem isn’t to eliminate the core elements of a college education, which include foreign languages. The fix is simple: the legislature should step up and say they truly support WVU, and cover its deficit without eliminating any programs. Then they should find a university president who is willing to ask for the funds that a major university needs. I’m sure they can find someone willing to do the job, maybe for even less than $800K per year.

Let’s Consider The Benefits Of Raw Milk

As reported this past week in the New York Times and elsewhere, more than half the states in the U.S. now allow the sale of raw milk. In some states, you can only buy raw milk at a farm, but in 14 states you can buy it in stores.

So why are some people excited about raw milk? Well, the implication is that if it’s less processed, it must be better, right? After all, we learned recently that ultra-processed foods are pretty bad for you. (There’s pretty good evidence for that.) And milk is one of those products that people associate with health. For the first few months of life, we humans live entirely on milk–but that’s milk from a woman’s breast.

Cow milk is a different matter. It’s not nearly so, shall we say, clean.

It only took a minute for me to find claims that raw milk has all kinds of benefits: supposedly it helps you digest food better, it has more vitamins and minerals, it boosts your immune system, and it’s better for people with lactose intolerance. (I won’t link to any of those claims because they’re all bogus, and I don’t want to give scammers any more web traffic.)

Before listing some of the special properties of raw milk, let’s consider how it’s processed. The main thing we do to milk is pasteurization: a very simple process in which the milk is heated just enough to kill the harmful bacteria in it. It’s not boiled, and the mild heating doesn’t destroy milk’s nutrients.

Pasteurization is named after the French scientist Louis Pasteur, who invented the process in the 1860s. Originally it was used to preserve wine, but people eventually discovered it worked just as well for milk and other foods. Back in the 1860s, people didn’t even know that bacteria caused disease, and Pasteur was one of the pioneers who figured that out.

So what does raw milk have in it? Well, it can carry a long list of delightful bacteria, including the ones that cause tuberculosis, brucellosis, scarlet fever, and salmonellosis. And of course there’s E. coli, including strains that can cause terrible diarrhea.

Bacteria just love milk, it turns out. Raw milk is teeming with them, and many of them are really not good for humans. Fortunately for most of us, pasteurization kills bacteria very effectively.

And there’s one more special thing you get in raw milk that you won’t get in pasteurized milk: cow poop. Funny thing, though: none of the advertisements I found touting the benefits of raw milk mentioned this.

So sure, get your raw milk and live like it’s 1850 again! Never mind that pasteurization has saved countless millions of lives. Seriously, folks: there’s no reason whatsoever to drink raw milk. And states that allow the sale of raw milk are endangering the health of their own people.

Does Taurine Really Extend Life? Maybe.

Readers of this column will know that I’m highly skeptical of dietary supplements. So you might imagine my reaction when I saw headlines a few days ago about “Taurine, the elixir of life?” (at CNN) and “Supplement slows aging in mice and monkeys” (NY Times).

Unlikely, I thought. But I read the scientific article behind these reports, and now I’m intrigued.

What is taurine? And could it really slow down aging? Well, it seems like it could, just maybe. A new study published last week in Science (one of the top journals in all of science) seems to show, for the first time, that taking large doses of taurine, an amino acid, might provide a host of benefits that include slowing down the aging process.

First question first: what is taurine? It’s an amino acid, but it’s not one of the 20 amino acids that make up all the proteins in your body. It’s a slightly different one, and our bodies naturally produce it in small amounts. When we’re very young we need more than our bodies can make, but we get the extra from breast milk, and it’s added as a supplement to infant formula.

We also get extra taurine from our diet: the best sources are meat and seafood, especially shrimp and other shellfish, but also beef and the dark meat of chicken and turkey.

What did the new Science paper show? Well, first the authors (from Columbia University, India’s National Institute of Immunology, and the Sanger Institute in the UK) describe how taurine levels clearly decline with age in humans and other mammals. Now, just because taurine declines doesn’t mean that replacing it will reverse the aging process, but at least it establishes plausibility.

They then describe a series of experiments, mostly in mice but also in monkeys, where they fed the animals relatively large amounts of taurine each day, and the results were pretty darned impressive:

  1. Life span in the mice increased by 10-12%.
  2. In mice that started taurine supplements in middle age, life span increased by 18-25%.
  3. Bone density increased in female mice and osteoporosis seemed to be cured.
  4. Muscle strength increased in both males and females compared to mice who didn’t get taurine.
  5. The number of senescent cells–cells that don’t do much except emit damaging inflammatory signals–seemed to be reduced.

Of course, there’s always a big caveat with results in mice: they’re mice, not humans! And many, many times we’ve seen results in mice that just don’t carry over into humans. So the scientists also did a study (a smaller one) in monkeys, which are much closer to humans genetically. This also had some very good results:

  1. Bone density increased in the spine and legs.
  2. Body fat was lower than it was in monkeys that didn’t get taurine.
  3. Several measures of inflammation decreased.

Monkeys live a lot longer than mice, so the scientists don’t yet know if taurine increases the monkeys’ life span, but all the signs are promising. I was skeptical going into this article, but I couldn’t find any obvious flaws.

In an accompanying article in Science, U. Penn’s Joseph McGaunn and Joseph Baur point out that we don’t know for sure what the risks of long-term supplementation with taurine would be, but it’s already widely consumed as an additive in baby formula and in energy drinks, with no known ill effects.

However, the amounts used in the Columbia study were very high, much higher than you’d get from energy drinks or even from standard taurine supplements. I looked up a few, and typical formulations offer 1000 or 2000 mg (1-2 grams) per day. The dose given to the monkeys in the study, converted to a 150-pound person, is equivalent to about 5500 mg (5.5 grams) per day. That’s not very much by weight, and it would be easy enough to take this much taurine, but no one knows the effects of such high doses in humans.
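
If you’re curious how a monkey dose becomes “about 5.5 grams per day” for a person, here’s a minimal sketch of the standard conversion. I’m assuming a monkey dose of 250 mg per kilogram of body weight per day, and using the conventional body-surface-area factor of 3.1 from FDA guidance for scaling monkey doses to human-equivalent doses; check the paper for the exact figures.

```python
# Rough human-equivalent dose (HED), assuming a monkey dose of
# 250 mg/kg/day (an assumption; see the paper for exact figures) and
# the FDA's standard monkey-to-human body-surface-area factor of 3.1.
monkey_dose_mg_per_kg = 250
bsa_factor = 3.1                 # monkey -> human scaling, FDA guidance
human_weight_kg = 150 / 2.205    # a 150-pound person, about 68 kg

hed_mg = (monkey_dose_mg_per_kg / bsa_factor) * human_weight_kg
print(f"~{hed_mg:.0f} mg/day")   # -> ~5486 mg, about 5.5 grams
```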

The bottom line: this study is really intriguing. More studies are needed, especially to measure the effects of taurine on humans, but all the signs are positive. I’ll be watching closely to see if the effects in mice and monkeys carry over, and if they do, we may all be taking taurine supplements one day. And I just ordered some taurine powder for myself–why not?

AI is crying out for regulation, while virologists doing gain-of-function research take the opposite tack. Why?

Over the past few months, prominent tech leaders have been raising alarms about the dangers of AI, and politicians are following suit. Just last week, the Senate held hearings on how to regulate AI. The tech industry itself is calling for regulation: just a few days ago, Microsoft’s president called on the federal government “to govern AI at every part of its lifecycle.”

One of the pioneers of AI, Geoffrey Hinton, just left his high-level position at Google so that he could criticize AI without any constraints from his employer. And a couple of weeks ago, I asked the AI program ChatGPT if we should trust AI. No way, it told me.

This is all kind of surprising. AI experts seem to be saying “stop us before we do any harm.” It’s also kind of refreshing: usually the private sector wants the government to stay out of its affairs.

Now contrast all this with the behavior of virologists on a completely different technology: gain-of-function research on deadly pathogens. As I’ve explained before, gain-of-function (GoF) research takes a deadly pathogen, such as the influenza virus or the Covid-19 virus, and modifies it to make it even more deadly. Many scientists, including me, find this work both alarming and of little benefit, and we’ve been calling for it to be regulated for a decade now.

However, unlike AI experts, many virologists are opposed to any hint of regulation of their GoF work. On the contrary: just recently, 156 leading virologists jointly authored an opinion piece that lauded the many wonderful benefits of GoF, and pooh-poohed any risks.

Don’t worry your pretty little heads, these virologists seem to be saying to the rest of the world. We know what we’re doing, and it’s not that risky. Plus it’s great! Not to put too fine a point on it, but I disagree.

What’s caught my attention this week is not just the contrast in their willingness to be regulated, but the question of how one might actually go about regulating each technology.

Simply defining what we mean by “AI” today is probably impossible. The number and variety of programs that incorporate some form of artificial intelligence are vast, and they already affect our lives in many ways. The recent alarm bells were set off by one particular type of AI, known as large language models (LLMs), which can fool people in a new way. For several years now, alarm bells have also been sounding (justifiably so) over “deep fakes,” images or videos that appear real but are completely made up. These use a completely different technology.

So even if we agree that AI needs to be reined in, no one can really say with any precision what that would mean.

Now let’s look at gain-of-function research on pathogens. One of the biggest objections that some virologists have made, on many occasions, is that there’s no way to define just the harmful research, so we really should just leave it all alone.

For example, the recent commentary by 156 virologists said that “gain-of-function approaches incorporate a large proportion of all research because they are a powerful genetic tool in the laboratory.” This is nonsense. It’s equivalent to saying “hey, this is science, and you don’t want to ban all science, do you?”

They also defend GoF by trotting out examples of research that were beneficial, such as the recent rapid development of Covid-19 vaccines. As was pointed out recently in the biology journal mBio, this is a red herring: it’s just not that difficult to define GoF “research of concern” and distinguish it from other, much more mundane virology and bacteriology research.

In fact, biologists have already done this, in a recent set of proposed new guidelines for regulating GoF research. As Hopkins researcher Tom Inglesby put it, “if you are going to make a more transmissible strain of Ebola, then you need to have the work reviewed by the U.S. government.”

So why do the AI scientists say “please regulate us” while many virologists say “leave our gain-of-function work alone”? It’s not because it’s too hard to define one or the other–if it were, the AI experts wouldn’t even consider regulation as a possibility.

No, it seems that it’s all about money. AI is thriving in both academia and industry, with tremendous growth ahead. The people calling for regulation just aren’t worried about money. They know that AI will continue to thrive, and they are calling for regulation because they seem to have genuine concerns about the threat that AI poses to society.

On the other hand, the world of gain-of-function research is very small, and almost entirely dependent on government funding. Although I’m sure they’ll deny it, these scientists are worried that they’ll lose their grants if even a small portion of GoF research is shut down. They may also be worried about more direct threats to their finances: the conflict-of-interest statement on that recent article by 156 virologists goes on for 731 words. (That is one of the longest conflict-of-interest statements I’ve ever seen on a scientific article.)

I decided to ask an AI (ChatGPT) these questions. When asked about regulating GoF, it replied with a long answer that concluded,

“Ultimately, the decision to regulate gain-of-function research involves weighing the potential risks and benefits. Striking the right balance requires collaboration between scientists, policymakers, and relevant stakeholders to establish guidelines, promote responsible research practices, and implement appropriate oversight mechanisms.”

ChatGPT’s answer about regulating AI was similar, concluding:

“Regulation can play a crucial role in ensuring that AI systems are developed and deployed responsibly... The specific nature and extent of regulation will likely depend on the application and level of risk associated with AI systems. Striking the right balance between regulation and fostering innovation is essential to ensure that AI technology benefits society while safeguarding against potential risks and ethical concerns.”

Overall, not bad advice. Now if only those virologists would listen.

Should we allow AI to control the battlefield? AI itself thinks not.

Artificial Intelligence, or AI, seems to have finally arrived. With the introduction of ChatGPT last November, millions of people suddenly discovered that AI was far, far more than just a research activity. The range and sophistication of ChatGPT’s answers to questions across many disciplines are, frankly, pretty stunning.

AI is already in lots of places where you might not even realize it. Google Translate has been using AI for years, and it’s remarkably good, although nowhere near as good as a human translator. The technology that Pandora uses to customize your music is a type of AI, as is the technology behind Tesla’s self-driving cars. Within my own field, the program AlphaFold2 was a true breakthrough in scientists’ ability to predict the structure of proteins.

Along with these apparently beneficial developments, though, comes a great deal of concern. As AI gets better and better, can we continue to trust it to make decisions for us? It’s one thing if an AI recommends a song that we don’t like, which we can just ignore, but if AI is driving our cars, then mistakes might be much more costly.

And there are far riskier tasks that AI might be doing soon: what if the military starts using AI on the battlefield? One might be tempted to say no, of course they won’t do that, but what if another country starts using AI this way? Can we simply ignore that?

To be honest, I haven’t been worried, at least not yet. I have a little secret to confess: I did my Ph.D. thesis in AI, in the area called machine learning. I haven’t been doing AI research for the past 25 years, but I think I still understand at least a little of the basics behind the technology.

Thanks mostly to the attention that ChatGPT is getting, though, there’s suddenly a huge concern that AI represents a danger to society, although no one is exactly sure why. President Biden had a meeting just this week with the CEOs of Google, Microsoft, and OpenAI (the creators of ChatGPT) to talk about these concerns, where he told the execs that “What you’re doing has enormous potential and enormous danger.”

Of course, politicians have no idea what AI technology is–although I had to laugh at this quote from the White House press secretary: “The president has been extensively briefed on ChatGPT and knows how it works.” Um, no he doesn’t. I’m skeptical that the CEOs of Google and Microsoft know how it works either. However, you don’t need to understand the technology to understand the dangers.

Just recently, one of AI’s early pioneers, Geoffrey Hinton, has been speaking out to warn the public that AI has reached a point where “there’s a danger of something really bad happening.” Not surprisingly, other experts, such as German scientist Jürgen Schmidhuber, disagree.

The chair of the FTC, Lina Khan, says we can regulate AI effectively, and perhaps we can. But I’m not so sure.

Much of the concern in the press seems to be about the use of AI to spread misinformation. For example, AI can help create “deep fakes” that can be used to convince people that, say, an election was stolen even when it wasn’t. That’s a legitimate concern, but it’s happening even without AI.

A better question, which a family member just asked me, is should we be worried about AI making decisions on the battlefield? I thought I should take this question directly to the source, so I asked ChatGPT itself. The answer I got was remarkably clear and direct; I’ll quote it here in full:

“Yes, people should be concerned about AI making decisions on the battlefield. There are several reasons for this.

Firstly, AI lacks human empathy, judgment, and intuition. In a complex and dynamic environment such as the battlefield, human decision-making is often guided by experience, intuition, and an understanding of the broader context. AI lacks the ability to consider these intangible factors and may make decisions solely based on pre-programmed algorithms or data inputs, which could lead to unintended consequences or even catastrophic errors.

Secondly, the use of AI in military decision-making raises significant ethical concerns. For instance, an AI system may be programmed to prioritize the protection of friendly forces at the expense of civilian lives, leading to potentially devastating consequences for non-combatants caught in the crossfire. Additionally, AI systems may be susceptible to hacking or manipulation, which could result in unintended consequences or a loss of control over the decision-making process.

Finally, there is the risk that reliance on AI systems may lead to the erosion of human decision-making capabilities, potentially diminishing the overall quality of military decision-making and increasing the likelihood of errors or unintended consequences.”

So there you have it: ChatGPT itself gives us several reasons why it’s a terrible idea to let AI make decisions that might cost human lives. We should listen to what it’s telling us, because as it gets more intelligent (and it will), it might decide not to be so honest in the future. And I have to say that its reasoning here is both well-written and compelling.

One thing that worries me as well is that the very term “artificial intelligence” is much too broad. We should be concerned about giving any technology the power to control our lives, whatever someone calls it. But when an AI itself says “don’t trust me,” then I think we might want to listen.