A simple trick to make better coffee: cut the static!

 

You’d think that coffee aficionados had tried everything by now, and that few if any tricks remained undiscovered. Well, you could be right–but there’s one trick that most ordinary coffee drinkers probably don’t know, and it’s remarkably easy to do.

I’ll jump right to the punchline, and then I’ll explain the (new) science behind it. To make richer coffee in the morning, simply spritz a little water on your beans before grinding them. That’s it!

So what happens when you do this, and why does it make better coffee? Well, as explained in this new paper in the journal Matter, by Christopher Hendon and colleagues at the University of Oregon, it’s all about reducing the static electricity that the grinding process creates.

Grinding coffee causes triboelectrification. If you’ve never heard of that, not to worry–neither had I, until I read the paper. Basically, when the beans rub together, they create static, and that makes the ground coffee clump together (and sometimes fly into the air).

Then when you make the coffee, the clumping means that the water flows through the grounds unevenly, extracting less of the coffee solids than it could. Ideally, all the coffee grounds should be evenly and densely packed, and static electricity prevents that.

Water reduces triboelectrification quite a bit, it turns out.

So what happens? Well, after extensive experimentation–and I do mean extensive–the scientists found that the concentration of coffee solids in a cup of espresso increased from 8.2% to 8.9% when a bit of water was added to the beans before grinding. That’s a relative increase of 8.5%. Richer coffee!
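(In case the 8.5% figure looks odd next to a 0.7-percentage-point change: the gain is measured relative to the 8.2% starting value. Here’s a quick back-of-the-envelope check, a few lines of Python purely for illustration, using only the numbers quoted above.)

```python
# Back-of-the-envelope check of the numbers quoted above (illustrative only).
solids_dry = 8.2   # % coffee solids in espresso made from beans ground dry
solids_wet = 8.9   # % coffee solids when the beans were spritzed with water first

absolute_gain = solids_wet - solids_dry           # 0.7 percentage points
relative_gain = 100 * absolute_gain / solids_dry  # ~8.5% relative increase

print(f"Absolute gain: {absolute_gain:.1f} percentage points")
print(f"Relative gain: {relative_gain:.1f}%")
```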

Reading the paper, I realized these scientists had a lot of fun doing these experiments. They measured the water content in 31 types of coffees, and tried a wide range of settings for their grinders, for the water temperature, and more.


They also roasted their own beans to varying degrees of darkness. They tried dozens of combinations of beans and roasting strategies, measuring the water content after roasting and the amount of static electricity generated upon grinding. They observed that darker roast coffees usually generate finer ground particles, and finer particles in turn generate more static electricity.

They drank a lot of coffee to get this right! But hey, sometimes science requires sacrifices, right?

I should mention that the trick of adding a little water to the beans is already known to some experts, although the precise science behind it was unknown until now. It even has a name (as the paper points out): the “Ross Droplet Technique.”

As the paper concludes, “a few simple squirts of water [may] have solved the problems of clumping, channeling, and poor extractions while aiding in the pursuit of attaining the tastiest espresso.” You only need a few drops of water–give it a try.

One important caveat is that if you use the French press method to make coffee, where the grounds are immersed in water, then this trick won’t make any difference.

What’s next? Well, I should point out that this study focused entirely on espresso. Does it work for regular coffee as well? Probably so, but more research is needed.

Does Taurine Really Extend Life? Maybe.


Readers of this column will know that I’m highly skeptical of dietary supplements. So you might imagine my reaction when I saw headlines a few days ago about “Taurine, the elixir of life?” (at CNN) and “Supplement slows aging in mice and monkeys” (NY Times).

Unlikely, I thought. But I read the scientific article behind these reports, and now I’m intrigued.

What is taurine? And could it really slow down aging? Well, it seems like it could, just maybe. A new study published last week in Science (one of the top journals in all of science) seems to show, for the first time, that taking large doses of taurine, an amino acid, might provide a host of benefits that include slowing down the aging process.

First question first: what is taurine? It’s an amino acid, but it’s not one of the 20 amino acids that make up the proteins in your body. It’s a slightly different one, and our bodies naturally produce it in small amounts. We need more than our bodies produce when we’re very young, but we get it from breast milk, and it’s added as a supplement to infant formula.

We also get extra taurine from our diet: the best sources are animal foods, especially shrimp and other shellfish, but also beef and the dark meat of chicken and turkey.

What did the new Science paper show? Well, first the authors (from Columbia University, India’s National Institute of Immunology, and the Sanger Institute in the UK) describe how taurine levels clearly decline with age in humans and other mammals. Now, just because taurine declines doesn’t mean that replacing it will reverse the aging process, but at least it establishes plausibility.

They then describe a series of experiments, mostly in mice but also in monkeys, where they fed the animals relatively large amounts of taurine each day, and the results were pretty darned impressive:

  1. Life span in the mice increased by 10-12%.
  2. In mice that started taurine supplements in middle age, life span increased by 18-25%.
  3. Bone density increased in female mice and osteoporosis seemed to be cured.
  4. Muscle strength increased in both males and females compared to mice who didn’t get taurine.
  5. The number of senescent cells–cells that don’t do much except emit damaging inflammatory signals–seemed to be reduced.

Of course, there’s always a big caveat with results in mice: they’re mice, not humans! And many, many times we’ve seen results in mice that just don’t carry over into humans. So the scientists also did a study (a smaller one) in monkeys, which are much closer to humans genetically. This also had some very good results:

  1. Bone density increased in the spine and legs.
  2. Body fat was lower than it was in monkeys that didn’t get taurine.
  3. Several measures of inflammation decreased.

Monkeys live a lot longer than mice, so the scientists don’t yet know if taurine increases the monkeys’ life span, but all the signs are promising. I was skeptical going into this article, but I couldn’t find any obvious flaws.

In an accompanying article in Science, U. Penn’s Joseph McGaunn and Joseph Baur point out that we don’t know for sure what the risks of long-term supplementation with taurine would be, but it is already widely consumed, both as an additive in baby formula and as an ingredient in energy drinks, with no known ill effects.

However, the amounts used in the Columbia study were very high, much higher than you’d get from energy drinks or even from standard taurine supplements. I looked up a few, and typical formulations offer 1000 or 2000 mg (which is 1-2 grams) per day. The dose given to the monkeys in the study, converted to a 150-pound person, is equivalent to about 5500 mg (5.5 grams) per day. That’s not very much by weight, and it would be easy enough to take this much taurine, but no one knows the effects in humans of such high doses.
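To put those figures side by side, here’s a rough back-of-the-envelope sketch (a few lines of Python, purely illustrative, using only the 150-pound, 5.5-gram, and 1-2-gram numbers quoted above; the study’s actual per-kilogram dosing isn’t reproduced here).

```python
# Rough comparison of taurine doses, using only the figures quoted above.
LB_TO_KG = 0.4536

body_weight_kg = 150 * LB_TO_KG        # the 150-pound person in the example (~68 kg)
study_dose_mg = 5500                   # human-equivalent dose cited above (5.5 g/day)
typical_supplement_mg = (1000, 2000)   # typical off-the-shelf formulations (1-2 g/day)

print(f"Study-equivalent dose: ~{study_dose_mg / body_weight_kg:.0f} mg per kg of body weight per day")
for dose in typical_supplement_mg:
    print(f"Typical supplement ({dose} mg/day): ~{dose / body_weight_kg:.0f} mg/kg/day")
```

In other words, the human-equivalent dose used in the study works out to several times what a typical off-the-shelf supplement provides.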

The bottom line: this study is really intriguing. More studies are needed, especially to measure the effects of taurine on humans, but all the signs are positive. I’ll be watching closely to see if the effects in mice and monkeys carry over, and if they do, we may all be taking taurine supplements one day. And I just ordered some taurine powder for myself–why not?

AI is crying out for regulation, while virologists doing gain-of-function research take the opposite tack. Why?

Over the past few months, prominent tech leaders have been raising alarms about the dangers of AI, and politicians are following suit. Just last week, the Senate held hearings on how to regulate AI. The tech industry itself is calling for regulation: just a few days ago, Microsoft’s CEO testified before Congress and asked the federal government “to govern AI at every part of its lifecycle.”

One of the pioneers of AI, Geoffrey Hinton, just left his high-level position at Google so that he could criticize AI without any constraints from his employer. And a couple of weeks ago, I asked the AI program ChatGPT if we should trust AI. No way, it told me.

This is all kind of surprising. AI experts seem to be saying “stop us before we do any harm.” It’s also kind of refreshing: usually the private sector wants the government to stay out of its affairs.

Now contrast all this with the behavior of virologists on a completely different technology: gain-of-function research on deadly pathogens. As I’ve explained before, gain-of-function (GoF) research takes a deadly pathogen, such as the influenza virus or the Covid-19 virus, and modifies it to make it even more deadly. Many scientists, including me, find this work both alarming and of little benefit, and we’ve been calling for it to be regulated for a decade now.

However, unlike AI experts, many virologists are opposed to any hint of regulation of their GoF work. On the contrary: just recently, 156 leading virologists jointly authored an opinion piece that lauded the many wonderful benefits of GoF, and pooh-poohed any risks.

Don’t worry your pretty little heads, these virologists seem to be saying to the rest of the world. We know what we’re doing, and it’s not that risky. Plus it’s great! Not to put too fine a point on it, but I disagree.

What’s caught my attention this week is not just the contrast in their willingness to be regulated, but also the question of how one might actually go about regulating either field.

Simply defining what we mean by “AI” today is probably impossible. The number and types of programs that incorporate some form of artificial intelligence are vast, and they already affect our lives in many ways. The recent alarm bells were caused by one particular type of AI, known as large language models (LLMs), which have the ability to fool people in a new way. For several years now, more alarm bells have sounded (justifiably so) over “deep fakes,” images or videos that appear real but that are completely made up. These use completely different technology.

So even if we agree that AI needs to be reined in, no one can really say with any precision what that would mean.

Now let’s look at gain-of-function research on pathogens. One of the biggest objections that some virologists have made, on many occasions, is that there’s no way to define just the harmful research, so we really should just leave it all alone.

For example, the recent commentary by 156 virologists said that “gain-of-function approaches incorporate a large proportion of all research because they are a powerful genetic tool in the laboratory.” This is nonsense. It’s equivalent to saying “hey, this is science, and you don’t want to ban all science, do you?”

They also defend GoF by trotting out examples of research that were beneficial, such as the recent rapid development of Covid-19 vaccines. As was pointed out recently in the biology journal mBio, this is a red herring: it’s just not that difficult to define GoF “research of concern” and distinguish it from other, much more mundane virology and bacteriology research.

In fact, biologists have already done this, in a recent set of proposed new guidelines for regulating GoF research. As Hopkins researcher Tom Inglesby put it, “if you are going to make a more transmissible strain of Ebola, then you need to have the work reviewed by the U.S. government.”

So why do the AI scientists say “please regulate us” while many virologists say “leave our gain-of-function work alone”? It’s not because it’s too hard to define one or the other–if it were, the AI experts wouldn’t even consider regulation as a possibility.

No, it seems that it’s all about money. AI is thriving in both academia and industry, with tremendous growth ahead. The people calling for regulation just aren’t worried about money. They know that AI will continue to thrive, and they are calling for regulation because they seem to have genuine concerns about the threat that AI poses to society.

On the other hand, the world of gain-of-function research is very small, and almost entirely dependent on government funding. Although I’m sure they’ll deny it, these scientists are worried that they’ll lose their grants if even a small portion of GoF research is shut down. They may also be worried about more direct threats to their finances: the conflict-of-interest statement on that recent article by 156 virologists goes on for 731 words. (That is one of the longest conflict-of-interest statements I’ve ever seen on a scientific article.)

I decided to ask an AI (ChatGPT) these questions. When asked about regulating GoF, it replied with a long answer that concluded,

“Ultimately, the decision to regulate gain-of-function research involves weighing the potential risks and benefits. Striking the right balance requires collaboration between scientists, policymakers, and relevant stakeholders to establish guidelines, promote responsible research practices, and implement appropriate oversight mechanisms.”

ChatGPT’s answer about regulating AI was similar, concluding:

“Regulation can play a crucial role in ensuring that AI systems are developed and deployed responsibly... The specific nature and extent of regulation will likely depend on the application and level of risk associated with AI systems. Striking the right balance between regulation and fostering innovation is essential to ensure that AI technology benefits society while safeguarding against potential risks and ethical concerns.”

Overall, not bad advice. Now if only those virologists would listen.

Should we allow AI to control the battlefield? AI itself thinks not.

Artificial Intelligence, or AI, seems to have finally arrived. With the introduction of ChatGPT last November, millions of people suddenly discovered that AI was far, far more than just a research activity. The range and sophistication of ChatGPT’s answers to questions across many disciplines are, frankly, pretty stunning.

AI is already in lots of places where you might not even realize it. Google Translate has been using AI for years, and it’s remarkably good, although nowhere near as good as a human translator. The technology that Pandora uses to customize your music is a type of AI, as is the technology behind Tesla’s self-driving cars. Within my own field, the program AlphaFold2 was a true breakthrough in scientists’ ability to predict the structure of proteins.

Along with these apparently beneficial developments, though, comes a great deal of concern. As AI gets better and better, can we continue to trust it to make decisions for us? It’s one thing if an AI recommends a song that we don’t like, which we can just ignore, but if AI is driving our cars, then mistakes might be much more costly.

And there are far riskier tasks that AI might be doing soon: what if the military starts using AI on the battlefield? One might be tempted to say no, of course they won’t do that, but what if another country starts using AI this way? Can we simply ignore that?

To be honest, I haven’t been worried, at least not yet. I have a little secret to confess: I did my Ph.D. thesis in AI, in the area called machine learning. I haven’t been doing AI research for the past 25 years, but I think I still understand at least a little of the basics behind the technology.

Thanks mostly to the attention that ChatGPT is getting, though, there’s suddenly a huge concern that AI represents a danger to society, although no one is exactly sure why. President Biden had a meeting just this week with the CEOs of Google, Microsoft, and OpenAI (the creators of ChatGPT) to talk about these concerns, where he told the execs that “What you’re doing has enormous potential and enormous danger.”

Of course, politicians have no idea what AI technology is–although I had to laugh at this quote from the White House press secretary: “The president has been extensively briefed on ChatGPT and knows how it works.” Um, no he doesn’t. I’m skeptical that the CEOs of Google and Microsoft know how it works either. However, you don’t need to understand the technology to understand the dangers.

Just recently, one of AI’s early pioneers, Geoffrey Hinton, has been speaking out to warn the public that AI has reached a point where “there’s a danger of something really bad happening.” Not surprisingly, other experts, such as German scientist Jürgen Schmidhuber, disagree.

The chair of the FTC, Lina Khan, says we can regulate AI effectively, and perhaps we can. But I’m not so sure.

Much of the concern in the press seems to be about the use of AI to spread misinformation. For example, AI can help create “deep fakes” that can be used to convince people that, say, an election was stolen even when it wasn’t. That’s a legitimate concern, but it’s happening even without AI.

A better question, which a family member just asked me, is should we be worried about AI making decisions on the battlefield? I thought I should take this question directly to the source, so I asked ChatGPT itself. The answer I got was remarkably clear and direct; I’ll quote it here in full:

“Yes, people should be concerned about AI making decisions on the battlefield. There are several reasons for this.

Firstly, AI lacks human empathy, judgment, and intuition. In a complex and dynamic environment such as the battlefield, human decision-making is often guided by experience, intuition, and an understanding of the broader context. AI lacks the ability to consider these intangible factors and may make decisions solely based on pre-programmed algorithms or data inputs, which could lead to unintended consequences or even catastrophic errors.

Secondly, the use of AI in military decision-making raises significant ethical concerns. For instance, an AI system may be programmed to prioritize the protection of friendly forces at the expense of civilian lives, leading to potentially devastating consequences for non-combatants caught in the crossfire. Additionally, AI systems may be susceptible to hacking or manipulation, which could result in unintended consequences or a loss of control over the decision-making process.

Finally, there is the risk that reliance on AI systems may lead to the erosion of human decision-making capabilities, potentially diminishing the overall quality of military decision-making and increasing the likelihood of errors or unintended consequences.”

So there you have it: ChatGPT itself gives us several reasons why it’s a terrible idea to let AI make decisions that might cost human lives. We should listen to what it’s telling us, because as it gets more intelligent (and it will), it might decide not to be so honest in the future. And I have to say that its reasoning here is both well-written and compelling.

One thing that worries me as well is that the very term “artificial intelligence” is much too broad. We should be concerned about giving any technology the power to control our lives, whatever someone calls it. But when an AI itself says “don’t trust me,” then I think we might want to listen.

Gain-of-function research needs to be banned, but we need to define it properly


I’ve been writing about dangerous gain-of-function research on viruses for years, originally on the flu virus and more recently on the Covid-19 virus. Many people are deeply concerned about this research, which might have caused the Covid-19 pandemic, and yet there are still no real regulations controlling it, neither in the U.S. nor anywhere else.

I can already hear the objections: oh, but what about the new rules that NIH put in place in 2017, after a 3-year “pause” in some gain-of-function (GoF) research? Those rules were utterly ineffective, but I’ll get to that in a minute.

Despite my arguments, and the concerns of many other scientists, which have been expressed in various forums and articles for at least a decade now, the virology community continues to insist that any limits on GoF are unnecessary, and that GoF is wonderfully beneficial. A group of 156 virologists even wrote an opinion piece, published in the Journal of Virology, making this very point.

I’ve tried to convince some of my colleagues in the infectious disease world that GoF should be banned, and I’ve discovered that many of them–even some non-virologists–are opposed to any government regulation of GoF research.

They are wrong. However, they do raise one important concern that I think is valid, and that I will address in this column. Their concern is that any government regulation will be ham-handed, and will likely end up limiting or preventing a range of very useful experiments that have the potential to lead to beneficial new drugs and vaccines.

I get it. When the government tries to regulate science, it can write rules that are far too broad, or that get misinterpreted even when well-written, and unintended consequences follow. So let’s not do that: below I’ll explain what I think needs to be banned.

But let’s not forget why we are having this debate right now: there is a very real possibility that the Covid-19 pandemic started in a lab that was doing GoF research on coronaviruses. We know that the Wuhan Institute of Virology (WIV) was doing this kind of research–that fact is not under dispute. We don’t know (and we may never know) if the original Covid-19 virus first appeared as a result of a lab leak, but it might have. That’s why we’re asking whether such research is worth the risk.

Before I explain what I think the rules should be, let’s look at the current NIH rules, which I mentioned above. First, though, let’s remember that NIH rules only apply to research funded by NIH. Research that is funded privately, or by any other part of the government, is unaffected by these rules and remains entirely unregulated.

So: back in 2017, when the NIH lifted the 3-year funding pause, they put in place some rules (detailed here) for work on “potential pandemic pathogens,” or PPPs. (The government loves acronyms.)

The pause itself was prompted by work on avian influenza, led by virologists Ron Fouchier and Yoshihiro Kawaoka, that was designed to turn some deadly bird flu viruses into human flu viruses. The work was successful: the researchers did indeed create viruses that had the potential to infect humans. These results were really alarming to many scientists: I wrote about it at the time, and other scientists also raised the alarm. Those concerns are what led to the funding pause.

So since 2017, the NIH has regulated (but not banned) research on PPPs that are both:

  1. “likely highly transmissible and likely capable of wide and uncontrollable spread in human populations, and
  2. likely highly virulent and likely to cause significant morbidity and/or mortality in humans.”

One of the first things to notice about this definition is that avian influenza–the very work that prompted the new rules–isn’t really covered.

Another thing to notice is that work on coronaviruses in bats–the GoF work that was apparently going on in the Wuhan Institute of Virology, and that may have caused the Covid-19 pandemic–wouldn’t have been covered either. Those bat viruses would not have been considered “likely highly transmissible in humans,” not before the pandemic.

Of course, we all know differently now.

In any case, the rules that NIH introduced in 2017 only applied to a very narrow class of work, and as far as I can tell, they didn’t restrict anything. On the contrary: the NIH resumed funding the GoF avian influenza work by Fouchier and Kawaoka soon after lifting the funding pause. And let’s not forget that NIH rules aren’t a ban: it remains perfectly legal to do any kind of GoF work.

So how can we put in place intelligent restrictions that will prevent dangerous GoF research in the future?

First, rather than rejecting any restrictions whatsoever, as some virologists have done, scientists should work with the government to craft a thoughtful set of limitations. For starters:

  1. Research that creates new strains of the Covid virus (SARS-CoV-2) that might have greater virulence or transmissibility should be entirely banned.
  2. Research that takes non-human viruses, including avian flu and bat coronaviruses, and gives them the ability to infect any new animals, should be banned.

To scientists who can’t even agree to these restrictions, I would say this: it appears that you oppose any restrictions whatsoever. If that’s your position, then the government might step in and impose far broader bans, which are not likely to be good. If you’ll agree to these two restrictions, perhaps we can broaden them slightly to cover other types of highly risky GoF work.

Finally, let me return to a point I’ve made before, but that bears repeating: the supposed benefits of GoF research are essentially zero. The claims that GoF research that makes a virus more deadly will help us “understand pathogenicity” or “be prepared for the next pandemic” are just hand-waving arguments. I wrote a whole column just last month explaining why these claims are fundamentally wrong, so I won’t repeat that here.

If we do ban some GoF research, with carefully-crafted rules, we won’t lose anything. Instead, we’ll gain at least two things: first, virologists can apply their expertise to truly beneficial virology work, and second, the scientific community will regain some of the trust it has lost during the pandemic. That would seem like a good thing.

Herbal extracts that cure an enlarged prostate? Not likely.

Saw palmetto, which is NOT effective for treating enlarged prostates.

(Note: see the brief update at the bottom of this post for a response from the manufacturer.)

I haven’t looked at medical scams recently, and I thought I’d venture back into that world just a little bit this week, to see what is happening.

As always, the scams are everywhere, with products claiming to cure just about everything. What surprised me, though, is how blatant some of them have become. Some sites have no caveats or disclaimers at all, despite the fact that their claims are utterly false. They don’t even pretend that they are worried about a regulatory agency objecting to their false claims. The boldness can be startling–or, if you’re not sufficiently skeptical, convincing.

Let’s look at one site that strikes me as particularly egregious, which sells a dietary supplement called Prostoxalen as a cure for prostate problems. I was directed to this site by another site, ShopBodyVibes, that sells an even wider range of bogus cures (more on that below).

The marketers of Prostoxalen, which they sell for $40 for a bottle of 60 pills, are nothing if not direct. At the top of their website, they promise that Prostoxalen will “get rid of the constant pressure on the bladder, unpleasant pain and all other ailments related to prostate enlargement! Once and for all!”

Nowhere do they provide even a shred of evidence for this claim.

And there’s more: they also claim that Prostoxalen will cure erectile dysfunction: “if you've noticed erection problems, our capsules will fix that issue as well,” the site states.

Again, no evidence at all.

I was expecting at least a citation to a poorly-done study published in a low-quality journal–after all, even homeopathic treatments, which are laughably ineffective, can find some bad science to support their claims.

But no, not for Prostoxalen. Maybe its marketers think that the testimonials alone (which appear to be fake) are sufficient.

So what on earth is in these pills? Well, it turns out that they’re just plant extracts and vitamins. The main ingredients are extracts of saw palmetto, pumpkin seeds, cranberries, tomatoes, nettles, and willowherb, along with a couple of common vitamins.

Great! So all you need is cranberries, tomatoes, and pumpkin seeds, and your prostate problems will go away. I’m surprised that anyone has an enlarged prostate, if this is all it takes to cure it.

But here’s the problem: there is no good scientific evidence that any of these ingredients will cure or relieve the symptoms of enlarged prostate. Not even a tiny bit.

(If you want to dig deeper, you can find multiple scientific studies of saw palmetto, which is widely marketed as a treatment for enlarged prostates. A carefully-done randomized trial out of Washington University, back in 2013, showed that it simply doesn’t work, even at high doses.)

The name Prostoxalen sounds just a little bit like they might want you to think it’s a drug, doesn’t it? Maybe something to do with the prostate? Fortunately, the website answers this question in a FAQ list, which says: “No, Prostoxalen is not a drug. It is a food supplement in the form of capsules.”

Aha, that explains it. Dietary supplements aren’t regulated by the FDA, unless you claim that they’re a drug or that they can treat a medical condition.

What I expected to see on the website, but didn’t, was this disclaimer: “these statements have not been evaluated by the Food and Drug Administration. This product is not intended to diagnose, treat, cure, or prevent any disease.” That’s the small-print language that appears on thousands of websites and products, and that allows supplement makers to make all kinds of hints and suggestions while avoiding regulation. Typically they use phrases like “supports prostate health,” as one saw palmetto product puts it.

It appears that Prostoxalen is manufactured and sold by a company in Poland, identified on the website only as PLT Group. So I guess they just don’t care what the FDA thinks–even though they are marketing this in the U.S. (I contacted them through their website, but they didn’t respond.)

So no, there is no magic pill that cures or relieves the symptoms of enlarged prostates, and any such cure is almost certain to require more than a few plant extracts.

Finally, about that site that directed me to Prostoxalen: that was ShopBodyVibes, a site that sells products to “make the penis longer” (Eroxel), “cleanse the body of toxins” (BurnBooster), “reduce varicose veins” (Variforce), “eliminate knee pain” (Ortezan), and a “breast enlargement serum” called BooUp. I’m not making this up. Needless to say (but I’ll say it), none of these products works–and yet the site has no disclaimers, nor does it provide any evidence for the claims.

The ShopBodyVibes site repeats all the claims from the Prostoxalen website (see here), again with no disclaimers. If you wonder what is in this product (as I did), ShopBodyVibes suggests that “Everyone who is interested in learning the detailed composition of the product can read the list of ingredients, which is available on the manufacturer’s official website.”

Perhaps unsurprisingly, the ShopBodyVibes site has no indication of where it is located. It appears to be outside the U.S., which explains its stunning lack of any attempt to qualify its many bogus claims. It also has enough similarity to the Prostoxalen site to suggest that both sites might be owned by the same group in Poland. They didn’t respond to my inquiries.

And if you’re wondering how I stumbled upon BodyVibes: they were promoted by an article on Goop (yes, that’s Gwyneth Paltrow’s lifestyle company) about “wearable stickers that promote healing.” Yes, BodyVibes sells those too. I wrote about these magic stickers back in 2017, and my advice then still applies: they still don’t work, but if you like stickers, you can get a sheet of 50 for a couple of bucks.

Update: the manufacturer of Prostoxalen replied to my inquiry after this post appeared. Their message, in full, said: “our product has passed all the required tests before being launched on the market. It has proven to be highly effective, safe and legally introduced. We are a Polish company - the product is on the list of the National Sanitary Inspectorate at the Ministry of Health in Poland.” They did not provide any evidence or citations to support these claims, particularly the “highly effective” claim.