The underlying research article (https://www.mdpi.com/1420-3049/23/10/2454) is set up as a "no-fail" experiment: it infers toxicity from the slightest change (inhibition or induction) in a very sensitive reporter (luciferase). They are NOT measuring change in the growth rate of the bacterial cells, the traditional measure of toxicity (the IC, or inhibitory concentration). The study has positive and negative controls, but the assay is poorly controlled relative to how it is being interpreted (as the impact of artificial sweeteners on the human gut microbiome).
The only way to accurately calibrate this novel assay for relevance to human health is to expose the bacteria to a wide range of natural foods (e.g., avocado, spinach, berries, honey) and show that it does not return a positive result. If nearly everything is toxic, the assay is not useful. However, if most natural foods produce no toxicity signal, then the assay has genuine value.
I've always found any of these studies that treat artificial sweeteners as a single group very suspect.
1) The only thing the different types of artificial sweeteners have in common is a high sweetness value per volume (2k-4k times sweeter than sugar is common). Other than that they are different chemicals, so it seems strange to treat them as a single group. Sure, a few of them might have issues, but it seems silly to say they all have the same single issue. Treating them as interchangeable is really a red flag about who funded the study.
2) Due to the very high sweetness levels compared to sugar, even if an artificial sweetener is many times worse than sugar gram for gram, it can still be safer in practice, because only about 1/2,000th as much is actually being used.
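The dose arithmetic in point 2 can be made concrete. This is a back-of-the-envelope sketch with made-up numbers (the 2,000x sweetness ratio and the hypothetical 100x per-gram toxicity are illustrative assumptions, not figures from the study):

```python
# Illustrative arithmetic (assumed numbers): actual intake of a
# high-intensity sweetener vs. sugar at equal sweetness.
sweetness_ratio = 2000       # assume sweetener is ~2000x sweeter than sugar by weight
sugar_equivalent_g = 10.0    # grams of sugar needed to sweeten one drink

# Amount of sweetener needed for the same sweetness:
sweetener_needed_g = sugar_equivalent_g / sweetness_ratio
print(sweetener_needed_g)    # 0.005 g, i.e. 5 mg per drink

# Even if the sweetener were 100x worse than sugar per gram
# (a hypothetical), the dose-weighted exposure is still far lower:
relative_toxicity = 100
effective_exposure = relative_toxicity * sweetener_needed_g / sugar_equivalent_g
print(effective_exposure)    # 0.05 -> still 20x less exposure than sugar
```

The point being that any per-gram comparison has to be scaled by the tiny quantities actually consumed before it says anything about real-world risk.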
In defense of an otherwise-poor study, it didn't do this. Six of the most common artificial sweeteners were tested in the bacterial assay, and their results were reported and discussed separately. Another sequence of assays was run on a panel of sports supplements containing varying sweeteners. It looks to me like the only reason for grouping these chemicals was practical - they have interchangeable dietary uses, so it's helpful to observe their varying responses in a single test.
It'd be a pretty interesting study, except that the assay's quality and sensitivity seem hugely in doubt.
I think that, assuming the study is high quality, one that looks at all the popular artificial sweeteners together is probably more valuable from a "practical application" perspective, because it allows you to make more of an apples-to-apples comparison among the individual sweeteners.
If all you've got is a bunch of isolated studies that each look at only one of them at a time, you'll always have some room for doubt about whether a difference in outcome reflects the sweetener being tested, or some difference in the experimental protocol.
I can buy the mechanism of high sweetness/vol mucking with satiety or insulin heuristics. My problem with these studies is that the ones promoted by the media overwhelmingly tend to have "cheated" to obtain positive results -- they claim to have found a link between artificial sweeteners and cancer/diabetes/whatever, but show the usual signs of severe significance hacking when I skim the paper. My own belief heuristics lead me to the tentative conclusion that the effect size is therefore very small.
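For anyone unfamiliar with why significance hacking is such a problem: if a study measures many outcomes under the null and reports whichever crosses p < 0.05, "significant" findings become nearly guaranteed. A minimal simulation of that multiple-comparisons effect, under assumed parameters (20 independent null outcomes per study, alpha = 0.05):

```python
# Sketch of multiple-comparisons inflation: testing many null
# outcomes and reporting any "hit" yields false positives most
# of the time. Parameters here are illustrative assumptions.
import random

random.seed(0)

def study_finds_something(n_outcomes=20, alpha=0.05):
    # Under the null, each independent test is "significant"
    # with probability alpha.
    return any(random.random() < alpha for _ in range(n_outcomes))

trials = 10_000
rate = sum(study_finds_something() for _ in range(trials)) / trials
print(rate)  # close to the analytic value 1 - 0.95**20, about 0.64
```

So a study fishing across 20 endpoints "finds" something roughly two times out of three even when every true effect is zero, which is why a headline result means little without correction for the number of comparisons.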
Those are interesting possibilities, but they seem crippled by the underlying failure to calibrate or prove the assay.
It's possible there's prior lit here and I'm just not seeing the reference, but it looks like there's no claim that these changes are sweetener-specific. Without that I don't think it's a viable environmental test.
The correlation with prior toxicity work is more promising since this looks like a relatively easy test, but I still have some questions about how robust that correlation is to changing substances and dosages.
Actually, I’d say it’s a good reason to keep it visible. First thing I did after I’d read the first paragraph was hit ‘back’ and open the discussion thread. Who funded the study? Is the science valid? Stifling discussion about something by demoting it to the nether realms is counter-productive.
Well, the top two comments say the study is fundamentally flawed and they make decent arguments for their case, rather than just spewing upvote-bait HN conventional wisdom...