ncasenmare 3 days ago

Hi, author of the blog post here! Thank you for writing in with your concerns. First:

> Please be very careful when someone tries to tell you that supplements are miraculous and pharmaceutical drugs don’t work at all.

I'll concede the post may have unintentionally given the impression that one should replace antidepressants with supplements, even though the conclusion specifically says: "(Don't quit your existing antidepressants if they're net-positive for you!) you may also want to ask your doctor about Amitriptyline, or those other best-effect-size antidepressants."

I have now edited the intro to more explicitly say "you can take these supplements alongside traditional antidepressants! You can stack interventions!"

===

> and nobody noticed this massive discrepancy until now?

Researchers have noticed it for 13 years! From the linked Ghaemi et al 2024 meta-analysis ( https://pmc.ncbi.nlm.nih.gov/articles/PMC11650176/ ):

> Several meta-analyses of epidemiological studies have suggested a positive relationship between vitamin D deficiency and risk of developing depression (Anglin et al., 2013; Ju, Lee, & Jeong, 2013).

> Although some review studies have presented suggestions of a beneficial effect of vitamin D supplementation on depressive symptoms (Anglin et al., 2013; Cheng, Huang, & Huang, 2020; Mikola et al., 2023; Shaffer et al., 2014; Xie et al., 2022), none of these reviews have examined the potential dose-dependent effects of vitamin D supplementation on depressive symptoms to determine the optimum dose of intervention. Some of the available reviews, owing to the limited number of trials and methodological biases, were of low quality (Anglin et al., 2013; Cheng et al., 2020; Li et al., 2014; Shaffer et al., 2014). Considering these uncertainties, we aimed to fill this gap by conducting a systematic review and dose–response meta-analysis of randomized control trials (RCTs) to determine the optimum dose and shape of the effects of vitamin D supplementation on depression and anxiety symptoms in adults regardless of their health status.

===

> even common OTC pain meds can have effect sizes lower than 0.4 depending on the study. Have you ever taken Tylenol or Ibuprofen and had a headache or other pain reduced? Well, you've experienced what a drug with a small effect size on paper can do for you.

I must push back: that's an effect size of 0.4 on top of the placebo effect and the passage of time.

There are now RCTs of open-label placebos (where subjects are told it's a placebo), and they show that even open-label placebos are still powerful for pain management. So I stand by 0.4 being a small effect: even if you took a placebo you knew to be a placebo, you'd feel a noticeable reduction in pain/headache.

EDIT: Here's a systematic review of Open-Label Placebos, published in Nature in 2021: https://www.nature.com/articles/s41598-021-83148-6.pdf

> We found a significant overall effect (standardized mean difference = 0.72, 95% CI 0.39–1.05, p < 0.0001, I² = 76%) of OLP.

In other words, if the effect of antidepressants vs placebo is ~0.4, and the effect of a placebo vs no placebo (just time) is ~0.7, then the majority of the effect of antidepressants & OTC pain meds is due to placebo.

(I don't mean this in an insulting way; the fact that placebo alone has a "large" effect is a big deal, still under-valued, and means something important for how mood/cognition can directly impact physical health!)
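
To make that concrete, here's the back-of-envelope arithmetic as a tiny Python sketch. (The additivity is a simplification I'm assuming, not something from either paper: it treats effect sizes on the SMD scale as roughly additive, which real trials don't guarantee.)

    # Toy decomposition of the improvement you feel after taking a pill.
    # Assumption (mine, simplified): effects add linearly on the SMD scale.
    drug_vs_placebo = 0.4   # antidepressant / OTC pain med vs placebo
    placebo_vs_none = 0.7   # open-label placebo vs no treatment (OLP review)

    total_drug_arm = drug_vs_placebo + placebo_vs_none  # ignoring time ("X")
    placebo_share = placebo_vs_none / total_drug_arm

    print(f"total improvement on the drug: ~{total_drug_arm:.1f} SD, plus time")
    print(f"share attributable to placebo alone: ~{placebo_share:.0%}")  # ~64%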

Aurornis 3 days ago

> Researchers have noticed it for 13 years! From the linked Ghaemi et al 2024 meta-analysis

You’re cherry-picking papers. Others have already shared studies showing no significant effects of Vitamin D intervention.

For any popular supplement you can find someone publishing papers with miraculous results, showing huge effect sizes and significant outcomes. This has been going on for decades.

With Omega-3s, the larger the trial, the smaller the outcome. The largest trials have shown very little to no detectable effect.

I think a lot of people are skeptical of pharmaceuticals because they see the profit motive, but they let their guard down when researchers and supplement sellers with their own motives start promoting flawed studies and cherry-picked results.

> In other words, if the effect of antidepressants vs placebo is ~0.4, and the effect of a placebo vs no placebo (just time) is ~0.7, then the majority of the effect of antidepressants & OTC pain meds is due to placebo.

You keep getting closer to understanding why these effect size studies are so popular with alternative medicine and supplement sellers: They’re so easy to misinterpret or to take out of context.

According to your numbers, taking Tylenol would be worse than placebo alone! 0.4 vs 0.7

Does this make any sense to you? It should make you pause and think that maybe this is more complicated than picking singular numbers and comparing them.

In this domain of cherry-picking studies and comparing effect sizes, you’ve reached a conclusion where Vitamin D is far and away more effective than anything else and OTC pain meds are worse than placebo alone.

It’s time for a reality check: maybe this methodology isn’t actually representative of reality. You’re writing at length as if the studies you picked are definitive and your numeric comparisons tell the whole story, but I don’t think you’ve stopped to consider whether any of this is even realistic.

  • ncasenmare 3 days ago

    > You’re cherry-picking papers.

    I just picked the most recent meta-analysis I could find, which also specifically estimates the dose-response curve. (Averaging the effect at 400 IU with the effect at 4000 IU doesn't make sense.)

    > Others have already shared studies showing no significant effects of Vitamin D intervention.

    Yes, and the Ghaemi et al 2024 meta-analysis addresses the methodological problems in those earlier meta-analyses (for example, averaging effects across vastly different doses, from 400 IU to 4000 IU).
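
    To make the dose-averaging problem concrete, here's a toy sketch with entirely made-up numbers (not from Ghaemi et al or any actual trial):

        # Hypothetical effect sizes by dose, invented for illustration only.
        trials = [
            {"dose_iu": 400,  "effect": 0.05},  # near-RDA dose, ~no effect
            {"dose_iu": 800,  "effect": 0.10},
            {"dose_iu": 2000, "effect": 0.45},
            {"dose_iu": 4000, "effect": 0.60},  # high dose, large effect
        ]
        pooled = sum(t["effect"] for t in trials) / len(trials)
        print(f"naive pooled effect: {pooled:.2f}")  # 0.30 -- looks modest
        # A dose-response model would instead recover the trend from ~0 at
        # 400 IU to ~0.6 at 4000 IU, which is the question that matters.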

    > According to your numbers, taking Tylenol would be worse than placebo alone! 0.4 vs 0.7

    No, I understand this fine. Taking Tylenol gives you active medication + placebo + time, which is 0.4 + 0.7 + X > *1.1*. Taking an open-label placebo is just placebo + time = *0.7* + X.

    (Edit: also, these aren't "my" numbers. They're from a major peer-reviewed study published in Nature, the highest-impact journal. I don't like playing "hey, look at the credentials here", but I bring it up to note that I'm not anti-science; see the paragraph below.)

    ===

    Stepping back, I suspect your broader concern is that you (correctly!) see that supplement/nutrition research is sketchy & full of grifters, and that at the current moment it seems to play into the hands of anti-establishment, anti-science types. I agree, and I'll try to edit the tone of the article to avoid that.

    That said, there is still some good science (among the crap), and I think the better evidence is accumulating (at least for Vitamin D) that it's on par with traditional antidepressants, possibly better. I agree that much larger trials are required.

    • svara 3 days ago

      > They're from a major peer-reviewed study published in Nature, the highest-impact journal.

      No, the domain name is nature.com because it's in a Nature Publishing Group journal, Scientific Reports, which is their least prestigious journal.

      It's a common mistake, and they do that on purpose, of course, to leverage the Nature brand.

      It's also a mistake that implies a complete lack of familiarity with scientific publishing, unfortunately, which makes it a bit difficult to take your judgements regarding plausibility very seriously.

      • directevolve 3 days ago

        It’s less prestigious because it doesn’t judge papers on novelty, only on technical accuracy. For incremental research like this, it is an appropriate choice. The lower prestige has no bearing on the accuracy of their findings.

      • MRtecno98 3 days ago

        > It's also a mistake that implies a complete lack of familiarity with scientific publishing, unfortunately, which makes it a bit difficult to take your judgements regarding plausibility very seriously.

        It's still peer-reviewed and, as the sibling comment said, more applicable to this type of research. Also, you've now gone from raising understandable objections to dismissing the argument because it comes from a specific journal, which doesn't sound very scientific to me.

        • svara 3 days ago

          You're right, it isn't fair to reject someone's scientific argument just because they seem unfamiliar with how professional science works.

          We shouldn't have believed the study more if it actually had been in Nature.

          I don't think that's what I was saying, though.

          The issue in this thread was about taking a step back and looking at the overall plausibility of the conclusions, taking together multiple studies.

          I agree with the GP that the argument doesn't really pass the smell test.

          That's still the main issue, and it's something that people who don't understand scientific publishing struggle with, because they lack the intuition for how certain results come about.

    • keybrd-intrrpt 3 days ago

      Hello, there's another study that might be relevant to you:

      https://pubmed.ncbi.nlm.nih.gov/28768407/

      > A statistical error in the estimation of the recommended dietary allowance (RDA) for vitamin D was recently discovered

      > ... This could lead to a recommendation of 1000 IU for children <1 year on enriched formula and 1500 IU for breastfed children older than 6 months, 3000 IU for children >1 year of age, and around 8000 IU for young adults and thereafter.

  • kadushka 3 days ago

    > the larger the trial size, the smaller the outcome

    I find this a bit surprising. Could there be something else affecting the accuracy of larger trials? Perhaps they are not as careful, or are cutting corners somewhere?

    • lamename 3 days ago

      Maybe. Those could be factors. But even ignoring all confounding factors, this phenomenon can arise from sampling statistics alone; it's one of the meanings of "the Law of Small Numbers".

      Basically, a small study can be underpowered and just get lucky; the large studies with more power are then closer to the truth. https://en.wikipedia.org/wiki/Faulty_generalization

      • kadushka 3 days ago

        Sure, could be just lucky. But if there are several successful small studies, and several unsuccessful large ones (no idea if this is the case here), we should probably look for a better explanation.

        • svara 3 days ago

          It doesn't require more explanation: publication bias means null results aren't in the literature; do enough small, low-quality trials and you'll find a big effect sooner or later.

          Then the supposed big effect attracts attention and, ultimately, properly designed studies, which show no effect.

    • habinero 3 days ago

      No, the other way around. It's the combination of two well known effects. Well, three if you're uncharitable.

      1. Small studies are more likely to give anomalous results by chance. If I pick three people at random, it's not that surprising if I happened to get three women. It would be a lot different if I sampled 1,000 people.

      2. Studies that show any positive result tend to get published, and ones that don't tend to get binned.

      Put those together, and you see a lot of tiny studies with small positive results. When you do a proper study, the effect goes away. Exactly as you would expect.

      The less charitable effect is "they made it up". It happens.
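
      For what it's worth, you can watch effects 1 and 2 produce this pattern with no real data at all. Here's a quick Python sketch (illustrative only: the true effect is set to zero, and "publish" just means the result was positive and roughly significant):

          import random
          import statistics

          def mean_published_effect(n_per_arm, n_trials=2000):
              """Run trials with TRUE EFFECT = 0; 'publish' only the positive,
              roughly-significant ones; average what got published."""
              published = []
              for _ in range(n_trials):
                  treat = [random.gauss(0, 1) for _ in range(n_per_arm)]
                  ctrl = [random.gauss(0, 1) for _ in range(n_per_arm)]
                  d = statistics.mean(treat) - statistics.mean(ctrl)  # ~effect size, sd=1
                  if d > 2 * (2 / n_per_arm) ** 0.5:  # more than 2 standard errors
                      published.append(d)
              return statistics.mean(published) if published else 0.0

          random.seed(1)
          for n in (10, 50, 500):
              print(f"n={n:3d} per arm -> published effect ~{mean_published_effect(n):.2f}")
          # Roughly: n=10 -> ~1.0, n=50 -> ~0.5, n=500 -> ~0.15.
          # "Large effects" in the small trials melt away as trials get bigger.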

    • hirvi74 3 days ago

      Just my hypothesis, but I wonder if larger sample sizes provide a more diverse population.

      A study with 1,000 individuals is likely a poor representation of a species of 8.2 billion. I understand that studies try their best to use a diverse population, but I often question how successful many studies are at this endeavor.

      • kadushka 3 days ago

        > use a diverse population

        If that's the case, we should question whether different homogeneous population groups respond differently to the substance under test. After all, we don't want to know the "average temperature of patients in a hospital", do we?

        • hirvi74 2 days ago

          > If that's the case, we should question whether different homogeneous population groups respond differently to the substance under test.

          In terms of psychological treatments, I am honestly in support of this. Many mental illnesses can have a cultural component to them.

          > After all, we don't want to know the "average temperature of patients in a hospital", do we?

          No, I don't think we do. Am I understanding you correctly?

directevolve 3 days ago

A point I think is crucial to mention is that “effect size” here is just the standardized mean difference.

If a minority of patients benefit hugely and most get no benefit, then you get a modest effect size.

This is probably why this discussion always has a lot of people saying “yeah, it didn’t help me at all” and a few saying “it changed my life.”

I believe we should be focusing on statistical methods better suited to assessing this hypothesis formally. Basically, using mean differences is GIGO if you’re actually comparing a bimodal or highly skewed distribution to a bell curve.
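
A quick illustrative simulation of the point (the distribution is made up, not fit to any antidepressant data): if 20% of patients respond strongly and 80% not at all, the standardized mean difference comes out "modest" even though the drug is life-changing for one in five people.

    import random
    import statistics

    random.seed(0)
    n = 10_000
    control = [random.gauss(0, 1) for _ in range(n)]
    # Treatment arm: 20% strong responders (+2 SD shift), 80% non-responders.
    treated = [random.gauss(2, 1) if random.random() < 0.2 else random.gauss(0, 1)
               for _ in range(n)]

    pooled_sd = statistics.pstdev(control + treated)
    smd = (statistics.mean(treated) - statistics.mean(control)) / pooled_sd
    print(f"SMD: {smd:.2f}")  # ~0.35: a "modest" effect size overall,
                              # despite a huge benefit for a 20% subgroup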