Absolutely no causal link was shown. Maybe the nerve pains for which doctors prescribe gabapentin increase the risk of dementia on their own, or maybe there is some third factor that causes both nerve pain and dementia.
Valid comment. In general, retrospective studies aren't that rigorous, yet researchers love them (usually a quick and easy publication). They're hard to do well because controlling for confounding factors is tricky - even if you think you know what you need to control for, the data often isn't available, and even when it is, how you control for it can drastically change your findings.
Then layer the available data on top. I assume in this study they just tried to create a control group and an intervention group based on gabapentin prescriptions, then looked at how many in each had a dementia diagnosis. So many ways the data can mislead! Differences between the control and intervention groups in total drug exposure, other medical interventions, diagnosis rates, family history, etc, etc. They are basically going in blind.
Retrospective studies can be useful in identifying potential signals. It's what we do for drug safety regularly. But it's not rigorous enough data to start making changes to medical care - you need a more rigorous study to confirm.
But what annoys me is the coverage these studies get. The average reader thinks "oh my god!", when they should think "interesting, but there is a good chance they are seeing a signal that isn't there".
A great example of the impact is the use of hormone replacement in menopausal women. It used to be very common until a study came out showing higher rates of uterine cancer (I believe). Use of hormone replacement went way down, and plenty of women suffered from menopausal symptoms for a few decades.
Then a massive (roughly 160,000 women) prospective, randomized, controlled study (the WHI) was done and it was clear the safety signal wasn't there.
Not sure why this comment is getting downvoted. The article itself states that:
> "This is an observational study, and as such, no firm conclusions can be drawn about cause and effect. The researchers also acknowledge that their study was retrospective, and they weren't able to account for dose or length of gabapentin use."
Not to be too meta, but it’s kind of boring to point out the obvious limitation of the research method.
“No causal link is proven” could be said about so much science, especially medical science and other disciplines where research methods are limited by ethical or practical constraints. So you end up with this comment in every front page post about an observational medical study. We could be discussing the actual research or its implications, instead of repeating a discussion on the limitations of research methodology.
In addition, I find these types of critiques to be a little too cynical even for my taste. There’s a whole group of people who feel smart by finding ways to dismiss scientific studies even when there is actually some interesting data being brought up.
On a first brief reading I misread the title as causal, even though it only claims a link. I think that is worth pointing out for those who check the comments before reading the article.
Yes, these comments are necessary pushback against the habit these disciplines have of pushing interventions that don't work because their evidentiary standards are bad.