A sample of 36 participants is reasonable for showing some effects, and it's easy for people who've just internalized "small samples bad" to underestimate that. I'm glad you raised the point, especially since ignoring it tends to push people to respect very large studies - even when those studies rely on self-report for famously unreliable topics like weight and sleep.
I still find it worrying here because we know there's substantial variance in how and when people sleep, and 36 is well below "representative sample" numbers. More generally, a 36-participant, 2-week study ought to be examined pretty carefully when it contradicts a study (Akerstedt et al.) on 38,000 subjects over 13 years. I can hardly blame the authors for that, though; small lab confirmations of large self-report studies are a useful and established practice.
But I don't think that's the damning part. The 9-night study span is incredibly worrying for assessing a long-term effect. If a value shifts over that span, is it rising steadily, rising to a new equilibrium, or reacting to a change and then returning to homeostasis?
(I do want to give credit to the study for using pre-experiment substance/diet/sleep control, and then applying individual sleep times to avoid "night owls" being a confounder. Seeing studies judge "sleep time" by making all subjects go to bed at the same hour is infuriating.)
Finally, there's strong evidence that something was unusual about the study conditions. The WaPo article claims that the weekend sleepers "gained nearly three pounds over two weeks", but that's pretty much journalistic malpractice. The study actually found that all participants gained weight; more weight with higher confidence in the low-sleep groups, but the groups overlapped heavily in the amount gained. We can be confident that two weeks of work-schedule sleep does not cause that sort of weight gain for most people, since the reductio ad absurdum there is everyone gaining ~70 lb/year indefinitely. It's the sort of number that raises serious questions about how the lab setting compares to anything outside it.
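The ~70 lb/year figure is just a naive linear extrapolation of the headline number; a quick sketch (the 2.7 lb input is my reading of "nearly three pounds over two weeks", not a figure from the paper):

```python
# Sanity check on the headline claim: if a ~3 lb gain over a 2-week
# protocol persisted linearly, what annual weight gain would it imply?
WEEKS_PER_YEAR = 52

def annualized_gain(pounds: float, weeks: float) -> float:
    """Naively extrapolate a short-term weight change to a yearly rate."""
    return pounds / weeks * WEEKS_PER_YEAR

rate = annualized_gain(pounds=2.7, weeks=2)
print(f"{rate:.0f} lb/year")  # -> 70 lb/year
```

The absurdity of that output is the point: whatever the lab measured, it cannot be a steady-state effect of ordinary work-week sleep schedules.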