How about a counterexample? I am a consenting adult in my thirties. I create a photo or video and send it to my partner. The algorithm flags it as CSAM when it actually just shows a fat bald guy. Before I know it, I'm under investigation and my life is ruined because the algorithm got it wrong. Even being accused of this sort of thing is enough to destroy someone and drive them to suicide.
The "algorithm" isn't some sort of neural network trying to intelligently identify things that "look like" CP. It's a perceptual hash matched against a database of known CP. It has to find multiple matches before it flags the account for review, which reduces false positives. Only after review confirms a match to known exploitative images is the information referred to NCMEC for action.
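The matching scheme described above can be sketched in a few lines. This is a hedged illustration, not the actual system: the hash values, the match distance, and the review threshold are all made-up placeholders, and real perceptual hashes are derived from image content rather than hardcoded. The point is the structure: a photo only counts as a match if its hash is within a small Hamming distance of a known hash, and the account is only surfaced for human review once multiple independent matches accumulate.

```python
def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit perceptual hashes."""
    return bin(a ^ b).count("1")

# Hypothetical database of known-bad perceptual hashes (placeholder values).
KNOWN_HASHES = {0xDEADBEEFCAFEF00D, 0x0123456789ABCDEF}

MATCH_DISTANCE = 4      # max Hamming distance to count as a match (assumed)
REVIEW_THRESHOLD = 3    # matches required before human review (assumed)

def count_matches(photo_hashes):
    """Count how many of the user's photo hashes match a known hash."""
    return sum(
        1 for h in photo_hashes
        if any(hamming_distance(h, k) <= MATCH_DISTANCE for k in KNOWN_HASHES)
    )

def should_flag_for_review(photo_hashes) -> bool:
    # A single near-collision is ignored; only an accumulation of
    # matches against the known database triggers human review.
    return count_matches(photo_hashes) >= REVIEW_THRESHOLD
```

Under this sketch, one accidental near-collision (the "fat bald guy" photo) never reaches a human reviewer, let alone NCMEC: it takes several distinct matches against already-known images before anyone even looks.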