
How about a counterexample? I am a consenting adult in my thirties. I create a photo or video and send it to my partner. The algorithm flags it as CSAM when it only shows a fat bald guy. Before I know it, I'm under investigation, and my life is ruined because the algorithm got it wrong. Even being accused of this sort of thing is enough to destroy someone and drive them to suicide.


The "algorithm" isn't some sort of neural network trying to intelligently identify things that "look like" CP. It's a perceptual hash matching against a database of known CP. It has to find multiple matches before it flags the account for review to reduce false positives. Only after review confirming a match to known exploitative images is the info referred to NCMEC for action.

Full whitepaper: https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...
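To make the distinction concrete, here is a minimal sketch of threshold-based perceptual-hash matching in Python. Everything in it is illustrative: the hash function here is the classic "average hash" rather than Apple's NeuralHash, and the names, distance cutoff, and review threshold are all assumptions, not values from the whitepaper. Apple's actual pipeline also wraps the matching in private set intersection and threshold secret sharing, which this sketch omits entirely.

    # Toy illustration of threshold-based perceptual-hash matching.
    # All names and thresholds are hypothetical; this is NOT Apple's
    # NeuralHash/PSI pipeline, just the matching-plus-threshold idea.

    from dataclasses import dataclass, field

    MATCH_DISTANCE = 4      # max Hamming distance counted as a match (assumed)
    REVIEW_THRESHOLD = 3    # matches required before human review (assumed)

    def average_hash(pixels: list[list[int]]) -> int:
        """Classic aHash over an 8x8 grayscale grid: bit i is 1 iff
        pixel i is brighter than the mean. Similar images yield hashes
        that differ in only a few bits."""
        flat = [p for row in pixels for p in row]
        mean = sum(flat) / len(flat)
        h = 0
        for p in flat:
            h = (h << 1) | (1 if p > mean else 0)
        return h

    def hamming(a: int, b: int) -> int:
        """Number of differing bits between two hashes."""
        return bin(a ^ b).count("1")

    @dataclass
    class Scanner:
        known_hashes: set[int]                        # database of known-image hashes
        match_counts: dict[str, int] = field(default_factory=dict)

        def scan(self, account: str, image_hash: int) -> bool:
            """Count a match if the hash is near any database entry;
            return True only once the account crosses the threshold."""
            if any(hamming(image_hash, k) <= MATCH_DISTANCE
                   for k in self.known_hashes):
                self.match_counts[account] = self.match_counts.get(account, 0) + 1
            return self.match_counts.get(account, 0) >= REVIEW_THRESHOLD

The point of the threshold is exactly the false-positive mitigation described above: a single chance hash collision never surfaces an account on its own; only repeated matches against the database trigger the human-review step.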


Who validates said database of "known CP"? How do we know the images aren't really "find the dissident" material? Who watches the watchers?

This entire endeavor hands the keys to an unaccountable police state.

