
This doesn't work, for two reasons: 1) There's no way to know the output of Apple's second, private perceptual hash function, which is run server-side on the image's visual derivative to verify that a hit really is CSAM. So while you could force a collision with the on-device NeuralHash if you possessed illegal content, you'd have no way to know whether you had also fooled Apple's private server-side hash. 2) An Apple reviewer must verify that the image is illegal before anything is passed along to law enforcement.
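
As a rough mental model of that two-stage check, here's a minimal Python sketch. Every name in it (on_device_neuralhash, server_private_hash, the hash sets, human_review_confirms) is hypothetical, and cryptographic digests stand in for perceptual hashes purely for illustration; Apple's actual pipeline is not public.

    import hashlib  # stand-in only; real perceptual hashes are not cryptographic digests

    def on_device_neuralhash(image_bytes: bytes) -> bytes:
        # Public on-device perceptual hash (modeled here as a plain digest).
        return hashlib.sha256(b"on-device:" + image_bytes).digest()

    def server_private_hash(visual_derivative: bytes) -> bytes:
        # Second, private hash run server-side on the visual derivative.
        # An attacker who forges a collision against the on-device hash
        # cannot tell whether this one also collides.
        return hashlib.sha256(b"server-private:" + visual_derivative).digest()

    def human_review_confirms(visual_derivative: bytes) -> bool:
        # Placeholder for the manual review step; nothing is reported
        # to law enforcement unless a human confirms the match.
        return False

    def is_flagged(image_bytes: bytes, visual_derivative: bytes,
                   device_db: set[bytes], server_db: set[bytes]) -> bool:
        if on_device_neuralhash(image_bytes) not in device_db:
            return False   # stage 1: on-device match against known hashes
        if server_private_hash(visual_derivative) not in server_db:
            return False   # stage 2: independent private server-side match
        return human_review_confirms(visual_derivative)  # stage 3: human review

Under this model, a forged on-device collision passes stage 1 but, without the attacker being able to evaluate the private hash, gives no assurance of passing stage 2, and would in any case be caught at stage 3.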

