Hacker News

What happens if someone leaks or guesses the weights on that "secret" classifier? The whole system is so ridiculous even before considering the amount of shenanigans the FBI could pull by putting in non-CSAM hashes.


For better or worse, opaque server-side CSAM models are the norm in the cloud photo hosting world. I imagine that the consequences would be roughly the same as if Google's, Facebook's or Microsoft's "secret classifiers" were leaked.


But in the cloud setting they have the plaintext of what was uploaded. The attack described above is about abusing the lack of information Apple has, so that they end up reporting an innocent user to the authorities.


The voucher that Apple can decrypt once enough positives have been received contains a scaled-down version of the original image. How else would Apple be able to even run a second hash function on the same picture?
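The "decrypt once enough positives have been received" part is a threshold scheme. A minimal sketch of the idea (my simplification using Shamir secret sharing, not Apple's actual protocol): each matching upload reveals one share of a per-account key, and below the threshold the key, and hence the voucher contents, stays unrecoverable.

```python
# Toy threshold decryption sketch (assumption: Shamir-style sharing as a
# stand-in for Apple's real construction). Each positive match hands the
# server one share; t shares recover the key, fewer reveal nothing useful.
import random

P = 2**61 - 1  # prime modulus for the finite field

def make_shares(secret, t, n, rng):
    # Random polynomial of degree t-1 whose constant term is the secret.
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation of the polynomial at x = 0.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

rng = random.Random(0)
key = rng.randrange(P)                          # per-account voucher key
shares = make_shares(key, t=3, n=10, rng=rng)   # one share per match
assert recover(shares[:3]) == key               # threshold reached
```

Any 3 of the 10 shares recover the key; 2 shares interpolate the wrong polynomial and yield an unrelated value.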


Can't they just make a new one and recompute the 2nd secret hash on the whole data set fairly easily?

Also, the whole point is that it's fairly easy to create a fake image that collides with one hash, but doing it for two is exponentially harder. It's hard to see how you could craft an image that collides with both hashes (of the same image, mind you).
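The scaling argument can be seen with a toy model (my construction, nothing like NeuralHash): treat each hash as k sign bits of a random projection. A random candidate then matches one k-bit hash with probability about 2^-k, but matches two independent hashes with probability about 2^-2k, so the second hash multiplies the attacker's work by roughly 2^k.

```python
# Toy sketch: colliding with two independent short "perceptual hashes"
# is ~2^K times rarer than colliding with one. W1/W2, phash, etc. are
# made-up stand-ins, not the real client/server models.
import numpy as np

rng = np.random.default_rng(0)
D, K, N = 64, 8, 200_000          # image dim, bits per hash, candidates

W1 = rng.standard_normal((K, D))  # stand-in for the client-side hash
W2 = rng.standard_normal((K, D))  # independent server-side stand-in

def phash(W, x):
    # K-bit "perceptual hash": sign pattern of a random projection
    return (x @ W.T) > 0

target = rng.standard_normal(D)   # image whose hashes we want to forge
t1, t2 = phash(W1, target), phash(W2, target)

cands = rng.standard_normal((N, D))
m1 = (phash(W1, cands) == t1).all(axis=1)   # collides with hash 1
m2 = (phash(W2, cands) == t2).all(axis=1)   # collides with hash 2
hits_one, hits_both = int(m1.sum()), int((m1 & m2).sum())
```

With K=8, roughly 1 in 256 random candidates collides with the first hash, but only about 1 in 65,536 collides with both. (A real attacker optimizes rather than samples randomly, but a private second hash gives no gradient to optimize against.)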


Two hash models are functionally equivalent to a single double-sized hash model. So it shouldn't be any harder to recompute against the 2nd hash, if that 2nd hash were public.

Of course, it won't be public (and if it ever became public they'd replace it with a different secret hash).


If you have both models, it is easy. If Apple manages to keep the server model private, then it is hard.


You don’t need to have the weights. “Transfer attack” is a thing.
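For the intuition behind transfer attacks: models trained on similar data tend to learn similar decision boundaries, so a perturbation crafted against an attacker's own surrogate often fools the hidden victim model too. A minimal sketch with linear classifiers (my construction; `fit_centroid`, the data, and the epsilon are all illustrative assumptions):

```python
# Toy transfer attack: perturbations computed against a SURROGATE linear
# model, trained on the attacker's own data, also flip the predictions
# of an independently trained victim model.
import numpy as np

rng = np.random.default_rng(0)
D, N = 20, 500
mu = np.full(D, 0.5)              # class-1 mean; class 0 sits at -mu

def sample(n):
    y = rng.integers(0, 2, n)
    X = rng.standard_normal((n, D)) + np.where(y[:, None] == 1, mu, -mu)
    return X, y

def fit_centroid(X, y):
    # simple linear classifier: difference of the two class centroids
    return X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)

Xa, ya = sample(N)                # attacker's data -> surrogate model
Xv, yv = sample(N)                # victim's data   -> victim model
w_sur, w_vic = fit_centroid(Xa, ya), fit_centroid(Xv, yv)

def predict(w, X):
    return (X @ w > 0).astype(int)

Xt, yt = sample(N)                # fresh test points
clean_acc = (predict(w_vic, Xt) == yt).mean()

# Perturb each test point toward the wrong side of the SURROGATE's
# boundary; the attacker never sees w_vic.
eps = 6.0
delta = eps * w_sur / np.linalg.norm(w_sur)
Xadv = Xt - np.where(yt[:, None] == 1, delta, -delta)
adv_acc = (predict(w_vic, Xadv) == yt).mean()
```

The victim's clean accuracy is high, while its accuracy on the transferred adversarial points collapses, even though the attack was computed purely against the surrogate's weights.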



