> If you've worked in computer security long enough, you know that Apple could already have added a backdoor to every build of iOS that allows the government to read any material on your device it chooses to read.
Yes. They could. However, getting caught doing that would be an absolute disaster for Apple, and they've both said the right things and taken the right actions on physical device security (moving more and more into the Secure Enclave to protect against demonstrated attacks).
It's entirely possible they've done this for specific cases, with a gag order, and nobody knows. However, it cannot be widespread, or it would end up known.
This, however, is adding a black-box image matching algorithm (we know it's not matching raw image file data, based on statements about how it's robust against cropping/rotation/resizing/etc.), and a black-box database of "Stuff claimed to be Really Bad." There is no way to audit the database and see what's in it, because of the one-way nature of hash functions, and there's almost certainly not a pile of the source material lying around for audit purposes (because that would be a really nasty legal problem). So you've got a "Trust us, this stuff is bad" database and a "Trust us, it matches images accurately" hash function.
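To make the "black box" point concrete, here's a toy perceptual hash: a simple average hash implemented from scratch. This is not Apple's algorithm (theirs is reportedly a neural network whose details aren't public); it only illustrates why this family of hashes survives resizing while still revealing nothing about what the source image looked like:

```python
# Toy "average hash" sketch. Nearby images map to nearby (often identical)
# hashes, so matching survives resizing, yet the 64-bit output can't be
# inverted to recover the image. This is NOT Apple's actual algorithm.

def average_hash(pixels, size=8):
    """Downscale a grayscale image (list of rows) to size x size by block
    averaging, then emit 1 bit per cell: is it brighter than the mean?"""
    h, w = len(pixels), len(pixels[0])
    cells = []
    for r in range(size):
        for c in range(size):
            block = [pixels[y][x]
                     for y in range(r * h // size, (r + 1) * h // size)
                     for x in range(c * w // size, (c + 1) * w // size)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return [1 if v >= mean else 0 for v in cells]

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

# A synthetic 32x32 gradient image, and a 16x16 downsampled copy of it.
big   = [[x + y for x in range(32)] for y in range(32)]
small = [[2 * x + 2 * y for x in range(16)] for y in range(16)]

# The two hashes are identical despite the resize; a cryptographic hash
# of the file bytes would differ completely.
print(hamming(average_hash(big), average_hash(small)))  # → 0
```

A cryptographic hash like SHA-256 is one-way in the same sense, but changes totally on a one-pixel edit; a perceptual hash is deliberately tolerant of edits, which is exactly why the match criteria can't be audited from the hashes alone.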
And, if those find enough matches (also unspecified), you get yourself reported directly to the various authorities.
It's guilt by algorithm, across every single Apple device (running the latest OS, etc... handwave as needed).
That's very, very different from a customized backdoor deployed to a small number of phones.
Dumb question: do we know from what they've released publicly if it will be possible for security researchers to snag a copy of the database, perform the same perceptual hash algorithm on a given image, and determine if there's a "hit", without violating some kind of license term?
Perhaps the community could run a crowdsourced "keep them honest" web service -- upload the latest illegal-in-China Winnie-the-Pooh meme; oh hey, look at that, it's in the China-only version of the database, isn't that weird, etc. etc. (Obviously you wouldn't want people "testing" images that are in the database for the actual stated purpose...)
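If the hash list and algorithm ever did become extractable, the check itself would be trivial. A sketch, where everything is hypothetical: the stand-in database entries, the threshold value, and the assumption that a "hit" is a Hamming-distance comparison (Apple has not published the actual matching criterion):

```python
# Hypothetical "keep them honest" membership check, assuming researchers
# could (a) extract the on-device hash list and (b) re-run the same
# perceptual hash on a probe image. All values here are stand-ins.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def is_flagged(image_hash, database, threshold=4):
    """Return True if the probe hash is within `threshold` bits of any
    database entry. The threshold is an assumption, not a known value."""
    return any(hamming(image_hash, entry) <= threshold for entry in database)

database = [[1, 0, 1, 0] * 16,      # stand-in 64-bit database hashes
            [0, 0, 1, 1] * 16]

probe = list([1, 0, 1, 0] * 16)
probe[0] ^= 1                        # one bit flipped, e.g. recompression

print(is_flagged(probe, database))   # → True, within the match threshold
print(is_flagged([0] * 64, database))  # → False, nowhere near any entry
```

The hard part isn't this loop; it's getting an authentic copy of the database and a bit-exact reimplementation of the hash, which is exactly what the license-terms question above is about.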
A scanner that can be fooled by embedding the images in some other document is going to become useless fast. Therefore, PDFs and office formats have to be checked, too. And archives, without doubt, would have to be unpacked on the fly. What if someone embeds a base64-encoded image in HTML? And so on, and so on.
You can't just look at media files; you naturally need to scan everything, and a working implementation would likely be antivirus-like, with many file-format filters, heuristics, and some extensible description language.
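To illustrate the embedding problem: the exact same bytes a media-only scanner would see in a standalone file can ride inside a `data:` URI in HTML. A minimal sketch, using a fake JPEG payload and a deliberately crude regex (a real scanner would need proper parsers for each format, which is the antivirus-like machinery described above):

```python
# Why a media-file-only scanner is trivially bypassed: identical image
# bytes hidden in an HTML data: URI. The regex is a crude illustration,
# not a robust parser; fake payload, not a real JPEG.
import base64
import re

jpeg_bytes = b"\xff\xd8\xff\xe0" + b"fake image payload" + b"\xff\xd9"
html = ('<img src="data:image/jpeg;base64,'
        + base64.b64encode(jpeg_bytes).decode() + '">')

# A naive scanner sees no image file here at all, just text...
# ...but a format-aware pass recovers the identical bytes:
for match in re.findall(r'data:image/[^;]+;base64,([A-Za-z0-9+/=]+)', html):
    recovered = base64.b64decode(match)
    print(recovered == jpeg_bytes)  # → True: the payload round-trips
```

And that's one nesting level; base64 inside HTML inside a ZIP inside an email attachment means recursive unpacking, exactly the slippery slope toward scanning everything.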
Do you have the Collateral Murder video? Abracadabra, and now you don't have it.
So, your argument is that Apple couldn't have implemented a wide-scale backdoor because people would know about it, but Apple could use this new technology in a nefarious way at scale and no one would know about it?