
No. That's the whole point of this endeavor.

It's a privacy-preserving framework to allow the government to monitor the contents of iCloud directly, with few, if any, Apple employees ever having to get their hands dirty.

Apple just handles a target database that gets distributed to phones, and then compiles a list of users whose data had hits against the target database.

Apple employees don't have to dirty their hands with what, perhaps, is in that target database. Not their problem!



That isn't accurate. They're not blindly handing lists of users over to the government.

If an account uploads multiple images that match known exploitative images and exceeds a threshold, then the account is flagged for review by Apple. (Note the threshold is selected to provide a ~1 in 1 trillion probability of incorrectly flagging an account.) Once they review and confirm a match, it's then forwarded to the National Center for Missing & Exploited Children for further action (and presumably referral to law enforcement).

More details in their whitepaper: https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...
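
A rough sketch of that flow, with illustrative names and a made-up threshold value (Apple's actual system uses NeuralHash and threshold secret sharing, so sub-threshold matches are cryptographically unreadable to Apple rather than merely uncounted):

    # Illustrative sketch only, not Apple's code.
    MATCH_THRESHOLD = 30  # hypothetical value

    def account_should_be_reviewed(uploaded_image_hashes, target_hash_set):
        """Flag an account for human review only once enough uploads match the database."""
        match_count = sum(1 for h in uploaded_image_hashes if h in target_hash_set)
        return match_count >= MATCH_THRESHOLD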


There are no details in the whitepaper.

The "1 in 1 trillion" figure is accidental flagging on the target database, but there is no validation whatsoever on the target database. How can you, or I, or any other citizen, know whether non-CSAM items are present in the target database?

-------

The NCMEC is a patsy for the police state on this one. It's gross, it's ugly, and it is a terrible outcome for the charity.

In their participation in this program, they make themselves into a front for the CIA, FBI, and DIA forces that are aching for opportunities to crack down on dissent in America. This is an awful, terrible outcome.

--------

The whole thing is an incredibly thin, easily pierced veil for any government. Even if you think the secret police forces of the United States generally do well by citizens, how do you feel about China, or Russia, or Eritrea, or Burma, or Turkmenistan using these tools to flag people trafficking images with undesirable fingerprints?


This is a good point. I had believed that the fact that Apple manually reviews the content implied they would compare the images against those in the database. Without the database of content, it does imply that outside organizations are uploading hashes to Apple and that Apple cannot determine the scope of the content.

However, that does not invalidate the fact that Apple is in the loop! It’s not just the NCMEC that has to be corrupted; it’s also Apple employees. Apple has stated in their whitepaper that they review all flagged content before forwarding it to the NCMEC. If Apple employees forward non-CSAM matches, that is a failure of the reviewers, who have betrayed their duty to prevent authoritarian abuse of this system.


“Apple just handles a target database” ==> Apple is in the loop


Read the papers published by Apple. The target database is (intentionally) designed to make it impossible for Apple to know what the targets are.

This is a complex cryptosystem designed to keep Apple out of the loop. There is a target database whose intended targets they cannot know, and customer data they prefer not to know.
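
For intuition about how a threshold can keep the operator blind, here is a toy Shamir secret-sharing example. It is a textbook illustration of the general technique the whitepaper invokes, not Apple's construction or parameters: think of each matching image contributing one share, with the key needed to read any match metadata only reconstructible once the threshold number of shares exists.

    # Toy Shamir secret sharing over a prime field; illustrative only.
    import random

    PRIME = 2_147_483_647   # 2**31 - 1, fine for a toy example
    THRESHOLD = 3           # shares needed to reconstruct (hypothetical)

    def make_shares(secret, n_shares, threshold=THRESHOLD):
        """Split `secret` into points on a random degree-(threshold-1) polynomial."""
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
        def f(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, n_shares + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x=0; needs at least THRESHOLD distinct shares."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
        return secret

    key = 123456789
    shares = make_shares(key, n_shares=5)
    assert reconstruct(shares[:THRESHOLD]) == key       # at the threshold: recoverable
    assert reconstruct(shares[:THRESHOLD - 1]) != key   # below it: effectively random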


Yes, but they review the matches! It is the same thing in the end: if Apple reviewers start to see that a bunch of political pictures are being flagged, they will realize that the system is no longer being used to flag CSAM content.


The database distributed to phones is just hashes. By itself it can't be validated.
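
A small illustration of why a bare hash set can't be audited: membership can be tested, but nothing about what the entries depict can be recovered from the digests. (This uses an ordinary cryptographic hash as a stand-in for NeuralHash; Apple's design additionally blinds the hashes, so even the device can't learn which entries matched.)

    # Illustrative only; not NeuralHash and not Apple's PSI protocol.
    import hashlib

    def image_digest(image_bytes):
        # Stand-in for a perceptual hash; the point is only that the mapping is one-way.
        return hashlib.sha256(image_bytes).hexdigest()

    # Opaque entries: what these digests represent cannot be determined from the strings.
    target_database = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
        "60303ae22b998861bce3b28f33eec1be758a213c86c93c076dbe9f558c11c752",
    }

    def matches_target(image_bytes):
        return image_digest(image_bytes) in target_database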



