Don't shit in my hand and call it chocolate ice cream.
"We're going to scan your photos, on your encrypted device, to look for badness. Right now, we're going to claim that's only for the really icky people that nobody is going to defend, but, hey, once the tech is in place, who's to say we can't scan for dank memes and stuff?"
I think I'm done with Apple. Sad, really. I was hoping that their bowing to China with iCloud wasn't a sign of what's to come, but apparently it was. They had done such nice stuff with privacy too.
Demote my phone to a house phone and go without, I suppose.
These processes seem to only move in one direction. In 5 years this exact comment will probably make you sound like an "extremist" if you say it to a random person. "What, why wouldn't you be okay with Apple searching through your photos?"
I see it everywhere and it literally seems like some kind of one-way entropic process. I can't think of anything that would reverse it. It would be like turning an omelet into an egg.
Is there something about modern life that just inescapably creates this complacent, servile temperament in most of the population? Or has it always been there and I'm just overthinking it? It's really depressing either way so I try not to think about it
>I see it everywhere and it literally seems like some kind of one-way entropic process. I can't think of anything that would reverse it. It would be like turning an omelet into an egg.
Thomas Jefferson had some arboriculture advice that seems relevant to this kind of thing.
Assuming they were not struck dead by future shock (photo, radio, television, telephone, mobile handsets), they would lose their damn minds that Black people have escaped slavery, non-landholders can usually vote, and the Presidency is (mostly) determined by the popular vote.
I do not think any of the founders, however amorphously the term is applied, would have interesting thoughts about cryptosystems for the examination of private photographs stored in commercial cloud setups.
> Is there something about modern life that just inescapably creates this complacent, servile temperament in most of the population? Or has it always been there and I'm just overthinking it?
I only have anecdotal evidence, but it seems like most people don't care about problems unless they affect themselves personally right now. Also, most of us (HN aside) don't tend to think seriously about how things will go wrong, we're generally optimists that give more weight to the good something will do.
I appreciate that Apple is trying to find technical means to reduce child porn. That's a more important problem to solve than optimizing ad click rates. But my concern is that once the tools for local image scanning exist, Apple will come under pressure from authoritarian regimes like China, Australia, and Saudi Arabia to also search for images associated with lesser crimes or even just criticism of the government. It's tough for a company to refuse. Either comply with the orders, or risk having your business shut down and your local employees punished.
What makes you think that child porn is such an important problem, apart from rag articles on the vague “Darknet”? Shouldn't all the phones get scanned for evidence of tax evasion, a much more widespread crime? All those rich people dealing with offshore businesses have iPhones™, it's only natural to make Apple do their part as an honest member of society… I mean, an honest corporation of society.
If you allow me to guesstimate wildly, most “child porn” these days, in a technical sense, is made by kids themselves having access to an internet-connected device with a camera. Sometimes it is extorted by despicable abusers, sometimes it is done for no one in particular, just for the perceived fame/popularity/likes on social services/etc. Big services have an army of moderating grunts to keep the Victorian purity of a blissfully ignorant public intact; things are a bit different in poorer parts of the Net and of the Globe. Should we expect the naked selfie of a teenager sent to their significant other to automatically cause a police interrogation of the same teenager? What if the device is shared with older members of the family? Another man-made dystopia, enabled by people who enjoy expressing outrage over racy stories in the media, and people who are too afraid to speak up.
In essence, Apple has introduced a software agent to signal whether you have files that belong to a list someone provides. If I recall the scandal correctly, this is what Kaspersky allegedly (ab)used, and what other antivirus tools (including Microsoft's built-in and enabled-by-default Windows scanner, which for some seemingly important reason nags all the time if uploading of files to Microsoft is disabled) surely enjoy offering to various agencies around the world.
I don't think you should worry about “China, Australia, and Saudi Arabia” so much; there's an elephant in the room you prefer not to notice.
You have it in reverse. "Think of the children" is always the excuse to get started on the next big thing (like all the other things you mentioned). So it's step 1 of a grand plan along the lines of 1984 (probably to open up some more markets by cozying up to governments).
The state must declare the child to be the most precious treasure of the people. As long as the government is perceived as working for the benefit of the children, the people will happily endure almost any curtailment of liberty and almost any deprivation. -- Adolf Hitler
Do we know the true origin of the quote? The source you link links to another source and makes it sound like that was the origin, but the source it links to doesn't use the full quote and already is using it as a quote, so it makes me think the actual quote predates that usage. It may be the first time it was misattributed to Hitler, but is it actually the origin of the quote?
"This quotation is often cited as having come from Mein Kampf, the autobiography and political polemic written by Hitler during his stay in Landsberg prison, but no such passage appears in that work. In fact, this quotation appears to have been virtually non-existent until 2014, when it was used as the heading to Chapter 11 of Willfully Ignorant, a novel by Pat Miller set in Nazi-era Germany. (That book also mistakenly cited Mein Kampf as the source of the quotation.)"
There is something like a 0% chance this program wasn't co-developed with representatives of the government entities you least want rifling through your data.
I find discussions around this kind of stuff frustrating because what's actually happening often gets muddled by the hysteria, even here on Hacker News.
From the link: "Before an image is stored in iCloud Photos..."
This leads me to believe that only data that is going to get uploaded to their servers is going to be scanned. If anyone has a different interpretation or thinks I'm wrong, feel free to reply.
It's my understanding that all cloud services do this type of scanning, when they are technically able to.
But all data on iCloud is encrypted by default, so Apple can't scan for this kind of material once it is on their servers. Doing it on device before it gets encrypted and uploaded is the only place they even could do a scan like this.
Additionally, they make it clear in the article that there has to be more than one hit (they don't say the actual number) which would mitigate risk of hash collision false positives.
If this type of scanning makes you uncomfortable, you can just not use their cloud services.
I do agree that this is still not a good direction to go, even with all the precautions they've taken. But I had to do some digging to figure out what was actually going on, the comments/commentary made it seem like Apple is now routinely scanning all your photos/videos if you have an iPhone.
Once the code is there to do local scanning, it might make it easier for a zero day exploit to do phone scanning and grab data it might not otherwise have access to or for governments to force Apple to conduct scans of content on a phone when they ask.
iCloud isn't end-to-end encrypted, and Apple is technically capable of accessing the data stored there. As I understand it, they already do server-side scanning for abusive material.
I'm not an expert in how this works, but I am skeptical that Apple would do this if they had a way of scanning iCloud photos and videos on their servers.
A generous explanation would be that they intend to add end-to-end encryption to iCloud eventually, together with this new scanning technology as a "government backdoor". A less generous explanation would be that they intend to expand this scanning to non-iCloud media eventually.
I agree - I think this is a bad decision on Apple's part. Doing any kind of on-device scanning of your content really undercuts a lot of their statements about privacy, even in such a narrow context.
I guess the point I am making is that as of now, this only applies if you're using their cloud services. I'm not sure if Apple would announce if they were compelled to use this functionality through a court order.
The US is one of the main countries that spy on their citizens. I do wonder if Apple developed this to keep the FBI happy, the same way they dropped E2EE cloud backups to keep the FBI happy.
Forget about authoritarian regimes committing genocide, think about the children sharing Pepe memes with OK signs. There are lots of pitchforks ready to go, and some of them already work within the Apple campus, writing design docs on how to make the Apple ecosystem even more safe.
Who has access to update what are considered bad images and when to alert the authorities? This could very easily be made into an anti-wikileaks feature.
Exactly, if some whistleblower is going to leak a picture of some classified documents, the intelligence agency could simply upload a picture of the documents to the database. And bam, the journalist is arrested.
Moreover, if the database doesn't store actual images and instead only the perceptual hashes, it would be impossible to audit, even if the auditor has access to the database.
All the auditor would see is a bunch of hashes, they wouldn't be able to tell which hash actually represents CP and which hashes represent pictures of confidential documents.
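That opacity is easy to demonstrate in code. A sketch, with `hashlib.sha256` standing in for the perceptual hash a real system would use (the byte strings and the `fingerprint` name are made up for illustration):

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Stand-in for a perceptual hash; sha256 just illustrates
    that a database of digests reveals nothing about its sources."""
    return hashlib.sha256(image_bytes).hexdigest()

# Hypothetical entries: one genuinely abusive image, one scan of a
# leaked document an agency wants to trace. (Byte strings are fake.)
csam_hash = fingerprint(b"...abusive image bytes...")
document_hash = fingerprint(b"...classified document scan...")

# This is everything an auditor of the database would ever see:
database = [csam_hash, document_hash]
for h in database:
    print(h)  # 64 hex chars each; nothing distinguishes the entries
```

Whether an entry targets abuse imagery or political material is simply not recoverable from the digest, which is the whole point of the hash and the whole problem for auditing.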
The next question: since this database is not made of exact binary hashes but of an ill-defined "perceptual hash", how many false-positive images has this system surreptitiously extracted from customer hardware and shown to presumably human control personnel? Were these customers notified? What is the recourse for them?
PhotoDNA has been around for a while and you might think about it what you want, but they have never answered those very pertinent questions.
This is all a few months after touting themselves as a company focused on privacy. Decentralization is the only way to truly know you have privacy today I suppose.
You’re already completely trusting them if you’re using their device? I mean I get the privacy concern, but it’s odd that this is the line if you were fine with things before. That said I’m all for ditching the smartphone and best of luck to you
Historically, Apple has come down, hard, on the side of "Your device is your device, and where that assumption is violated, we will work to protect it." They have added increasing layers of hardware security around device encryption keys, have added the various mitigations for password guessing attacks, etc.
And when the FBI said, "Hey, can you write and sign a custom bootloader for this phone to bypass that stuff?" they told the FBI to pound sand and made the hardware security features stronger so even Apple couldn't break them.
And then they bowed to China regarding iCloud and in-country servers, which clearly are accessible to the government. And then this. Whatever claims they've been making about privacy are obviously now crumbling under some external pressure.
> they told the FBI to pound sand and made the hardware security features stronger so even Apple couldn't break them
You didn't really believe that cheesy open letter, did you? The real reason they declined was because FBI innocently said “We can't bother to transfer phones to Apple all the time, just give us the tool, we've seen how it's done enough times, and can use it just the same. Here's the official demand.” If Apple did that, the next day the tool would be leaked to multiple other agencies and actors, and on the day after any stand on any Chinese market would offer the iPhone unlock and reset for a small price. In other words, the shattering of the legend of privacy protection marketed so hard, and financial trouble. The FBI effectively demanded to hold one of the keys to Apple's power, and was found not significant enough to be esteemed.
Then they all pretended to have a legal fight to smokescreen the public. Apple itself said that it had always cooperated in doing the job on premises, and you can be pretty sure that every official case, semi-official case, and special case leaving no paper trail was routinely accepted. Apple was protecting its own secrets, not yours.
> Historically, Apple has come down, hard, on the side of "Your device is your device, and where that assumption is violated, we will work to protect it."
On the contrary, if my iPhone really was "my device" then Apple wouldn't make it so hard to sideload software (it was literally impossible to sideload without building the software from source yourself until very recently) - and they'd let me change my default browser/maps/contacts/email apps too.
Apple's stance on privacy is something that really doesn't cost them anything but from which they gain massively by using it as a bludgeon against Google and Android (and Facebook...).
> Imagine being dumb enough to believe that PR campaign. If you actually believe that the spooks don’t have iPhone back doors after the PRISM slide leaks, I don’t even know what to say…
My pet theory is that because Apple can ship personalized iOS software updates to specific iPhones - and assuming that iPhone has automatic iOS update installation enabled (which is the default now; previously it wasn't) - Apple could essentially OTA-push an intentionally compromised iOS build to someone's phone and they'd be none the wiser.
That hardly requires any backdoors or even any security vulnerabilities - just someone with access to Apple's iOS build signing keys.
At the limit you are trusting the sum of the people who are creating the software you use - and that includes drivers, firmware, etc. Linux is not exempt, but it may be easier to find issues in open source code.
Given this, Apple was not a bad person to trust. Yes they wanted you to pay them money, but were a) quite happy to deliver value in return, b) prepared to do lots of hard work to fight on your behalf and c) it was quite clear that you were their customer, not their product. Google is an advertising company, even if you pay them money you are always also the product.
I am seriously wondering what happened to Apple that they came up with this idea, it doesn't seem to be in their interests at all.
They're already scanning your photos to classify the images - that's how you can search for photos by description. They can also OCR the text in images. Seems like a weird time to start getting angry about your privacy being invaded.
Would you please describe a solution to finding CSAM which you would find acceptable? It seems to me that on-device scanning for known CSAM content is on the lower end of the privacy-violating spectrum.
There is no solution to searching private data that I am going to find acceptable.
* If the "target" database is secret, then government entities are free to use a "CSAM" database to monitor dissidents by adding their own items of interest to the target list.
* If the "target" database is public, the only way to validate its contents is to ... traffic in child sex abuse material. That's not great.
Basically, there is no way to create and validate this database in public view. I don't need yet another flavor of secret policing in America.
This doesn't seem like a problem that can be solved without compromising my privacy.
I would rather the problem went un-solved than allow the state to rifle through all of my private files to "prevent the distribution of child sex abuse material," knowing full well the state will define that "material" however they like, then use parallel construction to prosecute dissidents whenever they have gotten information illegally.
It's a privacy-preserving framework to allow the government to monitor the contents of iCloud directly, with few, if any, Apple employees ever having to get their hands dirty.
Apple just handles a target database that gets distributed to phones, and then compiles a list of users whose data had hits against the target database.
Apple employees don't have to dirty their hands with what, perhaps, is in that target database. Not their problem!
That isn't accurate. They're not blindly handing lists of users over to the government.
If an account uploads multiple images that match to known exploitative images and exceeds a threshold, then the account is flagged for review by Apple. (Note the threshold is selected to provide a ~1 in 1 trillion probability of incorrectly flagging an account.) Once they review and confirm a match, it's then forwarded to the National Center for Missing & Exploited Children for further action (and presumably referral to Law Enforcement.)
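The threshold mechanic can be sketched with a quick back-of-envelope calculation. The per-image false-match rate and the thresholds below are made up (Apple publishes neither); the point is only how fast the tail probability falls as the threshold rises:

```python
from math import exp, factorial

def p_account_flagged(p_img: float, n_images: int, threshold: int) -> float:
    """P(an account with n_images innocent photos accrues at least
    `threshold` false matches), via a Poisson approximation of the
    binomial (valid when p_img is small)."""
    lam = p_img * n_images  # expected false matches for this account
    # Tail terms beyond threshold+50 are negligibly small.
    return exp(-lam) * sum(lam**k / factorial(k)
                           for k in range(threshold, threshold + 50))

# Assumed numbers: a 1-in-a-million per-image false-match rate and
# a library of 10,000 photos.
for t in (1, 5, 10):
    print(t, p_account_flagged(1e-6, 10_000, t))
```

Under these assumed numbers a threshold of 1 flags roughly one account in a hundred, while a threshold of 5 is already below the 1-in-1-trillion mark, which is presumably why a multi-match threshold is used at all.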
The "1 in 1 trillion" figure is accidental flagging on the target database, but there is no validation whatsoever on the target database. How can you, or I, or any other citizen, know whether non-CSAM items are present in the target database?
-------
The NCMEC is a patsy for the police state on this one. It's gross, it's ugly, and it is a terrible outcome for the charity.
In their participation in this program, they make themselves into a front for the CIA, FBI, and DIA forces that are aching for opportunities to crack down on dissent in America. This is an awful, terrible outcome.
--------
The whole thing is an incredibly thin, easily pierced veil for any government. Even if you think the secret police forces of the United States generally do well by citizens, how do you feel about China, or Russia, or Eritrea, or Burma, or Turkmenistan using these tools to flag people trafficking images with undesirable fingerprints?
This is a good point. I believed that the fact that Apple manually reviews the content implies they compare the images against those in the database. Without the database content, it does imply that outside organizations are uploading hashes to Apple and that Apple cannot determine the scope of the content.
However, that does not invalidate the fact that Apple is in the loop! It's not just the NCMEC that has to be corrupted - it's also Apple employees. Apple has stated in their whitepaper that they review all flagged content before forwarding it to the NCMEC. If Apple employees forward the non-CSAM matches, then that is a failure of the reviewers, who have betrayed their duty to prevent authoritarian abuse of this system.
Read the papers published by Apple. The target database is (intentionally) designed to make it impossible for Apple to know what the targets are.
This is a complex cryptosystem designed to keep Apple out of the loop. There is a target database whose intended targets they cannot know, and customer data they prefer not to know.
Yes, but they review the matches! It is the same thing in the end: if Apple reviewers start to see that a bunch of political pictures are being flagged, they will realize that the system is no longer being used to flag CSAM content.
Traditional police work that respects the principle behind the 4th Amendment: no searches of private information, algorithmic or otherwise, directly by the government or on its behalf, where you do not have probable cause.
Yes, but the big exception to the fourth amendment are private searches. As long as it's not the government, or someone working on their behalf searching, it's not covered by the fourth amendment. The trend these days is to eliminate constitutional protections by farming the violations out to private corporations.
I agree it's not a violation of the fourth amendment (so long as Apple is not being forced to do this by the government), which I why I said "the principle behind the fourth amendment" instead.
I wonder what the limits to this are. Clearly if government asks for help then that counts, but what happens if government slowly builds up partnerships where certain 'good partner' behavior is expected but never explicitly required in a contract?
Say government started favoring companies who scan for drug images for any contracts over those who don't? Or say that while working with companies that scan for drug images they build up relationships that lead to both better treatment and to better chances at winning contracts? Maybe companies that refuse to implement the drug abuse material scans end up getting investigated more often.
I wonder how long until the fourth amendment can be considered dead due to this one exception?
The U.S.A.'s weak privacy protections aren't the entire world, you know.
In the E.U., and other places, companies are definitely held to stricter standards as to what extent they can search a private device, even if they claim the o.s. is only a service.
The fourth amendment is irrelevant here. The constitution is meant to limit the power of government because government is ubiquitous and it has a monopoly on violence. If it were to become too powerful then it will oppress its citizens.
Preventing a digital platform from governing what kind of content it hosts is just absurd and makes no sense. If Apple decides that it does not want to be the platform where CSAM content is distributed, then they should be free to do so. If you don't like being part of a platform which scans images for abusive content, migrate to another platform.
Where you do not have both probable cause and a warrant, based on and identifying that cause, signed by a publicly elected judge in the user's (not the company's/operator's) jurisdiction.
Seriously? For one, this is basically a warrantless search which is illegal (obviously they get around this because Apple is a private company). Also, trusting algorithms for critical things like this is beyond absurd.
I still don’t understand the reaction. People get tied up in a knot over this but it is an effective deterrent to child pornography full stop. Even if it rubs you the wrong way to have software fingerprinting your files, I really don’t care if it means placing a deterrent in place against child trafficking.
Let’s walk through this.
1. Criminal kidnaps child and abuses him.
2. Criminal produced video of said abuse and sells it on the web.
3. Criminal continues to sell it and it spreads.
4. The video is detected by authorities, who promptly add it to the database.
5. Video is cryptographically hashed and now anybody who stores this content in iCloud can be identified
6. A customer of the criminal is caught
7. Forensics leads authorities to criminal who produced the video
8. One less criminal to profit from kidnapping and abusing children
Everyone tries to paint this approach as a slippery slope to facial recognition. It doesn't have to be that way if the right people are in the loop to blow the whistle.
> I still don’t understand the reaction. People get tied up in a knot over this but it is an effective deterrent to child pornography full stop. Even if it rubs you the wrong way to have software fingerprinting your files, I really don’t care if it means placing a deterrent in place against child trafficking.
Most reductions of privacy toward the police would act as an effective deterrent to that crime and other crimes.
Deterring crime is not enough to justify a reduction in privacy.
Hypotheticals: What if your platform is causing a growth in the crime rate? What if your platform is enabling new forms of crime and potentially on an unprecedented scale? Is it justified then?
For the first one it depends on how much growth and in what crimes specifically.
For the second one, maybe, but I seriously doubt icloud encryption is going to do that.
Either way, just talking about baselines and percents is a good improvement over just "this would decrease crime". Add in the downsides too and you have yourself a good platform for discussion!
How about a counterexample? I am a consenting adult in my thirties. I create a photo or video and send it to my partner. The algorithm flags it as CSAM when it only shows a fat bald guy. Before I know it I'm under investigation and my life is ruined because the algorithm got it wrong. Even being accused of this sort of thing is enough to destroy someone and drive them to suicide.
The "algorithm" isn't some sort of neural network trying to intelligently identify things that "look like" CP. It's a perceptual hash matching against a database of known CP. It has to find multiple matches before it flags the account for review to reduce false positives. Only after review confirming a match to known exploitative images is the info referred to NCMEC for action.
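For intuition about what a perceptual hash is, here is a minimal average-hash ("aHash") in pure Python on a toy 8x8 grayscale grid. PhotoDNA and Apple's system are far more sophisticated; this only shows the core idea that visually similar images map to the same fingerprint, unlike a cryptographic hash:

```python
def average_hash(pixels):
    """pixels: 8x8 grid of grayscale values (0-255) -> 64-bit int.
    Each bit records whether a pixel is above the image's mean."""
    avg = sum(sum(row) for row in pixels) / 64
    bits = 0
    for row in pixels:
        for p in row:
            bits = (bits << 1) | (1 if p > avg else 0)
    return bits

# A made-up gradient image and a slightly brightened copy (the kind
# of change a re-encode or filter might introduce).
image = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
brighter = [[min(p + 5, 255) for p in row] for row in image]

# The pixel data differs, yet the fingerprint is identical.
assert average_hash(image) == average_hash(brighter)
```

Matching is then a lookup of that fingerprint (or a near neighbor of it) against the fingerprints in the known-CP database, with no attempt to "understand" the image.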
"It is an effective deterrent" to using this one specific platform to distribute CSAM. The problem with this solution is the exact same problem with the tired old "solution" to E2E encryption that gets trotted out every couple of months. If you add monitoring to the tool that criminals are using -- especially, especially if the company loudly and publicly announces that they are adding monitoring! -- you will, at best, catch a few of the very dumbest possible criminals, while the rest move on to one of countless available non-monitored tools.
> anybody who stores this content in iCloud can be identified
Any reason to believe anybody does this before the child reaches retirement age?
You are describing the reverse case of a targeted search for an individual, in which each step has a probability much less than 100%. The technology discussed is a broad sweep: everything, everywhere, every time.
Oh, well, no need to worry so much, Apple has just added data collection ability which others had in one way or another.
If you really want to put a dent in abuse, mandate cameras in every home, because that is where most abuse happens. Anybody who opposes that clearly has something to hide...
Police can always pinky promise to never use it for anything except catching the baddies.
If the government mandated them, they would absolutely be illegal searches. The government is not entitled to check whether or not I have a document containing certain content (illegal or not) without probable cause and a warrant to suggest that I do have that document (and that it is illegal).
I'm carefully not alleging that this is illegal, because it's a third party doing it and it's not obvious that they are acting as the governments agent. Regardless of whether or not that solves the legal problem of unlawful searches, it does not solve the moral problem that we have a right to be free from unreasonable searches.
Replace "child porn" with "political posters" here. If you would have a problem with that search, I claim that you should have a problem with this search, because there is no evidence that the person you are searching is committing a crime. As a result, the claim that this is a morally valid search cannot be about guilt (which would require probable cause first); it has to be about this not being a prohibited search in the first place.
Because the price is too high. The same argument is being made to outlaw encryption: it will make police work much easier. Yeah, it sure will, but it will bring a lot of problems as well, and they far outweigh the benefits.
Algorithms don't put people in jail, people do. The person who irresponsibly followed the output of the algorithm is responsible, as is the person who vouched for the algorithm's correctness and put it into production.
Don't think of an algorithm as an independent actor, think of it as an awesome tool that makes you 100x more productive.
... and guns and cars don't kill people, murderers do.
But now a company is creating a new kind of "gun" that huge chunks of the security-professional community are warning about, because it is insanely powerful and dangerous.
We have a number of reasons to warn about it:
- China will undoubtedly demand that Apple scan for their hashes too. The databases these hashes are collected from are not public, for obvious reasons, so they can slip whatever they want into them. Maybe Apple will check the images before sending them over, but that is not necessarily the case as far as I can tell; also, it only takes one rogue employee to correctly classify an image as not abuse material but still make a note of the account and report it back after work.
- These are not your average SHA-256 hashes. They are perceptual hashes, made to catch not only the exact document but all kinds of variations of it. I have not specialized in perceptual hashes, but it seems to go without saying that the more resistant the hash is to modifications, the easier it becomes to create innocent images that trigger it.
- Even if we had a magic algorithm that resonated with everything good and only matched the images we wanted, there still is potential for abuse. When I was younger I browsed through the cache folder of my machine, and I remember there being a lot of images that I can't remember having seen on any site I visited. Now it is said that this algorithm will only flag images about to be uploaded, so obviously cache folders won't be scanned. But once this tool is in place, why won't governments start applying pressure on Apple to scan everything? And what prevents someone's soon-to-be ex from downloading some and slipping it into iCloud when the phone is left unlocked?
- Due process should sort out many of the problematic cases here, but child abuse is, for good reasons, one of the worst things you can be accused of. A mere accusation is often enough to ruin someone's life even if it later becomes clear to law enforcement that the person is innocent. Also, most places have a way to go with respect to due process.
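The second point above - that modification-resistance cuts both ways - can be made concrete with a toy average-hash. With such a simple scheme, constructing an unrelated image that matches a target fingerprint is trivial (production hashes are harder, though researchers have demonstrated collisions against those as well); everything below is made-up 8x8 grayscale data:

```python
def average_hash(pixels):
    """8x8 grayscale grid -> 64-bit int; one bit per pixel,
    set when the pixel is above the image's mean."""
    avg = sum(sum(row) for row in pixels) / 64
    return sum(1 << (63 - i)
               for i, p in enumerate(v for row in pixels for v in row)
               if p > avg)

# Target: a high-contrast checkerboard standing in for a blocklisted image.
target = [[200 if (r + c) % 2 else 50 for c in range(8)] for r in range(8)]

# Forgery: a nearly flat gray image that preserves only the
# above/below-average pattern of the target.
forged = [[130 if target[r][c] > 125 else 120 for c in range(8)]
          for r in range(8)]

assert forged != target                              # different pixels
assert average_hash(forged) == average_hash(target)  # same fingerprint
```

Anything that preserves the above/below-average bit pattern collides, no matter what the image actually looks like; robustness to benign edits and ease of crafting triggers are two sides of the same coin.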
Okay, but what about the intense war going on right now over YouTube content creators getting banned or having content struck? Or all the wrongful bans on platforms like Facebook or Twitter?
a) Executives at Alphabet, who are ultimately in charge of the company's decisions
b) The individual product managers, lawyers, engineers, and whoever else is involved in creating, maintaining, and running the system that bans people inappropriately.
The algorithm they are using is a tool that they are using to do this, using a tool does not absolve them of the responsibility in any way.
If (s)he sells to thousands via WhatsApp and word of mouth, good old police work will have him/her bagged shortly.
WhatsApp's only claim to privacy is end-to-end encryption of messages. The moment a police officer opens the phone of a buyer, they can identify the seller. Same if an undercover cop hears the word of mouth and buys.
This is just ordinary good police work that I whole-heartedly support.
Find the websites distributing it, infiltrate them, generally do the legwork to find what's going on - which is exactly what they've been doing.
"But the children!" is not a skeleton key for privacy, as far as I'm concerned.
I reject on-device scanning for anything in terms of personal content as a thing that should be done, so, no, I don't have a suggested way to securely accomplish privacy invasions of this nature.
I'm aware that they claim it will only be applied to iCloud based uploads, but I'm also aware that those limits rarely stand the test of governments with gag orders behind them, so if Apple is willing to deploy this functionality, I have to assume that, at some point, it will be used to scan all images on a device, against an ever growing database of "known badness" that cannot be evaluated to find out what's actually in it.
If there existed some way to independently have the database of hashes audited for what was in it, which is a nasty set of problems for images that are illegal to store, and to verify that the database on device only contained things in the canonical store, I might object slightly less, but... even then, the concept of scanning things on my private, encrypted device to identify badness is still incredibly objectionable.
In the battle between privacy and "We can catch all the criminals if we just know more," the government has been collecting huge amounts of data endlessly (see Snowden leaks for details), and yet hasn't proved that this is useful to prevent crimes. Given that, I am absolutely opposed to giving them more data to work with.
I would rather have 10 criminals go free than one innocent person go to prison, and I trust black box algorithms with that as far as I can throw the building they were written in.
>on the lower end of the privacy-violating spectrum
I think that's the thing, privacy minded people want solutions that aren't on the spectrum at all.
I can come up with some solutions for greatly reducing CSA that are really low on violating parental rights but they have little chance of gaining support because the only solutions people want are the ones that don't violate parental rights at all (except when parental rights have been suspended due to evidence of CSA being found).
Yeah. They can scan photos that are actually passed around, instead of something that's just in a photo library. That will likely result in 2-3 orders of magnitude fewer false positives.
That's exactly what they're doing. And they're requiring multiple matches before an account is flagged, further reducing false positives. They claim they've set the threshold to achieve a ~1 in 1 trillion probability of an account being improperly flagged.
False, they are scanning all photos in your library, regardless of whether or not you send them on to anyone else.
1 in 1 trillion is not a particularly impressive number, considering that users have tens of thousands of photos in their library, and there are a billion active iPhones.
That's if you do believe their claimed false-positive rate -- which I don't (yet), because they have published absolutely nothing about how neuralMatch actually works. They have an interesting body of work around how suspected matches are encrypted so Apple can't see them until certain conditions are met, but nothing about how they identify suspect photos.
Boosting the amount of influence schools have in children's lives is something we could do better at.
More teachers, smaller classes, stronger relationships, better discipline, more love. I see too many teachers where I work who just don’t care any more.
Educating school leaders to train their teachers to help children root out the family member who is abusing them: that would be a much better solution (and I really don't mean to be inflammatory about this) than catching villains based on photo evidence of abuse that has become systematic enough to be shared online.
It is very expensive and time consuming though. Teachers are hard to recruit (low pay, low quality job) which exacerbates the problem which makes it even harder to recruit them.
Not OP, but a warrant, upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.
In other words, you don't get to yell "think of the children" and ransack people's homes. If you don't have specific evidence suggesting that I am a criminal, you assume that I am not _and_ let me be.
The "privacy" of on-device scanning is meaningless when you upload to iCloud, where they will report you to law enforcement and an unaccountable nonprofit as soon as something matches.
Sorry, but they're not "spying on your device". If you choose to upload your photos to their cloud service (iCloud Photo Library), they'll use perceptual hashing and private set intersection tech to check against a database of known exploitative child imagery to make sure you aren't uploading that to their service. That's it.
Child pornography is the bogeyman used to compromise privacy.
There is significantly more illegal material of greater concern, such as communications plotting terrorist attacks, or companies ignoring safety regulations and causing many deaths by doing so, and none of that is scanned for either on the theory that it might exist, rather than there being due cause for a search warrant or some similar mechanism and burden.
But child pornography is the one thing Anglo-Saxon culture is notoriously emotional about, and willing to surrender all its freedom and privacy for. For a while terrorism counted among these too, but that seems to have fallen off.
Based on the press release, I don't think you're right. It doesn't appear to be all photos, just ones uploaded to iCloud.
It looks like it only triggers when you go to upload a photo to iCloud. In that case, it (maybe?) scans only that photo on your device and then decides if it's CSAM.
I guess Apple really, really doesn't want CSAM on their servers? I don't know why they wouldn't just perform the scanning on their own servers otherwise.
If the software is scanning everything, and they were already scanning uploaded content, why have a press release at all? If you're deploying an unrestricted panopticon to all your devices, why on earth would you go to the trouble of announcing it in the first place?
Ah yes, the "slippery slope" argument, where evidence is never required, pessimism always wins over realism, and any counter-argument can be dismissed with "but they COULD!"
I remember when they put a fucking U2 album on everyone's phone. Remember that? Many found that suspicious, but I recall people like you saying "oh Apple can write, who cares, but they'd never read, you're being ridiculous".
I remember when an Apple auth server went down and no one could launch non-Apple applications because Apple needed to see the hashes of the things people ran on their computers.[0]
The U2 album was a marketing tactic gone horribly wrong. But the same basic tactic was used to automatically upgrade people's movies to 4K versions. It's just assigning a right to an asset by a given account. It was not an invasion of privacy or some new technical advance that gave Apple some advanced privacy invading features.
Apple said "Hey everyone owns this U2 album now". And if your phone had automatic downloads turned on it downloaded the album.
The idea that it even belongs on the slope is … you're not a serious person and you're not making a serious argument.
>Apple said "Hey everyone owns this U2 album now". And if your phone had automatic downloads turned on it downloaded the album.
It wasn't that simple. The usual functionality of slide-to-delete didn't work at all, because of iCloud, or something. A week after they foisted it on everybody, they decided to implement a dedicated page to allow frustrated users to delete it, because of how different it was from a mere purchase authorization from the iTunes Store. [0]
"The slope", in this case, as far as I understand the root-level ancestor comment, refers to the idea that Apple is increasingly paternalistic about its powers over the device you ostensibly own:
>>>>"We're going to scan your photos, on your encrypted device, to look for badness. Right now, we're going to claim that's only for the really icky people that nobody is going to defend, but, hey, once the tech is in place, who's to say we can't scan for dank memes and stuff?"[1]
I think bringing up the U2 album is entirely within scope re paternalistic overreach. I assume your arbitrary goalpost movement to "invasion of privacy or some new technical advance" is considered by you to be indicative of your personal and argumentative seriousness?
Apple gave everyone the album. It just assigned that album to everyone's library. It could be deleted from the device, but it stayed attached to your account. So it showed up in your library when you searched even if you didn't have it downloaded, because that's how Apple's library worked.
I'm not sure how that's "paternalistic" in any sense of the word.
I didn't move the goalpost. This article, and commentary, is specifically about how Apple is adding technology that can be used to invade our privacy or be abused by state actors.
A bad marketing tactic from seven years ago that amounted to a free gift no one wanted is an absolutely ridiculous example.
I'm sure I'll be revealing my ignorance here, but what does revoking certs have to do with Apple needing to know who's running youtube-dl or tor or anything else? Was the revocation of something somehow contingent on the specific application being run, or vice versa?
Apple requires signing of code that's going to run on macOS. You can disable this, but it is a nice little security feature that allows Apple to react quickly if a virus starts spreading on their devices. They revoke that certificate, and OCSP (which is a standard way of checking for revocation) blocks that software from running.
Apple's design was fail-closed, so if OCSP is down, assume the application has had its key revoked.
Unfortunately that's just how OCSP is; your browser (if you're using Firefox) does this with CAs. The unfortunate thing is, due to the nature of desktop applications, OCSP stapling doesn't really work when you're not the one serving content.
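The fail-open/fail-closed distinction is just a policy choice about what happens when the revocation check errors out. A minimal sketch, with a hypothetical `check_revocation` callable standing in for the real OCSP query:

```python
# Sketch of fail-open vs fail-closed launch gating. check_revocation
# is a hypothetical callable that returns "good" or "revoked", or
# raises ConnectionError when the OCSP responder is unreachable.

def allow_launch_fail_closed(check_revocation, cert):
    try:
        return check_revocation(cert) == "good"
    except ConnectionError:
        return False  # responder down -> refuse to launch

def allow_launch_fail_open(check_revocation, cert):
    try:
        return check_revocation(cert) == "good"
    except ConnectionError:
        return True   # responder down -> assume not revoked

def responder_down(cert):
    raise ConnectionError("OCSP responder unreachable")

print(allow_launch_fail_closed(responder_down, "some.app"))  # False
print(allow_launch_fail_open(responder_down, "some.app"))    # True
```

With a fail-closed policy, an outage at the responder blocks every third-party launch, which matches the behavior described above when Apple's server went down.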
Uh, sure, fine. But Apple decided they needed to see the hashes of the things people ran on their computers, to possibly block execution if they decide that's necessary, and they don't have to. I'm kinda shocked that you're framing this as if it's innocent, and therefore a bad example of Apple's increasingly paternalistic control. I don't particularly care what sort of RFC they're following or which alternative implementation sucks or which fig-leaf covers their true intent; Apple chose to have the ability to see what people are running in real time. If you're a regular of this forum, and can't immediately imagine how this sort of information might be used to harm users, rather than help them, now or in the future, I don't know what to tell you.
I also don't quite understand how this even helps with the stated goal of virus-corralling. Does the hypothetical virus that they're trying to guard us against change an executable? If so, then the hash is immediately different, but presumably no longer matches its signed checksum, and so could be rejected at the OS level without needing the whole 'real-time seeing what people run' aspect. Does the hypothetical virus run independently? How could it, given the prohibition against non-signed code? I guess the idea is 'prevent a once-legit app from pushing a malicious update and turning several nations worth of Macs into a botnet', style of thing?
Apple doesn't go and check every application that's being signed, basically. The idea is that all applications get signed, but if Apple notices something, it can revoke the certificate and basically make the application unusable.
>Apple doesn't go and check every application that's being signed, basically
...uh, then how did the failure of the auth server mean that nothing (except Apple apps) could run? My understanding was that the auth server checked the hash of every application that was being run, and the absence of that auth server meant nothing could run.
Moreover, I don't care about the fact that they're just using plain ol' certs, just like Firefox. They could be using screen recording software and Mechanical Turk to decide whether users can execute some third-party software -- the point is that they are deciding whether users can execute some third-party software. The technical implementation is unimportant, it's a bad thing.
Yes, Apple chose to add technology to its system which identifies application developers and allows them to tell your machine that specific developers or applications cannot be trusted.
Exactly. Reaching out over the network like that is an invitation to, if not a revelation of, malfeasance of some kind. Apple intentionally chose to implement this "feature" in a suspect way, and specious comparisons of an OS and a browser [0] don't strike me as justification, or even explanation. Maybe it's just me, but I like my machine to stay as mine as possible.
The "slippery slope" argument is not a fallacy. If you were in 2001 and told people what the state of data collection and privacy would look like in 20 years, they'd call you an absolute conspiracy nutjob. It was a national news story then that a printer driver phoned home. Now look at where we are at, and every loss of privacy has had some seemingly reasonable excuse.
For the narrow formal definition in your link, that being an assertion without evidence, then yes, it's a fallacy.
In the colloquial language we are using here, there is absolutely evidence, tons of it, that this will take the same downward trend of previous infractions.
Slippery slope has almost always meant a fallacy, because it typically invokes "the sky is falling" examples that have no evidence to support it coming to pass - even here.
While I see the point, "slippery slope is a fallacy" seems to have been abused far more than "slippery slope" on HN over the decade I've hung around here.
That's because nobody makes an argument by starting with "I'm making a slippery slope argument" - it's always identified by someone else. Same with whataboutism, no true scotsman, etc. by their nature.
Are you under the impression that people who teach logic are unaware of real-world evidence that "things change and some of those changes end up being bad"?
A slippery slope is a logical fallacy. There's not a colloquial version of the term that means something different. It's a style of argument where you assert that one action will lead to another action without any evidence of a direct causal link between the two.
Second, I'm not appealing to authority. You said a slippery slope isn't a fallacy. I provided a link which makes the case as to why a slippery slope is considered a fallacy.
We are not using "slippery slope" in any way which is different from the examples given.
That one thing has led to a bad thing in the past is not evidence that this thing will lead to this specific bad thing in the future.
Yes, sometimes, slippery slopes end up being slippery, and sloped.
But that doesn't mean anyone has to take it as a given that any other specific slope will be slippery, nor slippery in some specific way which leads to some specific outcome.
Do you trust your government to not abuse "But they could?" types of capabilities?
I don't. I've seen what the FBI claims, that they then magically find ways around the "impenetrable encryption" they claim they can't break (but then do), and I don't trust them to not require Apple to add hashes of "things they find problematic," and include a gag order with it so Apple can't report it.
I've worked computer security long enough to know that it's always worse than advertised, always will be abused, etc.
If you've worked in computer security long enough, you know that Apple could already have added a backdoor into every build of iOS that allows the government to read any material on your device they chose to read.
My contention isn't that I trust my government, but instead that this specific technology is no more threatening than the technology already present on my phone and the trust I've already placed in the vendor.
> If you've worked in computer security long enough, you know that Apple could already have added a backdoor into every build of iOS that allows the government to read any material on your device they chose to read.
Yes. They could. However, getting caught doing that, from Apple, would be an absolute disaster for them, and they've both made the right statements and made the right actions in terms of things like physical device security (moving more and more into the secure enclave to protect against demonstrated attacks).
It's entirely possible they've done this for specific cases, with a gag order, and nobody knows. However, it cannot be widespread, or it would end up known.
This, however, is adding a blackbox image matching algorithm (we know it's not matching image file data based on statements about how it's robust against cropping/rotation/resizing/etc), and a blackbox database of "Stuff claimed to be Really Bad." There is no way to audit the database and see what's in it, because of the one way nature of hash functions, and there's almost certainly not a pile of the source material laying around for audit reasons (because that would be a really nasty legal problem). So you've got a "Trust us, this stuff is bad" database, and a "Trust us, it matches images accurately" hash function.
And, if those find enough matches (also unspecified), you get yourself reported directly to the various authorities.
It's guilt by algorithm, across every single Apple device (running the latest OS, etc... handwave as needed).
That's very, very different from a customized backdoor deployed to a small number of phones.
Dumb question: do we know from what they've released publicly if it will be possible for security researchers to snag a copy of the database, perform the same perceptual hash algorithm on a given image, and determine if there's a "hit", without violating some kind of license term?
Perhaps the community could run a crowdsourced "keep them honest" service web service -- upload the latest illegal-in-China Winnie-the-Pooh meme, oh hey look at that, it's in the China-only version of the database, isn't that weird, etc etc. (Obviously you wouldn't want people "testing" images that are in the database for the actual stated purpose...)
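For what it's worth, the hash-an-image-and-check-the-list workflow is easy to sketch with a toy perceptual hash. This is a classic "average hash", not NeuralHash (whose details Apple hasn't published); it only illustrates why a perceptual hash can survive small perturbations that would completely change a cryptographic hash:

```python
# Toy "average hash": each bit records whether a pixel is brighter
# than the image's mean. Purely illustrative -- Apple's NeuralHash
# is a far more sophisticated, unpublished algorithm.

def average_hash(pixels):
    """pixels: 8x8 grid of grayscale values -> 64-bit int."""
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    return sum(1 << i for i, v in enumerate(flat) if v > mean)

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

img = [[(x * y) % 256 for x in range(8)] for y in range(8)]
blacklist = {average_hash(img)}  # pretend this came from the database

# A uniformly brightened copy still hashes identically, because the
# brighter-than-mean comparison is unchanged by a constant shift.
noisy = [[min(255, v + 3) for v in row] for row in img]
print(hamming(average_hash(img), average_hash(noisy)))  # 0
```

A keep-them-honest service along the lines suggested above would boil down to exactly this: run the published hash over a test image and check membership against the distributed database, assuming the hash algorithm and database format were ever made available to researchers.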
A scanner that can be fooled by embedding the images in some document is going to become useless fast. Therefore, PDFs and office formats have to be checked, too. And archives, without doubt, should be unpacked on the fly. What if someone embeds a base64-encoded image into HTML? And so on, and so on.
You can't just look at media files; you naturally need to scan everything, and the working version is likely to be antivirus-like, with many file-format filters, heuristics, and an extensible description language.
Do you have the Collateral Murder video? Abracadabra, and now you don't have it.
So, your argument is that Apple couldn't have implemented a wide-scale backdoor because people would know about it but Apple could use this new technology in a nefarious way at scale and no one would know about it.
Just the other day in the Netherlands it came out that cameras placed with the strictest promise that they would only record license plates and only trigger when it matched with a known license plate now also capture persons in the car. And before that the systems were changed to store pictures for 28 days in order to look back. It also came out that the deletion doesn’t always work and that the obfuscation that is supposed to happen also doesn’t always happen.
The reaction of the police seems to be “why do you care about this? We have the data let’s use it. And we could use it to find those ~killers~ these people that allegedly just killed the most popular crime news reporter. So let us do our work.”
Just like software expands to eat cpu/memory/disk resources and people start living up to their salaries: whatever is possible will happen.
To me the slippery slope argument is perfectly valid.
You're right in one sense. In another, isn't this exactly what people said about intelligence agencies ("they could but they don't") and then... Snowden.
That he demonstrated that those agencies were actually doing the things people said they would do, while being defended by people who used arguments quite like gp comment.
I used the word "Snowden" not to refer to him, but to the event. Why? Because I didn't want to type the three letter keyword into my comment. Which is an example of the chilling effect these activities cause.
I think we've learned from experience that tech companies will happily exploit anything they’re capable of exploiting, and this new capability has plenty of potential for exploitation.
Searching everyone's media for evidence of criminal wrongdoing, while citing only one example of the kind of wrongdoing you're looking for, is very susceptible to an actual slippery slope, given that there are plenty of other criminal activities they could start looking for should they decide that it's part of their mission.
The only thing that makes CSAM so attractive to go after is how disgusting society feels that it is. Up next could be reporting drug-related texts of known convicts to parole officers. Next could be drug-related texts of everyone to police officers. Next could be letting the police officers make their own searches, where they find everyone talking about a certain kind of political organizing.
We should refuse to accept even the first instance of this kind of thing.
Prior to this change, they couldn't. Now they could.
It's much easier to change the policy than to add a new mechanism. Governments, including the US government have previously attempted to force Apple to break into users' devices, and often apply gag orders to such attempts.
Now they could, and that's enough for me to be unwilling to keep sensitive information on an iOS device.
It is my understanding that Apple implemented a program that generates a hash of a file and compares it to a blacklist, and notifies Apple in the event of a match. It's not clear, but it appears this is only run when uploading to iCloud.
The blacklist itself is not maintained by Apple, but by the US government or a third party like NCMEC, which means Apple can't be sure content that isn't child abuse imagery hasn't made it onto the list. Perceptual hashes probably can't be abused to target non-image/video content because they're an inherently image-oriented technology.
Apple could, however cause such a program to match on different criteria with a simple update, and such a change would likely be difficult to detect. Most of us assume Apple wouldn't voluntarily do such a thing, but it's very probable that they would do it involuntarily. The US government has already attempted to compel Apple to create a tool to compromise the security of an iPhone, and might have eventually succeeded in court if they hadn't gained access by other means. That fight took place in public, but the next one might well take place in secret.
In what way does the OP not understand what was implemented? This gives apple the ability to monitor any file on your device. Not only that, but all we have is the government's pinky promise that the database contains only CSAM. There is no verification. How do we know that it isn't going to end up filled with gay pornography involving consenting adults when the "feature" is rolled out in Saudi Arabia?
How do you know that the image classifier on your iPhone, that exists today, and detects faces, isn't also detecting whether the photo has a gun in it?
We don't know, but a photo of a gun isn't going to break your marriage, make you lose your job, make your friends hate you and get you thrown into jail for weeks.
Someone finding a way to plant this kind of material on your phone does that.
Besides, the fact that something that can be misused exists isn't a very good defense for creating something else that widens the opportunity for abuse significantly.
It is a slippery slope argument, but wouldn't you agree that it's been proven by the past decade of events (e.g. Snowden) that we can expect governments to try (and succeed) to abuse originally well intentioned technologies to spy on anyone?
Considering that Chinese iPhone users already do not enjoy the same privacy features as non-Chinese iPhone users, I would not be at all surprised if Chinese iPhones are going to have a lot more "child porn" hashes to check against.
All data in China is already stored on servers that the CCP has access to. Why would they need to invoke a convoluted hash-match scheme when they can just perform broad checks on the data they can access?
This is incredibly disappointing. The sick criminals that run child pornography rings are not storing their material on iCloud.
The "This could be sensitive to view" screen is downright Orwellian. This technology could be used to scan for ANYTHING, completely undermining user privacy. It might just be CP today, but tomorrow it could be screenshots of protest material, whistleblower content, or anti-government memes.
I cannot express how sad I am Apple has decided to do this. It doesn't protect children, it won't catch any pedophiles, but it certainly WILL be misused in the future and create a chilling effect on what (politically dissident) content people are willing to store on their phones.
Lol there is no way they will be scanning for just one type of thing. That would be a waste of computation. They will tag everything and decide if they want to use it later.
Scary times; for exactly the reasons you've called out.
They already tag everything; just search for any object in your photos library (whether you are on iOS, or even Android with Google Photos) and it will find it.
MP3 player, simple HTML browser, FM radio (it's nice, often), Gopher (there are a few J2ME browsers, and there's lots of nice stuff out there). For gaming, interactive fiction through the T9 keyboard is effective, or a GB emulator with custom ROMs (itch.io has several free ones, and a lot of the games are long and turn-based).
Use a smartphone for work and for nothing else. If anything, read the news over a dumb web interface, as these will run on a current dumbphone with ease:
The gradual but steadily accelerating rise of authoritarianism scares me far more than terrorists, drugs, child abuse, and the pandemic.
Unless we push back mightily, it will be a question of when, not if, owning a general-purpose computer that's not controlled by the government or a company becomes discouraged, suspicious, and eventually illegal.
Do you honestly think this, what I'm doing right now - posting on HN on a general-purpose computer, on an open source browser and server stack - will become limited to only licenced professionals?
Companies holding interests in copyrighted works would love to see general purpose computers go away, replaced by "trusted" media players that can't make unauthorized copies or be used to make unsanctioned independent creative works.
Totalitarian governments would love to take away general purpose computers to prevent end users from removing surveillance and anonymity.
Companies who want to control software markets would love it if all software licensing transactions ran thru their "marketplace".
Of course, his suggested solution of open-source didn't quite work out --- because even with the source available, you're still enslaved by things like Secure Boot and other cryptographic jails; and on the other hand, as evidenced by the long history of the cracking scene, and Windows modding in general, not having the source is no big obstacle as long as you can still arbitrarily change any bit. Perhaps what is truly important to software freedom is not "right to read (the source)", but "right to write (anything)".
Apple is not an autonomous hive mind. The hundreds of people that built this and coordinated the deployment across multiple teams including management, deployment, QA, beta testing, machine learning, and SWE aren't lying and the majority of them certainly have good intentions.
Remember, Apple doesn't actually control the CSAM database (NCMEC). They almost certainly don't (and wouldn't want to) even have access to a reverse mapping between hashes and original images. A/The government(s) could (yes, theoretically - this is cryptography we are purportedly talking about, the onus is on them to prove they can't) easily slip in pictures/screenshots of political materials to have your account flagged. Even if it requires manual human review on Apple's end, the government could still (and has in the past) serve them with a warrant + gag order for "false" matches in the past.
Yes, it's all "they could" but with current technical solutions already providing some measure of protection against a corrupt or malicious government from cracking down on its citizens, anything that erodes from that freedom deserves to be held to such a high standard.
In the press release Apple says they are using hashes of hashes.
Apple has no visibility into the original image that generated the hash, so the gov can compromise the list at the source and Apple would have plausible deniability.
"In his book Tears of Rage, Walsh openly admits being in a relationship with 16-year-old Revé when Walsh was in his early 20s and aware of the age of consent being 17 in New York.[30] Critics of the Adam Walsh Act have pointed out that, had he been convicted, Walsh himself would have been subject to sex offender registration under the law which he aggressively promoted."
I wonder if the Tiananmen Square photo matches one of the Chinese hashes. I bet a bunch of lesser-known Uyghur photo hashes pop against the database. Is it a publicly available hash database? I wonder if it's possible to run known controversial stuff through it and see what flags?
Such images of the Tiananmen square would have been wiped off the face of social media if so - every big social media company uses PhotoDNA or similar to catch likely matches before someone in the real world sees the illegal images on their platform.
After reading the post, and as a parent of two kids who are in middle school now... I'm pretty happy with what I see. I didn't expect to be, based on the comments I read here before reading the article, though.
I know a local family who has a daughter who's been in therapy for the last 3 years because she fell victim to the type of thing Apple is discussing in this post. They are firmly advocates for better parent education and oversight, sharing their experience so that other people can hopefully never have to deal with the same thing. They told us about an app called Bark[1] that's supposed to really help with a lot of this stuff and seems inline with what Apple is talking about here. I'm pretty happy to see it will be built in.
> The Messages app will use on-device machine learning to warn about sensitive content, while keeping private communications unreadable by Apple.
All the parental controls in the world don't prevent the fact that getting your kids a phone in this day and age is a pretty terrifying experience if you know what type of things are out there.
> When receiving this type of content, the photo will be blurred and the child will be warned, presented with helpful resources, and reassured it is okay if they do not want to view this photo. As an additional precaution, the child can also be told that, to make sure they are safe, their parents will get a message if they do view it. Similar protections are available if a child attempts to send sexually explicit photos. The child will be warned before the photo is sent, and the parents can receive a message if the child chooses to send it.
I think the account-owner being able to turn-on or turn-off that type of capability is generally okay (though god knows if I was a 17 year old I'd certainly be switching to Droid for that reason alone).
It sounds like there's a second, unrelated thing going on which cannot be turned off and that reports images to Apple when they set off its warnings. The large concern there is that this type of technology could obviously someday be used to, say, report/delete all photos of police brutality. Since 20% of the world's population lives in China alone, I think the question of "How do we ensure malicious authorities cannot use this technology against good users?" is not really an afterthought but must be addressed head-on before people will buy in.
This iMessage feature is good, it's the hashing all of your photos for comparison against CSAM hashes that's the problem. How do you know if you've tripped the wire? What about false positives?
Given how free US Law enforcement is with violence, any potential threat of involvement with them makes me very very nervous.
Going to need to reconsider hosting my photos with Apple devices.
My kids' school recommends Bark but, after looking into it, I felt I couldn't trust it. Unlike Apple, Bark appears to (at the time I reviewed them) transmit nearly everything the kid does on the phone to the Bark servers for review. That's a pretty terrible way of solving this problem, in my opinion.
a) Many places are still going in/out of lockdown or are schooling remotely and so their phone is the only way for them to communicate with their friends. Depriving them of social contact is incredibly unhealthy and harms their development.
b) For better or worse apps like Tiktok are a huge part of their culture and the popular dances etc are often known by everyone. Being the only child who is out of the loop can cause serious isolation.
Children are growing and making them not feel like they are part of a social group is incredibly harmful and can have permanent effects in adulthood. Giving them a phone but monitoring their activities is likely to be the least harmful approach.
I still don't understand how there are people on HN who think that giving their kids less access to technology is somehow a virtuous position to take. When I was the same age as my kids I could have gotten into all sorts of shit on a BBS or Compuserve forum -- my parents had no idea what was going on, but they'd given me a basic sense of right and wrong, and somebody to talk to if I was concerned. You've got to educate them about the world, but cutting them off from it is not the way to do that.
You've never considered that maybe growing up apart from the groupthink would be helpful to a child? Did you ever wonder why Jobs and all of the other tech CEOs don't let their kids have these consumption devices? They're in charge of this stuff, yet they keep their progeny from using it. But you actually want to give it to them as if there were nothing wrong with it, just because it's socially acceptable?
Your line of thinking is ridiculous but please continue.
The Messages app auto-blurring seems useful and respectful of the user, which is nice.
Additionally, the client side scanning seems very well-designed, but if iCloud Photos are not end-to-end encrypted, why are they going to such an effort to do this when they already have access to any image they want server-side?
The explanation is complicated but really fascinating. I think I understand it, but not well enough to explain it. Read the section entitled "What cryptographic tools are used in the implementation of the system?" in this write-up about Apple's methodology.
My take is that performing the initial hash matching and encrypting the results in two separate layers on device prevents Apple from having meaningful knowledge of low (under the set threshold for being flagged) quantities of matches on a user account. This protects the use of the threshold as a way to further reduce false positives. For example they couldn't comply with a subpoena that said "Hey, we know you set a threshold of only flagging + reporting accounts with 50 image matches, but we want to see a list of all accounts with 10 or more matches because we think that's good enough."
This method lets them set and enforce a threshold to maintain their target false positive rate which they say is ~1 in 1 trillion accounts incorrectly flagged.
Disclaimer: I'm not a cryptographer and could be misunderstanding this.
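To make the threshold idea above concrete, here is a toy (t, n) threshold secret sharing sketch in the style of Shamir's scheme. All the names and parameters here are mine for illustration; Apple's actual construction layers this with private set intersection and is far more involved.

```python
# Toy illustration of (t, n) threshold secret sharing (Shamir's scheme):
# a secret is split so that any t shares recover it, while t-1 shares
# reveal nothing. This is only the underlying idea, not Apple's system.
import random

P = 2**127 - 1  # a Mersenne prime, used as the field modulus

def make_shares(secret, t, n):
    """Split `secret` into n shares; any t of them recover it."""
    # Random polynomial of degree t-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    # Share i is the polynomial evaluated at x = i.
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

key = 123456789
shares = make_shares(key, t=5, n=30)
assert recover(shares[:5]) == key    # any 5 shares suffice
assert recover(shares[10:15]) == key
# With fewer than 5 shares, `key` is information-theoretically hidden.
```

This matches the subpoena point above: if each matched image contributes one share, a server holding fewer than the threshold number of shares simply cannot reconstruct the decryption key, no matter what it is ordered to do.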
I think your take is correct but doesn't answer the question about why this matching has to take place on the device, if it's only for photos that are going into iCloud, and the iCloud contents are already being stored unencrypted.
The only remotely plausible answer I've seen is that Apple wants to keep potentially-violating material out of their general storage, and flagged images are being sent to the review team instead of regular backup, but that's a pretty weak guess.
My kids are older teenagers now, but I wish Apple would have had some of this 5-8 years ago. Good on Apple for investing in helping real world problems and issues instead of investing in silencing opinions they disagree with in the name of "misinformation."
I think Moxie came up with the end-to-end encryption technology that WhatsApp and Signal use.
It is great.
But it didn't prevent Facebook from trying to abuse WhatsApp users in every conceivable other way including some rather innovative ones as seen recently.
Also it didn't prevent Signal from releasing a desktop client with a really nasty XSS bug and the phone client hasn't exactly been without faults either.
You and Moxie are both much smarter than me, but I foresaw and maybe even predicted (possibly so early that it was under another handle) some of Facebook's actions while everyone else was just talking about how E2E encryption was the big difference.
This feels more or less the same: the intent is honorable, the core code is clever and almost unbreakable - but you cannot really trust all of the actors you need to trust for the system to work.
We cannot read WhatsApp messages in transit. And, as specified, this system sounds good. But WhatsApp happily uploads unencrypted copies to Google Cloud and/or iCloud without your consent if another participant enables backups, and I'm fairly sure we'll see this system containing huge opportunities for abuse by power-hungry regimes, as well as one or two nasty bugs with the potential to utterly destroy a few innocent people's lives.
I'm not sure I really follow this (Signal Protocol is Trevor and Moxie; in particular, I think the triple-DH AKE is Trevor's invention) --- but I'm just here to point out the interesting cryptography angle. I wouldn't touch the policy argument here with a 10 foot pole.
I think the point is that they are using careful engineering to gain the support of cryptographers on the technical aspects, to detract from the problems with the policy aspects.
Thank you for the purely informational post and I'm currently reading through the second just to understand this more thoroughly but I can say already screw the proofs. Apple needs to stay off my devices with their crime scanning bs. It doesn't matter if this whole scheme ensures that Apple cryptographically keeps their hands clean. The effect is the same. They're on my device looking for things that can send me to jail.
These brilliant academics just don't care so long as they can publish an interesting paper I guess. Whatever, still reading.
1. Their credentials provide some legitimacy to this endeavor.
2. They get to do some interesting work.
Those are the positives for them I see.
Rather than abstaining they work on it and lend their names. I don't know whether they support what Apple is doing with the protocol or if they just like figuring out the possibilities. I don't think either is good in this case because I don't like the very first application of this PSI System. Maybe I'm being short sighted and jumping to conclusions all at once. I still don't like it.
Wow, I am surprised Apple is taking this route. It's not like iCloud was a haven for sharing illegal content. I love the security features of iOS but honestly this may have pushed me to move to an Android device running GrapheneOS.
Honestly, any time there is a new policy to "protect children" it is almost always incredibly invasive and it always feels like there is some other motive and "protecting children" is used to scare anyone who tries to question it.
I really don’t understand why Apple is doing this. The vast majority of their customers aren’t involved in any of these illegal activities, so it only provides potential downside through false positives.
I’m also struggling to imagine scenarios where a child predator is clever enough to acquire illegal photos without triggering any number of internet monitoring mechanisms (e.g. honeypots, server logs with their IP address) who would then turn around and upload those photos to their iCloud account. Doesn’t make sense.
I just spent the last hour of my life digging through the material. It seems like they've calibrated the system to have an expected false classification rate of one account in one trillion, per year. Based on the threshold secret sharing math, they won't know anything until a user has a significant collection of CSAM and the secret key is recovered.
> I’m also struggling to imagine scenarios where a child predator is clever enough to acquire illegal photos
I don't want to be the bearer of bad news, but CSAM has been shared openly on the clearnet on places like 4chan for years. The internet is a pretty wide open search space. Many of the people who download CSAM don't do so from some sketchy underground website. This is why Tumblr, for instance, burned their platform to the ground: lots of people sharing CSAM with no way to detect and stop it (without costing Verizon a ton of money to work on the problem).
I'm scratching my head wondering why they bothered announcing it ahead of time.
Why not just run the scans in the background? (Honestly, I'm surprised they aren't already.) It's not like it would be hard to omit this avenue as your lead during prosecution.
The first item (on-device nudity detection in conversations) is good and deserves some applause. It's also easy to implement, they could just take an off-the-shelf API and optimize the parameters to their needs, and has minimal privacy concerns because nothing is being reported to Apple. I'm honestly surprised this isn't common already.
It's the part about scanning people's photo libraries that folks are (rightly) concerned about.
This is a very slippery slope. Users of iOS devices have already ceded general-purpose computing in favor of Apple being their sysadmin. Now Apple is using that control and access to act against the interests of some users.
The fact that it's only targeting users doing something almost universally considered heinous is irrelevant. If Apple can scan your encrypted photos to make sure they don't contain child abuse, why not scan your encrypted chats to make sure they don't contain a conspiracy to commit murder, or an arrangement to buy drugs, or a plan to reveal something embarrassing to the ruling party?
... and stop their further propagation. It's kind of like a viral pandemic—just because a virus has been spreading for a while, doesn't mean we should drop any attempts to stop new infections from spreading to five more people each.
The more platforms we make unable to keep CSAM on disk, the harder it is for it to circulate in a viral fashion. With this change, all iOS machines become immediately unviable as CP transmission vectors.
Apple's software is quite buggy. There have been multiple instances of Unicode bugs for example, where receiving a notification or text message containing certain Unicode characters would cause a kernel panic, boot loop, or other fun stuff.
People often posted messages containing these Unicode characters in Discord groups (tagging @everyone) for the lulz, because every iPhone user in that group would get a notification containing those unicode characters and then kernel panic.
It's only a matter of time until someone finds a bunch of false positives, spams them around for the lulz, and boom people's iCloud accounts are disabled.
Even if you believe the theory is mathematically sound, the implementation need not be.
damn, I was thinking of getting an iphone. Now I'm thinking of ditching the entire apple ecosystem altogether.
The idea that apple can scan files on my system without my consent is pretty sickening. I don't care what its purportedly used for. This is a slippery slope to all sorts of privacy violations.
Apple is putting the entire database on every phone to scan and promising to only do it for certain situations. They can alter that deal at any time, you’d have to be a fool to think that isn’t their plan.
On-device scanning of “encrypted” cloud content where Apple has always held the key makes 0 sense unless your plan is total device surveillance.
Apple already controls the entire photos pipeline on your device, and in fact all of the OS code. If they want total device surveillance, they've got it today.
They're doing this because they are announcing it, because they think it's a net positive. You might not agree, but the argument that this is somehow creating the technology for scanning and is therefore nefarious is missing the point that if they wanted to do this surreptitiously and nefariously, they (a) wouldn't announce it and (b) could have been doing it for years. This isn't some fancy new tech, except for the privacy bits, which obviously wouldn't apply to Evil Schemes.
This is weird. Apple's own announcement only talks about hash matching, but other reporting (e.g., [0]) talks about a system called 'neuralMatch' that's doing AI on user photos. To me, the privacy implications (and chance of false positives) seem quite different. Quite a discrepancy.
No, Apple's announcement talks about machine learning to power the iMessage nudity detection algorithm. Machine learning in the context of ruining someone's life (and/or landing them on a government watchlist forever) is a huge no in my book.
The reporting isn't helping, with sentences like 'the company is rolling out a new machine-learning tool that will scan iPhones for images that match certain "perceptual hashes"'... is that describing a system that classifies new photos, or is it talking about comparing hashes to a known-evil set?
- Matching against a known set of CSAM (Child Sexual Abuse Material) hashes occurs on-device (as opposed to the on-server matching done by many other providers)
- Multiple matches (unspecified threshold) are required to trigger a manual review of matched photos and potential account suspension
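The two bullets above can be sketched as a simple match-then-threshold flow. Caveat: in the real system the hash set is blinded (the device never holds raw hashes and never learns which photos matched); this plaintext version, with made-up hash values and threshold, only illustrates the control flow.

```python
# Minimal sketch of on-device matching against a known hash set, with a
# threshold gating any action. Hash values and threshold are invented.

KNOWN_HASHES = {"a1b2", "c3d4", "e5f6"}   # stand-in for the CSAM hash DB
THRESHOLD = 3                              # matches required before review

def scan_library(photo_hashes):
    matches = [h for h in photo_hashes if h in KNOWN_HASHES]
    # Below the threshold, the account is left alone entirely.
    return "flag for human review" if len(matches) >= THRESHOLD else "no action"

assert scan_library(["zzzz", "a1b2"]) == "no action"
assert scan_library(["a1b2", "c3d4", "e5f6"]) == "flag for human review"
```

The design point is that a single chance collision does nothing; only an accumulation of matches crosses the line into manual review.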
You should reread the first section of TFA, titled "Communication safety in Messages." This goes beyond the scope of CSAM: they're scanning all Messages photos sent to or from a minor's account for any possible nudity, not just CSAM hash-matching.
It sure seems like this is two different techniques/technologies at work. There is a CSAM detector using a known database, and then, it looks like, a separate model in Messages for detecting pornographic content without a known database.
It doesn't sound like it can "detect pornographic content" since the difference between pornography and nudity is not going to be reducible to a coefficient matrix.
I feel like the absolute positions are the easy ones to take. But be honest with me: would you not celebrate an outcome where a system like this eliminates the potential use of iOS for CSAM, in turn causing significant decreases in the abuse of children?
No I would not celebrate such an outcome, because I have been around long enough to be aware of and care about higher order consequences of decisions like these.
Have we already forgotten about where fear of terrorism has landed us?
You seriously think they are going to stop at CSAM? That's just their foot in the door. A generation ago, they would have used this to identify LGBT people, for instance.
While I share your view that child abuse is a serious problem that hackers and privacy advocates would prefer to wish away, the idea that child abusers' decision calculus is centered on the need to have an iPhone seems bizarre.
Anyone motivated to engage in that kind of behavior is just going to use a different device for their illicit material.
This isn't about children! Children mean fuck all to these people. This is the surveillance state building more instrumentation and monitoring into the fabric of society. "Thinking of the children" is the time cherished tradition for pushing through abusive tech.
This will be used in Xinjiang and Hong Kong. And soon, it will be used against you.
The only relief I have is that Millennials and GenZ aren't having kids and many couldn't give two fucks about them "being protected" from imagined goblins at a cost to their convenience. Maybe this argument won't work anymore.
The previous discussion was based on a few Tweets. This is Apple's information release, and it is more specific. For me, the first thing on Apple's release is the most important: that they will begin censoring Messages content for all possible nudity, not just hashed CSAM, when the recipient is a minor. That's a step beyond the earlier discussion.
Perhaps this same approach can also soon be applied to filtering out the lies and misinformation of the enemies of the Party and to disrupt Emmanuel Goldstein's networks of treasonous domestic terrorists. After all, such misinformation is a plain harm and danger to IngSoc. That Apple is willing to eliminate nudity in private messages gives hope that the Party will soon be able to free all society from doubleplusungood differenthink.
They will pop a warning when the minor sender attempts to send something that is flagged as nudity, or the minor receiver receives something that is, if enabled as part of parental controls for the account.
It's opt-in for the account.
The CSAM scanning is different, unrelated to the age of account holder, is a condition of iCloud Photos usage only, and is not opt-in (and the only opt-out is to not use iCloud Photos).
As an obvious vector: an abusive husband can set up the account on their partner's phone, set them up with a child account, and "check" that they are not sending anything suggestive. And we have no idea how reliable the scanning is for the sexting piece. A false-positive notification could easily get someone beaten, possibly even killed.
Scanning against a known database of material uploaded to iCloud didn't sound too terrible for me, but this sexting feature is incredibly dangerous.
Let's loop back in an hour and see if this post is still on the front page. I suspect it will have a front-page-worthy number of votes, but will not be there.
33 minutes and it's already off the front page with 161 votes in one hour, compared to items on the front page with 167 votes in 5 hours or 24 votes in one hour.
No official source is protecting this one from users flagging it off the front page.
The additional information is good reason for this post to also be on the front page. It is not a reason to flag the original and bury that discussion.
This is what happens when you abandon individualism and replace it with corporate worship. When you make brands a part of your identity, you can't think about the real world damage being done.
If you're on the fence about Apple, and their lockdown of free and open computing wasn't enough, or their strong arming of the mobile commerce economy didn't do it, please let this sink in. This is where we're headed. 1984 is a plausible future.
Stop buying Apple and ask your representatives to break up the monopolies. This order came from the FBI and intelligence communities (and probably the CCP and respective organs in China).
Not all US lawmakers are beholden to this abuse of power, and they can help us put an end to it. Call them. And stop buying Apple.
Fight for your democracy and freedom, lest it be completely wrested from us.
1: create a child safety organization, or require an existing one to accept your images
2: add images of the children of dissidents (or journalists, or leaders of other political parties), photoshopped to be sexually explicit
3: dissidents' iPhones inform on them. Apple turns the information over to the authorities in the host country
4: if Apple pushes back, threaten iPhone sales. Or just improve your doctoring.
5: if Apple plays along or doesn't complain, insist on the ability to detect terrorists, criminals, etc. Again, threaten iPhone sales, and allow Apple to keep the agreement secret.
This may only work once or twice, but it's worth a shot! If you make it to step 5, you have a really bespoke, beautifully designed, Apple-managed intelligence apparatus. Made with love in Cupertino.
Maybe this is an overly cynical take, and in addition to the cryptography they have rock-solid, audited governance and internal controls that would prevent it and/or insider abuse.
Maybe localities with real data privacy laws (EU) will be able to offer protections to their citizens with fines big enough Apple will begrudgingly agree, so that a repressive regime can't target their citizens as well as citizens in the host country.
Maybe this isn't a slippery slope to more exotic forms of surveillance, like scanning your contact list for pedophiles.
> The device creates a cryptographic safety voucher that encodes the match result along with additional encrypted data about the image.
"additional encrypted data" makes it sound like it's encrypted to keep your information safe, when it actually means "encrypted so you can't see what's being sent" and that "only Apple (and people/governments Apple shares decryption keys with) can decrypt it." And what does "additional encrypted data" even mean? I'm guessing it includes a thumbnail. It could include a list of people you shared the image with and their home addresses. The point is, you don't (and can't) know.
While I believe there is genuine good intent behind this I can’t support apparatus that can be repurposed and can be used to cut me off from my entire digital life due to an arbitrary review process.
Not only that I do not consent to using the device I purchased for such measures.
I need a grand reset to 2005 levels of technology at this point.
Woah. I definitely don't want this. I don't want apple trolling through my data.
Terrifying. This will be used in other ways. Whenever you hear "protect the children" as an excuse to increase surveillance you know they're up to something horrible. The future is bleak AF.
Google has been doing this with their cloud services since at least 2014 (from a quick google search) and Facebook at least 2011. From talking with a founder of a popular image service, this was a requirement a decade ago, for liability reasons.
I don't think you quite understand what Apple is doing.
This is absolutely going to lead to deeper use of this kind of anti-encryption tech via the third-party-doctrine loophole to the 4th Amendment -- pretty soon every document on your PC will be cross-checked against a litany of things the government deems illegal.
Maybe I'm misunderstanding, but it's scanning cloud hosted images, correct?
> allow Apple to detect known CSAM images stored in iCloud Photos. ... Apple only learns about users’ photos if they have a collection of known CSAM in their iCloud Photos account.
The only non-cloud hosted scan is for child accounts, with Apple getting no access, just transferring the warning to the parent.
> "developed in collaboration with child safety experts"
Prove that. Release all git commits and emails and communication and who was involved in these features. I don't see any reason they would not do that, if this was about "child safety".
My guess is a portion of these "child safety experts" will have emails ending in "nsa.gov".
> Using another technology called threshold secret sharing, the system ensures the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content. The threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account.
Are they saying that they physically can’t access your iCloud until the threshold is reached, or just that they “promise not to”?
It sounds like this is a feature of the encryption used. Apple will not have enough of the key to decrypt anything unless there are enough vouchers which match.
Apple always has the decryption key for your iCloud data. When law enforcement requests an account's data, Apple decrypts the data and sends it over. Apple's law enforcement guidelines[1] explain:
> iCloud content may include email, stored photos, documents, contacts, calendars, bookmarks, Safari Browsing History, Maps Search History, Messages and iOS device backups. iOS device backups may include photos and videos in the Camera Roll, device settings, app data, iMessage, Business Chat, SMS, and MMS messages and voicemail. All iCloud content data stored by Apple is encrypted at the location of the server. When third-party vendors are used to store data, Apple never gives them the encryption keys. Apple retains the encryption keys in its U.S. data centers. iCloud content, as it exists in the customer’s account, may be provided in response to a search warrant issued upon a showing of probable cause, or customer consent.
This happens to thousands of accounts every year.[2]
Apple has been pro-encryption and pro-privacy lately and is using that as their key differentiator, especially against Google.
Various governments, on the other hand, have been very anti-encryption over the past couple decades. Australia's anti-encryption law[0] is just one example but I'm sure there are many others.
This presents an ongoing threat to Apple's current strategy.
The most common arguments used by governments to justify anti-encryption legislation are child protection and anti-terrorism.
I wonder if this is a tactic by Apple to undermine that common argument and pro-actively protect their rights to employ end-to-end encryption and other privacy features.
This is brilliant misdirection. The crypto is great, but where and how your content is scanned was never the problem to begin with, or at least, a small part.
The real concerns are the accuracy not just of the detection rate (there are some outlandish claims of one in a trillion here) but also of the NCMEC database itself, as well as whether Apple keeps this system limited to this specific scope.
Interesting aside: I once attempted to get access to PhotoDNA, essentially the only insurance against malicious actors abusing upload fields on your website to "digitally swat" you (as has happened to a Twitch streamer with an open Dropbox folder), and there is no way you'll get access without a department of lawyers. Why is NCMEC so protective of an API with rate limits and automated reporting features, yet would let Apple ship a Bloom filter?
I remember reading at one point that perceptual hashes used in PhotoDNA could be reversed to reconstruct the image, which is why the database is closely guarded. I assume the bloom filter implementation significantly raises the bar on reversing.
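For intuition on why perceptual hashes behave so differently from cryptographic ones (and why their bits can leak coarse image structure), here is a toy "average hash" over an 8x8 grayscale grid. PhotoDNA and NeuralHash are far more sophisticated; this only shows the basic property that small edits barely change the fingerprint.

```python
# Toy perceptual hash: each bit records whether a pixel is above the
# image's mean brightness, so mild edits (brightness, re-encoding)
# flip few or no bits. Not any real PhotoDNA/NeuralHash algorithm.

def average_hash(pixels):
    """pixels: 8x8 grid of grayscale values -> 64-bit fingerprint."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return sum(1 << i for i, p in enumerate(flat) if p > mean)

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

original = [[(x * y) % 256 for x in range(8)] for y in range(8)]
brighter = [[p + 10 for p in row] for row in original]  # mild edit

# Near-duplicates land within a few bits of each other, unlike a
# cryptographic hash, where any change scrambles everything.
assert hamming(average_hash(original), average_hash(brighter)) <= 4
```

Because each bit corresponds to a spatial region of the image, enough bits amount to a crude thumbnail, which is one plausible reason the database is so closely guarded.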
I'm going to bet 100:1 that Android and Windows, with Mac in tow, are going to follow soon with their own implementations (though most likely with technology supplied by three-letter agencies) in the next three years.
All that's needed is a government law prohibiting unsecured computing devices to plug any 'gaps' (cue: non-TPM, locked-bootloader devices).
If I understand correctly, pictures of my naked children would never match any known hashes, so I should be OK (one-in-a-trillion OK, that is; what is the expectation value for this?). Not that I take such pictures, but perhaps by accident. It's not that I find it weird, but god, I'm afraid they'll end up beyond my control.
I don’t use iCloud (only for MS MFA backups), is this affecting me?
I self host everything, what happens when my account is suspended, does my iPhone work?
I sure hope we get to learn how effective this has been in say, a year? It’s quite something that my device is going to scan and check my pictures. I’d really like there to be a large, confirmed benefit for the children.
1 in a trillion is not particularly impressive. Aren't there a billion iPhones? And probably 10,000 pictures on each phone on average (people take lots of selfies).
Well, since Talking About Statistics Is Hard, you have to look at the exact phrasing from the website: it "ensures less than a one in one trillion chance per year of incorrectly flagging a given account". So, each of those billion accounts has a 1:1T chance of false positive. If I remember my stat 101 correctly, that should translate to a 1:1000 chance of having at least 1 false positive on the planet during any given year. (And remember, even a false positive just means that a human reviews your photos, not that you get reported to the police.)
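The back-of-the-envelope arithmetic above checks out; here it is spelled out, with the one-billion-accounts figure as my round-number assumption rather than an Apple statistic.

```python
# Scale Apple's stated per-account false-flag rate up to a global
# population of accounts. 1e9 accounts is an assumed round number.
p_account = 1e-12          # "one in one trillion per year" per account
n_accounts = 1e9

# P(at least one falsely flagged account worldwide in a year)
p_any = 1 - (1 - p_account) ** n_accounts

assert 0.0009 < p_any < 0.0011   # roughly 1 in 1000, as stated above
```

In other words, even at planetary scale the expected number of false flags per year stays around a thousandth of one account, before the human-review step is even considered.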
1) its only scanned on upload to iCloud, so if you don't upload then its not scanned
2) (per another article, https://techcrunch.com/2021/08/05/apple-icloud-photos-scanni...): Most cloud services — Dropbox, Google, and Microsoft to name a few — already scan user files for content that might violate their terms of service or be potentially illegal, like CSAM.
So you really can't opt out unless you avoid all cloud photos
Just because Techcrunch says it's so, does not mean it is true. My understanding of other services is they might not scan contents that are not shared. Techcrunch does not have first-hand knowledge of this topic and the other clouds are not explicitly saying they do or do not.
If those other services do it on their own computers I'm fine with that. I never expected privacy from them. I don't want my CPU, battery time and network bandwidth to be used for this.
Does Apple have a legal or regulatory requirement to scan iCloud photos for CSAM? I poked around but didn't find a quick answer.
Reading through the two linked PDF's, I got the impression that they're aiming to use cryptographic techniques to meet a (possibly self-imposed) requirement to scan for CSAM while revoking their current ability to decrypt photos stored in iCloud. I guess they may want to revoke this ability so that they can no longer be compelled to hand over customer's photos except those for which the requirements of this new system are met.
I guess if it results in an increase of the privacy of iCloud content, that's nice as far as it goes. But it does skeeve me out to be reminded that my phone can (and does) paw through my stuff.
This raises some questions. Suppose my hypothetical 17-year-old daughter does send nudes to her boyfriend, and suppose I as a parent am totally okay with this. If this technology isn't using exact-file-matches but heuristics on the file, is there some world where this consensual naked photo gets sent as "evidence" to Apple/NCMEC?
"Apple then manually reviews each report to confirm there is a match, disables the user’s account, and sends a report to NCMEC" -- So doesn't "reviewing reports" in this hypothetical mean looking at a child's nudes without consents in this case? The documentation left out the part where this file, stored entirely locally, gets uploaded to apple. Or does this only happen for iCloud?
Only if your daughter’s nudes end up in NCMEC’s database, which happens if (among other things) your daughter confirms to NCMEC that they were generated through an abusive scenario and she would like them classified as such.
But that's using "perceptual matching" too isn't it? They say it's configured for a low risk of false-matches, though they don't detail their testing procedure. Presumably they would know almost immediately of false matches, maybe if the odds are really 1/trillion they should commit to disabling the system if any false matches happen at all.
Also, is this all configured on the server-side? Suppose Apple has a pervy employee who vouches my daughter's account "looks suspicious," would he/she then be able to override or "manually review" all her pictures? Would I be notified of any such review?
It seems like all of your concerns are addressed. The following is all from the linked article:
1. Nobody new can see images sent over iMessage under the proposed changes.
2. A perverted employee cannot flag an account as suspicious; only a device can flag itself when a threshold number of photos matching the NCMEC database are found.
3. Accounts which meet the threshold and have the photo thumbnail verified as objectionable will be suspended with an opportunity to appeal.
No you didn't address my first concern. Which is, it's very easy for a company to write a blog post full of promises saying anything they want (e.g "1 in a trillion chance of false positives"), but it's all just words without some objective source giving oversight into the process.
Take for example point 2. According to what they've written here, yes, that's how it works. But the point is, you have no flipping clue what's actually true, just what they typed. There could be debug modes, the false-positive numbers could be entirely incorrect, and they never said what the "threshold number" is, so that could be 1. They could even have meant everything they said, but there could be a bug in the code.
Suppose there is a bug they discover, and the false-positives are exponentially higher than they quoted. Do you think they would publicly admit that? Also how big are these thumbnails and under what circumstances do they have access to them?
My main question is whether the system makes it highly impractical to look for 1 to a small number (let's say 5) newly introduced images. We have the secret sharing threshold, how easy is it for the server to change this on the fly? Putting aside the principle for a moment (personally I don't like this because it's a cop in your phone, full stop), if we want to practically prevent government abuse to look for leakers (or whatever category you prefer) then it's helpful if Apple can't reasonably comply with an order to find someone who has just 1-5 specific images.
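For reference on the threshold mechanic: what Apple describes is a variant of threshold secret sharing. In the textbook Shamir scheme below (a sketch for intuition, not Apple's actual construction), the threshold is fixed by the polynomial degree the *client* chooses when it creates the shares, so a server cannot retroactively lower it for vouchers already generated -- it could only lower it for future uploads via a client-side change.

```python
# Minimal Shamir secret sharing over a prime field.
# Any t shares reconstruct the secret; t-1 shares reveal nothing about it.
import random

P = 2**61 - 1  # a Mersenne prime; all arithmetic is mod P

def make_shares(secret, t, n):
    """Split `secret` into n shares using a random degree-(t-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange-interpolate the polynomial at x = 0 to recover the secret."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(secret=42, t=10, n=30)
print(reconstruct(shares[:10]))  # 42 -- threshold met
print(reconstruct(shares[:9]))   # a field element that is almost surely not 42
```

So to the question of changing the threshold "on the fly": under this kind of scheme the server cannot do it unilaterally for existing vouchers, but nothing cryptographic stops Apple from shipping a software update that changes what future devices do -- which is really the 1-to-5-images concern restated.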
Upon reviewing TFA, it appears this is intended for iCloud users as a privacy-focused alternative to CSAM-scanning the images you upload to iCloud on-server. Rather than leaving images unencrypted on-server so that they can be checked, the images are checked on your own device prior to upload (to dramatically simplify it). The sell here is that CSAM checking is then done without revealing the image to Apple... unless it's CSAM, and there's enough of it.
iCloud was already scanning for CSAM, this time though they do it without seeing what you upload to the server.
This is essentially encryption with a backdoor, and the chief objection is basically the same as the Clipper chip/Skipjack. There are privacy issues with the idea of backdoored encryption, as the EFF pointed out. To counter misinformation that is being spread, it's not a little AI cop doing blanket scans on your image library and app caches - it is expressly intended as a part of iCloud CSAM detection.
But at the same time it seems trivial for Apple to expand it to that someday, since they've just deployed a method capable of doing so. Conceivably they could make the API scan whenever an image is saved to disk. That is not what is currently being done, but whether they ever will is anyone's guess.
I posted in this thread a little bit ago. I couldn't find anything coherent to say at the time. Now I have something to say.
I will be pushing for our current & future customers to use alternative device vendors as part of our product stack. We have developed a cross platform implementation of our iOS application recently, and this is a great opportunity to start putting it to use.
I don't know if Apple is paying attention, but there are enterprise customers out there paying them $300/yr for the permission to use their presumably-secure devices for purposes of running custom LOB applications. Our organization and product are part of the reason you have financial institutions making bulk purchases of iPads and other Apple accessories.
If Apple makes it impossible for us to provide assurances to our customers that no data is leaving the devices (aside from agreed-upon application protocols), then we have a serious problem. I am not putting myself in a position where I push a customer to purchase 1000+ iPads for their business if I will then have to answer to their auditors 6 months later when sensitive customer document hashes or whatever other horrible things start getting sent to the mothership as part of v2.0 of this shitty idea. Apple even said it themselves: "These efforts will evolve and expand over time."
Is there a special order form if I need to purchase a non-bugged iPad for a classified government setting or very pissed-off enterprise customer? If not, our default recommendation is going to be Microsoft Surface and we will totally deprecate usage of Apple hardware over the course of the next 18-24 months. This is setting a completely unacceptable precedent, and most of our customers I have spoken with today also agree.
I'm not strictly against this if it can effectively help protect children (which needs a completely separate discussion, of course). But Apple's usual privacy/security PR masquerade doesn't seem very compatible with this, especially with the sentence "These efforts will evolve and expand over time".
Sexual predators and sex trafficking are trojan-horse justifications for deeper monitoring and privacy violations. No one would argue against child pornography being bad, and by a kind of transitive property there is an attempt to make arguing against violations of your own privacy equally morally abhorrent.
The US has a rigid structure in place (NCMEC, etc), which other countries don't. Also, they have to start somewhere, and they understand the landscape of their home market best.
The US also has stronger laws so this is also about protecting Apple.
I'm happy to see people held accountable for being scummy human beings but this does raise the question of where the line will be drawn and who gets to make that decision.
I'm going to contradict the rest of the commenters here: I strongly support this change. The ease of producing and profiting from child pornography has risen tremendously in recent years due to the advent of technology, and it's perfectly ethical for tech companies to account for their impact and attempt to mitigate it. There is no slippery slope from child porn prohibition to political censorship, as so many other commenters fear.
> There is no slippery slope from child porn prohibition to political censorship, as so many other commenters fear.
The UK ISPs implemented Cleanfeed, a joint database to block access to child abuse materials, many years ago. Your feelings were pretty common.
Since then Cleanfeed has expanded to censor copyright-infringing sites by court diktat, and proposed legislation (which will, in some form, pass) will effectively use it to block all sites not suitable for under-13s unless they implement age verification, with guidelines that can be updated at will, explicitly for political purposes, by the Secretary of State.
So it is pretty hard to take your statement seriously when such a slope exists and many of us are already much further down it.
> The ease of producing and profiting from child pornography has risen tremendously in recent years
Have I missed something?
Running a porn site with underage content and receiving big money from big payment processing systems stopped being an option in the early 2000s after some big busts. Then individuals would try it, using shady payment sites linked to other crimes. Then child porn, and even traces of past discussions of it, simply disappeared from the open web (thanks to Google silently sugarcoating the search results, and hosting and ad businesses being increasingly picky about what they want to see). A typical person won't be able to find any of those producers (apart from obvious scams and honeypots) either on the open web or on anonymous networks. Heck, even I, having some experience, probably wouldn't be able to find anyone “selling child porn” even if I were told to do so and visited every possible place of discussion.
Though I agree that producing all kinds of media with modern technology got easier enormously. A dedicated handheld camera looks like a steam engine today.
I'll take an unpopular stance. This is the least bad way a backdoor scan could be implemented.
At the very least, this makes retroactive scans for newly banned content difficult without explicit deployment by Apple and an explicit software update from the user. In Western countries, this friction is enough to ensure people will be able to file suit in court to block the government.
In authoritarian governments, it's not, but then, nothing is.
Until this I was typically defaulting to Apple products for privacy reasons.
This seems like a major issue to me. It is an ideal spy tool, disguised as a privacy-preserving feature.
Any government of a country with a large enough market for Apple can force Apple to include extra hashes in localized versions of the OS.
So my concern about this has less to do with the technology itself, and more to do with...
What happens when lawmakers make this mandatory? In such a way where the user can't be allowed to remove it?
If that ever happens, you've just outlawed phones/computers that allow their users to have ultimate control over the software running on them. No more alternative desktop/mobile OSes.
Can someone who understands cryptography well enough please comment on whether these hash comparisons can easily be extended to other areas, such as contextual analysis of photos or texts?
For example would it be easy now to get to the hypothetical scenario where a text containing certain phrases will be flagged if some partner / regulator demands that?
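For the exact-match half of that question: yes, the same compare-against-a-digest-list machinery extends trivially to text, as this hypothetical sketch shows (nothing here is from Apple's design; the phrases and function names are made up for illustration). Fuzzy "contextual analysis" is a different beast: it requires an ML classifier, like the on-device nudity detection in iMessage, not hash comparison.

```python
# Hypothetical sketch: flagging messages that match a list of banned-phrase
# digests. The recipient of the digest list never learns the phrases themselves.
import hashlib

banned_digests = {hashlib.sha256(p.encode()).hexdigest()
                  for p in ["some banned phrase", "another banned phrase"]}

def flag(message):
    # Normalize case and whitespace so trivial variations still match.
    norm = " ".join(message.lower().split())
    return hashlib.sha256(norm.encode()).hexdigest() in banned_digests

print(flag("Some  BANNED   phrase"))  # True
print(flag("an innocent message"))    # False
```

Note the limits: exact hashing only catches what you normalize for, so paraphrases escape it. That is precisely why photo matching uses perceptual hashes rather than cryptographic ones, and why "flag texts containing certain sentiments" would be a classifier, with a classifier's error rates.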
It's almost comical the tendency companies/societies/countries have to invoke absolutely abhorrent concepts in order to justify their future unethical behaviors and shield themselves from any valid criticism.
The fact that they raised the children card speaks volumes about the level of fuckery they're gonna deploy on this one.
I like that these features are being introduced with something similar to a press release and are being discussed in the open, as well as from the point of view of privacy.
It all may still end up being a slippery slope; but with this perhaps the chance is slightly lower (than if Apple introduced these changes without as much of a comment).
> The Messages app will add new tools to warn children and their parents when receiving or sending sexually explicit photos.
> Messages uses on-device machine learning to analyze image attachments and determine if a photo is sexually explicit. The feature is designed so that Apple does not get access to the messages.
Almost every government blocking system started for the sake of the children, then expanded for this or that reason. In some cases, it expanded before it even started: notoriously, the first sites blocked when Russian government censorship was introduced (to protect our kids from the dangers of the internet!) were well-known media outlets that had criticized Putin for a long time. The kicker was that they were hurriedly blocked even before the relevant acts came into effect. Like, why wait for the act when the deed is done? So now there is a government agency that has spent years blocking oh-so-edgy pictures of nooses on DeviantArt and similar content to pretend that it cares about the children rather than executing censorship (and selling its extortion services on the side to people who want some information removed from the Web). Whether any of that has helped a single kid is an open question.
Send or otherwise inject problematic images to the target, but make sure the target can't see it (image too small, part of larger collage), but their device does. The target gets flagged by Apple for having CSAM.
Can somebody please explain what the big issue here is? I thought this only applies to iMessage and there are many other possibilities available to communicate securely.
They have created and implemented the technology to scan your personal phone for specific documents from a government database and alert the authorities if matches are found. If you don't consider this an issue, you trust your government way too much.
That is not true, they are only scanning photos uploaded to iCloud against the database. The only on-device things being done are detecting nudity in iMessage for minors and alerting the minor/parent. Also something about Siri and detecting searches for CSAM content.
Edit: Minor correction, the scanning on your photos does happen on-device but only if you are going to upload it to iCloud.
So they have implemented the technology. Scanning for photos not being uploaded to iCloud and/or non-photo documents will be a matter of flipping a switch. I am willing to bet serious money that the switch will get flipped sooner rather than later. Perhaps with a facade of government forcing them to do so, so Apple won't look like a bad guy. But in any case, it will get flipped.
I'm not disagreeing or arguing that I don't worry about a similar thing happening. All I'm saying is your original comment is not currently the truth. I'm all for talking about the slippery slope but it's important to be truthful in what we are talking about. You stated "They have created and implemented the technology to scan your personal phone for specific documents from a government database and alert the authorities if matches are found.", that is not true. They only scan photos and only photos that are being uploaded to iCloud. Do I worry they might flip that switch and do more than photos and/or scan them even if you don't upload them? Sure but the truth matters and at this point in time they aren't doing that. Again, in an attempt to ward off some replies, I'm not saying I agree with this move or that I'm happy with it, I just dislike things being stated as facts when that's not the case.
They are also scanning iCloud documents to match photos against known child abuse photos and reporting any that they find to law enforcement as well as locking you out of your iCloud account.
No, they're potentially reporting after some high threshold of matches. They explicitly say that individual matches do not reach that bar, although they do not actually say what "high" is other than the statistical claim.
> These features are coming later this year in updates to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey.
As long as you stay on iOS 14/iPadOS 14/watchOS 7/Big Sur then you will never get the on-device features. The photo scanning is happening on photos you upload to iCloud. Other than for malware I'm not aware of Apple ever pushing an OTA (in the sense that the user doesn't get to choose to install the update since almost all updates are technically "OTA"). Even the most recent Pegasus hack fix is in an update you can choose to install.
This feature is a big plus for parents. It's hard to appreciate how difficult it is to protect kids online if you don't have kids yourself. I get that those without kids will find it intrusive, but it sounds like these features are opt-in.
I consider iOS to be the best platform for kids today. That said, Apple: if you're listening: please tighten up parental controls around time limits and re-loading apps!
I have another idea to protect kids: let's say a kid is riding along a highway and their speed, according to the phone's GPS, is 66 mph in a 60 mph zone. Wouldn't it be wise to notify the police about the parents breaking the law and endangering a child? There are so many kids dying every year on our roads. [/s]
This is completely unacceptable. Apple is giving themselves the ability to monitor and/or censor anything you do on your phone, no matter if it's end to end encrypted or not. It's not at all a stretch to go from monitoring iMessage for CSAM to monitoring it for gay pornography or any pornography at all. Or monitoring for anti-government sentiments.
Sure, they're only scanning for images right now. But there's nothing stopping them from scanning for other things, like Mein Kampf or the writings of Karl Marx. Or some other author who suddenly finds themselves unpopular with the government of the day.
Done with Apple now. I will never own a device that is a revolving door for law enforcement to look into my doings.
This constant yo-yo of privacy marketing and bad privacy decisions is too time consuming and mentally stressful.
I was waiting to update my 2017 MBP to an M1 16" when they come out, and my iPhone 11 is coming up on 2 years. These products will no longer be Apple products, and idgaf if I have to deal with a Linux desktop and a jailbroken Android to get it done.
I think Apple has good intentions. However history shows the next step will be detecting less serious types of abuse, crime, etc. This system will expand slowly, step by step, until it is normalized again.
A sad day for privacy and freedom.
Obviously any child porn owners will stop using Apple devices, while the rest of us suffer the consequences.
I always wonder what will happen when Tim Cook retires one day and is replaced by someone like Donald Trump. Once all the tools are in place...
As a young adult I have to ask: you guys really fucked up this hard on defending the privacy landscape? How long do I have until there's a mini space Hitler in my pocket that censors and snitches on my activities?
Your actions or lack thereof will have grave consequences for billions of people across generations to come.
First off, I don’t think this is some evil plan to kill our privacy. I think this project is done with good intentions, if nothing else.
However I think this is an interesting question: how does Apple know that the hashes they’re supplied match CSAM, and not, say, anti-government material? How would they know if the people they got hashes from started supplying anti-government hashes? Apple will only be receiving the hashes here - by design, even they won’t have access to the underlying content to verify what the hashes are for.
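To make that opacity concrete: a digest list carries no recoverable information about what it matches. This sketch uses SHA-256 for illustration (NeuralHash digests are equally opaque to inspection, though they are perceptual rather than cryptographic); the two source strings are invented placeholders.

```python
# Whoever receives a hash database sees only entries like these. Nothing in a
# digest distinguishes what kind of content produced it.
import hashlib

database = [
    hashlib.sha256(b"known-abuse-image-bytes").hexdigest(),
    hashlib.sha256(b"anti-government-pamphlet").hexdigest(),
]

for h in database:
    print(len(h), h[:16] + "...")  # both: 64 hex chars, no recoverable content
```

So Apple's position reduces to trusting the database suppliers (NCMEC et al.) plus the human-review step on decrypted thumbnails after the match threshold -- the hashes themselves cannot be audited for what they target.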
Safetyism, particularly with children, is the vehicle that is most often used to enable bad precedent-setting policies, because it can seem virtuous and morally acceptable in that single application. Here, Apple is using a "think of the children" argument to open the door on their intrusion into customers' private data and policing/moderation of the same. Google started doing this themselves recently (https://news.ycombinator.com/item?id=23275308) and it is just as unacceptable here. I guess that's the end of me using or recommending Apple products.
Doesn’t this ease the worries from yesterday’s thread? They’ve taken steps to minimize false positives (one in a trillion accounts per year), and it’s not happening on private photos like rumored, only photos that are synced to iCloud (which is what they were doing server side already (?)). I don’t buy the slippery slope argument personally, if you want to get up in arms when they scan for politically motivated content, do so when they actually do that.
I took that to mean further protections for children, not expanding the overall scanning. Also isn't it you not caring about their property rights? Should Apple be forced to host CSAM on their servers? Should all businesses be required to?
It never occurred to me before reading this post that someday child porn will inevitably be completely vanquished. Open to contradictory views, but my thinking is that someday, eventually, AI will be advanced enough to identify an image's content unambiguously and without error, just like a human user.
No human would fail to correctly identify an image of child porn, and someday algorithms will reach that point too. And once they do... it's probably not a big leap from there for browsers and operating systems to start denying the images altogether, just blacking them out and preventing their transmission over networks.
The flipside is that it is impossible for even a human to tell the difference between a 17yo and an 18yo, and moreover impossible for a computer (and arguably for a human) to know whether the user of a device is sending images of themself (i.e. a 17yo sexting with their bf/gf) or whether it's exploitation. That's harder.
So per this new post from Apple, this is going to be pretty shitty for high schoolers trying to sext their boyfriends and girlfriends.
To the latter point, a minor user transmitting sexual images of themselves may be breaking the law depending on the state and the age of the recipient, so an algorithm may not need to "special case" a "legitimate" reason for the transmission of a sexual image of a minor -- or if the algorithm did, it would need to be very complex to account for the state and ages of both sender and recipient. See e.g. Louisiana's sexting laws: https://www.criminaldefenselawyer.com/resources/teen-sexting...
I run GNU/Linux, and use encryption. Any mandatory technology which could scan my hard drive for child porn could scan it for:
- Antigovernment subversive materials
- Embarrassing things to blackmail me with
- Non-PC communications
- Right-wing/left-wing political materials
- And so on
I'd like to continue to have a computer I control, and I'd like my child to grow up with that freedom too. I'm much more worried about risks to my child's freedom and privacy than the (exceptionally rare but frightening) chance of exploitation for child pornography.
As a footnote, many algorithms able to correctly identify child porn would also be able to generate it.
"We're going to scan your photos, on your encrypted device, to look for badness. Right now, we're going to claim that's only for the really icky people that nobody is going to defend, but, hey, once the tech is in place, who's to say we can't scan for dank memes and stuff?"
I think I'm done with Apple. Sad, really. I was hoping that their bowing to China with iCloud wasn't a sign of what's to come, but apparently it was. They had done such nice stuff with privacy too.
Demote my phone to a house phone and go without, I suppose.