
Don't shit in my hand and call it chocolate ice cream.

"We're going to scan your photos, on your encrypted device, to look for badness. Right now, we're going to claim that's only for the really icky people that nobody is going to defend, but, hey, once the tech is in place, who's to say we can't scan for dank memes and stuff?"

I think I'm done with Apple. Sad, really. I was hoping that their bowing to China with iCloud wasn't a sign of what's to come, but apparently it was. They had done such nice stuff with privacy too.

Demote my phone to a house phone and go without, I suppose.



These processes seem to only move in one direction. In 5 years this exact comment will probably make you sound like an "extremist" if you say it to a random person. "What, why wouldn't you be okay with Apple searching through your photos?"

I see it everywhere and it literally seems like some kind of one-way entropic process. I can't think of anything that would reverse it. It would be like turning an omelet into an egg.

Is there something about modern life that just inescapably creates this complacent, servile temperament in most of the population? Or has it always been there and I'm just overthinking it? It's really depressing either way, so I try not to think about it.


>I see it everywhere and it literally seems like some kind of one-way entropic process. I can't think of anything that would reverse it. It would be like turning an omelet into an egg.

Thomas Jefferson had some arboriculture advice that seems relevant to this kind of thing.


In addition to separation of church and state, we need separation of corporation and state


If the Founding Fathers were still alive today, I wonder what they would think of all this.


Assuming they were not struck dead by future shock (photography, radio, television, telephone, mobile handsets), they would lose their damn minds that Black people have escaped slavery, non-landholders can usually vote, and the Presidency is (mostly) determined by the popular vote.

I do not think any of the founders, however amorphously the term is applied, would have interesting thoughts about cryptosystems for the examination of private photographs stored in commercial cloud setups.


They would die promptly from any of a number of diseases common to us that they have no natural immunity to.


Yeah I think it wilted a while ago, as fun as it might be to think about.


> Is there something about modern life that just inescapably creates this complacent, servile temperament in most of the population? Or has it always been there and I'm just overthinking it?

I only have anecdotal evidence, but it seems like most people don't care about problems unless they affect themselves personally right now. Also, most of us (HN aside) don't tend to think seriously about how things will go wrong, we're generally optimists that give more weight to the good something will do.


I appreciate that Apple is trying to find technical means to reduce child porn. That's a more important problem to solve than optimizing ad click rates. But my concern is that once the tools for local image scanning exist, Apple will come under pressure from authoritarian regimes like China, Australia, and Saudi Arabia to also search for images associated with lesser crimes or even just criticism of the government. It's tough for a company to refuse. Either comply with the orders, or risk having your business shut down and your local employees punished.


What makes you think that child porn is such an important problem, apart from rag articles about a vague “Darknet”? Shouldn't all the phones get scanned for evidence of tax evasion, a much more widespread crime? All those rich people dealing with offshore businesses have iPhones™; it's only natural to make Apple do their part as an honest member of society… I mean, honest corporation of society.

If you allow me to guesstimate wildly, most “child porn” these days, in a technical sense, is made by kids themselves having access to an internet-connected device with a camera. Sometimes it is extorted by despicable abusers; sometimes it is done for no one in particular, just for the perceived fame/popularity/likes on social services/etc. Big services have an army of moderating grunts to keep the Victorian purity of the blissfully ignorant common public intact; things are a bit different in poorer parts of the Net and of the Globe. Should we expect the naked selfie of a teenager sent to their significant other to automatically cause a police interrogation of the same teenager? What if the device is shared with older members of the family? Another man-made dystopia, enabled by people who enjoy expressing outrage over racy stories in the media, and people who are too afraid to speak up.

In essence, Apple has introduced a software agent that signals whether you have files belonging to a list someone provides. If I recall the scandal correctly, this is what Kaspersky allegedly (ab)used, and what other antivirus tools (including Microsoft's built-in and enabled-by-default Windows scanner, which for some seemingly important reason nags all the time if uploading of files to Microsoft is disabled) surely enjoy offering to various agencies around the world.

I don't think you should worry about “China, Australia, and Saudi Arabia” so much; there's an elephant in the room you don't like to notice.


You have it in reverse. “Think of the children” is always the excuse to get the next big thing started (like all the other things that you mentioned). So it's step 1 of a grand plan along the lines of 1984 (probably to get some more markets to open for them by cozying up to governments).


> probably to get some more markets to open for them by cozying up to governments

Which markets are closed to Apple by its own government currently? North Korea?


China has certainly made threatening noises about wanting access to everything, and they're a huge Apple market.

Apple has bowed to them on iCloud server locations and access already.


China, the next billion users one


The state must declare the child to be the most precious treasure of the people. As long as the government is perceived as working for the benefit of the children, the people will happily endure almost any curtailment of liberty and almost any deprivation. -- Adolf Hitler



Do we know the true origin of the quote? The source you link links to another source and makes it sound like that was the origin, but the source it links to doesn't use the full quote and already is using it as a quote, so it makes me think the actual quote predates that usage. It may be the first time it was misattributed to Hitler, but is it actually the origin of the quote?


"This quotation is often cited as having come from Mein Kampf, the autobiography and political polemic written by Hitler during his stay in Landsberg prison, but no such passage appears in that work. In fact, this quotation appears to have been virtually non-existent until 2014, when it was used as the heading to Chapter 11 of Willfully Ignorant, a novel by Pat Miller set in Nazi-era Germany. (That book also mistakenly cited Mein Kampf as the source of the quotation.)"

https://www.snopes.com/fact-check/hitler-control-quote/


There is something like a 0% chance this program wasn't co-developed with representatives of the government entities you least want rifling through your data.


I find discussions around this kind of stuff frustrating because what's actually happening often gets muddled by the hysteria, even here on Hacker News.

From the link: "Before an image is stored in iCloud Photos..."

This leads me to believe that only data that is going to get uploaded to their servers is going to be scanned. If anyone has a different interpretation or thinks I'm wrong, feel free to reply.

It's my understanding that all cloud services do this type of scanning, when they are technically able to.

But all data on iCloud is encrypted by default, so Apple can't scan for this kind of material once it is on their servers. Doing it on device before it gets encrypted and uploaded is the only place they even could do a scan like this.

Additionally, they make it clear in the article that there has to be more than one hit (they don't say the actual number) which would mitigate risk of hash collision false positives.

If this type of scanning makes you uncomfortable, you can just not use their cloud services.

I do agree that this is still not a good direction to go, even with all the precautions they've taken. But I had to do some digging to figure out what was actually going on, the comments/commentary made it seem like Apple is now routinely scanning all your photos/videos if you have an iPhone.

Once the code is there to do local scanning, it might make it easier for a zero-day exploit to scan the phone and grab data it might not otherwise have access to, or for governments to force Apple to conduct scans of content on a phone when they ask.


iCloud isn't end-to-end encrypted and Apple is technically capable of accessing the data stored there. As I understand, they already do server-side scanning for abusive material.


I'm not an expert in how this works, but I am skeptical that Apple would do this if they had a way of scanning iCloud photos and videos on their servers.

They also confirmed it is only when the photos upload to iCloud: https://www.macrumors.com/2021/08/05/apple-csam-detection-di...


A generous explanation would be that they intend to add end-to-end encryption to iCloud eventually, together with this new scanning technology as a "government backdoor". A less generous explanation would be that they intend to expand this scanning to non-iCloud media eventually.


iCloud photos today. Anything which the state has issued a search warrant for tomorrow.


I agree: I think this is a bad decision on Apple's part. It really undercuts a lot of their statements about privacy to do any kind of on-device scanning of your content, even in such a narrow context.

I guess the point I am making is that as of now, this only applies if you're using their cloud services. I'm not sure if Apple would announce if they were compelled to use this functionality through a court order.


The US is one of the main countries that spy on their citizens. I do wonder if Apple developed this to keep the FBI happy, the same way they backed off E2EE cloud backups to keep the FBI happy.


I'm always a bit wary about child porn arguments. For example, the Australian police ran a child porn website for 11 months https://www.vg.no/spesial/2017/undercover-darkweb/?lang=en. A child protection agency spread pictures of child pornography http://saucenao.blogspot.com/2021/04/recent-events.html. At some point, I start to wonder about the intentions of the people behind these.


This tool has nothing to do with child pornography. You're reading the marketing paper.


They need to at least have a canary clause that can be tracked to determine if they have complied with any such requests.


bool is_the_canary_dead = is_the_feature_released;


You are right to worry. Once the tech is employed out in the field you literally cannot go back.


Forget about authoritarian regimes committing genocide; think about the children sharing Pepe memes with OK signs. There are lots of pitchforks ready to go, and some of them already work within the Apple campus, writing design docs on how to make the Apple ecosystem even more “safe”.


Who has access to update what are considered bad images, and to decide when to alert the authorities? This could very easily be made into an anti-WikiLeaks feature.


Exactly: if some whistleblower is going to leak a picture of some classified documents, the intelligence agency could simply upload a picture of the documents to the database. And bam, the journalist is arrested.

Moreover, if the database doesn't store actual images and instead only the perceptual hashes, it would be impossible to audit, even if the auditor has access to the database.

All the auditor would see is a bunch of hashes, they wouldn't be able to tell which hash actually represents CP and which hashes represent pictures of confidential documents.
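The point can be sketched in a few lines. Using SHA-256 as a stand-in for a perceptual hash (the byte strings below are purely illustrative), an auditor handed only the target list has no way to tell what any entry depicts:

```python
# Toy illustration: a hash-only target list is opaque to an auditor.
# SHA-256 stands in for a perceptual hash; the inputs are made up.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Two very different source images produce indistinguishable-looking entries.
target_list = [
    digest(b"abuse-image-bytes"),         # what the list is claimed to hold
    digest(b"photo-of-leaked-document"),  # what could be slipped in unnoticed
]

# All an auditor can check is the format; nothing distinguishes the entries.
print(all(len(h) == 64 for h in target_list))
```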


The next question is: since this database is not exact binary hashes but an ill-defined "perceptual hash", how many false-positive images has this system surreptitiously extracted from customer hardware and shown to presumably human control personnel? Were these customers notified? What is the recourse for them?

PhotoDNA has been around for a while and you might think about it what you want, but they have never answered those very pertinent questions.


Supposedly, the false positive rate is 1 in a trillion.


This is all a few months after touting themselves as a company focused on privacy. Decentralization is the only way to truly know you have privacy today I suppose.


You’re already completely trusting them if you’re using their device? I mean I get the privacy concern, but it’s odd that this is the line if you were fine with things before. That said I’m all for ditching the smartphone and best of luck to you


Historically, Apple has come down, hard, on the side of "Your device is your device, and where that assumption is violated, we will work to protect it." They have added increasing layers of hardware security around device encryption keys, have added the various mitigations for password guessing attacks, etc.

And when the FBI said, "Hey, can you write and sign a custom bootloader for this phone to bypass that stuff?" they told the FBI to pound sand and made the hardware security features stronger so even Apple couldn't break them.

And then they bowed to China regarding iCloud and in-country servers, which clearly are accessible to the government. And then this. Whatever claims they've been making about privacy are obviously now crumbling under some external pressure.


> they told the FBI to pound sand and made the hardware security features stronger so even Apple couldn't break them

You didn't really believe that cheesy open letter, did you? The real reason they declined was that the FBI innocently said “We can't bother to transfer phones to Apple all the time, just give us the tool, we've seen how it's done enough times, and can use it just the same. Here's the official demand.” If Apple did that, the next day the tool would be leaked to multiple other agencies and actors, and on the day after, any stall in any Chinese market would offer iPhone unlocking and resets for a small price. In other words: the shattering of the legend of privacy protection marketed so hard, and financial trouble. The FBI effectively demanded to hold one of the keys to Apple's power, and was found not significant enough to be esteemed.

Then they all pretended to have a legal fight to smokescreen the public. Apple itself said that it had always cooperated in doing the job on premises, and you can be pretty sure that every official case, semi-official case, and special case leaving no paper trail was routinely accepted. Apple was protecting its own secrets, not yours.


> Historically, Apple has come down, hard, on the side of "Your device is your device, and where that assumption is violated, we will work to protect it."

On the contrary, if my iPhone really was "my device" then Apple wouldn't make it so hard to sideload software (it was literally impossible to sideload without building software from source yourself until very recently) - and they'd let me change my default browser/maps/contacts/email apps too.

Apple's stance on privacy is something that really doesn't cost them anything but from which they gain massively by using it as a bludgeon against Google and Android (and Facebook...).


To be fair China needs Apple less than Apple needs China.

The nation state is still supreme; cyberpunk was wrong on that.


Recent NYT Daily podcast demonstrates the opposite:

https://www.nytimes.com/2021/06/14/podcasts/the-daily/apple-...


[flagged]


> Imagine being dumb enough to believe that PR campaign. If you actually believe that the spooks don’t have iPhone back doors after the PRISM slide leaks, I don’t even know what to say…

My pet theory is that because Apple can ship personalized iOS software updates to specific iPhones, and assuming that iPhone has automatic iOS update installation enabled (which is the default now; previously it wasn't), Apple could essentially OTA-push an intentionally compromised iOS build to someone's phone and they'd be none the wiser.

That hardly requires any backdoors or even any security vulnerabilities - just someone with access to Apple's iOS build signing keys.


At the limit you are trusting the sum of the people who are creating the software you use - and that includes drivers, firmware, etc. Linux is not exempt, but it may be easier to find issues in open source code.

Given this, Apple was not a bad person to trust. Yes they wanted you to pay them money, but were a) quite happy to deliver value in return, b) prepared to do lots of hard work to fight on your behalf and c) it was quite clear that you were their customer, not their product. Google is an advertising company, even if you pay them money you are always also the product.

I am seriously wondering what happened to Apple that they came up with this idea, it doesn't seem to be in their interests at all.


I'm worried about the millions of people who don't read Hacker News.


If they outright lie about their device spying on you, the repercussions are much worse than them openly spying on you.


Agree, that seems to be the pattern. Slow rolling consent


> I was hoping that their bowing to China with iCloud wasn't a sign of what's to come, but apparently it was.

It's likely that this tech originated from Apple's capitulation to China, so it's probably been in place for quite some time.


They're already scanning your photos to classify the images. That's how you can search for photos by descriptions. They also can OCR the text in images. Seems like a weird time to start getting angry about your privacy being invaded


Would you please describe a solution to finding CSAM which you would find acceptable? It seems to me that on-device scanning for known CSAM content is on the lower end of the privacy-violating spectrum.


There is no solution to searching private data that I am going to find acceptable.

* If the "target" database is secret, then government entities are free to use a "CSAM" database to monitor dissidents by adding their own items of interest to the target list.

* If the "target" database is public, the only way to validate its contents is to ... traffic in child sex abuse material. That's not great.

Basically, there is no way to create and validate this database in public view. I don't need yet another flavor of secret policing in America.

This doesn't seem like a problem that can be solved without compromising my privacy.

I would rather the problem went un-solved than allow the state to rifle through all of my private files to "prevent the distribution of child sex abuse material," knowing full well the state will define that "material" however they like, then use parallel construction to prosecute dissidents whenever they have gotten information illegally.


Isn’t Apple necessarily in the loop to validate the database?


No. That's the whole point of this endeavor.

It's a privacy-preserving framework to allow the government to monitor the contents of iCloud directly, with few, if any, Apple employees ever having to get their hands dirty.

Apple just handles a target database that gets distributed to phones, and then compiles a list of users whose data had hits against the target database.

Apple employees don't have to dirty their hands with what, perhaps, is in that target database. Not their problem!


That isn't accurate. They're not blindly handing lists of users over to the government.

If an account uploads multiple images that match known exploitative images, exceeding a threshold, then the account is flagged for review by Apple. (Note the threshold is selected to provide a ~1 in 1 trillion probability of incorrectly flagging an account.) Once they review and confirm a match, it's then forwarded to the National Center for Missing & Exploited Children for further action (and presumably referral to Law Enforcement).

More details in their whitepaper: https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...
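The flow described above can be sketched roughly like this (the hash values and the threshold are hypothetical; Apple does not publish the real threshold):

```python
# Rough sketch of threshold-based flagging, with made-up values.
KNOWN_HASHES = {"hash_a", "hash_b", "hash_c"}  # stand-ins for NCMEC entries
THRESHOLD = 2  # illustrative only; the real value is undisclosed

def flag_for_review(account_upload_hashes):
    """Flag an account for human review only above the match threshold."""
    matches = sum(1 for h in account_upload_hashes if h in KNOWN_HASHES)
    return matches >= THRESHOLD

print(flag_for_review(["hash_a", "hash_x"]))            # one match: False
print(flag_for_review(["hash_a", "hash_b", "hash_c"]))  # three matches: True
```

Only accounts that cross the threshold would ever reach a human reviewer, which is how the scheme keeps isolated hash collisions from being seen by anyone.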


There are no details in the whitepaper.

The "1 in 1 trillion" figure is accidental flagging on the target database, but there is no validation whatsoever on the target database. How can you, or I, or any other citizen, know whether non-CSAM items are present in the target database?

-------

The NCMEC is a patsy for the police state on this one. It's gross, it's ugly, and it is a terrible outcome for the charity.

In their participation in this program, they make themselves into a front for the CIA, FBI, and DIA forces that are aching for opportunities to crack down on dissent in America. This is an awful, terrible outcome.

--------

The whole thing is an incredibly thin, easily pierced veil for any government. Even if you think the secret police forces of the United States generally do well by citizens, how do you feel about China, or Russia, or Eritrea, or Burma, or Turkmenistan using these tools to flag people trafficking images with undesirable fingerprints?


This is a good point. I had believed that the fact that Apple manually reviews the content implies they will compare the images against those in the database. Without the database of content, it does imply that outside organizations are uploading hashes to Apple and that Apple cannot determine the scope of the content.

However, that does not invalidate the fact that Apple is in the loop! It's not just the NCMEC that has to be corrupted: it's also Apple employees. Apple has stated in their whitepaper that they review all flagged content before forwarding it to the NCMEC. If the Apple employees forward the non-CSAM matches, then that is a failure of the reviewers, who have betrayed their duty to prevent authoritarian abuse of this system.


“Apple just handles a target database” ==> Apple is in the loop


Read the papers published by Apple. The target database is (intentionally) designed to make it impossible for Apple to know what the targets are.

This is a complex cryptosystem designed to keep Apple out of the loop. There is a target database whose intended targets they cannot know, and customer data they prefer not to know.


Yes, but they review the matches! It is the same thing in the end: if Apple reviewers start to see that a bunch of political pictures are being flagged, they will realize that the system is no longer being used to flag CSAM content.


The database distributed to phones is just hashes. By itself it can't be validated.


Traditional police work that respects the principle behind the 4th amendment, no searches of private information, algorithmic or otherwise, directly by the government or otherwise, where you do not have probable cause.


Yes, but the big exception to the fourth amendment are private searches. As long as it's not the government, or someone working on their behalf searching, it's not covered by the fourth amendment. The trend these days is to eliminate constitutional protections by farming the violations out to private corporations.


I agree it's not a violation of the fourth amendment (so long as Apple is not being forced to do this by the government), which is why I said "the principle behind the fourth amendment" instead.


>or someone working on their behalf

I wonder what the limits to this are. Clearly if government asks for help then that counts, but what happens if government slowly builds up partnerships where certain 'good partner' behavior is expected but never explicitly required in a contract?

Say government started favoring companies who scan for drug images for any contracts over those who don't? Or say that while working with companies that scan for drug images they build up relationships that lead to both better treatment and to better chances at winning contracts? Maybe companies that refuse to implement the drug abuse material scans end up getting investigated more often.

I wonder how long until the fourth amendment can be considered dead due to this one exception?


The U.S.A.'s weak privacy protections aren't the entire world, you know.

In the E.U., and other places, companies are definitely held to stricter standards as to what extent they can search a private device, even if they claim the OS is only a service.


The fourth amendment is irrelevant here. The constitution is meant to limit the power of government because government is ubiquitous and it has a monopoly on violence. If it were to become too powerful then it will oppress its citizens.

Preventing a digital platform from governing what kind of content it hosts is just absurd and makes no sense. If Apple decides that it does not want to be the platform where CSAM content is distributed, then they should be free to do so. If you don’t like being part of a platform which scans images for abusive content, migrate to another platform.


> where you do not have probable cause.

Where you do not have both probable cause and a warrant, based on and identifying that cause, signed by a publicly elected judge in the user's (not company/operator's) jurisdiction.


Why can’t algorithms do the police work for some crimes?


Seriously? For one, this is basically a warrantless search which is illegal (obviously they get around this because Apple is a private company). Also, trusting algorithms for critical things like this is beyond absurd.


I still don't understand the reaction. People get tied up in a knot over this, but it is an effective deterrent to child pornography, full stop. Even if it rubs you the wrong way to have software fingerprinting your files, I really don't care if it means putting a deterrent in place against child trafficking.

Let’s walk through this.

1. Criminal kidnaps a child and abuses him.
2. Criminal produces a video of said abuse and sells it on the web.
3. Criminal continues to sell it and it spreads.
4. The video is detected by authorities, who promptly add it to the database.
5. The video is cryptographically hashed, and now anybody who stores this content in iCloud can be identified.
6. A customer of the criminal is caught.
7. Forensics leads authorities to the criminal who produced the video.
8. One less criminal to profit from kidnapping and abusing children.

Everyone tries to paint this approach as a slippery slope. It doesn't have to be that way if the right people are in the loop to blow the whistle.


> I still don't understand the reaction. People get tied up in a knot over this, but it is an effective deterrent to child pornography, full stop. Even if it rubs you the wrong way to have software fingerprinting your files, I really don't care if it means putting a deterrent in place against child trafficking.

Most reductions of privacy toward the police would act as an effective deterrent to that crime and other crimes.

Deterring crime is not enough to justify a reduction in privacy.


Hypotheticals: What if your platform is causing a growth in the crime rate? What if your platform is enabling new forms of crime and potentially on an unprecedented scale? Is it justified then?


For the first one it depends on how much growth and in what crimes specifically.

For the second one, maybe, but I seriously doubt icloud encryption is going to do that.

Either way, just talking about baselines and percents is a good improvement over just "this would decrease crime". Add in the downsides too and you have yourself a good platform for discussion!


How about a counterexample? I am a consenting adult in my thirties. I create a photo or video and send it to my partner. The algorithm flags it as CSAM when it only shows a fat bald guy. Before I know it I'm under investigation and my life is ruined because the algorithm got it wrong. Even being accused of this sort of thing is enough to destroy someone and drive them to suicide.


The "algorithm" isn't some sort of neural network trying to intelligently identify things that "look like" CP. It's a perceptual hash matching against a database of known CP. It has to find multiple matches before it flags the account for review to reduce false positives. Only after review confirming a match to known exploitative images is the info referred to NCMEC for action.

Full whitepaper: https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...
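For intuition, here is a toy "average hash", the simplest perceptual-hash scheme. It is not Apple's NeuralHash, just an illustration of how visually similar images map to the same hash, where a cryptographic hash would change completely:

```python
# Toy average-hash: a minimal perceptual hash, for illustration only.
def average_hash(pixels):
    """pixels: flat list of grayscale values -> bit string."""
    avg = sum(pixels) / len(pixels)
    return "".join("1" if p > avg else "0" for p in pixels)

def hamming(h1, h2):
    """Number of differing bits between two hash strings."""
    return sum(a != b for a, b in zip(h1, h2))

original   = [10, 200, 30, 220, 15, 210, 25, 205]
brightened = [p + 5 for p in original]  # uniform brightening, same image visually

# The perceptual hash survives the edit (distance 0); SHA-256 would not.
print(hamming(average_hash(original), average_hash(brightened)))
```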


Who validates said database of "known CP"? How do we know the pictures are actually not "find dissident"? Who watches the watchers?

This entire endeavor hands the keys to an unaccountable police state.


"It is an effective deterrent" to using this one specific platform to distribute CSAM. The problem with this solution is the exact same problem with the tired old "solution" to E2E encryption that gets trotted out every couple of months. If you add monitoring to the tool that criminals are using -- especially, especially if the company loudly and publicly announces that they are adding monitoring! -- you will, at best, catch a few of the very dumbest possible criminals, while the rest move on to one of countless available non-monitored tools.


> anybody who stores this content in iCloud can be identified

Is there any reason to believe anybody does this before the child reaches retirement age?

You are describing the reverse: the case of a targeted search for an individual, in which each step has a probability much less than 100%. The technology discussed is a broad sweep: everything, everywhere, every time.

Oh, well, no need to worry so much, Apple has just added data collection ability which others had in one way or another.


As someone said:

If you really want to put a dent in abuse, mandate cameras in every home, because that is where most abuse happens. Anybody who opposes that clearly has something to hide...

Police can always pinky promise to never use it for anything except catching the baddies.

Yes. For anyone who wonders. This is sarcasm.


Algorithms can absolutely do the police work for crimes, provided they only perform searches that are supported with probable cause.


I don’t think fingerprinting and comparing against a database of cryptographic hashes of CSAM is “illegal searches”.


If the government mandated them, they would absolutely be illegal searches. The government is not entitled to check whether or not I have a document containing certain content (illegal or not) without probable cause and a warrant to suggest that I do have that document (and that it is illegal).

I'm carefully not alleging that this is illegal, because it's a third party doing it and it's not obvious that they are acting as the governments agent. Regardless of whether or not that solves the legal problem of unlawful searches, it does not solve the moral problem that we have a right to be free from unreasonable searches.

Replace "child porn" with "political posters" here. If you would have a problem with that search, I claim that you should have a problem with this search, because the there is no evidence that the person you are searching is committing a crime, and as a result the claim for this to be a morally valid search needs to not be about guilt (which would require probable cause first), but about it not being a prohibited search in the first place.


I'll be cool with it when you can get the algorithm to swear a warrant, testify in court, and navigate probable cause.


Because the price is too high. The same argument is being made to outlaw encryption: it will make police work much easier. Yeah, it sure will, but it will bring a lot of problems as well, and they far outweigh the benefits.


Who's responsible when the algorithm starts targeting minorities and putting innocent people in prison?


Algorithms don't put people in jail, people do. Responsibility lies with the person who irresponsibly followed the output of the algorithm, and with the person who vouched for the algorithm being correct and put it into production.

Don't think of an algorithm as an independent actor, think of it as an awesome tool that makes you 100x more productive.


> Algorithms don't put people in jail, people do.

... and guns and cars don't kill people, murderers do.

But now a company is creating a new kind of "gun" that huge chunks of the security-professional community are warning about, because it is insanely powerful and dangerous.

We have a number of reasons to warn about it:

- China will undoubtedly demand that Apple scan for their hashes too. The databases these hashes are collected from are not public, for obvious reasons, so this means they can slip whatever they want into it. Maybe Apple will check the images before they send them over, but that is not necessarily the case as far as I read it. It also only takes one rogue employee to correctly classify material as not abuse, but make a note of the account and send it back after work.

- This is not your average SHA-256 hash. These are perceptual hashes. They are made to catch not only the exact document but all kinds of variations of it. I am not a specialist in perceptual hashes, but as far as I can understand, it goes without saying that the more resistant the hash is to modifications, the easier it becomes to create innocent images that trigger it.

- Even if we had a magic algorithm that resonated with everything good and only matched the images we wanted, there would still be potential for abuse. When I was younger I browsed through the cache folder of my machine, and I remember there being a lot of images that I can't remember having seen on any site I visited. Now, it is said that this algorithm will only flag images about to be uploaded, so obviously cache folders won't be scanned. But once this tool is in place, why won't governments start applying pressure on Apple to scan everything? And what prevents someone's soon-to-be ex from downloading some and slipping it into iCloud when the phone is left unlocked?

- Due process should sort out many of the problematic cases here, but child abuse is, for good reasons, one of the worst things you can be accused of. A mere accusation is often enough to ruin someone's life, even if it later becomes clear to law enforcement that the person is innocent. Also, most places have a way to go with respect to due process.
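To make the perceptual-hash point above concrete, here is a minimal sketch of one classic perceptual hash (dHash) in pure Python. Apple's NeuralHash is a neural network and works very differently; this toy only illustrates why such hashes tolerate small modifications while cryptographic hashes like SHA-256 do not. The `img` data is made up for the demonstration.

```python
# Minimal perceptual "difference hash" (dHash) sketch over a grayscale
# pixel grid (list of rows). Not Apple's algorithm; purely illustrative.

def dhash(gray: list[list[int]]) -> int:
    """Hash each pixel's brightness relative to its right neighbor."""
    bits = 0
    for row in gray:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits; a small distance means 'same' image."""
    return bin(a ^ b).count("1")

# A tiny 4x5 "image" and a uniformly brightened copy: SHA-256 of the raw
# bytes would differ completely, but the dhash is identical because the
# relative ordering of neighboring pixels is unchanged.
img = [[10, 20, 15, 30, 25],
       [40, 35, 50, 45, 60],
       [ 5, 15, 10, 25, 20],
       [70, 65, 80, 75, 90]]
brighter = [[p + 3 for p in row] for row in img]
```

The flip side, as the bullet above notes, is that the same tolerance makes it easier to craft an innocent-looking image whose hash lands close to a blacklisted one.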


Okay, but what about the intense war going on right now over YouTube content creators getting banned or having content struck? Or all the wrongful bans on platforms like Facebook or Twitter?


The blame for that falls squarely on the

a) Executives at Alphabet, who are ultimately in charge of the company's decisions

b) The individual product managers, lawyers, engineers, and whoever else is involved in creating, maintaining, and running the system that bans people inappropriately.

The algorithm they are using is a tool that they are using to do this, using a tool does not absolve them of the responsibility in any way.


I'm still mostly thinking about an awesome tool that will target 100x more minority people and send 100x more innocent people to jail.


They could, and their first step would be to get a warrant.


There is a scale where that doesn’t work anymore.

If one child abuser can sell his contents to thousands via nothing but WhatsApp, word of mouth, and a Bitcoin wallet, what will you do to fight that?

I’m all for freedom of speech and freedom from fishing expeditions and what not, but if you don’t deter these things they will grow.


If (s)he sells to thousands via WhatsApp and word-of-mouth good old police work will have him/her bagged shortly.

WhatsApp's only claim to privacy is end-to-end encryption of messages. The moment a police officer opens the phone of a buyer, they can identify the seller. Same if an undercover cop hears the word of mouth and buys.

This is just ordinary good police work that I whole-heartedly support.


Find the websites distributing it, infiltrate them, generally do the legwork to find what's going on - which is exactly what they've been doing.

"But the children!" is not a skeleton key for privacy, as far as I'm concerned.

I reject on-device scanning for anything in terms of personal content as a thing that should be done, so, no, I don't have a suggested way to securely accomplish privacy invasions of this nature.

I'm aware that they claim it will only be applied to iCloud based uploads, but I'm also aware that those limits rarely stand the test of governments with gag orders behind them, so if Apple is willing to deploy this functionality, I have to assume that, at some point, it will be used to scan all images on a device, against an ever growing database of "known badness" that cannot be evaluated to find out what's actually in it.

If there existed some way to independently have the database of hashes audited for what was in it, which is a nasty set of problems for images that are illegal to store, and to verify that the database on device only contained things in the canonical store, I might object slightly less, but... even then, the concept of scanning things on my private, encrypted device to identify badness is still incredibly objectionable.

In the battle between privacy and "We can catch all the criminals if we just know more," the government has been collecting huge amounts of data endlessly (see Snowden leaks for details), and yet hasn't proved that this is useful to prevent crimes. Given that, I am absolutely opposed to giving them more data to work with.

I would rather have 10 criminals go free than one innocent person go to prison, and I trust black box algorithms with that as far as I can throw the building they were written in.


>on the lower end of the privacy-violating spectrum

I think that's the thing, privacy minded people want solutions that aren't on the spectrum at all.

I can come up with some solutions for greatly reducing CSA that are really low on violating parental rights but they have little chance of gaining support because the only solutions people want are the ones that don't violate parental rights at all (except when parental rights have been suspended due to evidence of CSA being found).


Yeah. They can scan photos that are actually passed around, instead of something that's just in a photo library. That will likely result in 2-3 orders of magnitude fewer false positives.


That's exactly what they're doing. And they're requiring multiple matches before an account is flagged, further reducing false positives. They claim they've set the threshold to achieve a ~1 in 1 trillion probability of an account being improperly flagged.


False, they are scanning all photos in your library, regardless of whether or not you send them on to anyone else.

1 in 1 trillion is not a particularly impressive number, considering that users have tens of thousands of photos in their library, and there are a billion active iPhones.

That's if you do believe their claimed false-positive rate -- which I don't (yet), because they have published absolutely nothing about how neuralMatch actually works. They have an interesting body of work around how suspected matches are encrypted so Apple can't see them until certain conditions are met, but nothing about how they identify suspect photos.
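The scale argument is easy to check with back-of-the-envelope arithmetic. All concrete inputs below (per-image rate `p`, threshold `t`, library size `n`) are hypothetical, since Apple has published neither the per-image false-positive rate nor the match threshold; only the 1-in-1-trillion per-account claim and the ~1 billion device count come from the discussion above.

```python
from math import comb

def p_flagged(n: int, p: float, t: int) -> float:
    """P(at least t of n photos falsely match), simple binomial model
    assuming independent per-image false positives at rate p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t, n + 1))

# Fleet-level expectation under the claimed per-account rate:
accounts = 1_000_000_000              # ~1 billion active iPhones (rough)
expected_flagged = accounts * 1e-12   # ~0.001 falsely flagged accounts
```

The point of the multi-match threshold is visible in the model: requiring several independent matches drives the account-level probability down far faster than any single-image rate, which is how a mediocre per-image classifier can still plausibly yield a tiny per-account figure.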


That is a good point.


No, the lower-end is infiltrating CSAM distribution networks the old-fashioned way.


Boosting the amount of influence schools have in childrens’ lives is something we could do better at.

More teachers, smaller classes, stronger relationships, better discipline, more love. I see too many teachers where I work who just don’t care any more.

Educating school leaders to educate their teachers to help children root out the family member abusing them would be a far neater solution (and I really don't mean to be inflammatory about this) than catching villains based on photo evidence of abuse that has become systematic enough to be shared online.

It is very expensive and time consuming though. Teachers are hard to recruit (low pay, low quality job) which exacerbates the problem which makes it even harder to recruit them.


Not OP, but a warrant, upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

In other words, you don't get to yell "think of the children" and ransack people's homes. If you don't have specific evidence suggesting that I am a criminal, you assume that I am not _and_ you let me be.


How about not doing it.

The "privacy" of on-device scanning is meaningless when you upload to iCloud, where they will send it to law enforcement and an unaccountable nonprofit as soon as it matches something.


Yes. Find the ones producing it and use proper infiltration tactics.

Empower children to report abusers and protect them when they do.


Anything which

- directly prevents actual abuse of children in the physical world (obviously)

- fights the sale of images created through abuse (as this incentivizes abuse).

Those are the main underlying reasons to fight CSAM, besides the obvious horror of the material itself.

Please also consider the second-order effects of an increasingly totalitarian society on the wellbeing of children.


Them having to get a warrant or some form of fair adjudication instead of a carte blanche of privacy invasion?


Don't spy on my device. I literally don't care if that doesn't help some pet cause.


Sorry, but they're not "spying on your device". If you choose to upload your photos to their cloud service (iCloud Photo Library), they'll use perceptual hashing and private set intersection tech to check against a database of known exploitative child imagery to make sure you aren't uploading that to their service. That's it.


You just described “spying” with more words.


Child pornography is the boogyman used to compromise privacy.

There is significantly more illegal material of greater concern, such as communications plotting terrorist attacks, or companies ignoring safety regulations and causing many deaths by doing so. None of that is scanned for either on the theory that it might exist, rather than there being due cause for a search warrant or some similar mechanism and burden.

But child pornography is the one thing Anglo-Saxon culture is notoriously emotional about, and willing to surrender all its freedom and privacy for. For a while terrorism counted among this too, but that seems to have fallen off.


Lawsuit incoming? How is this possible, when it seems to go 180° against their previously putting a pretty strong privacy guarantee in the EULA?

Will they announce an EULA change reversing the guarantee, and freeze out anybody who hasn't clicked it through?

I thought Apple was one of a kind company brave enough to put such a guarantee in writing.


Based on the press release, I don't think you're right. It doesn't appear to be all photos, just ones uploaded to iCloud.

It looks like it only triggers when you go to upload a photo to iCloud. In that case, it (maybe?) scans only that photo on your device and then decides if it's CSAM.

I guess Apple really really doesn't want CSAM on their servers? I don't know why they wouldn't just perform the scanning on their own server otherwise.


Apple’s press release is a lie.

iCloud stuff already isn’t E2E and they already scan.

This system is built exactly to surveil the entire contents of your mobile device, cloud enabled or not.

Apple has burned years of social capital in 2 days.


If the software is scanning everything, and they were already scanning uploaded content, why have a press release at all? If you're deploying an unrestricted panopticon to all your devices, why on earth would you go to the trouble of announcing it in the first place?


Ah yes, the "slippery slope" argument, where evidence is never required, pessimism always wins over realism, and any counter-argument can be dismissed with "but they COULD!"


I remember when they put a fucking U2 album on everyone's phone. Remember that? Many found that suspicious, but I recall people like you saying "oh Apple can write, who cares, but they'd never read, you're being ridiculous".

I remember when an Apple auth server went down and no one could launch non-Apple applications because Apple needed to see the hashes of the things people ran on their computers.[0]

When will the slope be inclined enough for you?!

[0] https://mobile.twitter.com/llanga/status/1326989724704268289


The U2 album was a marketing tactic gone horribly wrong. But the same basic tactic was used to automatically upgrade people's movies to 4K versions. It's just assigning a right to an asset by a given account. It was not an invasion of privacy or some new technical advance that gave Apple some advanced privacy invading features.

Apple said "Hey everyone owns this U2 album now". And if your phone had automatic downloads turned on it downloaded the album.

The idea that even belongs on the slope is … you're not a serious person and you're not making a serious argument.


>Apple said "Hey everyone owns this U2 album now". And if your phone had automatic downloads turned on it downloaded the album.

It wasn't that simple. The usual functionality of slide-to-delete didn't work at all, because of iCloud, or something. A week after they foisted it on everybody, they decided to implement a dedicated page to allow frustrated users to delete it, because of how different it was from a mere purchase authorization from the iTunes Store. [0]

"The slope", in this case, as far as I understand the root-level ancestor comment, refers to the idea that Apple is increasingly paternalistic about its powers over the device you ostensibly own:

>>>>"We're going to scan your photos, on your encrypted device, to look for badness. Right now, we're going to claim that's only for the really icky people that nobody is going to defend, but, hey, once the tech is in place, who's to say we can't scan for dank memes and stuff?"[1]

I think bringing up the U2 album is entirely within scope re paternalistic overreach. I assume your arbitrary goalpost movement to "invasion of privacy or some new technical advance" is considered by you to be indicative of your personal and argumentative seriousness?

[0] https://www.eonline.com/news/1071481/remember-when-apple-for...

[1] https://news.ycombinator.com/item?id=28078435


It was actually that simple.

Apple gave everyone the album. It just assigned that album to everyone's library. It could be deleted from the device, but it stayed attached to your account. So it showed up in your library when you searched even if you didn't have it downloaded, because that's how Apple's library worked.

I'm not sure how that's "paternalistic" in any sense of the word.

I didn't move the goalpost. This article, and commentary, is specifically about how Apple is adding technology that can be used to invade our privacy or be abused by state actors.

A bad marketing tactic from seven years ago that amounted to a free gift no one wanted is an absolutely ridiculous example.


The OCSP example is stupid and people should stop using it. Apple was following an RFC written specifically because PKI sucks.


I'm sure I'll be revealing my ignorance here, but what does revoking certs have to do with Apple needing to know who's running youtube-dl or tor or anything else? Was the revocation of something somehow contingent on the specific application being run, or vice versa?


Apple requires signing of code that's going to run on MacOS. You can disable this, but it is a nice little security feature that allows Apple to quickly react if a virus starts spreading on their devices. They revoke that certificate and OCSP (which is a standard way of checking for revocation) blocks that software from running.

Apple's design was fail closed, so if OCSP is down, assume the application has had its key revoked.

Unfortunately that's just how OCSP is, your browser (if you're using Firefox) does this with CAs. The unfortunate thing is, due to the nature of desktop applications, OCSP stapling doesn't really work when you're not the one serving content.
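The fail-closed behavior described above can be sketched as a simple decision function. The names and structure here are illustrative only, not Apple's actual trustd/Gatekeeper implementation; the point is just the difference between failing open and failing closed when the OCSP responder cannot be reached.

```python
from enum import Enum

class OCSPStatus(Enum):
    GOOD = "good"
    REVOKED = "revoked"
    UNREACHABLE = "unreachable"   # responder down or network blocked

def may_launch(status: OCSPStatus, fail_closed: bool) -> bool:
    """Decide whether a signed app may run, given the OCSP check result."""
    if status is OCSPStatus.GOOD:
        return True
    if status is OCSPStatus.REVOKED:
        return False
    # Responder unreachable: a fail-closed policy blocks everything
    # (the outage behavior described above); fail-open lets apps run
    # and accepts the risk of missing a recent revocation.
    return not fail_closed
```

Browsers generally chose fail-open for exactly this reason: a single unreachable responder otherwise becomes a kill switch for everything it vouches for.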


>Unfortunately that's just how OCSP is

Uh, sure, fine. But Apple decided they needed to see the hashes of the things people ran on their computers, to possibly block execution if they decide that's necessary, and they don't have to. I'm kinda shocked that you're framing this as if it's innocent, and therefore a bad example of Apple's increasingly paternalistic control. I don't particularly care what sort of RFC they're following or which alternative implementation sucks or which fig-leaf covers their true intent; Apple chose to have the ability to see what people are running in real time. If you're a regular of this forum, and can't immediately imagine how this sort of information might be used to harm users, rather than help them, now or in the future, I don't know what to tell you.

I also don't quite understand how this even helps with the stated goal of virus-corralling. Does the hypothetical virus that they're trying to guard us against change an executable? If so, then the hash is immediately different, but presumably no longer matches its signed checksum, and so could be rejected at the OS level without needing the whole 'real-time seeing what people run' aspect. Does the hypothetical virus run independently? How could it, given the prohibition against non-signed code? I guess the idea is 'prevent a once-legit app from pushing a malicious update and turning several nations worth of Macs into a botnet', style of thing?


Apple doesn't individually review every application that gets signed. The idea is that all applications get signed, but if Apple notices something, it can revoke the certificate and effectively make the application unusable.

https://en.wikipedia.org/wiki/Online_Certificate_Status_Prot...

They're just following basic PKI. Again, Firefox does this with HTTPS certs. The CA knows who went to what website.


>Apple doesn't go and check every application thats being signed basically

...uh, then how did the failure of the auth server mean that nothing (except Apple apps) could run? My understanding was that the auth server checked the hash of every application that was being run, and the absence of that auth server meant nothing could run.

Moreover, I don't care about the fact that they're just using plain ol' certs, just like Firefox. They could be using screen recording software and Mechanical Turk to decide whether users can execute some third-party software -- the point is that they are deciding whether users can execute some third-party software. The technical implementation is unimportant, it's a bad thing.


Yes, Apple chose to add technology to its system which identifies application developers and allows them to tell your machine that specific developers or applications cannot be trusted.

You can turn that feature off.


Sure, I can. You can, too! But most people can't...


They never should have picked that RFC.

Code revocation checking should be local.


Exactly. Reaching out over the network like that is an invitation to, if not a revelation of, malfeasance of some kind. Apple intentionally chose to implement this "feature" in a suspect way, and specious comparisons of an OS and a browser [0] don't strike me as justification, or even explanation. Maybe it's just me, but I like my machine to stay as mine as possible.

[0] https://news.ycombinator.com/item?id=28080107


Yes one way is to do CRLs and that has a lot of problems too.


The "slippery slope" argument is not a fallacy. If you were in 2001 and told people what the state of data collection and privacy would look like in 20 years, they'd call you an absolute conspiracy nutjob. It was a national news story then that a printer driver phoned home. Now look at where we are at, and every loss of privacy has had some seemingly reasonable excuse.


It's absolutely a fallacy according to any course on logic because it is impossible to refute.

https://www.txstate.edu/philosophy/resources/fallacy-definit...


For the narrow formal definition in your link, that being an assertion without evidence, then yes, it's a fallacy.

In the colloquial language we are using here, there is absolutely evidence, tons of it, that this will take the same downward trend of previous infractions.


Slippery slope has almost always meant a fallacy, because it typically invokes "the sky is falling" examples that have no evidence to support it coming to pass - even here.


While I see the point, "slippery slope is a fallacy" seems to have been abused far more than "slippery slope" on HN over the decade I've hung around here.


That's because nobody makes an argument by starting with "I'm making a slippery slope argument" - it's always identified by someone else. Same with whataboutism, no true scotsman, etc. by their nature.


No, it is because people invoke slippery slope fallacy for actual, real slippery slopes.

The slippery slope fallacy is real - but so are the slippery slopes.


Are you under the impression that people who teach logic are unaware of real-world evidence that "things change and some of those end up being bad"?


No, I am under the impression that in this thread we are using the term "slippery slope" to mean something else that the one you provided in the link.

Also, your characterization of "things change and some of those end up being bad" does not match the nature or specificity of the evidence in this case.

Lastly, perhaps you should remain consistent and not utilize the "appeal to authority" fallacy.


A slippery slope is a logical fallacy. There's not a colloquial version of that term that means something different. It's a style of argument where you assert that one action will lead to another action without any evidence of a direct causal link between the two.

Second, I'm not appealing to authority. You said a slippery slope isn't a fallacy. I provided a link which makes the case as to why a slippery slope is considered a fallacy.

We are not using "slippery slope" in any way which is different from the examples given.

That one thing has led to a bad thing in the past is not evidence that this thing will lead to this specific bad thing in the future.

Yes, sometimes, slippery slopes end up being slippery, and sloped.

But that doesn't mean anyone has to take it as a given that any other specific slope will be slippery, nor slippery in some specific way which leads to some specific outcome.



Do you trust your government to not abuse "But they could?" types of capabilities?

I don't. I've seen what the FBI claims, that they then magically find ways around the "impenetrable encryption" they claim they can't break (but then do), and I don't trust them to not require Apple to add hashes of "things they find problematic," and include a gag order with it so Apple can't report it.

I've worked computer security long enough to know that it's always worse than advertised, always will be abused, etc.


If you've worked in computer security long enough, you know that Apple could already have added a backdoor into every build of iOS that allows the government to read any material on your device they chose to read.

My contention isn't that I trust my government, but instead that this specific technology is no more threatening than the technology already present on my phone and the trust I've already placed in the vendor.


> If you've worked in computer security long enough, you know that Apple could already have added a backdoor into every build of iOS that allows the government to read any material on your device they chose to read.

Yes. They could. However, getting caught doing that, from Apple, would be an absolute disaster for them, and they've both made the right statements and made the right actions in terms of things like physical device security (moving more and more into the secure enclave to protect against demonstrated attacks).

It's entirely possible they've done this for specific cases, with a gag order, and nobody knows. However, it cannot be widespread, or it would end up known.

This, however, is adding a blackbox image matching algorithm (we know it's not matching image file data based on statements about how it's robust against cropping/rotation/resizing/etc), and a blackbox database of "Stuff claimed to be Really Bad." There is no way to audit the database and see what's in it, because of the one way nature of hash functions, and there's almost certainly not a pile of the source material laying around for audit reasons (because that would be a really nasty legal problem). So you've got a "Trust us, this stuff is bad" database, and a "Trust us, it matches images accurately" hash function.

And, if those find enough matches (also unspecified), you get yourself reported directly to the various authorities.

It's guilt by algorithm, across every single Apple device (running the latest OS, etc... handwave as needed).

That's very, very different from a customized backdoor deployed to a small number of phones.


Dumb question: do we know from what they've released publicly if it will be possible for security researchers to snag a copy of the database, perform the same perceptual hash algorithm on a given image, and determine if there's a "hit", without violating some kind of license term?

Perhaps the community could run a crowdsourced "keep them honest" service web service -- upload the latest illegal-in-China Winnie-the-Pooh meme, oh hey look at that, it's in the China-only version of the database, isn't that weird, etc etc. (Obviously you wouldn't want people "testing" images that are in the database for the actual stated purpose...)


Yeah, image files.

The scanner that can be fooled by embedding the images into some document is going to get useless fast. Therefore, PDFs and office formats have to be checked, too. Well, and archives, without doubt, should be unpacked on-the-fly. What if someone embeds the base64-encoded image into HTML? And so on, and so on.

You can't just look at media files, you naturally need to scan everything, and the working thing is likely to be antivirus-like, with many file format filters, heuristics, and extended description language.

Do you have the Collateral Murder video? Abracadabra, and now you don't have it.


So, your argument is that Apple couldn't have implemented a wide-scale backdoor because people would know about it but Apple could use this new technology in a nefarious way at scale and no one would know about it.


Just the other day in the Netherlands it came out that cameras placed with the strictest promise that they would only record license plates and only trigger when it matched with a known license plate now also capture persons in the car. And before that the systems were changed to store pictures for 28 days in order to look back. It also came out that the deletion doesn’t always work and that the obfuscation that is supposed to happen also doesn’t always happen.

The reaction of the police seems to be “why do you care about this? We have the data let’s use it. And we could use it to find those ~killers~ these people that allegedly just killed the most popular crime news reporter. So let us do our work.”

Just like software expands to eat cpu/memory/disk resources and people start living up to their salaries: whatever is possible will happen.

To me the slippery slope argument is perfectly valid.


I look forward to the day that marrying your hamster is legal.


You're right in one sense. In another, isn't this exactly what people said about intelligence agencies ("they could but they don't") and then... Snowden.


[flagged]


That he demonstrated that those agencies were actually doing the things people said they would do, while being defended by people who used arguments quite like gp comment.

I used the word "Snowden" not to refer to him, but to the event. Why? Because I didn't want to type the three letter keyword into my comment. Which is an example of the chilling effect these activities cause.


You're really scared of typing NSA?


I think we've learned from experience that tech companies will happily exploit anything they’re capable of exploiting, and this new capability has plenty of potential for exploitation.

Searching everyone's media for evidence of criminal wrongdoing, while citing only one example of the kind of wrongdoing you're looking for, is very susceptible to an actual slippery slope, given that there are plenty of other criminal activities they could start looking for should they decide that it's part of their mission.

The only thing that makes CSAM so attractive to go after is how disgusting society feels that it is. Up next could be reporting drug-related texts of known convicts to parole officers. Next could be drug-related texts of everyone to police officers. Next could be letting the police officers make their own searches, where they find everyone talking about a certain kind of political organizing.

We should refuse to accept even the first instance of this kind of thing.


Prior to this change, they couldn't. Now they could.

It's much easier to change the policy than to add a new mechanism. Governments, including the US government have previously attempted to force Apple to break into users' devices, and often apply gag orders to such attempts.

Now they could, and that's enough for me to be unwilling to keep sensitive information on an iOS device.


You do not seem to understand what was actually implemented.


It is my understanding that Apple implemented a program that generates a hash of a file and compares it to a blacklist, and notifies Apple in the event of a match. It's not clear, but it appears this is only run when uploading to iCloud.

The blacklist itself is not maintained by Apple, but by the US government or a third party like NCMEC, which means Apple can't be sure content that isn't child abuse imagery hasn't made it onto the list. Perceptual hashes probably can't be abused to target non-image/video content because they're an inherently image-oriented technology.

Apple could, however, cause such a program to match on different criteria with a simple update, and such a change would likely be difficult to detect. Most of us assume Apple wouldn't voluntarily do such a thing, but it's very probable that they would do it involuntarily. The US government has already attempted to compel Apple to create a tool to compromise the security of an iPhone, and might have eventually succeeded in court if they hadn't gained access by other means. That fight took place in public, but the next one might well take place in secret.


In what way does the OP not understand what was implemented? This gives apple the ability to monitor any file on your device. Not only that, but all we have is the government's pinky promise that the database contains only CSAM. There is no verification. How do we know that it isn't going to end up filled with gay pornography involving consenting adults when the "feature" is rolled out in Saudi Arabia?


How do you know that the image classifier on your iPhone, that exists today, and detects faces, isn't also detecting whether the photo has a gun in it?


We don't know, but a photo of a gun isn't going to break your marriage, make you lose your job, make your friends hate you and get you thrown into jail for weeks.

Someone finding a way to plant this kind of material on your phone does that.

Besides, the fact that something that can be misused exist isn't a very good defense for creating something else that widens the opportunity for abuse significantly.


Slippery slope? What they already announced is abhorrent. Scanning all of our photos against our wishes?

Who cares what it's for? The fact that they are even doing this at all is a huge infringement to their customer's autonomy.


You're right, but isn't the rapid expansion of censorship and surveillance the past year evidence that we are currently slipping down the slope?


It is a slippery slope argument, but wouldn't you agree that it's been proven by the past decade of events (e.g. Snowden) that we can expect governments to try (and succeed) to abuse originally well intentioned technologies to spy on anyone?


Considering that Chinese iPhone users already do not enjoy the same privacy features as non-Chinese iPhone users, I would not be at all surprised if Chinese iPhones are going to have a lot more "child porn" hashes to check against.


All data in China is already stored on servers that the CCP has access to. Why would they need to invoke a convoluted hash-match scheme when they can just perform broad checks on the data they can access?


Yep. Good point.

But this will further open possibilities for abuse not only in China but everywhere else as well.


Edward Snowden's revelations provide very solid evidence that what can be done is being done.


Slippery slope is a shockingly predictive heuristic.


It very, very rarely is.


If anything, the last decade has been a huge slippery slope.

If you haven't noticed, the joke's on you.


Well, there's not really any slope here. Apple plans to have a target database of BAD FILES!!1! that will get your name handed to the government.

It's not subtle. No expansion to the program is necessary to violate your privacy or endanger dissidents.



