Many of my oppressive country's laws are introduced under the pretext of "saving the children". For example, public discussion of homosexuality is essentially banned, because otherwise underdeveloped minors might get involuntarily exposed to it (and supposedly be psychologically traumatized). Another law allows banning websites that talk about drugs, LGBT issues, opposition protests etc. without a court order, to save children from being exposed to those traumatizing things, of course (now it's used to ban opposition sites). And it's hard to argue against it, because you get pushed back with "what, you hate kids? you don't want them to be safe?" It's a clever, ugly trick: most adults are parents, their parental instincts kick in when they hear about all that abuse, and they will support any cause that promises to make their kids safer.
I'm not saying Apple is definitively involved in some shady stuff, but from my perspective, it does look like NSA forced them to do some sort of file scanning backdoor and they came up with this "it's about saving the children" explanation, already successfully in use in oppressive countries.
On the flip side, "think of the children" does not necessarily mean the argument is without merit. There are countless debates in which it is a valid point. For example, climate change is a big "think of the children" issue because the impact will be felt more acutely by children than by the people who are currently in power. I think some people have become too cynical, and on seeing a "think of the children" argument they reflexively take the opposite stance in order to be contrarian. It is basically the fallacy fallacy[1] that you see all over the place, especially in debates on the internet.
> On the flip side, "think of the children" does not necessarily mean the argument is without merit. There are countless debates in which it is a valid point. For example, climate change is a big "think of the children" issue because the impact will be felt more acutely by children than by the people who are currently in power.
It's funny you wrote this because you actually committed a straw man fallacy in your line of reasoning above. An argument of "Think of the children" generally means restricting something from dissemination, discussion, or presentation because its availability, or the process by which it is made available, would be a negative for children as they aren't fully grown and responsible adults. Restricting porn, or discussion of drugs or sexuality, or child porn laws all fit this. It does not literally mean doing anything positive because it will benefit a child.
Saying that any argument that benefits children is a "think of the children" argument is a strawman fallacy. [1]
> An argument of "Think of the children" generally means restricting something from dissemination, discussion, or presentation because its availability, or the process by which it is made available, would be a negative for children as they aren't fully grown and responsible adults. Restricting porn, or discussion of drugs or sexuality, or child porn laws all fit this. It does not literally mean doing anything positive because it will benefit a child.
Says who? And the argument in question is actually more "if we don't do this, there might not be a liveable earth for our descendants" - you don't have to agree, but from their point of view it's far from "doing anything positive because it will benefit a child".
We do lots of things with the state "to protect children". With one MAJOR exception: actually taking decent care of children dependent on the state. Which, to me, really calls into question all these efforts. There are any number of efforts attempting to "catch" various problem situations for kids, only to throw the kids found into state care, which is horribly bad.
And whilst I understand that some home situations are indeed beyond horrible and kids need rescuing from them, if you actually go look at the stories of kids in state care, actual abuse is a pretty rare reason for placement. Actual abuse is much more commonly found in youth prisons than in child services institutions. Poverty, mental problems, and substance abuse by parents at home are the biggest problems (yes, each bigger than the stereotypical criminal reasons; and even the relatively small group of criminal kids outnumbers actually physically abused kids 10-to-1. Kids abandoned by their parents, also a tiny group, far outnumber actually abused kids).
It seems to me the whole "protecting kids" thing is just a thousand major efforts to catch people in bad situations ... and punish them (situations often not their fault, especially poverty, or at least not fully their fault). Then, because the kids involved are an annoying detail left behind when the parents are punished, they are ... well, first of all, deeply hidden, far from view. Needless to say, once hidden they are not taken care of very well at all.
This is done so that those thousand efforts, and the police and child services themselves, can continue focusing on the front end and catching yet-another-new-kind-of-child-abuse to satisfy various interest groups. From the LGBTQ lobby (fortunately these days pushing in the right direction: making sure kids are allowed to be LGBTQ, rather than, as little as 10 years ago, punishing them for it), to saving money and/or trouble in schools (such as expensive adaptation of school buildings for disabilities), to helping courts enforce divorce settlements (when parents refuse to abide by visitation rights, for instance).
But even with very serious child abuse you have to wonder: the child protection system is saturated with violence, drugs and even prostitution. That means the system is not capable of protecting children from violence, drugs or prostitution. But nobody seems to care about that.
It seems clear that if a rule were introduced that the child's situation must improve as a result of child protection, or child protection must leave the child alone after 6 months or so, the system would shrink by 90%+ in short order. Such a rule would have the enormous benefit of forcing child protection efforts to improve children's situations directly, rather than finding problems and throwing the kids into an abyss.
So no, these efforts, from CSAM to ..., are just there to punish people for not abiding by mostly cultural standards. They are not about protecting kids at all. The most generous thing you can say is that they sometimes make it difficult to profit from abusing kids.
Not commenting on any specific emerging technical measures, I believe in general detecting & reporting CSAM possession, while not without side-effects, is probably one of the very few ways to reduce child abuse (including that at home) without large-scale intrusive monitoring of family affairs. (Other ways could be commercials encouraging abused children to complain, presumably of limited effectiveness.)
That said, addressing the underlying causes of abuse (which could probably only be done very indirectly, such as by improving general well-being and working on mental health issues among the population) as the proper long-term solution shouldn’t be ignored.
I'm trying to make the point that the state itself abuses "protected" kids. The living conditions of "protected" kids are horrible, abusive and destructive. Watch a movie like:
Conditions for kids in state care are little better than a trailer camp. Education in "juvie" is often better than that offered to victims of child abuse (which, yes, causes some kids to commit a serious crime to get some education. Why? It's the only way they can guarantee a placement, or a new placement (because some places in youth services are horrible, EVEN by the very low standards of youth services), or the opposite: guarantee a stop to constant changes in placement). The kids are kept there by force regardless of whether it's juvie or not (which is just one more reason violence is a constant in the life of these kids).
I find it absurd to think that a government which abuses kids in its care has the slightest interest in using CSAM detection to protect kids. I don't want to say this for all cases, but for the vast majority of abuse, a kid is better off abused at home. Not because that's good for them, but because youth services and state abuse are far worse than the worst they face at home.
Detection of CSAM forcibly brings more kids into this system and, frankly, it's almost always better to leave them in an abusive family (esp. because most cases are about possession of CSAM, not about abuse of their own kids. Is this ideal? Of course not! Is it better than youth services? Absolutely)
Without an extreme rise in the minimum standards of care for kids in state care, I am entirely opposed to any and all attempts to detect child abuse. It makes things far worse for more kids instead of better.
And, frankly, I question the intent of people doing the detection. Because every effort just comes down to being one more "trap" for these kids to fall into. Do X (or have it done to you), and you get thrown into the hell that is youth services.
You want to help kids? Care well for specific kids. Adopt them. Make some kids yourself with a partner. DO NOT interfere in the lives of other children unless you're willing to make the full commitment: have the kid living with you, whatever stunts they pull, whatever beliefs they have, whether they accept you or not, for AT LEAST 10 years (ie. until 25 or so).
If you're not willing to do that, there is no form of help those kids need or want from you. Go and help kids with homework in your local school. Help out at a hospital. Whatever.
Those are different kinds of children. In the case of climate change it means the next generation of adults.
In terms of cyber security it means actual children, and is simply used to trigger protective instincts and make counter-arguments seem cruel and suspicious.
Especially because child porn isn't the initial but the follow-up crime. The initial one is the actual abuse, which isn't prevented.
>Those are different kinds of children. In the case of climate change it means the next generation of adults. In terms of cyber security it means actual children, and is simply used to trigger protective instincts and make counter-arguments seem cruel and suspicious.
"Actual children" eventually become "the next generation of adults" so I'm not sure the point you are trying to make as you seem to be implying tomorrow's adults are more important than today's kids. Either way, society would be better if future adults weren't abused as kids.
>Especially because child porn isn't the initial but the follow-up crime. The initial one is the actual abuse, which isn't prevented.
You are assuming that all the abuse would still occur and none of it is encouraged by the demand for that content. I don't know why we should make that assumption. What percentage of traditional porn would you guess is documenting sex that would be happening if there was no demand for that content?
> Especially because child porn isn't the initial but the follow-up crime. The initial one is the actual abuse, which isn't prevented.
>> You are assuming that all the abuse would still occur and none of it is encouraged by the demand for that content. I don't know why we should make that assumption.
There's near universal agreement that child abuse is wrong and should be stopped. Profiting off of child abuse is even more wrong. But how much CSAM detected on phones is going to be because the user engaged in commerce with CSAM?
I think you'll have a few different cohorts caught by this process, but almost none of them will be producers/resellers/purchasers of CSAM content.
I'd bet most of the CSAM detected will be from the occasional legal case that stems from teenagers sending explicit content to each other. Once that content is registered, any classmates who downloaded photos from group texts and backed them up to iCloud without thinking about it will be raided and prosecuted.
Another group will be people who had CSAM planted maliciously, because this creates an easy vector for swatting if you can get photos onto someone's device. Not to mention the adversarial attacks on the hashing network that have been on HN in the last few days.
Also, because we can't inspect the CSAM database, it's quite likely that some percentage of what people thought was adult porn was actually underage. I see nude photos shared on forums all the time. I'm sure some people save those photos to their devices, and it's unknowable if the busty teen is just an 18 year old who felt like sharing or revenge-porn content from a 17 year old's CSAM case. Surely some porn addicts who have been saving every image they found attractive for years will get caught up in this.
Even the group ostensibly being targeted by this technology is unlikely to help prevent actual child abuse. For example, someone might join a porn Telegram channel, be exposed to CSAM with or without realizing that's what it is, and save it to their device. People caught in this group probably do need some kind of mental help, which the criminal system will not provide. Piracy may create some demand, but it's still pretty far removed from the abuse. Out of this group, an even smaller fraction may have actually paid for CSAM material, which is the first group where you'd find unanimous support for law enforcement action.
The people we all want to see locked up, those who are actually abusing children or those profiting from/funding the abuse of children, are unlikely to be caught in this type of dragnet.
I'd imagine that when Apple turns this feature on, it will immediately catch thousands of people in the dragnet, and almost none of them will be involved in the production or sale of CSAM.
> I'm not saying Apple is definitively involved in some shady stuff, but from my perspective, it does look like NSA forced them to do some sort of file scanning backdoor and they came up with this "it's about saving the children" explanation, already successfully in use in oppressive countries.
It does seem that way because it's just so hard to draw a line from "apple scans all your images to match previously known child abuse" to "child abuse is prevented".
I think the absolute best outcome we can hope for here is that some people who purchased CSAM, essentially low-level users, will be discovered and prosecuted. Maybe that's a good thing, and perhaps it will remove thousands or even tens of thousands of dollars from the underground economy. But it hardly seems worth the cost to privacy and security if we never stop any actual child abuse.
The famous porn actress Traci Lords was 13 when she got started in the business. She has talked about what happened back then, and the fact that well known actors like John Holmes knew about it and didn’t care. They just gave her more drugs to keep her quiet while they did their business.
So, anyone with those videos or explicit pictures of her that were taken before she officially turned 18, would be guilty of having kiddie porn on their computer, or otherwise on their premises.
It’s gotten better since then, but the industry is still pretty seriously messed up.
Where the crime becomes even greater is when the "ignorance is no excuse" policy is used to harass and even convict innocent people for crimes committed by other people that they could not possibly have known about.
It would be easier to take that seriously if other predictable consequences the next generation will have to deal with were given a tiny fraction of that deference. Like a back-breaking national debt, the razing of civil liberties, ever-diminishing prospects of a quality of life comparable to their progenitors, etc.
Technically we stopped global cooling by successfully fighting acid rain. Sulphur emissions have a cooling effect. Eventually CO2 would've overtaken that effect anyway.
That is the point. Funny how these things tend to call for the same cure. At least for that one we got a movie narrated by Spock, detailing how we were all going to freeze to death.
So which of those words describes the dissemination of propaganda, "pursuing"? Also, how was the falsifiable anthropogenic global cooling hypothesis disproven? Careful, you don't want to accidentally disprove global warming or draw comparisons to inconsistently applied standards of proof with that answer.
I don't think anyone is saying that it won't benefit children (at least not initially), the point is that the cost of doing so is high and it could easily lead to further costs which don't benefit children at all, perhaps even harms children.
A good example is the argument for mass surveillance. Sure, if you know what everyone is up to at all times, you're probably going to be better at catching the baddies, but the loss in rights to everyone innocent is arguably unfair. To take it to its extremes, if your goal is simply to put as many bad people in jail as possible, put everyone in jail.
> For example, climate change is a big "think of the children" issue because the impact will be felt more acutely by children than by the people who are currently in power.
So they think.
Many people currently in power will still be alive in 20 or 30 years and recent events have shown us how quickly the shit is coming at us…
Child abuse, terrorism, money laundering and tax evasion.
These are any government’s four horsemen of the apocalypse. According to them, they are roughly the same degree of evil and all deserve the strictest prosecution.
If this is true, why is it so hard for the US to increase enforcement funding for the IRS while purported anti-sex abuse laws like SESTA/FOSTA are passed with broad bipartisan support?
One additional issue that I haven't really seen discussed is how to handle a situation when a false accusation is made.
If a person knows the right people who work at these companies, things get sorted out, but I imagine sometimes a person is forced to just handle the consequences.
Stepping away from CSAM and going back to something like developer account and apps getting banned on platforms for violating vague "guidelines". It's someone's livelihood that's sometimes destroyed. Demonetization, apps getting banned, payment processors freezing accounts are mostly black box events and most situations aren't even related to crimes dealing with CSAM.
If it was something the government made a mistake with, there are legal ways to fight for your rights. There's generally a level of transparency that is afforded to you.
It is concerning that people flagged for handling CSAM will not know if they have been manually reviewed. The need to keep the forwarding to authorities a secret is understandable, but a human review before forwarding is only necessary if you expect false positives to begin with. Keeping that flag secret seems like another black box you can't fight as a user.
I don't deny the value of catching these criminals, but it throws the idea of due process out the window when the only assurance so far has been "trust us to do the right thing".
It's also weird how Apple has chosen to intentionally insert itself into the investigation pipeline rather than just let NCMEC handle it like all other cloud providers.
I am glad this hasn't flown under the radar just because it is Apple who is making these promises. I have heard non-tech people talk about this but there's a lot of misunderstanding.
> It's also weird how Apple has chosen to intentionally insert itself into the investigation pipeline rather than just let NCMEC handle it like all other cloud providers.
Not so weird actually in my opinion:
Apple wants to make sure their customers don't get into trouble for no good reason; that would be bad for business.
Personally, as a European living in Norway, I have great respect for the local police (mostly). Great people, mostly going out of their way to serve the public (again, mostly).
Norwegian child protection, however, is something I'd rather steer clear of. They are well known to simultaneously ignore kids who need help and harass innocent parents.
Again, not all of them are like this, and many of them try to do good, but they seem to a large extent to be working outside the control of the courts, so if they mess up you have to go to a European court to fix it. (Two or three cases just in the last 18 months from a small country like Norway.)
So something similar might also be at play, but I don't know what reputation NCMEC has, only that it is well known that a number of people have gotten into serious trouble because of overeager employees at photo labs reporting innocent images.
I honestly believe that's not Apple's place. Apple is a technology company, not a non-profit founded to help find missing or abused children.
NCMEC has a pretty good and non-controversial reputation, and you really only heard about it if you were working in that kind of law enforcement field, or if you knew of the database because you were personally working on tech for filtering and managing submission of photos/videos from the public, like Facebook image uploads, and had to submit reports of found CSAM. I personally only knew of the AMBER Alerts, which are national broadcast messages that show up on my phone informing of serious abduction incidents.
I think for people informed about tech, the most unpopular thing about NCMEC is their stance on promoting a backdoor in all E2E encryption to ensure CSAM is flagged. [1] But even that, you understand where they're coming from even if you believe that backdoors are a bad idea. The NCMEC is a government funded non-profit for the explicit purpose of finding missing and exploited children.
It's just not Apple's place unless it will also have a department for the explicit purpose of finding missing and exploited children. And at that point it's a branch of NCMEC working in Apple.
Child protective services isn't NCMEC. And NCMEC isn't law enforcement, the organization works with law enforcement and doesn't directly go after anyone.
I have never heard of people that have gotten into serious trouble because of overeager employees. I wasn't able to find it via google either, I would love to get more information if you have a link.
I have heard of three letter government agencies allegedly planting CSAM on targets they want to get leverage on. But whether actual CSAM is planted or they lie in order to get probable cause to do searches and intimidate, I'm not sure.
This article from 2019 outlines an instance where law enforcement accused someone of handling CSAM based on filenames and an IP address, but weren't able to find the file after doing an actual search of the device. [2] They ended up dropping the case instead of letting the lawyers examine the method used to tag alleged predators. They have actually dropped cases multiple times rather than disclose how the software works, for the sake of protecting it, and arguably let alleged predators walk free.
But all of that is separate from NCMEC which just handles a cyber tipline and follows up on reports from tech companies about any potential CSAM and is in charge of handling the database of known positives in the US.
Apple has stated they will require the database to be cross-checked across different countries and jurisdictions in order to ensure no one can add false matches. So they don't want to identify CSAM for just the US, they want it to be a global program even if they are just rolling it out in the US.
There has been some controversy over NCMEC. Specifically, they claim to be a private organization, not subject to any of the constraints that public actors have (the 4th Amendment).
In a case decided by future SCOTUS justice Gorsuch, it was decided that NCMEC is a "public agent." That is, an agent of the government.
That’s because it possesses immunity from prosecution for possession of CSAM.
When an electronic communication is referred to NCMEC, after, say, being identified by a hash, a NCMEC analyst reviews it. NCMEC claimed this wasn’t a search, because it’s not part of the government and the communication was sent by a non government agent.
The 10th Circuit politely said “bullsh*t.”
NCMEC analysts are shielded from laws against viewing CSAM. That’s a privilege granted by the state to assist law enforcement. Therefore, when NCMEC analysts view CSAM, they are government actors.
It’s a very interesting case, not least because Gorsuch is an amazing writer. But it makes NCMEC look extremely disingenuous, and the government’s sponsorship look like a way to slide around its obligations.
I’m not saying NCMEC is a bad organization. I’m just saying that there is controversy.
And NCMEC isn’t merely a private organization. It’s a private organization that has been delegated state powers to assist the state in performing state functions.
> It's also weird how Apple has chosen to intentionally insert itself into the investigation pipeline rather than just let NCMEC handle it like all other cloud providers.
So, I am not much of a conspiracy theorist, but I do sometimes like to fantasize about alternative realities in which the conspiracies were true.
I am not saying the US government had any involvement in Apple's decision, but what if they did? I do agree with your point about how this topic more or less came out of left field. It's clear that Apple did not just recently acquire the technological ability to produce this feature in 2021. This feature could have been implemented years ago (as many other companies with a consumer-facing cloud storage model already did to some degree). I am just curious whether Apple really had a "choice" in this matter. Perhaps my monkey brain just wants this to be the case.
A number of the cryptographic techniques in the implementation (homomorphic operations on blinded hashes, threshold key release via tickets) have only been deployed in consumer-facing systems very recently.
From the system as I understand it, these are fairly essential to some of the privacy characteristics they were shooting for, such as ensuring that law enforcement cannot require the server to scan for users who have some number of unreviewed matches below the threshold.
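To make the threshold idea concrete, here is a toy sketch (plain Shamir secret sharing in Python, with hypothetical names; this is not Apple's actual protocol, which layers this kind of idea under PSI and blinded hashes): the key protecting an account's voucher contents is split into shares, one released per matching voucher, and the server can only reconstruct the key once it holds at least the threshold number of shares; below that, the shares reveal essentially nothing.

    # Toy "threshold key release": an account-level secret can only be
    # reconstructed once at least `threshold` matching vouchers exist.
    # Plain Shamir secret sharing over a prime field, NOT Apple's protocol;
    # all names are hypothetical.
    import random

    PRIME = 2**127 - 1  # a Mersenne prime, large enough for a toy secret

    def split_secret(secret, threshold, num_shares):
        """Split `secret` into shares; any `threshold` of them reconstruct it."""
        # Random polynomial of degree threshold-1 with constant term = secret.
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
        shares = []
        for x in range(1, num_shares + 1):
            y = sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
            shares.append((x, y))
        return shares

    def reconstruct(shares):
        """Lagrange interpolation at x=0 recovers the constant term (the secret)."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    account_key = 123456789  # stands in for the key protecting voucher payloads
    shares = split_secret(account_key, threshold=3, num_shares=10)
    print(reconstruct(shares[:2]) == account_key)  # False (w.h.p.): below threshold
    print(reconstruct(shares[:3]) == account_key)  # True: threshold reached, key released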
I don't know for sure if the US government has publicly endorsed creating a backdoor after the San Bernardino shooting incident. That was 2015/2016. The Apple vs Epic case made an interesting email public about Apple's head of fraud acknowledging the problem of CSAM on their platform in Feb 2020 [1] but I agree with you that this kind of feature has to have been in development since before then because it's clearly been red teamed. The feature wasn't released until they knew they would have a response to all of the technical questions concerning the actual implementation.
The UK policy, from 2017, around age verification [2] before giving access to pornographic material has definitely also played a part. Even YouTube has recently made a strong verification effort, going as far as asking for a credit card or photo ID in response to COPPA [3] and EU policy. But Apple hasn't framed the porn-blurring feature for minors as a response to that policy, which is surprising. And none of those government policies explain the need to do on-device scanning for CSAM instead of in the cloud like everyone else.
I personally believe Apple felt this was a business requirement more than a way to avoid government regulation. It doesn't seem to be a secret-warrant type situation, because public disclosure wouldn't really make sense. If catching criminals consuming/sharing CSAM is the motive, warning those criminals that this change is happening before implementation seems counter-productive.
They even went as far as going after leakers a couple of months ago [4].
> The letter purportedly cautioned leakers that they must not disclose information about unreleased Apple projects because it may give Apple's competitors valuable information and "mislead customers, because what is disclosed may not be accurate."
It's clear in hindsight that these "features" leaking early is what Apple was afraid of, as it's been confirmed that the CSAM scanning algorithm was found, inactive, in iOS 14.3. The current stance is that Apple will have an additional check, but if the presence of the scanning algorithm had leaked back then, they wouldn't have been able to do the same PR spin they're doing now.
I agree with you that it really doesn't fit Apple's privacy-first narrative. They are the same company that developed the Secure Enclave to make a more secure product, starting with the iPhone 5s in 2013.
I hope we do get some clarity on the motivation because it's clear that no one buys the "we did it for better privacy" narrative they are currently pushing. Their hand being forced doesn't seem out of the question to me either, but their own public response seems to make it doubtful to me.
I think it speaks volumes that the first item that shows up on google when you search the term "icloud scanning apple" is a lifehacker article titled "How to Stop Apple From Scanning Your iPhone Photos Before iOS 15 Arrives".
> It's also weird how Apple has chosen to intentionally insert itself into the investigation pipeline rather than just let NCMEC handle it like all other cloud providers.
Apple seems to design systems like this so that there are multiple parties involved, in order to require coordination to abuse the system and to prevent attempted abuses of the system from becoming public knowledge.
Until there is independent review, Apple and its reviewers have no way of knowing whether an input hash actually corresponds to CSAM.
From what I've read about the other cloud providers, they make it easy to report matches to NCMEC but I haven't seen anything about a human review as a middle step. I really only know of PhotoDNA which it seems like Apple essentially has copied but not identically. PhotoDNA does not include human review.
From what I understand, legally, only people at NCMEC are allowed to view the material raw which is what's confusing about Apple's announcement of human review.
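For anyone who hasn't looked at how this family of hash-matching systems works mechanically, here is a toy sketch of perceptual-hash matching against a database of known hashes (a simple average hash plus Hamming distance in Python; real systems like PhotoDNA or Apple's NeuralHash are far more robust, and the distance threshold here is made up):

    # Toy perceptual-hash matching against a list of known hashes.
    # Not PhotoDNA or NeuralHash; the distance threshold is invented.

    def average_hash(pixels):
        """64-bit hash of an 8x8 grayscale image: bit is 1 where pixel > mean."""
        flat = [p for row in pixels for p in row]
        mean = sum(flat) / len(flat)
        bits = 0
        for p in flat:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits

    def hamming(a, b):
        return bin(a ^ b).count("1")

    def matches_database(image_hash, known_hashes, max_distance=4):
        """Flag when the hash is 'close enough' to any known hash.

        Real systems tune max_distance to trade false positives against
        resilience to crops/re-encodes; exact-only matching would be 0.
        """
        return any(hamming(image_hash, h) <= max_distance for h in known_hashes)

    # An 8x8 "image" and a slightly re-encoded copy of it.
    original = [[(x * y + x) % 256 for x in range(8)] for y in range(8)]
    recompressed = [[p + 2 for p in row] for row in original]
    database = {average_hash(original)}  # stands in for the known-CSAM hash list

    print(matches_database(average_hash(recompressed), database))   # True: near-duplicate
    print(matches_database(average_hash([[0] * 8] * 8), database))  # False: unrelated image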
Exactly. I do not believe that Apple will be reviewing the actual images. More likely they will be reviewing the metadata for "chain of evidence" purposes. When Apple passes evidence to NCMEC, they will be able to say "this was flagged for CSAM and we have double-checked that it actually came from this person's phone".
A version of the image that has had the picture data removed or obscured (so only the metadata can be reviewed) is consistent with that statement.
This is how marketing works. A company comes up with a plan and then the marketers come up with the best sounding technically true description consistent with it. The fact that the description creates an entirely false impression on the reader is by design.
Every instance of “government” in this article comes with some qualifier, like “foreign” or “other” – watch out for those foreign governments who might spy on their foreign citizens.
Is the implication that this technology could only do evil in other countries? If Apple deploys this in the US, they’re saving the children, but if they deploy it in China, they’re facilitating an oppressive autocracy?
I think the understanding is that the US so far hasn't pushed into law any policy instructing companies like Apple to publicly censor people. Secret surveillance and privacy have been debated, but not freedom of speech. The US has not used the kind of public censorship other countries use to facilitate an oppressive autocracy.
The US government tends to use one of the four horsemen, CSAM, drugs, terrorism, or organized crime as motivation to deploy censorship and undermine privacy but freedom of speech is generally protected.
Foreign governments censor things like undesirable political opposition, LGBTQ+ activism, women's rights activism, and historical events like the massacre of protestors.
I think the implication is that the technology is likely to do a lot more harm in other countries compared to the harm done in the US, so "it's okay" if it's only deployed in the US in the name of saving children. A lot of people from the US are strongly against the Apple policy regardless.
> The US so far hasn't pushed into law any policy instructing companies like Apple to publicly censor people
That depends on your definition. It sure looks like WikiLeaks was targeted by the US government. It also surely looks like the credit card monopolists are cracking down globally on sex sites because of changes in US law.
From where I stand, it looks like the US government is using corporate monopolies as a front to censor people.
The common sentiment in the US is that Mega Corporations have government influence that they shouldn't, not the other way around.
Wikileaks was seen as a national security threat because it leaked files in an unredacted form. Wikileaks has even released sensitive personal information like credit card numbers and social security numbers.
The reason Wikileaks was targeted after the election was because it was seen as a front for Russian manipulation in the US elections.
And unfortunately sex workers in the US have always had a hard time with payment processors. The law you're talking about, FOSTA-SESTA, went into effect in 2018 and was painted as a law to help fight sex trafficking. Backpage was the reason the law went into effect, and that website was used for sex trafficking. Safer sites got caught up in the new law.
From what I have read, the change in how OnlyFans is handled might have been a response to the BBC investigation into how the site dealt with people posting illegal content. [1] They used a warning system instead of immediate consequences for people who brought in a lot of users and revenue.
It's also not just US laws. Both the US COPPA and EU AVMSD laws on porn and age verification made YouTube ask for a credit card or a driver's license in order to verify your age.
The UK has much stricter rules on the type of porn you can consume and make, the US is relatively wild in comparison. [2] The US definitely has issues but if any country can be blamed for draconian sex policies, I would argue it's the UK. Internet filtering was implemented "for the children" starting back in 2013.
We’ve been through all the “for the children” brouhaha before: the War on Rock Music Lyrics, the War on Drugs, etc. It happens any time a parent with influence decides that what’s best for their children is good for all children.
It doesn’t take a village - it just takes a mom with a loud voice.
When the very mainstream news media is under the same financial umbrella as all the defense contractors, is it any surprise we see the "undesirable political opposition, LGBTQ+ activism, women's rights activism, and historical events like the massacre of protestors" covered up or not even reported on to begin with?
Herman & Chomsky's "Manufacturing Consent" proposes the method through which the media influence isn't by way of "old men in a dark, smoky backroom" as the claim conjures images of. Rather, it's a series of filters that funnel like-minded people into the decision-making roles of the companies. If we simplify it a lot, it's the claim that a CEO is very likely to hire a senior editor that shares his views on the world, that editor in turn hires writers with a similar outlook on the world, etc. etc. Likewise, promotions within these companies is predicated on this same mechanism.
To the question "Do you think all journalists are self-censoring? Do you think I'm self-censoring?", posed by the BBC's Andrew Marr, Chomsky answered "I think you believe everything you're saying because if you didn't you wouldn't be sitting here". Both quotes paraphrased but meaning-preserving. [0]
This is 1 of 5 filters laid out in "Manufacturing Consent". It's, in my eyes, a very compelling analytical framework.
[0]: https://youtu.be/lLcpcytUnWU?t=176
- Video title is terribly divisive but the quote is at the timestamp. The whole 3 minute video just gives a bit of context to why the question is asked. It also cuts off as Andrew Marr is about to say something, making the video's title bunk. Viewer beware.
I don't think any of those are offensive to defense contractors, etc. Polarizing the public on these issues is exactly the way to have the public turn a blind eye to graft.
It does seem like an oxymoron at a superficial level, but if you look into freedom of speech, there's an implicit assumption that the speech isn't inciting violence or harm.
I can have any opinion I want about any other person and I can use any medium I want to share my opinion. I can make political cartoons that are racy, buy a megaphone and protest in front of the white house, make memes and share them all over the web. I wouldn't get in trouble for having unpopular or even conspiracy level opinions.
If I'm encouraging people to do something violent and it's clear that I'm not making an edgy joke, that might start to fall under terrorism and the government will look into how valid and serious I am as a threat. All of the four horsemen can be simplified to intentionally harming other people at a serious and maybe grand scale. The US government has a very clear line that they draw for when they will get involved.
So I do think that freedom of speech is generally protected in the US. In the context of this discussion, I can voice any type of opinion I want about the government without any repercussions. It's been ruled in the supreme court that even hate speech is protected as long as there's no imminent threat of violence.
But if you want to convince US lawmakers that US cops and spies shouldn't be given this power, convincing them that Russian/Iranian/Chinese cops and spies have evildoers among them is simpler than convincing them that American cops and spies have evildoers among them.
I mean, US lawmakers have literally allowed a secret court passing secret laws for spies, so they're obviously a lot more trusting of cops and spies than your average history book reader.
No, if you investigate how many police in the US now act with qualified immunity and get away with it scot-free, you would be horrified. I would guess someone like Chauvin is one of thousands.
The US gov already has access to all the photos on iCloud, can already force Apple to seed backdoors here and there, and can forbid them from disclosing the request. We had the FBI case where Apple refused to cooperate, but it only stopped there because there were other ways to get to the same result.
So the basic premise of this discussion is that the US gov isn't already abusing its power to snoop on everybody all the time. If we give up this premise, all the discussions make a lot less sense and we're just arguing about different ways to access the same data.
(Credit to John Siracusa on ATP for articulating this elephant in the room very clearly.)
I feel somewhat optimistic about the future when so many groups saw through this push for on-device scanning for what it was. Damn straight the tech is dangerous.
I haven't seen this asked on any of the threads about this yet, but what happens if we identify a few of the pics in their database of Evil Pictures, and send them (presumably from a non-Apple device) to the iPhone of anybody we don't like.
Presumably the actual data on the device is still encrypted and can't be accessed remotely, which means we need to trigger a law enforcement investigation which involves seizing the device and compelling the owner to unlock it in order to determine if they actually are a kiddie diddler or something went wrong. Gee, can't see how that could possibly go wrong. /s
Meanwhile, the actual kiddie diddlers out there have probably read the 10 million articles published about this by now and know not to use iMessage to trade their pictures, so probably not many of them would actually be caught this way.
If you send it via WhatsApp, which has the option to save all pictures to Camera roll, it will get automatically uploaded to iCloud if that's enabled. So no need to trigger a law enforcement investigation yourself in that case.
Copying over my comment from the last article about this:
Nothing about those concerns seems specific to the end-to-end encryption compatible CSAM system they or Apple built...
Honestly if I were Apple I'd consider just scrapping the whole thing and doing server side CSAM testing on iCloud photos without rolling out E2E encryption for iCloud photos. It's just not worth the PR blowback.
And 99.9% of people don't care if your algorithm is O(n) or O(n^2).
It literally does not matter what clueless people think or say about this subject. The informed 0.01% (that number isn't correct but whatever) have a huge influence on things and Apple has just as many incentives to please them as they do for the "I bought a 2K device to jack off and watch Netflix" crowd.
I'd guess you could add a few more 9s to that. Almost all people either don't care or actively like this. And if you do not fear (or do not understand) the implications and risks, I can see why you'd like what Apple is doing.
It’s one of the few topics where all of mankind (with very few exceptions) agrees: CSAM is bad!
That’s why "we do it for the kids" always works.
Within my circle, people do not care. They will continue to use Apple products because, "I have nothing to hide." That's what they told me when I brought it up.
My group is a bit more nuanced: most will likely stick with Apple since the effect is somewhat invisible to them, but this topic did bring up the question of "should I stay on iPhone" - which is not a question you want coming up very often if you're trying to sell these devices.
I think I get your point, but I don't think that's a fair question to ask. Allowing Apple (or any other big company) to scan my hardware and letting my friends scroll through it are distinct "risks". There may be stuff on there that's neither illegal nor immoral but that I still would not want my friends/relatives/acquaintances to see or discuss. Sure, I'd prefer that Google/Apple not care about my browser history either, but that's a less direct threat.
I think a lot of people don't even know it's a thing, because they don't pay attention to tech news or tech philosophy.
"Do you think Apple should install software on every phone that scans your device for files deemed wrongful to possess, then reports you to the authorities if such content is found?"
That's what Apple wants to do. And most people think that warrantless searches are pretty fucked up.
Today, those "files deemed wrongful" are images of children being abused. Tomorrow it's a video of a mass shooting that they think encourages copycats. The day after that, it's protest flyers or instructions on how to combat police during civil unrest.
And yes, the NCMEC is de facto an agent of the government.
Yes it’s an issue client side. I’ve never uploaded a photo to iCloud. I have no plans to do so. Getting scanned server side is effectively opt-in. Them putting the scanning client side has measurable, though minimal effects, even if I continue not to use iCloud.
Not to mention that within a decade they’ll be scanning every image that passes through people’s phones.
Why change now when the damage is already done? People who know how bad this can turn probably already left or are figuring out how to leave Apple's ecosystem.
I’ve already left. They doubled down on it straight away. Then wheeled out Craig to tell us lies. They’re not on my side. I don’t have to be their customer.
The dangerous part is they have advertised this capability. Now regimes may make that a prerequisite for allowing them to do business.
Cat’s out of bag. Ain’t going back in now. We have to reevaluate any data that we do not control ourselves or control access to now.
Ironically this isn’t the world I want for my kids.
Yeah this whole conversation has been totally bizarre. The only thing new here is tagging matches on the device prior to uploading to the backend. But a huge amount of discussion is going to fears of illegal content monitoring in general.
In a previous comment on this very same subject on Apple's attempt to flag CSAM I wrote:
This invasive capability at the device level is a massive intrusion on everyone's privacy, and there will be no limits on governments expanding its reach once implemented. The scope will always broaden.
Well, in the article they correctly point out how governments around the world have already broadened the scope of such scanning, violating privacy by content-matching political speech and using it for other forms of censorship and government tracking.
We already have that now on the big tech platforms like Twitter, which censor or shadow-ban content that they, as the arbiters of truth (or truthiness, as Colbert used to say on the old Colbert Report), egged on by the politicians and big corporate media, label as misinformation or disinformation.
Do we now need to be prevented from communicating our thoughts and punished for spreading truths or non-truths, especially given the false positives, malware injections, and remote device takeovers and hijackings by the Orwellian Big Tech oligopolies?
Power corrupts, and absolute power corrupts absolutely, and this is too much power in the hands of Big Corporations and Governments.
From the article in case you need the lowdown:
Our system could be easily repurposed for surveillance and censorship. The design wasn’t restricted to a specific category of content; a service could simply swap in any content-matching database, and the person using that service would be none the wiser.
A foreign government could, for example, compel a service to out people sharing disfavored political speech. That’s no hypothetical: WeChat, the popular Chinese messaging app, already uses content matching to identify dissident material. India enacted rules this year that could require pre-screening content critical of government policy. Russia recently fined Google, Facebook and Twitter for not removing pro-democracy protest materials.
We spotted other shortcomings. The content-matching process could have false positives, and malicious users could game the system to subject innocent users to scrutiny.
We were so disturbed that we took a step we hadn’t seen before in computer science literature: We warned against our own system design, urging further research on how to mitigate the serious downsides. We’d planned to discuss paths forward at an academic conference this month.
That dialogue never happened. The week before our presentation, Apple announced it would deploy its nearly identical system on iCloud Photos, which exists on more than 1.5 billion devices. Apple’s motivation, like ours, was to protect children. And its system was technically more efficient and capable than ours. But we were baffled to see that Apple had few answers for the hard questions we’d surfaced.
China is Apple’s second-largest market, with probably hundreds of millions of devices. What stops the Chinese government from demanding Apple scan those devices for pro-democracy materials? Absolutely nothing, except Apple’s solemn promise. This is the same Apple that blocked Chinese citizens from apps that allow access to censored material, that acceded to China’s demand to store user data in state-owned data centers and whose chief executive infamously declared, “We follow the law wherever we do business.”
Apple’s muted response about possible misuse is especially puzzling because it’s a high-profile flip-flop. After the 2015 terrorist attack in San Bernardino, Calif., the Justice Department tried to compel Apple to facilitate access to a perpetrator’s encrypted iPhone. Apple refused, swearing in court filings that if it were to build such a capability once, all bets were off about how that capability might be used in future.
Maybe it is 4d chess. I am very pleased by the pushback on this, in fact given the tech community outcry, this will not happen for another 5 years. Apple bought themselves some time before the beast forced a move. (regardless, i have already made steps to move away from the apple ecosystem. take that tim, see what happens!!!!!)
One important point that I have not seen addressed is the fact that CSAM detection may be able to run in realtime.
The Orwellian implications of realtime detection are even worse: a constant stream of information about the content a particular user consumes, all generated through the display pipeline, because new iPhones have ML accelerators.
Quick question, isn't the scanning only happening on iCloud?
If yes, then honest question:
Does anyone actually believe that content uploaded into the cloud doesn't get scanned by big tech companies for whatever latest flavour of "innovation" someone comes up with?
For example, GitHub was scanning repos for Co-Pilot. We all know Google already scans everything, including Gmail and Microsoft is the king of "telemetry".
What is the difference that scares people about Apple's CSAM scanning? I admit I haven't followed all the news very thoroughly because I usually wait a few weeks to see which outrage sticks and which not. Too much outrage nowadays to keep up with everything so this is an honest question.
No, the scanning is happening on local devices, but only during the upload-to-iCloud process. That’s the key difference that is upsetting a lot of people.
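To illustrate the distinction with a rough, stubbed sketch (hypothetical names, toy logic, not Apple's code): in the status quo the provider scans its own copy after upload; in Apple's design the device computes the match and attaches the result as part of the upload step, and photos that are never uploaded are, per current policy, never scanned.

    # Rough illustration of *where* the scan happens. Stubbed toy logic,
    # hypothetical names -- not Apple's implementation.

    KNOWN_HASHES = {"deadbeef"}  # stands in for the CSAM hash database

    def toy_hash(photo):
        return photo.hex()[:8]   # placeholder for a perceptual hash

    def server_side_flow(photo, cloud):
        """Status quo: provider scans its own copy, on its own hardware."""
        cloud.append(photo)
        return toy_hash(photo) in KNOWN_HASHES

    def on_device_flow(photo, cloud, icloud_photos_enabled):
        """Apple-style design: the device computes the match before upload.

        If iCloud Photos is off, nothing is scanned -- but note that this
        gate is policy, not a technical limitation of client-side scanning.
        """
        if not icloud_photos_enabled:
            return None
        flagged = toy_hash(photo) in KNOWN_HASHES
        cloud.append((photo, flagged))  # match result travels with the upload
        return flagged

    cloud_store = []
    print(server_side_flow(bytes.fromhex("deadbeefcafe"), cloud_store))       # True
    print(on_device_flow(bytes.fromhex("0011223344"), cloud_store, True))     # False
    print(on_device_flow(bytes.fromhex("deadbeefcafe"), cloud_store, False))  # None: never scanned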
I would say that from a legal/moral/political-philosophy point of view, the difference is that the other implementations do the scanning on their servers, while Apple wants to do the scanning on my phone. Even if the outcome is the same, this feels like a step too far.
Even though in reality, I don't want them scanning my data regardless of where it lives. We have already been sliding down this slope, this is just one more step.
The current submission got a surprising amount of upvotes for a post that remained underwater (below the front page): http://hnrankings.info/28238163/. It's on my list to detect threads like that and rescue them. This case will be a hell of an example for testing.
Just think what kind of power this would give the USA when they invade countries like Afghanistan. They could easily cancel all people who don't like their presence, and be able to shape the narrative with their propaganda. I'm thinking that maybe this is the actual reason they want tools like this. Afghanistan failed because of freedom of speech -> need more tools to limit freedom of speech.
Terrorists kill innocent people, and Apple did nothing to help. But someone possessing 30 pics of CSAM is apparently so much more important that Apple needs to implement a whole new surveillance system to track them down.
The values and priorities in our society are indeed upside down sometimes.
A question: if the software that Apple is proposing to use on iCloud is accurate when looking at images sent to/stored on iCloud, how difficult is it to adapt to look at the screen buffer before it’s even saved as an image?
The problem is that Apple have let the genie out of the bottle. With all the, very public, blowback and drama they have created, otherwise ignorant politicians are now aware of what is possible and could start demanding it.
I've already written this in a previous comment, however I think it bears repeating: I think Apple have introduced this client-side content scanning technology under the auspices of protecting against CSAM, while its true intention is to allow for the Chinese government to scan citizens' phones for subversive content. I'm convinced that Apple's upper management figure the minimal blowback they're experiencing for this privacy invading technology in the west is worth the expansion of their technology into a much more totalitarian Chinese market. I think that this development has been precipitated by a very visible decline in America's economic, and social position as a world leader. Why not risk this move? America's trajectory is that of almost definite decline.