> As far as this is concerned, seems like if you don’t use iMessage or iCloud you’re safe for now.
Yes, this is correct. The Messages feature only applies to children under 18 who are in an iCloud Family, and the photo library feature only applies if you are using iCloud Photos.
Oh come on, you really think that's their big plan? Announcing the scanning software in public and then abusing it? If they want to do illegal spying, they do it right. And without a second Snowden you will not hear about it.
I don't think it has anything to do with age. It has everything to do with you adding the phone to your family under settings and declaring that it belongs to a child. You control the definition of child.
I could imagine an abusive partner enabling this to make sure their partner isn’t sexting other people. Given the pushback for AirTags I’m surprised people aren’t more concerned.
Anyone 13 or older can remove themselves from a family sharing group. The only exception is if screen time is enabled and enforced for their device.
Frankly, if you have an abusive partner with physical control over you and a willingness to do this, the fact that Apple supports this technology is the least of your problems.
I’m not sure I’m misunderstanding. This is another feature that allows someone with access to another person’s phone to enable stalkerware-like features.
If you read the article, you'd understand that among ALL the issues, this is not one:
- Photos scanning in Messages is on-device only (no reporting to govt.) and doesn't turn on unless you're an adult who turns it on for a minor via Family Sharing controls.
- iCloud Photos scanning doesn't take effect unless you save the photo and it's already in a database of flagged photos. So in your scenario, you'd have to save the photo received from the unknown number to get flagged.
I'm confused - the article explicitly states this scenario - minus the swatting.
I.e., unless you're replying purely to the swatting part, the article seems to support this. Specifically, a prediction that governments will creep toward legally requiring Apple to push custom classifiers:
> Apple’s changes would enable such screening, takedown, and reporting in its end-to-end messaging. The abuse cases are easy to imagine: governments that outlaw homosexuality might require the classifier to be trained to restrict apparent LGBTQ+ content, or an authoritarian regime might demand the classifier be able to spot popular satirical images or protest flyers.
That sentence is wrong. It simply isn't an accurate description of the current system. It relies on future changes to the system, not just changes to a database.
The iMessage feature is not a database-comparison system; it's meant to keep kids from sending or receiving nudes unexpectedly, and it works by classifying those images.
I don't dispute this is a slippery slope - one could imagine a government requiring Apple to modify its classification system. However, that would presumably require a software update, since the classification happens on device.
That refers to the iCloud scanning, the idea being that if the hash database contains propaganda, people uploading that propaganda to iCloud could get reported by their own device.
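A minimal sketch of why the database contents matter (purely illustrative, in Python; Apple's actual system uses a perceptual hash called NeuralHash plus a private-set-intersection protocol, not a plain cryptographic hash or a cleartext set like this):

```python
# Toy model of database matching: the device hashes each uploaded photo
# and checks membership in a blocklist of known hashes. The point this
# illustrates: matching only flags what's already in the database, so
# whoever controls the database controls what gets flagged.
import hashlib

def photo_hash(photo_bytes: bytes) -> str:
    # Stand-in for a perceptual hash; a real system tolerates
    # re-encoding and resizing, which SHA-256 does not.
    return hashlib.sha256(photo_bytes).hexdigest()

# Hypothetical blocklist; in the abuse scenario above, a government
# could insert hashes of propaganda or protest imagery here.
flagged_db = {photo_hash(b"known-flagged-image")}

def would_report(photo_bytes: bytes) -> bool:
    return photo_hash(photo_bytes) in flagged_db

print(would_report(b"known-flagged-image"))  # in the database -> flagged
print(would_report(b"my-vacation-photo"))    # not in the database -> ignored
```

Note that in this model nothing about the matching code has to change to repurpose the system - only the set of hashes does, which is the crux of the database-swap concern.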
Didn’t Apple also announce a feature for iOS 15 where iMessage photos are somehow automatically collected and shown in iCloud? A way to reduce the hassle of creating shared albums. So with that, I think all users of iCloud Photos are at risk here.
Y'know, I have no idea what I'd do in this situation and I really hope I'll never find out.
If a kilo of heroin just showed up in the back seat of my car, I'd throw it out the window and try not to think about it. I certainly wouldn't bring it to the police, because mere possession is a serious crime.
CP is the same way, except it comes with a nice audit trail which could sink me even if I delete it immediately. Do I risk that, or do I risk the FBI deciding I'm a Person of Interest because I reported the incident in good faith?