Hacker News

Unless the entire stack you’re using is audited and open source this sort of thing is inevitable.

As far as this is concerned, seems like if you don’t use iMessage or iCloud you’re safe for now.



> As far as this is concerned, seems like if you don’t use iMessage or iCloud you’re safe for now.

Yes, this is correct. The Messages feature only applies to children under 18 who are in an iCloud Family, and the photo library feature only applies if you are using iCloud Photos.


Ha ha. They have fully functional spying software installed on the phone, and the government will stop at these restrictions?


Oh come on, you really think that's their big plan? Announcing the scanning software in public and then abusing it? If they wanted to do illegal spying, they would do it properly. And without a second Snowden you would never hear about it.


I’m fairly certain the age is different per region and hopefully tied to the age of consent (in this particular case).


I don't think it has anything to do with age. It has everything to do with you adding the phone to your family under settings and declaring that it belongs to a child. You control the definition of child.


I could imagine an abusive partner enabling this to make sure their partner isn’t sexting other people. Given the pushback for AirTags I’m surprised people aren’t more concerned.


Anyone 13 or older can remove themselves from a family sharing group. The only exception is if screen time is enabled and enforced for their device.

Frankly, if you have an abusive partner with physical control over you and a willingness to do this, the fact that Apple supports this technology is the least of your problems.


Except this would require the consent of the abused partner when creating the account to set an age under 13.

You can’t set this on other accounts in your family remotely.


You’re misunderstanding what this is if this is an actual concern of yours.


I’m not sure I’m misunderstanding. This is another feature that allows someone with access to another person’s phone to enable stalkerware-like features.


Would artificially inflating every child’s age to 18+ eliminate the iMessage problem?


Ending of fourth paragraph:

> This feature can be turned on or off by parents.


>don’t use iMessage

1. Send someone you hate a message with cartoon making fun of tyrant-president.

2. That person is now on a list.

It's swatting-as-a-service.


If you read the article, you'd understand that among ALL the issues, this is not one:

- Photo scanning in Messages is on-device only (no reporting to the govt.) and doesn't turn on unless you're an adult who turns it on for a minor via Family Sharing controls.

- iCloud Photos scanning doesn't take effect unless you save the photo and it's already in a database of flagged photos. So in your scenario, you'd have to save the photo received from the unknown number to get flagged.


I'm confused - the article explicitly states this scenario - minus the swatting.

I.e., unless you're replying purely to the swatting part, the article seems to support this. Specifically, a prediction that governments will creep toward legally requiring Apple to push custom classifiers:

> Apple’s changes would enable such screening, takedown, and reporting in its end-to-end messaging. The abuse cases are easy to imagine: governments that outlaw homosexuality might require the classifier to be trained to restrict apparent LGBTQ+ content, or an authoritarian regime might demand the classifier be able to spot popular satirical images or protest flyers.


That sentence is wrong. It simply isn't an accurate description of the current system. It relies on future changes to the system, not just changes to a database.

The iMessage feature is not a database comparison system; it's meant to keep kids from sending or receiving nudes unexpectedly – and it works by classifying those images.

I don't dispute this is a slippery slope - one could imagine a government requiring Apple to modify its classification system. However, that would presumably require a software update, since it happens on-device.
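
To illustrate the distinction, here's a toy Python sketch (the skin-tone heuristic, function names, and threshold are all made up; Apple's real classifier is a trained ML model). The point is that a classifier's decision logic lives in shipped code, not in a swappable database, so changing what it flags means shipping a software update:

```python
# Toy stand-in for an on-device classifier. Entirely hypothetical:
# a real classifier is a trained neural network, not a pixel heuristic.
def nudity_score(pixels: list) -> float:
    # Crude made-up score: fraction of "skin-tone-ish" pixel values.
    return sum(1 for p in pixels if 0.5 < p < 0.9) / len(pixels)

def should_blur(pixels: list, threshold: float = 0.6) -> bool:
    # The threshold and scoring function are baked into the software;
    # a government can't change them by editing a hash database.
    return nudity_score(pixels) >= threshold

print(should_blur([0.7] * 10))  # mostly "skin-tone" pixels -> True
print(should_blur([0.1] * 10))  # no "skin-tone" pixels -> False
```

Contrast this with the iCloud Photos feature, where the flagging behavior is driven by database entries rather than model weights.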


So are you prepared to never update your device?


That refers to the iCloud scanning, the idea being that if the hash database contains propaganda, people uploading that propaganda to iCloud could get reported by their own device.


Didn’t Apple also announce a feature for iOS 15 where iMessage photos are somehow automatically collected and shown in iCloud? A way to reduce the hassle of creating shared albums. With that, I think all users of iCloud Photos are at risk here.


>So in your scenario, you'd have to save the photo received from the unknown number to get flagged.

Whew! I was worried there for a minute. Maybe for extra safety I could say "SIRI I DISAVOW OF THIS MESSAGE!"??


Would you not report unsolicited child porn to the FBI anyway?


Y'know, I have no idea what I'd do in this situation and I really hope I'll never find out.

If a kilo of heroin just showed up in the back seat of my car, I'd throw it out the window and try not to think about it. I certainly wouldn't bring it to the police, because mere possession is a serious crime.

CP is the same way, except it comes with a nice audit trail which could sink me even if I delete it immediately. Do I risk that, or do I risk the FBI deciding I'm a Person of Interest because I reported the incident in good faith?

There are no good choices there.


The scan doesn't detect child porn, it detects photos in the CSAM database. The two may or may not be the same thing, now or in the future.
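
In other words, it's a database lookup, not content analysis. A minimal sketch of that structure (SHA-256 here is only a stand-in; Apple's actual NeuralHash is a perceptual hash designed to survive resizing and re-encoding, which an exact cryptographic hash does not):

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash; SHA-256 only matches exact bytes.
    return hashlib.sha256(image_bytes).hexdigest()

# Database of flagged fingerprints (illustrative placeholder entries).
flagged_db = {fingerprint(b"known-flagged-image")}

def is_flagged(image_bytes: bytes) -> bool:
    # The system only learns "match / no match" against the database;
    # it makes no judgment about what the image actually depicts.
    return fingerprint(image_bytes) in flagged_db

print(is_flagged(b"known-flagged-image"))  # True
print(is_flagged(b"harmless-photo"))       # False
```

Which is exactly the concern above: whatever goes into the database gets flagged, whether or not it is CSAM.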


Don’t talk to the police.

And that includes the FBI.



