
After reading the post, and as a parent of two kids who are in middle school now... I'm pretty happy with what I see. I didn't expect to be, based on the comments I read here before reading the article.

I know a local family whose daughter has been in therapy for the last 3 years because she fell victim to the type of thing Apple is discussing in this post. They are firm advocates for better parent education and oversight, sharing their experience so that other people hopefully never have to deal with the same thing. They told us about an app called Bark[1] that's supposed to really help with a lot of this stuff and seems in line with what Apple is talking about here. I'm pretty happy to see it will be built in.

> The Messages app will use on-device machine learning to warn about sensitive content, while keeping private communications unreadable by Apple.

All the parental controls in the world don't change the fact that getting your kids a phone in this day and age is a pretty terrifying experience if you know what type of things are out there.

> When receiving this type of content, the photo will be blurred and the child will be warned, presented with helpful resources, and reassured it is okay if they do not want to view this photo. As an additional precaution, the child can also be told that, to make sure they are safe, their parents will get a message if they do view it. Similar protections are available if a child attempts to send sexually explicit photos. The child will be warned before the photo is sent, and the parents can receive a message if the child chooses to send it.

1 - https://www.bark.us/



I think the account owner being able to turn that type of capability on or off is generally okay (though god knows if I was a 17-year-old I'd certainly be switching to Droid for that reason alone).

It sounds like there's a second, unrelated thing going on which cannot be turned off: reporting images to Apple that set off their warnings. The large concern there is that this type of technology could obviously someday be used to, say, report or delete all photos of police brutality. Since 20% of the world's population lives in China alone, I think the question of "How do we ensure malicious authorities cannot use this technology against good users?" is not really an afterthought but must be addressed head-on before people will buy in.


This iMessage feature is good, it's the hashing all of your photos for comparison against CSAM hashes that's the problem. How do you know if you've tripped the wire? What about false positives?
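For context on the false-positive worry: systems like this match perceptual hashes, which are designed so that near-duplicates of an image still match, rather than exact cryptographic hashes. A toy sketch of the idea (this is the classic "average hash", my own illustration, not Apple's NeuralHash, which is a learned neural embedding):

```python
# Toy perceptual-hash matching against a blocklist. Perceptual hashes are
# robust to small edits -- which is the point, and also why visually
# unrelated images can occasionally collide (false positives).

def average_hash(pixels):
    """64-bit hash of an 8x8 grayscale image: bit i is 1 if pixel i > mean."""
    mean = sum(pixels) / len(pixels)
    return sum((1 << i) for i, p in enumerate(pixels) if p > mean)

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Synthetic 8x8 "image" and a slightly brightened copy of it.
original = [(i * 37) % 256 for i in range(64)]
edited   = [min(p + 10, 255) for p in original]

blocklist = {average_hash(original)}

# The edited copy still hashes very close to the original, so a small
# Hamming-distance threshold flags it as a match to the blocklist.
d = min(hamming(average_hash(edited), h) for h in blocklist)
print(d <= 4)  # True: near-duplicate detected despite the edit
```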

Given how free US Law enforcement is with violence, any potential threat of involvement with them makes me very very nervous.

Going to need to reconsider hosting my photos with Apple devices.


My kids' school recommends Bark but, after looking into it, I felt I couldn't trust it. Unlike Apple, Bark appeared (at the time I reviewed it) to transmit nearly everything the kid does on the phone to the Bark servers for review. That's a pretty terrible way of solving this problem, in my opinion.


> getting your kids a phone in this day and age is a pretty terrifying experience

there is a really easy solution to this problem


There is nothing easy about it.

a) Many places are still going in/out of lockdown or are schooling remotely and so their phone is the only way for them to communicate with their friends. Depriving them of social contact is incredibly unhealthy and harms their development.

b) For better or worse, apps like TikTok are a huge part of their culture, and the popular dances etc. are often known by everyone. Being the only child who is out of the loop can cause serious isolation.

Children are growing and making them not feel like they are part of a social group is incredibly harmful and can have permanent effects in adulthood. Giving them a phone but monitoring their activities is likely to be the least harmful approach.


I still don't understand how there are people on HN who think that giving their kids less access to technology is somehow a virtuous position to take. When I was the same age as my kids I could have gotten into all sorts of shit on a BBS or Compuserve forum -- my parents had no idea what was going on, but they'd given me a basic sense of right and wrong, and somebody to talk to if I was concerned. You've got to educate them about the world, but cutting them off from it is not the way to do that.


You've never considered that maybe growing up apart from the groupthink would be helpful to a child? Did you ever wonder why Jobs and all of the other tech CEOs don't let their kids have these consumption devices? They're in charge of this stuff, yet they keep their progeny from utilizing it. But you actually want to give it to them as if there were nothing wrong with it, because it's socially acceptable?

Your line of thinking is ridiculous but please continue.


I find it incredibly depressing that you hold this viewpoint


Oh, I know. I'm the parent whose kids constantly complain "everybody else has one" and keeps holding out.


If they're complaining about that then they haven't grown up enough to realize they don't need one right now.


They're kids . . .


That's the point


The Messages app auto-blurring seems useful and respectful of the user, which is nice.

Additionally, the client-side scanning seems very well designed, but if iCloud Photos are not end-to-end encrypted, why are they going to such an effort to do this when they already have access to any image they want server-side?


The explanation is complicated but really fascinating. I think I understand it, but not well enough to explain it. Read the section entitled "What cryptographic tools are used in the implementation of the system?" in this write up about Apple's methodology.

https://www.apple.com/child-safety/pdf/Technical_Assessment_...

Also look at their full whitepaper here: https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...

My take is that performing the initial hash matching and encrypting the results in two separate layers on device prevents Apple from gaining meaningful knowledge of quantities of matches on a user account below the flagging threshold. This protects the use of the threshold as a way to further reduce false positives. For example, they couldn't comply with a subpoena that said "Hey, we know you set a threshold of only flagging and reporting accounts with 50 image matches, but we want to see a list of all accounts with 10 or more matches because we think that's good enough."
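The whitepaper describes the construction as private set intersection combined with threshold secret sharing. To illustrate just the threshold property (below t matches, the pieces Apple holds reveal nothing), here's a minimal Shamir secret-sharing sketch; this is my own toy code, not Apple's scheme:

```python
# Toy Shamir secret sharing over a prime field. With threshold t, any t
# shares reconstruct the secret; any t-1 shares are statistically useless.
import random

P = 2**61 - 1  # a Mersenne prime used as the field modulus

def make_shares(secret, t, n):
    """Split `secret` into n shares; any t of them recover it."""
    # Random polynomial of degree t-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(secret=42, t=3, n=5)
print(reconstruct(shares[:3]))  # 42 -- any three shares suffice
print(reconstruct(shares[:2]))  # an unrelated value: two reveal nothing
```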

This method lets them set and enforce a threshold to maintain their target false positive rate which they say is ~1 in 1 trillion accounts incorrectly flagged.

Disclaimer: I'm not a cryptographer and could be misunderstanding this.
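To get a feel for why a threshold drives the account-level rate down, you can do the binomial arithmetic yourself. A sketch with made-up parameters (the per-image false-match rate below is purely hypothetical; only the ~1-in-a-trillion account-level target is stated by Apple):

```python
# Binomial tail: probability that an account with n photos accumulates at
# least t false matches, given a per-image false-match probability p.
# Summed in log space to avoid floating-point under/overflow.
from math import exp, lgamma, log

def log_binom_pmf(n, k, p):
    """log P(X = k) for X ~ Binomial(n, p)."""
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + k * log(p) + (n - k) * log(1 - p))

def p_flagged(n, p, t, terms=300):
    """P(X >= t): upper binomial tail (truncated; remainder is negligible)."""
    return sum(exp(log_binom_pmf(n, k, p))
               for k in range(t, min(n, t + terms) + 1))

# Hypothetical: 10,000 photos, per-image false-match rate of 1 in a million.
print(f"{p_flagged(10_000, 1e-6, 1):.3g}")   # ~0.00995: threshold 1 is noisy
print(f"{p_flagged(10_000, 1e-6, 30):.3g}")  # vanishingly small with t = 30
```

The point is the steepness: holding the per-image rate fixed, raising the threshold from 1 to 30 shrinks the chance of an innocent account being flagged by dozens of orders of magnitude.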


I think your take is correct, but it doesn't answer the question of why this matching has to take place on the device, if it's only for photos that are going into iCloud and the iCloud contents are already being stored unencrypted.

The only remotely plausible answer I've seen is that Apple wants to keep potentially-violating material out of their general storage, and flagged images are being sent to the review team instead of regular backup, but that's a pretty weak guess.


My kids are older teenagers now, but I wish Apple had had some of this 5-8 years ago. Good on Apple for investing in helping with real-world problems instead of investing in silencing opinions they disagree with in the name of "misinformation."


Is the blurred-photo feature available to adults as well? I can imagine that some people who get unsolicited photos sent to them might want it.

Also, it looks like the blurring feature is limited to the Messages app. That's pretty easy to work around.



