
I keep reading that taking steps to prevent CSAM is doing "huge damage" to Apple's brand. Or that blocking minors on a child account from sending nudes is doing huge damage.

Really?

You know some late-night comedian is going to make jokes about pedos having to switch to Android. You think that is damaging to Apple's brand?

My guess is the Android folks will follow suit eventually (as usual).

I'm a parent. Even for those of us who are into privacy etc. (yes, I did the early PGP key-signing parties, EFF / ACLU stuff, etc.), I'm having a hard time seeing how this damages Apple's brand. I don't want this crap being sent to my children - PERIOD. If they are on a child account, PLEASE screen it.

Folks - pay attention to the kinds of laws that will get passed, and that do get passed. Most folks will throw away a lot of civil rights over these types of issues.

I found the arguments against this surprisingly uncompelling. I saw an HN article about how Apple is committing felonies etc. - it just didn't seem well founded. And everything is so hyperbolic and over the top it's insane.



I agree that the idea of the AG bringing felony charges against Apple employees because of their approach to dealing with CSAM is pretty far-fetched. But the article you're referring to [0] is correct to point out that Apple will be silently exfiltrating unencrypted data from its users' devices to be reviewed by its employees, meaning that they'll be viewing arbitrary content from Apple users' devices without their knowledge or consent. That's not speculation, it follows directly from Apple's explanation of how the system will work, with the only safeguard being Apple's purported one-in-a-trillion false positive rate for the on-device scanning that causes the data to be sent to Apple.

[0] https://www.hackerfactor.com/blog/index.php?/archives/929-On...


You are confusing encryption with access to the encryption keys.

Apple ALREADY has access to all your iCloud photos. Period. How do you think they offer them to you via the website and various sync services?

So they can encrypt them at rest or in transit, but their KMS (or whatever they use) ALREADY gives them access to these very same photos.

I think this illustrates how bad this conversation has been on the HN side. There is a lot of bad info out there on this, which makes the totally overblown responses even worse.

Heads up - your phone ALREADY uploads photos to iCloud if you let it, and those photos are ALREADY accessible by Apple.


While logical to assume, that's not a given. I think some of us want to believe that those images are uploaded to the cloud encrypted, and that the ability to sync between devices just means adding more keys to your account. So unless you know something we don't, it's not 100% certain that they sit unencrypted on Apple's servers, or that Apple has the keys to decrypt them.

It is sound judgement, though, to assume that they do.


They are explicit about this.

"iCloud content may include email, stored photos, documents, contacts, calendars, bookmarks, Safari Browsing History, Maps Search History, Messages and iOS device backups. iOS device backups may include photos and videos in the Camera Roll, device settings, app data, iMessage, Business Chat, SMS, and MMS messages and voicemail. All iCloud content data stored by Apple is encrypted at the location of the server. When third-party vendors are used to store data, Apple never gives them the encryption keys.

Apple retains the encryption keys in its U.S. data centers. iCloud content, as it exists in the customer’s account, may be provided in response to a search warrant issued upon a showing of probable cause, or customer consent."

So unless you doubt Apple's own guide - they retain the keys and will provide this info in response to government requests.

They handled requests covering 31,000 accounts in the last six months, based on their own reporting, and provided data in 85% of those cases.


The AG bringing felony charges isn't just far-fetched, it's ridiculous. Does anyone think a trillion-dollar company with a giant legal staff hasn't vetted this in a hundred different ways? They are probably going country by country to validate legality. It will not be implemented anywhere they don't find it to be 100% legal.


The irony: my guess is many countries will eventually require this or block E2EE. The idea that governments or the public are against this seems unlikely.


You may be conflating two features: 1) the Messages fix, and 2) scanning your entire Photos library and sending hashes to an unaccountable nonprofit with no oversight (NCMEC) to be compared with CSAM hashes - with no way to contest what you send.

1 is arguably a good feature (though it's very intrusive - it has a benefit).

2 is a monstrous invasion of privacy. It has no benefit to you, the user, only a massive potential threat.

Finally, NCMEC was founded by someone who admitted that, if he were judged by the very law he helped pass, he'd have been considered a sex offender while dating his then-girlfriend.


> scanning your entire Photos library and sending hashes to an unaccountable, no oversight nonprofit (NCMEC) to be compared with CSAM hashes - with no way to contest what you send.

This is simply incorrect. It's clearly documented how this works. 1) only files about to be uploaded to iCloud are scanned, 2) Apple maintains the hashes for comparison in their servers (not NCMEC), 3) only after multiple photos match is the account flagged for review, 4) they are reviewed by Apple to catch false positives, 5) only after match is confirmed is the account disabled and the user reported to NCMEC, and 6) you can still appeal to Apple to have your account reinstated.

https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...
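For what it's worth, the flow described in that document reduces to plain control flow. Here's a hedged Python sketch of steps 3-6; the threshold value and function name are placeholders of mine, not Apple's actual implementation (Apple's summary says only that multiple matches are required before review):

```python
# Toy sketch of the documented review flow; MATCH_THRESHOLD and the
# function name are illustrative placeholders, not Apple's real values.
MATCH_THRESHOLD = 10  # hypothetical; Apple says only "multiple" matches

def review_outcome(match_count: int, human_review_confirms: bool) -> str:
    # 3) Below the match threshold, the account is never flagged at all.
    if match_count < MATCH_THRESHOLD:
        return "no action"
    # 4) Flagged accounts go to a human reviewer to catch false positives.
    if not human_review_confirms:
        return "false positive, no report"
    # 5) Confirmed matches: disable the account and report to NCMEC.
    # 6) The user can still appeal to Apple for reinstatement.
    return "disabled and reported to NCMEC"

print(review_outcome(3, False))  # -> no action
print(review_outcome(12, True))  # -> disabled and reported to NCMEC
```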


> scanning your entire Photos library

The parts of it being uploaded to iCloud, not the whole thing, yes?

> and sending hashes to an unaccountable, no oversight nonprofit (NCMEC)

As I understand it, the whole point is that the hashes are computed on-device and compared to the NCMEC database on-device. Is my understanding incorrect?


> The parts of it being uploaded to iCloud, not the whole thing, yes?

This is policy, not capability. The capability is each and every image can be scanned.

> As I understand it, the whole point is that the hashes are computed on-device and compared to the NCMEC database on-device. Is my understanding incorrect?

You are incorrect, but also the concern is the accuracy of NCMEC's database. Also, going by other NPOs the government has taken an interest in, NCMEC is very exposed to government control. They can easily decide to allow the government to slip a few extra hashes into their database and tell no one. They can do this in exchange for protecting their NPO status, a national security letter, anything. We will never know, and if we find out, we will have no way to hold them accountable. Ever.

Anyway, the NCMEC database is NDA'ed and secret. It's not open to review or auditing. It's also grossly inaccurate and nearly useless.

You are taking these points entirely out of context and in a vacuum. It reads a bit like bad faith, like trying to win an Internet argument at any cost.


> This is policy, not capability. The capability is each and every image can be scanned.

Are you familiar with the implementation? If it's in the uploader, then the capability for each image to be scanned sounds like it would involve a bunch of code changes.

But let's posit that the implementation just checks the "uploaded to iCloud" flag and hence this is "just policy". I agree that this is concerning, but I think it's important to distinguish between "could do X" and "is doing X" when describing a situation.

> You are incorrect

I would love to be enlightened here. Can you please point me to an explanation of how my understanding is incorrect?

> but also the concern is NCMEC’s database’s accuracy

I absolutely share this concern.

> You are taking these points entirely out of context and in a vacuum.

No, I don't think I am. I think there is enough heated rhetoric going on here, with people mis-stating what is actually going on to justify how they feel about it, that it's worth being very clear about what the problems here really are. Otherwise it feels like people are arguing against strawmen and makes it too easy to dismiss concerns that are very pertinent.

The original article for this thread, by the way, does a good job here.


Agree to disagree on the rest, I don’t feel there’s more to say that would convince you or vice versa.

But I will respond to this point:

I said:

> Anyway, the NCMEC database is NDA'ed and secret. It's not open to review or auditing.

NCMEC itself would never go for the entirety of the database sitting on each phone. It's a jailbreak away from "Them" having the database and being able to check their images before uploading them places. (This overlooks the fact that it's almost universally accepted by everyone outside of NCMEC who deals with NCMEC that "They" already have the database several times over.)


> This is policy, not capability. The capability is each and every image can be scanned.

That capability is in every software/app that has access to your photos and to the internet, on any device.


> The parts of it being uploaded to iCloud, not the whole thing, yes?

If you leave your phone on default settings, that's all of them. In fact you need to turn off a lot of things: backups, Photo Stream, Files, Mail Drop, album sharing (make sure you don't accept any invites to shared albums or you will get flagged), and I'm sure there are more iCloud integrations I'm not aware of.

It’s actually quite hard to not use iCloud, by design.


Thank you, that is useful context!


Your understanding is incorrect. The NCMEC database is unavailable to your device. The hashes are checked by communicating with a server. In particular, only the server ever learns if there was a match, not your device.


Wrong.


No - the amount of bad info in these discussions has gotten incredible. The more overblown the claims, the trashier the data they seem to be based on.

It's funny - how the database of hashes is stored is literally in the first paragraph of the system overview: "which is securely stored on users' devices".


You misunderstand what that quote is referring to.


No, he's not. Read the documentation. The hash database your photos are compared to will live on your device, and scanning happens entirely on-device.

"Apple further transforms this database into an unreadable set of hashes, which is securely stored on users’ devices."


Read further down:

> the blinding step using the server-side secret is not possible on device because it is unknown to the device. The goal is to run the final step on the server and finish the process on server. This ensures the device doesn’t know the result of the match, but it can encode the result of the on-device match process before uploading to the server.


This is a confusing paragraph, but the "final step" is not matching per se. The final step reveals whether a match occurred - but it's clear that the database exists on-device and the database lookup occurs on-device.

Note the last part: "... but it can encode the result of the on-device match process before uploading to the server." The result of the match already exists before it is uploaded to the server. This is also made clear in the diagram.

Depending on how you define "matching," it either occurs on-device or no explicit matching actually occurs. The server gets a payload for each uploaded image and attempts to use its server-side secret to derive its decryption key from the header. If (and only if) the image was a match - as determined by the blinded on-device comparison - it will now have a key to decrypt the rest of the payload (for review). Note that neither client nor server explicitly compare the real database hash to the image hash. The comparison is absorbed into crypto "properties."
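To make that property concrete, here is a drastically simplified Python sketch (my own, not Apple's protocol - the real system uses elliptic-curve blinding and threshold secret sharing) showing just the end result: the voucher decrypts on the server if and only if the image hash was in the database, and the device never learns the outcome:

```python
import hashlib

# Server-side set of known hashes (stand-in for the NCMEC-derived database).
KNOWN_HASHES = {hashlib.sha256(b"known-bad-image").digest()}

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # One-time-pad-style XOR keyed by a hash; illustration only, not secure.
    stream = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, stream))

def device_make_voucher(image: bytes) -> bytes:
    # The device encrypts its safety voucher under a key derived from the
    # image hash, then uploads it without learning whether it matched.
    h = hashlib.sha256(image).digest()
    return xor_cipher(h, b"voucher:" + h)

def server_sees_match(voucher: bytes) -> bool:
    # Only the server can check: a key derived from a database hash
    # decrypts the voucher if and only if the image hash was in the set.
    return any(
        xor_cipher(h, voucher).startswith(b"voucher:") for h in KNOWN_HASHES
    )

print(server_sees_match(device_make_voucher(b"known-bad-image")))  # True
print(server_sees_match(device_make_voucher(b"holiday-photo")))    # False
```

In the real protocol the on-device database is additionally blinded with a server-side secret, so the device can't even run this check against raw hashes; the sketch only captures who learns what.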


Do you agree with these two statements?

1. For every image that's scanned, whether matching or non-matching, some data will be sent to the server.

2. Your phone never learns and has no way to tell which images match and which ones don't.


Ugh. Feel free to expound on any of your vague replies in this thread with actual explanations that make sense under any reasonable interpretation of "on device."


My expectations from Apple would be scaled way back if they weren't preaching fairly consistently that they treat privacy as a "fundamental human right". They've marketed this as not only a competitive difference but also a moral imperative.

To claim that you treat privacy as a fundamental human right is an extraordinary claim that requires significant effort and action to back up. To me, "fundamental human rights" apply to everyone - all humans - including children.

Their commitment to this was already called into question for several reasons, including their partial commitment to E2E as well as their actions in other countries (i.e. China).

When you give your users privacy with numerous conditions attached indicating all the times they don't have it, you aren't giving them privacy. Full stop.

It's like going to someone's home, having them tell you that they are champions for privacy and then going to the bathroom and seeing a damn camera attached to the wall. "Oh, that, don't worry. I only look at the footage if something gets stolen."


Apple's brand won't be sullied until they give-in to a government order to flag other types of illegal images (gay porn in Saudi Arabia, say, or pictures of the Tank Man in China) and are caught doing it.

Apple's rejoinder is that they will simply refuse to do that. And that's great, until you consider that all the iPhones are made in China and China is more than willing to apply immense pressure including but not limited to shutting down Foxconn if they feel strongly about it.


Most of the major tech companies have already acceded to China's requests.

That generally means hosting data about Chinese nationals in China, usually in state-controlled datacenters, and making sure the keys to decrypt content are also local and accessible to state-controlled employees.

You should have little to no expectation of privacy in China, as an example.

For example, AWS is careful to use this language about its China regions: "Amazon Web Services China (Ningxia) Region operated by NWCD".

They used to block KMS services in China as well.

Apple has said it will (generally) follow the law in the countries it operates in. Until we say Apple can make its own laws, that is probably what it'll have to do.


Apple already acquiesced to China in that manner, yes. They store all iCloud data in China and they gave the Chinese government the encryption keys.

This is a bit different in that they're scanning files on client devices, /not/ solely in the cloud.


Also a parent, and I couldn’t more strongly disagree with you.

We cannot protect our children by building a dystopia for them to grow up in, and normalizing this kind of invasive spyware on every device is pretty much guaranteeing that.


> I don't want this crap being sent to my children

You can always not buy your child a device.

Parents will recoil in horror at the suggestion - being told what to do, limiting their child's freedom! Indeed, welcome to our world, where we suffer huge affronts to our freedom and privacy in the name of "the children".


>I found the arguments against this surprisingly uncompelling

I think it just goes to show that people don't actually care about privacy and civil liberties. You can't argue against "think of the children" without being labeled heartless or a pedophile, so no one with real influence will push back - nobody wants to die on the child porn hill. This is what happens when your thought leaders are all cynical and value money and power above all else.

While I'm a privacy advocate I could see that the arguments were fruitless. The popular conception of the constitution today is that it is a joke. People mock liberties like freedom of speech so you just know privacy is something people do not care about.


People do care about privacy, but not in that absolute sense. You think privacy is 100% or 0%, but the vast majority of the public don't think so.


Exactly. This seems a pretty low-intrusion way to deal with an issue that matters a lot to many folks. They already had access to these photos on iCloud; in a way this was something to keep the scanning OFF their servers.

I really continue to doubt this is going to do huge damage to Apple's brand. More likely, others will rush to copy this (or be forced to by governments elected by people who want this).


So you would be in favor of installing cameras in all homes that detects child abuse, murder, and rape in an automated fashion? It can use microphones to detect people in distress. This would prevent many more heinous crimes than what apple is proposing we do and it will only be used for detecting horrible acts.

By your logic this would be acceptable and we could all still claim to value privacy.


We are discussing a specific set of technologies.

That said, cameras are already spreading pretty quickly (check out Ring doorbells and friends), and homeowners are voluntarily registering them with local police departments.

So yes, people don't mind being recorded going into and out of their own houses, and they voluntarily let police review this footage.

Again, I think folks are overstating the "huge" damage to Apple's brand.


People haven't had ring forced on them inside their homes yet.

Like I said in another comment, the principle is whether forcing surveillance on people to prevent crimes is ok. The set of technologies is irrelevant, Apple is going through personal information that has not been voluntarily shared with the public by the content owner to detect criminal activities. I don't see how that is different from installing microphones and cameras inside people's homes with the passive ability to detect crime. Why can't a landlord install this kind of technology in their tenant's apartments?


I don't think so. This is case-by-case reasoning. There is no straight line of logic from scanning iCloud photos to installing cameras in homes.


The principle is that it's fine to install surveillance if we are preventing horrible things from happening.


> You know some late night comedian is going to do some jokes about pedo's having to switch to android. You think that is damaging to apples' brand?

No, just the start of a whole stream of attacks from WhatsApp / Facebook:

https://www.theverge.com/2021/8/6/22613365/apple-icloud-csam...

Also, just read the full range of comments.

The fact that you're saying elsewhere that everyone is misunderstanding and it's all overblown is sort of making my point that it's affecting the brand.

Plus, you're saying other firms capture images so that's fine - no! Those other firms haven't made privacy a central feature of their brand.



