Posted by todsacerdoti 3 days ago
https://research.samsung.com/blog/The-Next-New-Normal-in-Com...
So unless they offer a way for us to run the "cloud services" on our own hardware where we can strictly monitor and firewall all network activity, they are almost guaranteed to be misusing that data, especially given Apple's proven track record of giving in to government demands for data access (see China).
You are right. Apple is fully in control of the servers and the software, and there is no way for a customer to verify Apple's claims. Nevertheless, system transparency is a useful concept. It can effectively reduce the number of things you have to blindly trust to a short and explicit list. Conversely, it forces the operator, in this case Apple, to explicitly lie if they want to cheat. As others have pointed out, that is quite a business risk.
As for transparency logs, they are an amazing technology which I highly recommend taking a look at in case you don't know what they are or how they work. Check out transparency.dev or the project I'm involved in, sigsum.org.
> they are almost guaranteed to be misusing that data
That is very unlikely because of the liability, as others have pointed out. They are making claims which the Apple PCC architecture helps make falsifiable.
Transparency logs make that verifiable; it's more or less their whole point. (Strictly speaking, you can't make faking it impossible, but you can make it arbitrarily expensive.)
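To make the "arbitrarily expensive to fake" claim concrete, here is a minimal sketch of how a client verifies that an entry is included in a Merkle-tree log. The hashing scheme follows the Certificate Transparency convention (RFC 6962/9162 domain separation); all names here are illustrative, not any particular log's API.

```python
import hashlib

def leaf_hash(data: bytes) -> bytes:
    # 0x00 prefix for leaves (domain separation per RFC 6962)
    return hashlib.sha256(b"\x00" + data).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    # 0x01 prefix for interior nodes, so leaves can't masquerade as nodes
    return hashlib.sha256(b"\x01" + left + right).digest()

def verify_inclusion(entry: bytes, index: int, tree_size: int,
                     proof: list[bytes], root: bytes) -> bool:
    """Recompute the root from an entry and its audit path (RFC 9162
    algorithm) and compare against the log's signed tree head."""
    if index >= tree_size:
        return False
    fn, sn = index, tree_size - 1
    r = leaf_hash(entry)
    for p in proof:
        if sn == 0:
            return False
        if fn & 1 or fn == sn:
            r = node_hash(p, r)
            if not fn & 1:
                # right-shift until the low bit is set or fn is zero
                while fn & 1 == 0 and fn != 0:
                    fn >>= 1
                    sn >>= 1
        else:
            r = node_hash(r, p)
        fn >>= 1
        sn >>= 1
    return sn == 0 and r == root
```

Faking an inclusion proof for an entry that isn't in the tree requires finding a SHA-256 collision, which is what makes cheating "arbitrarily expensive" rather than merely forbidden.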
Also, if they were "transferring your data elsewhere" it would be a GDPR violation. Ironically wrt your China claim, it would also be illegal in China, which does in fact have privacy laws.
If you don't trust the boot process/code signing system then you'd want to do something else, like ask the server to show you parts of its memory on demand in case you catch it lying to you. (Not sure if that's doable here because the server has other people's data on it, which is the whole point.)
(You need to prove that the system is showing you the server your data is present on, and not just showing you an innocuous one and actually processing your data on a different evil one.)
Even if they were running open source software with cryptographically verified / reproducible builds, it's still running on their hardware: any component, the OS, the kernel, or even the hardware itself can be hooked to exfiltrate unencrypted data.
Companies like Apple don't give a crap about GDPR violations (you can look at their "DMA compliance" BS games to see to what extent they're willing to go to skirt regulations in the name of profit).
The log is publicly accessible and append-only, so such an event would not go unnoticed. Not sure what a non-transparent log is.
Maybe I'm not being clear; transparency logs solve the problem of supply chain attacks (that is, Apple can use the logs to some degree to ensure some 3rd party isn't modifying their code), but I'm trying to say Apple themselves ARE the bad actor: they will exfiltrate customer data for their own profit (to personalize ads, or continue building user profiles, or sell to governments, and so on).
davidczech has already explained it quite well, but I'll try explaining it a different way.
Consider the verification of a signed software update. The verifier is e.g. apt-get, rpm, macOS Update, Microsoft Update, or whatever your OS uses. They all have some trust policy that contains a public key, and the verifier only trusts software signed by that key.
Now imagine a verifier with a trust policy that mandates that all signed software must also be discoverable in a transparency log. Such a trust policy would need to include:
- a pubkey trusted to make the claim "I am your trusted software publisher and this software is authentic", i.e. it is from Debian / Apple / Microsoft or whomever is the software publisher.
- a pubkey trusted to make the claim "I am your trusted transparency log and this software, or rather the publisher's signature, has been included in my log and is therefore discoverable"
The verifier would therefore require the following in order to trust a software update:
- the software (and its hash)
- a signature over the software's hash, done by the software publisher's key
- an inclusion proof from the transparency log
There is another layer that could be added called witness cosigning, which reduces the amount of trust you need to place in the transparency log. For more on that see my other comments in this thread.
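A rough sketch of what witness cosigning buys you, under the same toy-signature assumption as above (HMAC standing in for real witness signatures, names illustrative): the client trusts a tree head only once enough independent witnesses have cosigned it, so a lying log must also corrupt a quorum of third parties.

```python
import hashlib
import hmac

def cosign(witness_key: bytes, tree_head: bytes) -> bytes:
    # HMAC stands in for a real witness signature over the tree head
    return hmac.new(witness_key, tree_head, hashlib.sha256).digest()

def quorum_met(tree_head: bytes, cosigs: dict[bytes, bytes],
               trusted_witnesses: set[bytes], threshold: int) -> bool:
    """Accept a tree head only if at least `threshold` known witnesses
    have validly cosigned it; unknown or bad cosignatures don't count."""
    valid = sum(
        1 for w, sig in cosigs.items()
        if w in trusted_witnesses
        and hmac.compare_digest(cosign(w, tree_head), sig)
    )
    return valid >= threshold
```

With a 2-of-3 policy, for example, the log operator alone can no longer present different views of the log to different clients without at least two witnesses going along with it.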
My concern is that Apple themselves will include code in their officially signed builds that extracts customer data. All of these security measures cannot protect against that because Apple is a "trusted software publisher" in the chain.
All of this is great stuff, Apple makes sure someone else doesn't get the customer data and they remain the only ones to monetize it.
That's the whole point of the transparency log. Anything published, and thus to be trusted by client devices, is publicly inspectable.
The source code provided is for reference to help with disassembly.
Edit link: https://security.apple.com/documentation/private-cloud-compu...
One of the projects I'm working on however intends to enable just that. See system-transparency.org for more. There's also glasklarteknik.se.
I think we have different understandings of what the transparency log is utilized for.
The log is used effectively as an append-only hash set of trusted software hashes a PCC node is allowed to run, accomplished using Merkle trees. The client device (iPhone) uses the log to determine whether the software measurements from an attestation should be accepted or rejected.
https://security.apple.com/documentation/private-cloud-compu...
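A simplified model of that client-side check (this is an illustration of the mechanism, not Apple's actual wire format or API): the device maintains the set of release measurements it has seen proven included in the log, and rejects any attestation whose measurement is outside that set.

```python
import hashlib

def measurement(software_image: bytes) -> bytes:
    # Stand-in for the attested measurement (hash) of a node's software
    return hashlib.sha256(software_image).digest()

class PCCClient:
    """Toy model of a client enforcing 'only run log-published software'."""

    def __init__(self) -> None:
        self.allowed: set[bytes] = set()

    def observe_log_entry(self, release_hash: bytes) -> None:
        # In reality this would happen only after verifying a Merkle
        # inclusion proof against a signed tree head, not on bare trust.
        self.allowed.add(release_hash)

    def accept_attestation(self, attested_measurement: bytes) -> bool:
        # Reject any node whose measurement was never published to the log
        return attested_measurement in self.allowed
```

The point of the append-only structure is that anything landing in `allowed` is, by construction, visible to every auditor watching the log, so Apple cannot ship a secret build to some nodes without it being discoverable.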
Bottom line, I just hope that there will be a big checkbox in the iPhone's settings that completely turns off all "cloud compute" for AI scenarios (checked by default) and I hope it gets respected everywhere. But they're making such a big deal of how "private" this data exfiltration service is that I fear they plan to just make it default on (or not even provide an opt-out at all).
It is so much more than that, but you are entitled to your own opinion.
There's a key signing ceremony with a third-party auditor watching; it seems to rely on trusting them together with the secure boot process. But there are other things you can add to this, basically along the lines of making the machine continually prove that it behaves like the system described in the log.
They don't control all of the service though; part of the system is that the server can't identify the user because everything goes through third party proxies owned by several different companies.
> Companies like Apple don't give a crap about GDPR violations
GDPR fines are 4% of the company's yearly global revenue. If you're a cold logical profit maximizer, you're going to care about that a lot!
Beyond that, they've published a document saying all this stuff, which means you can sue them for securities fraud if it turns out to be a lie. It's illegal for US companies to lie to their shareholders.
Apple has lied to shareholders before. Remember those "what happens on your iPhone, stays on your iPhone" billboards they used back in the day to fool everyone into thinking Apple cares about privacy? A couple of years later they were proudly announcing how everyone's iPhone would scan their files and literally send them to law enforcement if they matched some opaque government-controlled database of hashes (yes, they eventually backed out of that plan, but not before massive public outcry and a few "you're holding it wrong" explanations).
So sue them.
> how everyone's iPhone will scan their files and literally send them to law enforcement
That only applied if you opted into a cloud service, was a strict privacy improvement because it came alongside end-to-end encryption in the cloud, and I think was mandated by upcoming EU regulations (although I think the regulations were changed, so it was dropped).
Note in the US service providers are required to report CSAM to NCMEC if they see it; it's literally the only thing they're required to do. But NCMEC is not "law enforcement" or "government", it's a private organization specially named in the law. Very important distinction because if anyone does give your private information to law enforcement you'd lose your 4th Amendment rights over it, since the government can share it with itself.
(I think it may actually be illegal to proactively send PII to law enforcement without them getting a subpoena first, but don't remember. There's an exception for emergency situations, and those self service portals that large corporations have are definitely questionable here.)