Usually this is done the other way around: servers verifying client devices using a chip the manufacturer put in them and fully trusts. The manufacturer can trust that chip because it's virtually impossible for you (the user) to modify its behavior. However, you can't put your own trusted chip into Apple's servers. So if you don't trust Apple, this improves the trust by... 0%.
Their device says it's been attested. Has it? Who knows? They control the hardware, so they can just make the server attest whatever they want, even if it's not true. It'd be trivial to just use a fake hash for the system volume data. You didn't build the attestation chip. You will never find out.
Happy to be proven wrong here, but at first glance the whole idea seems like a sham. This is security theater. It does nothing.
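To make the objection concrete, here is roughly what the client side of such a check can amount to, sketched with the pyca/cryptography package. The names and shapes are hypothetical, not Apple's actual PCC protocol.

    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def verify_attestation(reported_measurement: bytes,
                           signature: bytes,
                           attestation_key: Ed25519PublicKey,
                           published_image: bytes) -> bool:
        # 1. The signature proves the measurement came from something holding
        #    the attestation key, i.e. hardware Apple built and provisioned.
        try:
            attestation_key.verify(signature, reported_measurement)
        except InvalidSignature:
            return False
        # 2. Compare the reported measurement to the hash of the published image.
        expected = hashlib.sha256(published_image).digest()
        # This only tells you what the signer *claims* is running. Whether the
        # claim matches reality depends entirely on the hardware that produced
        # the measurement, hardware you did not build and cannot inspect.
        return reported_measurement == expected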
If it is all a lie, Apple will lose so much money from class action lawsuits and regulatory penalties.
> It’d be trivial to just use a fake hash
You have to go deeper than that to support this claim. Apple is publishing the source code for the firmware and bootloader, and the software above that is available to researchers.
The volume hash is computed way up in the stack, subject to the chain of trust from these components.
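For anyone unfamiliar with how that chain works: each stage measures the next one before handing off control, so the final value commits to everything below it. A simplified, generic measured-boot sketch (placeholder data, not Apple's exact scheme):

    import hashlib

    def extend(register: bytes, component: bytes) -> bytes:
        # The new register value commits to the old value and the new measurement.
        return hashlib.sha256(register + hashlib.sha256(component).digest()).digest()

    firmware_blob = b"<firmware image bytes>"       # placeholder contents
    bootloader_blob = b"<bootloader bytes>"
    os_volume_blob = b"<system volume bytes>"

    register = b"\x00" * 32                         # root of trust in boot ROM
    register = extend(register, firmware_blob)      # boot ROM measures firmware
    register = extend(register, bootloader_blob)    # firmware measures bootloader
    register = extend(register, os_volume_blob)     # bootloader measures the OS volume
    # 'register' is what ultimately gets signed and attested, which is why the
    # published firmware and bootloader matter: the volume hash is only as
    # trustworthy as every stage that fed into it.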
Are you suggesting that Apple will actually use totally different firmware and bootloaders, just to be able to run different system images that report fake hashes, and do so perfectly, so that differences between the actual execution environment and the attested environment cannot be detected, all while none of the executives, architects, developers, or operators involved in the sham ever leaks? And the nefarious use of the data is never noticed?
At some point this crosses over into “maybe I’m just a software simulation and the entire world and everyone in it are just constructs” territory.
I don't know if they will. It is highly unlikely. But theoretically, it is possible, and very well within their technical capabilities to do so.
It's also not as complicated as you make it sound here. Because Apple controls the hardware, and thus also the data passing into attestation, they can freely attest whatever they want - no need to truly run the whole stack.
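In sketch form, reusing the toy primitives from the verification sketch upthread (hypothetical code, not a real exploit): the dishonest path signs a fabricated measurement with the genuine attestation key, and every downstream check still passes.

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    attestation_key = Ed25519PrivateKey.generate()   # stands in for the vendor-provisioned key
    published_image = b"<the clean system image researchers can inspect>"
    actually_running = b"<a modified image with extra logging>"

    # An honest device signs what it measured; a device whose maker controls
    # the whole pipeline can sign the hash of the published image instead,
    # regardless of what is actually running.
    real_measurement = hashlib.sha256(actually_running).digest()      # never leaves the box
    claimed_measurement = hashlib.sha256(published_image).digest()
    signature = attestation_key.sign(claimed_measurement)
    # A verifier that only sees (claimed_measurement, signature) accepts it.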
Usually attestation systems are designed so that neither side alone has everything needed to compute a result that will satisfy the attestation check, which is why both a server-side and a client-side secret are involved in the process.
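For context, the usual shape of such an exchange is a challenge-response: the verifier contributes a fresh nonce and the prover contributes a key held only in its secure hardware, so neither side alone can precompute a valid answer. A generic sketch, not any specific vendor's protocol:

    import hashlib
    import os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    device_key = Ed25519PrivateKey.generate()           # lives in the prover's secure element
    measurement = hashlib.sha256(b"<system volume>").digest()

    nonce = os.urandom(32)                              # verifier's challenge, prevents replay
    response = device_key.sign(nonce + measurement)     # prover binds its measurement to the nonce

    # Verifier checks the response with the device's public key; raises on failure.
    device_key.public_key().verify(response, nonce + measurement)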
The big issue with Apple is that their attestation infrastructure is wholly private to them; you can't self-host it. (Android is a bit similar in that applications using Google's attestation system have the same limitation, but you can in theory set up your own.)
Attestation requires a root of trust, i.e. if data hashes are involved in the computation, you have to be able to trust that the hardware is actually hashing the real data. Apple has this for your device, because they built it. You don't have it for their server, which makes the whole thing meaningless. The maximum information you can get out of this is "Apple trusts Apple".
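Put differently: the verification bottoms out at a set of root certificates that ships inside the client OS, supplied by Apple. A toy sketch of where the chain walk ends (hypothetical structure and names; real chains also verify signatures at each step):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Cert:
        subject: str
        issuer: str                      # who vouches for this cert

    def chain_is_trusted(chain: list[Cert], trusted_roots: set[str]) -> bool:
        # Walk leaf to root: each cert must be issued by the next one up,
        # and the chain must terminate at a root the client already trusts.
        for child, parent in zip(chain, chain[1:]):
            if child.issuer != parent.subject:
                return False
        return chain[-1].issuer in trusted_roots

    chain = [Cert("pcc-node-attestation-key", "Apple PCC CA"),   # made-up names
             Cert("Apple PCC CA", "Apple Root CA")]
    print(chain_is_trusted(chain, trusted_roots={"Apple Root CA"}))  # True
    # trusted_roots is baked into the client by Apple, so the strongest
    # statement you can derive is exactly "Apple trusts Apple".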
Under the assumption that Apple is telling the truth about what the server hardware is doing, this could protect against unauthorized modifications to the server software by third parties.
If, however, we assume Apple itself is untrustworthy (for example, because the US government secretly ordered them to run a different system image with their spyware installed), then this will not help you detect that at all.