Comment by stogot 5 days ago

I’m not sure I understand the threat model for this. Why would I need to worry about my enclave being identifiable? Or is this a business use case?

Or why buy used devices if this is a risk?

coppsilgold 5 days ago

It's a privacy consideration. If you want to juggle multiple private profiles on a single device, extreme care needs to be taken to ensure that at most one profile (the one tied to your real identity) has access to either attestation or DRM. Better yet, have both permanently disabled.

Hardware fingerprinting in general is a difficult thing to protect against, and in an active probing scenario where two apps try to determine whether they are on the same device it's all but impossible. But having a tattletale chip in your CPU an API call away doesn't make the problem easier, especially when it squawks manufacturer-traceable serials.
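
To make "an API call away" concrete: on Android, any app can query a device-unique DRM identifier with no permission prompt at all. A minimal Kotlin sketch (error handling simplified):

    import android.media.MediaDrm
    import java.util.UUID

    // Well-known UUID of the Widevine DRM scheme.
    val WIDEVINE_UUID: UUID = UUID.fromString("edef8ba9-79d6-4ace-a3c8-27dcd51d21ed")

    // Returns a stable device-unique Widevine identifier, or null if
    // Widevine is absent or disabled. No runtime permission is required.
    fun widevineDeviceId(): ByteArray? = try {
        val drm = MediaDrm(WIDEVINE_UUID)
        val id = drm.getPropertyByteArray(MediaDrm.PROPERTY_DEVICE_UNIQUE_ID)
        drm.close() // API 28+; use release() on older versions
        id
    } catch (e: Exception) {
        null
    }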

Remote attestation requires collusion with at least one intermediary; DRM such as Widevine has no intermediaries. You expose your HWID (Widevine public key and certificate) directly to the license server, of which there are many, under the control of various entities (though Google does need to authorize them with certificates). And this is done via an API, so any app in collusion with any license server can start harvesting traceable smartphone serials.
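
Here's a sketch of the license-request half (initData and mimeType are placeholders that would normally come from the media stream; the point is that the opaque request blob carries the device's Widevine client certificate to whatever server the app picks):

    import android.media.MediaDrm

    // Hypothetical sketch: builds the opaque payload an app would POST
    // to a license server of its choosing.
    fun buildLicenseRequest(drm: MediaDrm, initData: ByteArray, mimeType: String): ByteArray {
        val session = drm.openSession()
        val request = drm.getKeyRequest(
            session, initData, mimeType,
            MediaDrm.KEY_TYPE_STREAMING,
            null // optional app-defined parameters
        )
        drm.closeSession(session)
        // request.data includes the device's Widevine client ID
        // (certificate chain, absent privacy mode): a hardware-linked
        // identifier handed to whichever server receives it.
        return request.data
    }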

Using Widevine for this purpose breaks Google's ToS, but you would need to catch an app doing it (and also intercept the license server's certificate) and then prove it, which may be all but impossible: an app doing it could just have a remote code execution "vulnerability" and issue Widevine license requests in a targeted or infrequent fashion. Note that any RCE exploit in any app would also allow this with no privilege escalation.

CGMthrowaway 5 days ago

For most individuals it usually doesn't matter. It might matter if you have an adversary: e.g. you are a journalist crossing borders, a researcher in a sanctioned country, or an organization trying to avoid cross-tenant linkage.

Remote attestation shifts trust from user-controlled software to manufacturer‑controlled hardware identity.

It's a gun with a serial number. The Fast and Furious scandal of the Obama years was traced and proven with exactly this kind of serial tracking.

  • saghm 5 days ago

    The scandal you cited was that guns controlled by the federal government ended up in the hands of criminals without any obvious, legitimate path for that to happen; there was no good reason for the guns to have left the government's possession in the first place.

    There's not really an equivalent here for a computer owned by an individual, because it's totally normal for someone to sell or dispose of a computer, and no one expects them to be responsible for whoever gets their hands on it after that. If you prove a criminal owns a computer that I owned before, then what? Prosecution for failing to protect my computer from thieves, or for reselling it, or for gifting it to a neighbor or family friend? Shifting the trust doesn't matter if what gets exposed isn't actually damaging in any way, and that's what the parent comment is asking about.

    The first two examples you give seem to be about an unscrupulous government punishing someone for owning a computer that it considers tainted, but it honestly doesn't seem that believable that a government willing to do that would require a burden of proof so high as to need cryptographic attestation before deciding something like that. I don't have a rebuttal for "an organization trying to avoid cross-tenant linkage", though, because I'm not sure I even understand what it means; an example would probably be helpful.

storystarling 5 days ago

I assume the use case here is mostly for backend infrastructure rather than consumer devices. You want to verify that a machine has booted a specific signed image before you release secrets like database keys to it. If you can't attest to the boot state remotely, you don't really know if the node is safe to process sensitive data.
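
A minimal verifier-side sketch of that gate, assuming the node answers a fresh challenge with a TPM-signed quote over its boot measurements (real TPM2 quotes wrap these fields in a TPMS_ATTEST structure; this flattens them for illustration):

    import java.security.PublicKey
    import java.security.Signature

    // Release secrets only if the TPM-resident attestation key (AK),
    // enrolled out of band, vouches for the expected boot measurements
    // and the response is fresh.
    fun shouldReleaseSecrets(
        akPublicKey: PublicKey,       // AK public half, trusted in advance
        nonce: ByteArray,             // fresh challenge we sent the node
        reportedPcrDigest: ByteArray, // boot measurements the node claims
        signature: ByteArray,         // TPM signature over nonce + digest
        expectedPcrDigest: ByteArray, // digest of the signed image we built
    ): Boolean {
        val sig = Signature.getInstance("SHA256withECDSA").apply {
            initVerify(akPublicKey)
            update(nonce)             // binds the quote to this challenge (anti-replay)
            update(reportedPcrDigest)
        }
        return sig.verify(signature) &&
            reportedPcrDigest.contentEquals(expectedPcrDigest)
    }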

  • fc417fc802 4 days ago

    I'm confused. People are talking about remote attestation, which I thought was used for stuff like SGX: a system in an otherwise untrusted state loads a blob of software into an enclave and attests to that fact.

    Whereas the state of the system as a whole immediately after it boots can be attested with secure boot and a TPM-sealed secret. No manufacturer keys involved (at least AFAIK).

    I'm not actually clear which this is. Are they doing something special for runtime integrity? How are you even supposed to confirm that a system hasn't been compromised? I thought the only realistic way to have any confidence was to reboot it.

unixhero 5 days ago

At this point these are just English sentences. I am not worried about this threat model at all.