coppsilgold 5 days ago

Remote attestation only works because your CPU's secure enclave has a private key burned (fused) into it at the factory. The manufacturer then provisions it with a digital certificate for its public key.

Every time you perform an attestation, the public key (and certificate) is divulged, which makes it a unique identifier - one that can be traced to the point of sale, and, when buying a used device, to the point of resale, since the new owner can be linked to the old one.

The designers of these systems make an effort to increase privacy by using intermediaries that convert the identifier into an ephemeral one, which is then used as the attestation key.

This does not change the fact that if the party you are attesting to colludes with the intermediary, they can unmask you. If they log the attestations and the EK->AIK conversions, the database can be used to unmask you in the future.
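
A minimal sketch of that flow and of the collusion risk, assuming Python's third-party `cryptography` package (Ed25519 keys and the message shapes are illustrative stand-ins; real TPMs use RSA/ECC keys and a richer protocol):

  from cryptography.hazmat.primitives import serialization
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

  # Factory: the Endorsement Key (EK) is fused into the device and the
  # manufacturer certifies ek_pub - a stable, point-of-sale-traceable ID.
  ek_priv = Ed25519PrivateKey.generate()
  ek_pub = ek_priv.public_key()

  # Device: generate an ephemeral Attestation Identity Key (AIK) and ask
  # the intermediary (privacy CA) to certify it, proving EK possession.
  aik_priv = Ed25519PrivateKey.generate()
  aik_pub = aik_priv.public_key().public_bytes(
      serialization.Encoding.Raw, serialization.PublicFormat.Raw)
  request_sig = ek_priv.sign(aik_pub)

  # Intermediary: verify against the known EK, then log the conversion.
  ek_pub.verify(request_sig, aik_pub)  # raises InvalidSignature on failure
  ek_aik_log = {aik_pub: ek_pub}

  # Verifier: sees only the AIK, so the attestation looks pseudonymous.
  attestation = aik_priv.sign(b"verifier-challenge-nonce")

  # But if the verifier and the intermediary pool their logs, the AIK
  # maps straight back to the unique EK - the unmasking described above.
  unmasked_ek = ek_aik_log[aik_pub]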

Also note that nothing prevents you from forging attestations if you can source a private-public key pair and a valid certificate, either by extracting them from a compromised device or with help from an insider at the factory. DRM systems tend to be separate from the remote attestation ones, but the principles are virtually identical. Some pirate content producers do their deeds with compromised DRM private keys.

b112 5 days ago

I tend to buy such things with cash, in person.

People dislike cash for some strange reason, then complain about tracking. People also hand out their mobile number like candy. Same issue.

  • BrandoElFollito 4 days ago

    > People dislike cash for some strange reason

    In my case it is because I would never have the right amount with me, in the right denominations. Google Pay always has this covered.

    Also, you need to remember to take one more thing with you, and refill it occasionally. Unlike fuel, you do not know how much you will need, or when.

    It can get lost or destroyed, and is not (usually) replaceable.

    I am French, currently in the US. If I need to change 100 USD into small denominations, I will need to go to the bank and hope they will do that for me. Or not. Or not without some official paper from someone.

    Ah yes, and I am in the US and the Euro is not an accepted currency here. So I need to take my 100 € to a bank and hope I can get 119.39 USD. In the right denominations.

    What will I do with the 34.78 USD left when I am back home? I have a chest of money from all over the world. I showed it once to my kids when they were young, told a bit about the world and then forgot about it.

    Money also weighs quite a lot. And when it does not weigh much, it gets lost or thrown away with some other papers. Except if the bills are neatly folded in a wallet, which I will forget.

    I do not care about being traced when going to the supermarket. If I need to do untraceable stuff I will get money from the ATM. Ah crap, they will trace me there.

    So the only solution is to get my salary in cash, which is forbidden in France. Or take out some small amounts from time to time. Which I will forget, and I have better things to do.

    Cash sucks.

    Sure, if we go cashless and terrible things happen (cyberwar, solar flare, software issues) then we are screwed. But either the situation unscrews itself, or we will have much, much, much bigger issues than money -- we will need to go full survival mode, apocalypse-movie style.

warkdarrior 5 days ago

Anonymous-attestation protocols are well known in cryptography, and some are standardized: https://en.wikipedia.org/wiki/Direct_Anonymous_Attestation

  • coppsilgold 5 days ago

    > Anonymous-attestation protocols are well known in cryptography, and some are standardized: https://en.wikipedia.org/wiki/Direct_Anonymous_Attestation

    Which does exactly what I said. Fully zero-knowledge attestation isn't practical, as a single compromised key would give rise to a service that attests for everyone.

      The solution first adopted by the TCG (TPM specification v1.1) required a trusted third-party, namely a privacy certificate authority (privacy CA). Each TPM has an embedded RSA key pair called an Endorsement Key (EK) which the privacy CA is assumed to know. In order to attest, the TPM generates a second RSA key pair called an Attestation Identity Key (AIK). It sends the public AIK, signed by EK, to the privacy CA who checks its validity and issues a certificate for the AIK. (For this to work, either a) the privacy CA must know the TPM's public EK a priori, or b) the TPM's manufacturer must have provided an endorsement certificate.) The host/TPM is now able to authenticate itself with respect to the certificate. This approach permits two possibilities for detecting rogue TPMs: firstly the privacy CA should maintain a list of TPMs identified by their EK known to be rogue and reject requests from them, secondly if a privacy CA receives too many requests from a particular TPM it may reject them and blocklist the TPM's EK. The number of permitted requests should be subject to a risk management exercise. This solution is problematic since the privacy CA must take part in every transaction and thus must provide high availability whilst remaining secure. Furthermore, privacy requirements may be violated if the privacy CA and verifier collude. Although the latter issue can probably be resolved using blind signatures, the first remains.
    
    
    AFAIK no one uses blind signatures. It would enable the formation of commercial attestation farms.
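
    A sketch of the rogue-TPM controls the quoted passage describes, as a hypothetical privacy CA in Python (the quota and the in-memory storage are illustrative, not from any spec):

      from collections import Counter

      BLOCKLIST = set()    # EKs known to be rogue
      MAX_REQUESTS = 100   # "subject to a risk management exercise"
      request_counts = Counter()

      def issue_aik_certificate(ek_pub, aik_pub):
          if ek_pub in BLOCKLIST:
              raise PermissionError("EK is blocklisted as rogue")
          request_counts[ek_pub] += 1
          if request_counts[ek_pub] > MAX_REQUESTS:
              # Too many AIK requests from one TPM: blocklist its EK.
              BLOCKLIST.add(ek_pub)
              raise PermissionError("request quota exceeded")
          return b"CERT:" + aik_pub  # stand-in for a real X.509 certificate
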
    • arianvanp 5 days ago

      Apple uses blind signatures for attestation. It's how they avoid captchas at Cloudflare and Fastly in their Private Relay product.

      https://educatedguesswork.org/posts/private-access-tokens/

      • georgyo 5 days ago

        If I'm reading any of this correctly, this doesn't apply to hardware attestation.

        It seems Apple has a service, with an easily rotated key and an agreement with providers. If the key _Apple_ uses is compromised, they can rotate it.

        BUT, Apple knows _EXACTLY_ who I am. I attest to them using my hardware, so they know _EXACTLY_ which hardware I'm using. They can ban me or my hardware. Then their centralized service gives me a blind token. But Apple may still know exactly who owns which blind tokens.

        However, I cannot generate blind tokens on my own. I _MUST_ talk to some centralized service that can identify me. If that were not the case, then any single compromised device could generate infinite blind tokens, rendering all the tokens useless.

        • coppsilgold 4 days ago

          The idea behind blind signatures is that the server will give you a signed token which is blinded and you can un-blind it on your end and then use it. The consumer of the token will not be able to collude with the issuer of the token to figure out who it was given to. There is more info here: <https://blog.cloudflare.com/privacy-pass-the-math/>
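
          A toy sketch of the blinding math in Python 3.8+ (textbook RSA with tiny hardcoded primes and no padding or hashing, purely to show why the issuer cannot link tokens; real schemes use hardened variants):

            p, q, e = 61, 53, 17
            n = p * q                           # issuer's RSA modulus
            d = pow(e, -1, (p - 1) * (q - 1))   # issuer's private exponent

            token = 42   # the client's token, a message < n
            r = 19       # client's secret blinding factor, coprime to n

            # Client blinds the token; the issuer never sees `token` itself.
            blinded = (token * pow(r, e, n)) % n

            # Issuer signs the blinded value with its private key.
            blind_sig = pow(blinded, d, n)

            # Client unblinds: blind_sig * r^-1 = token^d mod n.
            sig = (blind_sig * pow(r, -1, n)) % n

            # Anyone can verify with the public key, but the issuer cannot
            # link (token, sig) back to the blinded request it signed.
            assert pow(sig, e, n) == token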

          I don't know if that's what Apple actually does. If it is, once it gets popular enough as an anti-bot measure there may be farms of Apple devices selling these tokens. It's a separate system from remote attestation anyhow.

    • zimmerfrei 5 days ago

      I don't think that a 100% anonymous attestation protocol is what most people need and want.

      It would be sufficient to be able to freely choose who you trust as proxy for your attestations *and* the ability to modify that choice at any point later (i.e. there should be some interoperability). That can be your Google/Apple/Samsung ecosystem, your local government, a company operating in whatever jurisdiction you are comfortable with, etc.

      • sam_lowry_ 5 days ago

        Most businesses do not need origin attestation; they need history attestation.

        I.e., attestation from the moment they buy the device from a trusted source and initialize it.

  • pseudohadamard 5 days ago

    But what's it attesting? Their tagline "Every system starts in a verified state and stays trusted over time" should be "Every system starts in a verified state of 8,000 yet-to-be-discovered vulns and stays in that vulnerable state over time". The figure is made up, but see for example https://tuxcare.com/blog/the-linux-kernel-cve-flood-continue.... So what you're attesting is that all the bugs are still present, not that the system is actually secure.

    • chris_wot 5 days ago

      Well, if a rootkit gets installed later, attestation might be handy? Or am I missing something?

      • direwolf20 4 days ago

        It comes rootkitted from the factory, and if you remove the rootkit, the device stops working.

stogot 5 days ago

I’m not sure I understand the threat model for this. Why would I need to worry about my enclave being identifiable? Or is this a business use case?

Or why buy used devices if this is a risk?

  • coppsilgold 5 days ago

    It's a privacy consideration. If you desire to juggle multiple private profiles on a single device, extreme care needs to be taken to ensure that at most one profile (the one tied to your real identity) has access to either attestation or DRM. Or better yet, have both permanently disabled.

    Hardware fingerprinting in general is a difficult thing to protect against - and in an active probing scenario, where two apps try to determine if they are on the same device, it's all but impossible. But having a tattletale chip in your CPU an API call away doesn't make the problem easier. Especially when it squawks manufacturer-traceable serials.

    Remote attestation requires collusion with at least an intermediary; DRM such as Widevine has no intermediaries. You expose your HWID (Widevine public key & cert) directly to the license server, of which there are many, under the control of various entities (Google does need to authorize them with certificates). And this is done via API, so any app in collusion with any license server can start acquiring traceable smartphone serials.

    Using Widevine for this purpose breaks Google's ToS, but you would need to catch an app doing it (and also intercept the license server's certificate) and then prove it, which may be all but impossible, as an app doing it could just have a remote code execution "vulnerability" and issue Widevine license requests in a targeted or infrequent fashion. Note that any RCE exploit in any app would also allow this, with no privilege escalation.
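
    A hedged sketch of what that collusion looks like server-side, in Python (the request layout and field name are hypothetical; real Widevine messages are protobufs, but the stable device certificate is the point):

      import hashlib

      seen_devices = {}  # HWID -> identities observed with that device

      def handle_license_request(request, app_user_id):
          # The device certificate is stable and manufacturer-issued, so
          # its hash serves as a durable hardware identifier across apps.
          hwid = hashlib.sha256(request["device_certificate"]).hexdigest()
          seen_devices.setdefault(hwid, set()).add(app_user_id)
          # Any two profiles sharing a HWID are now linked to one device.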

  • CGMthrowaway 5 days ago

    For most individuals it usually doesn't matter. It might matter if you have an adversary, e.g. you are a journalist crossing borders, a researcher in a sanctioned country, or an organization trying to avoid cross-tenant linkage.

    Remote attestation shifts trust from user-controlled software to manufacturer‑controlled hardware identity.

    It's a gun with a serial number. The Fast and Furious scandal of the Obama years was traced and proven with this kind of thing.

    • saghm 5 days ago

      The scandal you cited was that guns controlled by the federal government ended up owned by criminals; there was no obvious, legitimate way for those guns to have left the government's possession in the first place.

      There's not really an equivalent here for a computer owned by an individual, because it's totally normal for someone to sell or dispose of a computer, and no one expects someone to be responsible for who else might get their hands on it at that point. If you prove a criminal owns a computer that I owned before, then what? Prosecution for failing to protect my computer from thieves, or for reselling it, or gifting it to a neighbor or family friend? Shifting the trust doesn't matter if what gets exposed isn't actually damaging in any way, and that's what the parent comment is asking about.

      The first two examples you give seem to be about an unscrupulous government punishing someone for owning a computer that they consider tainted, but it honestly doesn't seem that believable that a government who would do that would require a burden of proof so high as to require cryptographic attestation to decide on something like that. I don't have a rebuttal for "an organization trying to avoid cross-tenant linkage" though because I'm not sure I even understand what it means: an example would probably be helpful.

  • storystarling 5 days ago

    I assume the use case here is mostly for backend infrastructure rather than consumer devices. You want to verify that a machine has booted a specific signed image before you release secrets like database keys to it. If you can't attest to the boot state remotely, you don't really know if the node is safe to process sensitive data.
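
    A minimal sketch of that gate in Python (the golden digest, the key, and the transport are placeholders; a real deployment verifies a signed TPM quote over the measurements, not a bare hash):

      import hmac

      # Digest of the approved boot image's measurements (placeholder).
      GOLDEN = bytes.fromhex(
          "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae")
      DB_KEY = b"example-database-key"  # the secret to release

      def release_secret(reported_measurement):
          # Constant-time compare; unverified boot states get nothing.
          if not hmac.compare_digest(reported_measurement, GOLDEN):
              raise PermissionError("node did not boot the approved image")
          return DB_KEY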

    • fc417fc802 4 days ago

      I'm confused. People are talking about remote attestation, which I thought was used for stuff like SGX: a system in an otherwise untrusted state loads a blob of software into an enclave and attests to that fact.

      Whereas the state of the system as a whole immediately after it boots can be attested with secure boot and a TPM-sealed secret. No manufacturer keys involved (at least AFAIK).
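
      A toy model of that extend-and-seal idea in Python (hashes stand in for the TPM's hardware PCRs; note that no vendor key appears anywhere):

        import hashlib

        def extend(pcr, measurement):
            # PCR extend: new = H(old || H(measurement)). Order-sensitive,
            # so any change in the boot chain changes the final value.
            return hashlib.sha256(
                pcr + hashlib.sha256(measurement).digest()).digest()

        pcr = bytes(32)  # PCRs reset to zero at boot
        for stage in (b"firmware", b"bootloader", b"kernel"):
            pcr = extend(pcr, stage)

        SEALED_TO = pcr  # policy recorded when the secret was sealed

        def unseal(current_pcr, secret):
            if current_pcr != SEALED_TO:
                raise PermissionError("boot state changed; stays sealed")
            return secret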

      I'm not actually clear which this is. Are they doing something special for runtime integrity? How are you even supposed to confirm that a system hasn't been compromised? I thought the only realistic way to have any confidence was to reboot it.

  • unixhero 5 days ago

    At this point these are just English sentences. I am not worried about this threat model at all.