louwrentius 8 hours ago

I find the article a difficult read for someone not versed in “confidential computing”. It felt written for insiders and/or people smarter than me.

However, I feel that “confidential computing” is some kind of story to justify something that isn’t possible: keeping data ‘secure’ while running code on hardware maintained by others.

Any kind of encryption means that there is a secret somewhere and if you have control over the stack below the VM (hypervisor/hardware) you’ll be able to read that secret and defeat the encryption.

Maybe I’m missing something, though I believe that if the data is critical enough, it’s required to have 100% control over the hardware.

Now go buy an Oxide rack (no I didn’t invest in them)

crote 7 hours ago

The unique selling point here is that you don't need to trust the hypervisor or operator, as the separation and per-VM encryption are managed by the CPU itself.

The CPU itself can attest that it is running your code and that your dedicated slice of memory is encrypted using a key inaccessible to the hypervisor. Provided you still trust AMD/Intel to not put backdoors into their hardware, this allows you to run your code while the physical machine is in possession of a less-trusted party.
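[A toy sketch of the attestation flow described above. All names are hypothetical, and an HMAC with a shared key stands in for the CPU vendor's asymmetric signature scheme, which is a deliberate simplification; real attestation uses a key fused into the CPU at manufacture and a vendor certificate chain.]

```python
import hashlib, hmac, secrets

# Stand-in for the vendor's signing key; in reality this is asymmetric and
# the verifier only holds the vendor's public certificate chain.
VENDOR_KEY = secrets.token_bytes(32)

def cpu_attest(launched_image: bytes) -> dict:
    """What the (trusted) CPU does: measure whatever it actually booted,
    then sign that measurement."""
    measurement = hashlib.sha256(launched_image).hexdigest()
    signature = hmac.new(VENDOR_KEY, measurement.encode(), "sha256").hexdigest()
    return {"measurement": measurement, "signature": signature}

def verify(report: dict, expected_image: bytes) -> bool:
    """What you do on a machine you trust: check the signature chains back
    to the vendor, then check the measurement matches the image you meant
    to run. Only then do you hand the VM any secrets."""
    expected = hashlib.sha256(expected_image).hexdigest()
    sig_ok = hmac.compare_digest(
        report["signature"],
        hmac.new(VENDOR_KEY, report["measurement"].encode(), "sha256").hexdigest(),
    )
    return sig_ok and report["measurement"] == expected

my_image = b"my guest kernel + initrd"
shim_image = b"my guest kernel + initrd + exfiltration shim"

assert verify(cpu_attest(my_image), my_image)        # genuine boot: passes
assert not verify(cpu_attest(shim_image), my_image)  # tampered boot: fails
```

The point of the sketch: the operator can't forge a passing report for the shim, because the CPU only ever signs what it actually booted.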

It's of course still not going to be enough for the truly paranoid, but I think it provides a neat solution for companies with security needs which can't be met via regular cloud hosting.

  • procaryote 3 hours ago

    The difference between a backdoor and a bug is just intention.

    AMD and Intel have both certainly had a bunch of serious security-relevant bugs, like Spectre.

  • thrawa8387336 7 hours ago

    Hasn't that been exploited several times?

    • Harvesterify 4 hours ago

      Exploited in the wild? Difficult to say, but there have been numerous vulnerabilities reported in the underlying technologies used for confidential computing (Intel SGX, AMD SEV, and Intel TDX, for example), and quite a good amount of external research and publications on the topic.

      The threat model for these technologies can also sometimes be sketchy (lack of side-channel protection for Intel SGX, lack of integrity verification for AMD SEV, for example).

    • crote 7 hours ago

      I don't believe so? I have no doubt that there have been vulnerabilities, but the technology is quite new and barely used in practice, so I would be surprised if there have been significant exploits already - let alone ones applicable in the wild rather than a lab.

      • GauntletWizard 5 hours ago

        The technology is only new because the many previous attempts were such obvious failures that they never went anywhere. The history of "confidential computing" is littered with half-baked hypervisor attempts going back to the early 2000s, with older attempts from the mainframe days completely forgotten.

  • louwrentius 4 hours ago

    How can I believe the software is running on the CPU and not with a shim in between that exfiltrates data?

    The code running this validation itself runs on hardware I may not trust.

    It doesn’t make any sense to me to trust this.

    • mjg59 4 hours ago

      The CPU attests what it booted, and you verify that attestation on a device you trust. If someone boots a shim instead then the attestation will be different and verification will fail, and you refuse to give it data.

      • louwrentius 4 hours ago

        That creates a technical complexity I still don't trust. Because I don't see how you can trust that data isn't exfiltrated just because the boot image is correct.

        If they control the hardware, you're trusting them blindly.

        • trebligdivad an hour ago

          You're right, it is complex; but it's a 'chain of trust' where each stage is, in theory, fairly easy to verify. That chain starts with the firmware/keys in the CPU itself, so you have a chain from CPU->CPU firmware->vTPM->guest BIOS->guest OS (probably some other bits). Each one is measured or checked, and at the end you can check the whole chain. Now, if you can tamper with the actual CPU itself you've lost - but someone standing with an analyzer on the bus can't do anything, and no one with root or physical access to the storage can do anything. (There have been physical attacks on older versions of AMD's SEV, of which the most fun is a physical attack on its management processor - so it's still a battle between attackers and improved defences.)

          [edit: Took out the host bios, it's not part of the chain of trust, clarified it's only the host CPU firmware you care about]
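[The chain of measurements described above can be sketched as a TPM-style 'extend' operation. This is a toy model, and the stage names are purely illustrative.]

```python
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    # TPM-style extend: new_pcr = H(old_pcr || H(component)).
    # The register can only be extended, never set, so earlier
    # measurements can't be erased by later stages.
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

def measure_chain(stages) -> bytes:
    pcr = b"\x00" * 32  # known starting value, rooted in the CPU
    for blob in stages:
        pcr = extend(pcr, blob)
    return pcr

good = [b"cpu firmware", b"vTPM", b"guest bios", b"guest os"]
bad  = [b"cpu firmware", b"vTPM", b"guest bios (patched)", b"guest os"]

# Tampering with ANY stage changes the final value, so one comparison
# at the end vouches for the whole chain.
assert measure_chain(good) != measure_chain(bad)
```

This is why verifying "one number at the end" can cover every link: each stage's hash is folded into the next, so a mismatch anywhere propagates to the final digest.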

mnahkies 4 hours ago

I saw what I thought was a nice talk a couple of years ago at fosdem introducing the topic https://archive.fosdem.org/2024/schedule/event/fosdem-2024-1...

Even when running on bare metal, I think the concept of measurements and attestations that attempt to prove the system hasn't been tampered with is valuable, unless perhaps you also have direct physical control (e.g. it's in a server room in your own building).

Looking forward to public clouds maturing their support for Nvidia's confidential computing extensions, as that seems like one of the bigger gaps remaining.

  • louwrentius 3 hours ago

    I don't believe in the validity of the idea of 'confidential computing' on a fundamental level.

    Yes, there are degrees of risk, and you can pretend that the risks of third parties running hardware for you are so reduced/mitigated thanks to 'confidential computing' that it's 'secure enough'.

    I understand things can be a trade-off. Yet I still feel 'confidential computing' is an elaborate justification that decision makers can point to, to keep the status quo and even do more things in the cloud.

    • mnahkies 2 hours ago

      I'm a relative layman in this area, but from my understanding there fundamentally has to be some trust somewhere. I think confidential computing aims both to distribute that trust (splitting responsibility between the hardware manufacturer and the cloud provider, though I'm aware that already sounds like a losing proposition if the cloud provider is also the hardware manufacturer) and to provide a way to verify it's intact.

      Ultimately it's harder to get multiple independent parties to collude than a single entity, and for many threat models that's enough.

      Whether today's solutions are particularly good at delivering this, I don't know (slides linked in another comment suggest not so good), but I'm glad people are dedicating effort to trying to figure it out

      • trebligdivad an hour ago

        If you get it right (and damn, you really need to ask your cloud provider to prove they have...), you don't need to trust the cloud provider in this model at all. In reality, most of the provided systems do trust the provider somewhere, but only to the level of some key store or something in the back, not the people in the normal data centres.

SvenL 7 hours ago

Well, there have been some advances in the space of homomorphic encryption, which I find pretty cool: it's a form of encryption where the secret doesn't need to be present on the machine operating on the data. Sadly, the operations which are possible are limited and quite performance-intensive.
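[A minimal sketch of the idea, using a toy Paillier scheme, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so the party doing the arithmetic never needs the secret key. The tiny hardcoded primes are wildly insecure and purely illustrative.]

```python
import math, random

# Toy Paillier keypair (insecure parameters, for illustration only)
p, q = 17, 19
n = p * q
n2 = n * n
g = n + 1                      # standard choice of generator
lam = math.lcm(p - 1, q - 1)   # secret: λ = lcm(p-1, q-1)
mu = pow(lam, -1, n)           # secret: μ = λ⁻¹ mod n (valid because g = n+1)

def encrypt(m: int) -> int:
    """Anyone with the public key (n, g) can encrypt."""
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Only the key holder, with λ and μ, can decrypt.
    L(x) = (x - 1) // n."""
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

a, b = 42, 101
c_sum = (encrypt(a) * encrypt(b)) % n2  # multiply ciphertexts...
assert decrypt(c_sum) == a + b          # ...to add plaintexts (mod n)
```

The multiplication in the second-to-last line is the whole trick: an untrusted server could compute that product without ever holding λ or μ, which is the property the comment above is pointing at. Fully homomorphic schemes extend this to arbitrary circuits, at a large performance cost.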