crote 7 hours ago

The unique selling point here is that you don't need to trust the hypervisor or operator, as the separation and per-VM encryption are managed by the CPU itself.

The CPU itself can attest that it is running your code and that your dedicated slice of memory is encrypted using a key inaccessible to the hypervisor. Provided you still trust AMD/Intel to not put backdoors into their hardware, this allows you to run your code while the physical machine is in possession of a less-trusted party.

It's of course still not going to be enough for the truly paranoid, but I think it provides a neat solution for companies with security needs which can't be met via regular cloud hosting.

procaryote 3 hours ago

The difference between a backdoor and a bug is just intention.

AMD and Intel both have certainly had a bunch of serious security relevant bugs like spectre.

thrawa8387336 7 hours ago

Hasn't that been exploited several times?

  • Harvesterify 4 hours ago

    Exploited in the wild, difficult to say, but there have been numerous vulnerabilities reported in the underlying technologies used for confidential computing (Intel SGX, AMD SEV, and Intel TDX, for example), along with a good amount of external research and publications on the topic.

    The threat model for these technologies can also sometimes be sketchy (lack of side-channel protection for Intel SGX, or lack of integrity verification for AMD SEV, for example).

  • crote 7 hours ago

    I don't believe so? I have no doubt that there have been vulnerabilities, but the technology is quite new and barely used in practice, so I would be surprised if there have been significant exploits already - let alone ones applicable in the wild rather than a lab.

    • GauntletWizard 5 hours ago

      The technology is only new because the many previous attempts were such obvious failures that they never went anywhere. The history of "confidential computing" is littered with half-baked hypervisor-based attempts going back to the early 2000s, with older attempts from the mainframe days completely forgotten.

louwrentius 4 hours ago

How can I believe the software is running on the CPU and not with a shim in between that exfiltrates data?

The code running this validation itself runs on hardware I may not trust.

It doesn’t make any sense to me to trust this.

  • mjg59 4 hours ago

    The CPU attests what it booted, and you verify that attestation on a device you trust. If someone boots a shim instead then the attestation will be different and verification will fail, and you refuse to give it data.
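The check being described here can be sketched in a few lines. This is a toy model, not the real protocol: an HMAC with a stand-in key plays the role of the CPU vendor's attestation signature (real verifiers check a report signed against the vendor's certificate chain), and the "measurement" is just a hash of the boot image.

```python
import hashlib
import hmac

# Stand-in for the attestation key fused into the CPU (hypothetical value).
ATTESTATION_KEY = b"stand-in-for-cpu-vendor-key"
# Hash of the exact boot image you intend to run, computed on a device you trust.
EXPECTED_MEASUREMENT = hashlib.sha256(b"known-good boot image").hexdigest()

def sign_report(measurement):
    """What the CPU does: bind the measurement with a key only it holds."""
    return hmac.new(ATTESTATION_KEY, measurement.encode(), hashlib.sha256).digest()

def verify_and_release(measurement, signature, secret):
    """What you do on a device you trust: verify before handing over data."""
    if not hmac.compare_digest(sign_report(measurement), signature):
        return None  # signature invalid: report wasn't produced by the CPU
    if measurement != EXPECTED_MEASUREMENT:
        return None  # CPU booted something else (e.g. a shim): refuse to send data
    return secret

# A shim changes what was booted, so the measurement (and thus the check) differs:
good = hashlib.sha256(b"known-good boot image").hexdigest()
bad = hashlib.sha256(b"shim + known-good boot image").hexdigest()
assert verify_and_release(good, sign_report(good), b"db password") == b"db password"
assert verify_and_release(bad, sign_report(bad), b"db password") is None
```

The point is that the secret only ever leaves your trusted device after verification succeeds, so a tampered boot chain never sees it.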

    • louwrentius 4 hours ago

      That creates a technical complexity I still don't trust, because I don't see how a correct boot image guarantees that data isn't being exfiltrated.

      If someone else controls the hardware, you're trusting them blindly.

      • trebligdivad an hour ago

        You're right, it is complex, but it's a 'chain of trust' where each stage is in theory fairly easy to verify. The chain starts with the firmware and keys in the CPU itself, so you have CPU->CPU firmware->vTPM->guest BIOS->guest OS (probably some other bits too). Each stage is measured or checked, and at the end you can verify the whole chain. Now, if you can tamper with the actual CPU itself you've lost, but someone standing with an analyzer on the bus can't do anything, and no one with root or physical access to the storage can do anything. (There have been physical attacks on older versions of AMD's SEV, of which the most fun is a physical attack on its management processor, so it's still a battle between attackers and improved defences.)

        [edit: Took out the host bios, it's not part of the chain of trust, clarified it's only the host CPU firmware you care about]
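The "measure each stage" idea described above is essentially a TPM-style hash chain: each stage's hash is folded into a running value, so no later stage can erase what came before it. A minimal sketch (the stage names are illustrative, not the exact component list):

```python
import hashlib

def extend(current, component):
    """TPM-style extend: fold the next stage's hash into the running value."""
    return hashlib.sha256(current + hashlib.sha256(component).digest()).digest()

# The root of trust starts inside the CPU; each boot stage is measured in order.
chain = b"\x00" * 32
for stage in [b"cpu firmware", b"vTPM", b"guest bios", b"guest os"]:
    chain = extend(chain, stage)

# Tampering with any single stage changes the final value, failing the whole chain:
tampered = b"\x00" * 32
for stage in [b"cpu firmware", b"vTPM", b"tampered guest bios", b"guest os"]:
    tampered = extend(tampered, stage)
assert chain != tampered
```

Because the fold is one-way, an attacker who swaps in a modified stage can't compute inputs that would make the final measurement match the expected one.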