Comment by youarentrightjr 5 days ago
> Sounds like kernel mode DRM or some similarly unwanted bullshit.
Look, I hate systemd just as much as the next guy - but how are you getting "DRM" out of this?
As the immediate responder to this comment, I claim to be the next guy. I love systemd.
I don't like a few pieces, and Mr. Lennart's attitude to some bugs/obvious flaws, but it's by far better than old sysv or really any alternative we have.
Doing a complex flow like "run an app to load keys from a remote server to unlock an encrypted partition" is far easier under systemd, and it has a dependency system robust enough to trigger that mount automatically when an app that needs it starts.
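The flow described here can be sketched as a few unit files. The unit names, paths, and the unlock script are all invented for illustration; this is a minimal sketch of the dependency wiring, not a complete setup:

```ini
# fetch-keys.service — hypothetical app that fetches keys and unlocks the volume
[Unit]
Description=Load keys from remote server and unlock encrypted partition
Before=secure-data.mount
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/fetch-and-unlock

# secure-data.mount — the mount won't start until the unlock step succeeds
[Unit]
Requires=fetch-keys.service
After=fetch-keys.service
[Mount]
What=/dev/mapper/secure-data
Where=/secure-data

# myapp.service — any app that needs the data pulls in the mount automatically
[Unit]
RequiresMountsFor=/secure-data
```

Starting `myapp.service` pulls in `secure-data.mount` via `RequiresMountsFor=`, which in turn pulls in the key-fetching service, so the whole chain fires on demand.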
There are genuine positive applications for remote attestation. E.g., if you maintain a set of servers, you can verify that they are running the software they should be running (that the software is not compromised). Or if you are running something similar to Apple's Private Cloud Compute to run models, users can verify that it is running the privacy-preserving image it claims to be running.
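A toy model of how that verification works: each boot stage is measured into a hash chain (modeled on a TPM PCR extend), and the verifier recomputes the chain from known-good components. In a real deployment the device signs this value with a hardware attestation key; the component names below are invented for illustration.

```python
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    """Model of a TPM PCR extend: new = SHA-256(old || SHA-256(component))."""
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

def measure_chain(components: list[bytes]) -> bytes:
    """Fold every boot stage into one accumulated measurement."""
    pcr = bytes(32)  # PCRs start zeroed at reset
    for c in components:
        pcr = extend(pcr, c)
    return pcr

# What the machine reports it booted:
reported = measure_chain([b"bootloader-v2", b"kernel-6.1", b"initrd-prod"])

# What the verifier expects, computed from known-good components:
expected = measure_chain([b"bootloader-v2", b"kernel-6.1", b"initrd-prod"])
assert reported == expected  # machine is running what it should

# Swapping any stage changes the whole chain, so tampering is visible:
tampered = measure_chain([b"bootloader-hacked", b"kernel-6.1", b"initrd-prod"])
assert tampered != expected
```

Note the chain property: a compromised early stage can't produce the expected value for later stages, which is exactly what makes the "verify my servers run what they should" use case work.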
There are also bad forms of remote attestation (like Google's variant that helps them let banks block you if you are running an alt-os). Those suck and should be rejected.
Edit: bri3d described what I mean better here: https://news.ycombinator.com/item?id=46785123
I agree that DRM feels good when you're the one controlling it.
> There are genuine positive applications for remote attestation
No doubt, fully agree with you on that. However, Intel ME will make sure no system is truly secure, and server vendors add their own mandatory backdoors on top of that (iLO for HP, etc.).
Having said that, we must face the reality: this is not being built for you to secure your servers.
> Remote attestation is literally a form of DRM
Let's say I accept this statement.
What makes you think trusted boot == remote attestation?
Trusted boot is literally a form of DRM. A different one than remote attestation.
> Trusted boot is literally a form of DRM. A different one than remote attestation.
No, it's not. (And for that matter, neither is remote attestation)
You're conflating the technology with the use.
I believe that you have only thought about these technologies as they pertain to DRM, now I'm here to tell you there are other valid use cases.
Or maybe your definition of "DRM" is so broad that it includes me setting up my own trusted boot chain on my own hardware? I don't really think that's a productive definition.
> Secure boot and attestation both generally require a form of DRM.
They literally don't.
For a decade, I worked on secure boot & attestation for a device that was both:
- firmware updatable
- had zero concept of, or hardware that connected it to, anything that could remotely be called a network
Interesting. So what did the attestation say once I (random Internet user) updated the firmware to something I wrote or compiled from another source?
> Interesting. So what did the attestation say once I (random Internet user) updated the firmware to something I wrote or compiled from another source?
The update is predicated on a valid signature.
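A minimal stdlib sketch of that predicate. HMAC stands in here for the asymmetric signature a real secure-boot chain would use (e.g. RSA or ECDSA with only the public key burned into ROM); the key and image names are invented:

```python
import hashlib
import hmac

# Stand-in for the vendor's signing key. On real hardware only the vendor
# holds the private key, so only the vendor can produce valid signatures.
VENDOR_KEY = b"vendor-signing-key"

def vendor_sign(image: bytes) -> bytes:
    """Produce the signature only the key holder can generate."""
    return hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()

def apply_update(image: bytes, signature: bytes) -> str:
    """The updater refuses any image without a valid vendor signature."""
    if not hmac.compare_digest(vendor_sign(image), signature):
        return "rejected"
    return "installed"

official = b"firmware-v2.bin"
assert apply_update(official, vendor_sign(official)) == "installed"

# A random user's self-built image, even paired with a signature copied
# from an official build, fails verification.
assert apply_update(b"my-own-firmware.bin", vendor_sign(official)) == "rejected"
```

So the question "what did the attestation say after I flashed my own firmware?" never arises in the happy path: the unsigned image is refused before it ever runs.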
No, it's not "all applications of cryptography". It's only remote attestation.
Buddy, if I want encryption of my own I've got secure boot, LUKS, GPG, etc. With all of those, why would I need or even want remote attestation? The purpose of that is to assure corporations that their code is running on my computer without me being able to modify it. It's for DRM.
I am fairly confident that this company is going to assure corporations that their own code is running on their own computers (i.e., to secure datacenter workloads), to allow _you_ (or auditors) to verify that only _your_ asserted code is running on their rented computers (to secure cloud workloads), or to assure that the code running on _their_ computers is what they say it is. That last one is actually pretty cool, since it lets you use Somebody Else's Computer with some assurance that they aren't spying on you (see: Apple Private Cloud Compute). Maybe they will also try to use this to attest "deep" embedded devices which already lock the user out, although even this seems less likely given that those devices frequently already have such systems in place.
IMO it's pretty clear that this is a server play because the only place where Linux has enough of a foothold to make client / end-user attestation financially interesting is Android, where it already exists. And to me the server play actually gives me more capabilities than I had: it lets me run my code on cloud provided machines and/or use cloud services with some level of assurance that the provider hasn't backdoored me and my systems haven't been compromised.
How can you be "pretty sure" they're going to develop precisely the technology needed to implement DRM but also will never use or allow it to be used by anybody but the lawful owners of the hardware? You can't.
It's like designing new kinds of nerve gas, "quite sure" that it will only ever be in the hands of good guys who aren't going to hurt people with it. That's powerful naïveté. Once you make it, you can't control who has it and what they use it for. There's no take-backsies, that's why it should never be created in the first place.
"cryptographically verifiable integrity" is a euphemism for tivoization/Treacherous Computing. See, e.g., https://www.gnu.org/philosophy/can-you-trust.en.html