bayindirh 5 days ago

It's probably built on systemd's Secure Boot + immutability support.

As said above, it's about who controls the keys. It's either building your own castle or having to live with the Ultimate TiVo.

We'll see.

  • direwolf20 5 days ago

    We all know who controls the keys. It's the first party who puts their hands on the device.

    • curt15 5 days ago

      And once you remove the friction for requiring cryptographic verification of each component, all it takes is one well-resourced lobby to pass a law either banning user-controlled signing keys outright or relegating them to second-class status. All governments share broadly similar tendencies; the EU and UK govts have always coveted central control over user devices.

    • bayindirh 5 days ago

      Doesn't have to be. While I'm not a fan of systemd (my comment history is there), I want to start from a neutral PoV, and see what it does.

      I have my reservations, ideas, and what it's supposed to do, but this is not a place to make speculations and to break spirits.

      I'll put my criticism out politely when it's time.

  • zb3 5 days ago

    Just to make it clear - on Android you don't have the keys. Even with avb_custom_key you can't modify many partitions.

    • bayindirh 5 days ago

      None of the consumer mobile devices give you all the keys. There are many reasons for that, but 99.9% of them are monetary reasons.

      • zb3 5 days ago

        But I want to buy that kind of device for money and I can't... something is wrong with the market, looks like collusion...

  • egorfine 4 days ago

    > who controls the keys

    Not you. This technology is not being built for you.

youarentrightjr 5 days ago

> Sounds like kernel mode DRM or some similarly unwanted bullshit.

Look, I hate systemd just as much as the next guy - but how are you getting "DRM" out of this?

  • omnicognate 5 days ago

    As the immediate responder to this comment, I claim to be the next guy. I love systemd.

    • PunchyHamster 5 days ago

      I don't like a few pieces and Mr. Lennart's attitude to some bugs/obvious flaws, but it's by far better than old SysV or really any alternative we have.

      Doing complex flows like "run app to load keys from remote server to unlock encrypted partition" is far easier under systemd, and it has a dependency system robust enough to trigger that mount automatically if an app needing it starts.

  • direwolf20 5 days ago

    Remote attestation is literally a form of DRM

    • microtonal 5 days ago

      There are genuine positive applications for remote attestation. E.g., if you maintain a set of servers, you can verify that it runs the software it should be running (the software is not compromised). Or if you are running something similar to Apple's Private Compute Cloud to run models, users can verify that it is running the privacy-preserving image that it is claiming to be running.

      There are also bad forms of remote attestation (like Google's variant that helps them let banks block you if you are running an alt-os). Those suck and should be rejected.

      Edit: bri3d described what I mean better here: https://news.ycombinator.com/item?id=46785123

      • direwolf20 5 days ago

        I agree that DRM feels good when you're the one controlling it.

      • egorfine 4 days ago

        > There are genuine positive applications for remote attestation

        No doubt. Fully agree with you on that. However, Intel ME will make sure no system is truly secure, and server vendors do add their own mandatory backdoors on top of that (iLO for HP, etc).

        Having said that, we must face the reality: this is not being built for you to secure your servers.

    • youarentrightjr 5 days ago

      > Remote attestation is literally a form of DRM

      Let's say I accept this statement.

      What makes you think trusted boot == remote attestation?

      • direwolf20 5 days ago

        Trusted boot is literally a form of DRM. A different one than remote attestation.

  • elcritch 5 days ago

    Secure boot and attestation both generally require a form of DRM. It’s a boon for security, but also for control.

    • youarentrightjr 5 days ago

      > Secure boot and attestation both generally require a form of DRM.

      They literally don't.

      For a decade, I worked on secure boot & attestation for a device that was both:

      - firmware updatable
      - had zero concept or hardware that connected it to anything that could remotely be called a network

      • warkdarrior 5 days ago

        Interesting. So what did the attestation say once I (random Internet user) updated the firmware to something I wrote or compiled from another source?
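A constructed example, not code from anyone in this thread, of the "verify that your own servers run the software they should" use of remote attestation that microtonal describes above, carried through to the case warkdarrior asks about, where the firmware is replaced by something the verifier was never told about. A TPM-style PCR register is extended with each boot stage, the device returns a quote bound to the verifier's nonce, and the verifier checks that quote against the attestation keys it has enrolled and the table of known-good measurements it maintains. HMAC over a shared key stands in for the asymmetric attestation-key signature, and every component blob, key, nonce and device name is made up for the example.

```python
"""Toy measured boot + remote attestation round trip (constructed example).

A PCR is extended with each boot component, the attester returns a
nonce-bound quote over the PCR value, and the verifier checks the quote
against the attestation keys it enrolled and its own list of known-good
measurements. HMAC stands in for the asymmetric signature a real
attestation key would produce; all blobs, keys and names are invented.
"""
import hashlib
import hmac
from dataclasses import dataclass

PCR_INIT = bytes(32)  # PCRs start zeroed at power-on


def extend(pcr: bytes, component: bytes) -> bytes:
    """TPM-style extend: new = H(old || H(component)). Order matters and
    nothing can 'unextend', so the value is a record of everything loaded."""
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()


def measure_boot(components: list[bytes]) -> bytes:
    """Simulate measured boot: each stage measures the next into the PCR."""
    pcr = PCR_INIT
    for component in components:
        pcr = extend(pcr, component)
    return pcr


@dataclass(frozen=True)
class Quote:
    pcr: bytes
    nonce: bytes
    mac: bytes  # stand-in for the attestation-key signature


def make_quote(ak_key: bytes, pcr: bytes, nonce: bytes) -> Quote:
    """Attester side: bind the current PCR value to the verifier's nonce."""
    return Quote(pcr, nonce, hmac.new(ak_key, pcr + nonce, hashlib.sha256).digest())


def verify_quote(enrolled_keys: dict[str, bytes], golden: dict[bytes, str],
                 quote: Quote, expected_nonce: bytes) -> tuple[bool, str]:
    """Verifier side. Three checks, in order: freshness, signing key, policy."""
    if quote.nonce != expected_nonce:
        return False, "nonce mismatch: stale or replayed quote"
    for device, key in enrolled_keys.items():
        expected = hmac.new(key, quote.pcr + quote.nonce, hashlib.sha256).digest()
        if hmac.compare_digest(expected, quote.mac):
            signer = device
            break
    else:
        return False, "quote not produced by an enrolled attestation key"
    if quote.pcr not in golden:
        return False, f"{signer}: measured state is not in the verifier's policy"
    return True, f"{signer}: running {golden[quote.pcr]}"


# Constructed boot chains and a constructed per-device attestation key.
VENDOR_CHAIN = [b"bootloader-v3", b"vendor-firmware-1.0", b"kernel-6.x"]
USER_CHAIN = [b"bootloader-v3", b"my-own-firmware", b"kernel-6.x"]
AK_KEY = b"attestation-key-burned-into-device-01"


def run_demo() -> dict[str, tuple[bool, str]]:
    """Whoever writes the policy table decides what 'attestation passes' means."""
    enrolled = {"device-01": AK_KEY}
    nonce = b"verifier-nonce-0001"
    vendor_policy = {measure_boot(VENDOR_CHAIN): "vendor-firmware-1.0"}
    # A verifier the user controls can simply add their own build to the table.
    user_policy = {**vendor_policy, measure_boot(USER_CHAIN): "my-own-firmware"}

    stock = make_quote(AK_KEY, measure_boot(VENDOR_CHAIN), nonce)
    modified = make_quote(AK_KEY, measure_boot(USER_CHAIN), nonce)
    return {
        "stock, vendor policy": verify_quote(enrolled, vendor_policy, stock, nonce),
        "modified, vendor policy": verify_quote(enrolled, vendor_policy, modified, nonce),
        "modified, user policy": verify_quote(enrolled, user_policy, modified, nonce),
        "replayed quote": verify_quote(enrolled, user_policy, stock, b"verifier-nonce-0002"),
    }


if __name__ == "__main__":
    for case, (ok, why) in run_demo().items():
        print(f"{'PASS' if ok else 'FAIL'}  {case}: {why}")
```

The quote over the modified firmware is perfectly valid in both of the "modified" cases; what changes is only the table it is compared against. With the vendor's table the self-built firmware fails, with the user's own table it passes, and a quote replayed under a different nonce fails before the policy is even consulted. That is the whole "who controls the keys" question in one function.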

  • bri3d 5 days ago

    Hacker News has recently been dominated by conspiracy theorists who believe that all applications of cryptography are evil attempts by shadowy corporate overlords to dominate their use of computing.

    • josephcsible 5 days ago

      No, it's not "all applications of cryptography". It's only remote attestation.

    • mikkupikku 5 days ago

      Buddy, if I want encryption of my own I've got secure boot, LUKS, GPG, etc. With all of those, why would I need or even want remote attestation? The purpose of that is to assure corporations that their code is running on my computer without me being able to modify it. It's for DRM.

      • bri3d 5 days ago

        I am fairly confident that this company is going to assure corporations that their own code is running on their own computers (i.e., to secure datacenter workloads), to allow _you_ (or auditors) to assure that only _your_ asserted code is also running on their rented computers (to secure cloud workloads), or to assure that the code running on _their_ computers is what they say it is, which is actually pretty cool since it lets you use Somebody Else's Computer with some assurance that they aren't spying on you (see: Apple Private Cloud Compute). Maybe they will also try to use this to assert "deep" embedded devices which already lock the user out, although even this seems less likely given that these devices frequently already have such systems in place.

        IMO it's pretty clear that this is a server play because the only place where Linux has enough of a foothold to make client / end-user attestation financially interesting is Android, where it already exists. And to me the server play actually gives me more capabilities than I had: it lets me run my code on cloud provided machines and/or use cloud services with some level of assurance that the provider hasn't backdoored me and my systems haven't been compromised.