Lennart Poettering, Christian Brauner founded a new company
(amutable.com)
372 points by hornedhob 5 days ago
Secure Boot only extends the chain of trust from your firmware down to the first UEFI binary it loads.
Currently SB is effectively useless: at best it authenticates your kernel, but the initrd and subsequent userspace (including programs that run as root) are unverified and can be replaced by malicious alternatives.
Secure Boot as it stands right now in the Linux world is effectively an annoyance that's only there as a shortcut to get distros to boot on systems that trust Microsoft's keys, but otherwise offers no actual security.
It doesn't have to be this way, however, and I welcome efforts to make Linux just as secure as proprietary OSes that actually have full code signature verification all the way down to userspace.
here is some actual security: encrypted /boot, encrypted everything other than the boot loader (grub in this case)
sign grub with your own keys (some motherboards let you do so). don't let random things signed by Microsoft boot (that defeats the whole point)
so you have grub in an EFI partition; it passes secure boot, loads, and attempts to unlock a LUKS partition with the user-provided passphrase. if it passed secure boot, that should increase confidence that you are typing your password into the legit thing
so anyway, after unlocking luks, it locates the kernel and initrd inside it, and boots
https://wiki.archlinux.org/title/GRUB#Encrypted_/boot
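A rough sketch of the key-generation part of that setup (file names and the CN are illustrative; the enrollment and bootloader-signing steps depend on your firmware and distro, so they are left as comments):

```shell
# Generate a self-signed certificate to use as your own Secure Boot db key.
# 10-year validity; the CN is arbitrary.
openssl req -new -x509 -newkey rsa:2048 -nodes -days 3650 \
    -subj "/CN=my-secure-boot-key" \
    -keyout db.key -out db.crt

# Firmware key enrollment usually wants the certificate in DER form.
openssl x509 -in db.crt -outform DER -out db.der

# From here (commented out because these touch firmware / real binaries):
#   - enroll db.der via your firmware's setup UI, or with efi-updatevar/sbctl
#   - sign the bootloader, e.g.:
#     sbsign --key db.key --cert db.crt \
#            --output grubx64.efi.signed grubx64.efi
```

Tools like `sbctl` automate most of this (key creation, enrollment, and re-signing on updates), but the underlying objects are just these X.509 keys.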
the reason I don't do it is.. my laptop is buggy. often when I enable secure boot, something periodically gets corrupted (often when the laptop powers off due to low power) and when it gets up, it doesn't verify anything. slightly insane tech
however, this is still better than, at failure, letting anything run
sophisticated attackers will defeat this, but they can also add a variety of attacks at hardware level
I'd much rather have tamper detection. Encryption is great should the device be stolen, but it feels like the wrong tool for defending against evil maids. All I'd want is that any time you open the case or touch the cold external ports (i.e. unbolted) you have to re-authenticate with a master password. I'm happy to use cabled peripherals to achieve this.
Chaining trust from POST to login feels like trying to make a theoretically perfect diamond and titanium bicycle that never wears down or falls apart when all I need is an automated system to tell me when to replace a part that’s about to fail.
> the reason I don't do it is.. my laptop is buggy. often when I enable secure boot, something periodically gets corrupted (often when the laptop powers off due to low power) and when it gets up, it doesn't verify anything. slightly insane tech
Reminds me of my old Chromebook Pixel I wiped chromeos from. Every time it booted I had to press Ctrl-L (iirc) to continue the boot, any other keypress would reenable secure boot and the only way I knew to recover from that was to reinstall chromeos, which would wipe my linux partition and my files with it. Needless to say, that computer taught me good backup discipline...
Yes, "just as secure as proprietary OSes" who due to failed signature verification are no longer able to start notepad.exe.
I think you might want to go re-read the last ~6 months of IT news regarding "secure proprietary OSes".
Just because OpenSSL had a CVE posted today doesn't mean we should go back to using plain HTTP for the web.
There is the Integrity Measurement Architecture (IMA), but it isn't very mature in my opinion. Even Secure Boot and module signing require manual setup by users; they aren't supported by default or by installers. You have to more or less manage your own certs and CA, although I did notice some laptops have Debian signing keys in UEFI by default? If only the Debian installer set up module signing.
But you miss a critical part: Secure Boot, as the name implies, is for boot, not OS runtime. Linux, I suppose, considers everything after the initrd loads to be post-boot?
I think PID 1 hash verification by the kernel is not a huge ask as part of secure boot; leave it to the init system whether to implement user-space executable/script signature enforcement. I'm sure Mr. Poettering wouldn't mind.
On Arch it isn't particularly difficult to create UKIs; it takes little more than changing two lines in `mkinitcpio`'s config.
There is also systemd's `ukify`, which can create UKIs that can then be installed with `kernel-install`, but that is a bit more work to set up than `mkinitcpio`.
The main part is the signing, which I usually have `sbctl` handle.
> the kernel will verify anything beneath it
Yes that's the case - my argument is that Linux currently doesn't have anything standardized to do that.
Your best bet for now is to use a read-only dm-verity-protected volume as the root partition, encode its hash in the initrd, combine kernel + initrd into a UKI and sign that.
I would welcome a standardized approach.
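To make the dm-verity part concrete: dm-verity splits the block device into fixed-size blocks, hashes each one, then hashes the hashes, until a single root hash remains; only that root hash has to be trusted (e.g. baked into the signed UKI). A toy sketch of the idea (real dm-verity uses 4096-byte blocks, a salt, and an on-disk hash-tree format; this just illustrates the Merkle-root principle):

```python
import hashlib

BLOCK = 64  # toy block size; dm-verity defaults to 4096


def _chunk(data: bytes, size: int) -> list[bytes]:
    return [data[i:i + size] for i in range(0, len(data), size)]


def root_hash(image: bytes) -> bytes:
    """Fold per-block hashes into a single Merkle root."""
    level = [hashlib.sha256(b).digest() for b in _chunk(image, BLOCK)]
    while len(level) > 1:
        # Pair adjacent 32-byte digests and hash each pair into the next level.
        paired = _chunk(b"".join(level), 2 * 32)
        level = [hashlib.sha256(p).digest() for p in paired]
    return level[0]


image = bytes(range(256)) * 4   # stand-in for a root filesystem image
trusted = root_hash(image)      # this is what the signed UKI would carry

# Flipping a single byte anywhere changes the root hash, so a tampered
# root filesystem is detected before any of it is used.
tampered = bytearray(image)
tampered[100] ^= 0xFF
assert root_hash(bytes(tampered)) != trusted
```

The point of the tree (rather than one flat hash) is that the kernel can verify individual blocks on demand at read time, without hashing the whole device up front.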
Standardizing that approach is one thing that the systemd project has been working on. They've built various components to help with that, including writing specifications (via the UAPI group) on how that should all fit together.
ParticleOS[0] gives a look at how this can all fit together, in case you want to see some of it in action.
A basic setup to make use of secure boot is SB+TPM+LUKS. Unfortunately I don't know of any distro that offers this in a particularly robust way.
Code signature verification is an interesting idea, but I'm not sure how it could be achieved. Have distro maintainers sign the code?
openSUSE has been working on making Secure Boot / TPM FDE unlock easy to use for a while now. https://news.opensuse.org/2025/11/13/tw-grub2-bls/
> A basic setup to make use of secure boot is SB+TPM+LUKS. Unfortunately I don't know of any distro that offers this in a particularly robust way.
Have a look at Ubuntu Core 24 and later. It's not exactly a desktop system; it's oriented towards embedded/appliances. Recent Ubuntu desktop (from 25.04, IIRC) started getting the same mechanism gradually integrated in each release. The upcoming Ubuntu 26.04 is expected to support TPM-backed FDE. Worth a try if you can set up a VM with a software TPM.
Keep in mind, though, there have been plenty of issues with various EFI firmwares, especially on the appliance side. The EFI specs are apparently treated as guidelines rather than an actual specification by whoever ends up implementing the firmware.
Fine as long as it's managed by the user. A good check is who installed the keys. A user-freedom-respecting Secure Boot must have user-generated keys.
There is some level of misinformation in your post. Both Windows and Linux check driver signatures. Once you boot Linux under UEFI Secure Boot, you cannot use unsigned drivers, because the kernel can detect this and activate lockdown mode. You have to sign all of the drivers within the same PKI as your UEFI key.
> you cannot use unsigned drivers because the kernel can detect and activate the lockdown mode
You don't need to load a driver; you can just replace a binary that's going to be executed as root as part of system boot. This is something a hypothetical code signature verification would detect and prevent.
Failing kernel-level code signature enforcement, the next best step is to have a dm-verity volume as your root partition, with the dm-verity hashes in the initrd within the UKI, and that UKI being signed with secure boot.
This would theoretically allow you to recover from even root-level compromise by just rebooting the machine (assuming the secure boot signing keys weren't on said machine itself).
Remote attestation is another technology that is not inherently restrictive of software freedom. But here are some examples of technologies that have already restricted freedom due to oligopoly combined with network effects:
* smartphone device integrity checks (SafetyNet / Play Integrity / Apple DeviceCheck)
* HDMI/HDCP
* streaming DRM (Widevine / FairPlay)
* Secure Boot (vendor-keyed deployments)
* printers w/ signed/chipped cartridges (consumables auth)
* proprietary file formats + network effects (office docs, messaging)
It very clearly is restrictive of software freedom. I've never suffered from an evil maid breaking into my house to access my computer, but I've _very_ frequently suffered from corporations trying to prevent me from doing what I wish with my own things. We need to push back on this notion that this sort of thing was _ever_ for the end-user's benefit, because it's not.
This happens much less frequently than the manufacturer of "my" computing device verifies that I haven't tampered with it. On net, it's a wholesale destruction of user freedom.
To play devil's advocate, I don't think most people would be fine with their car ramming into a military base after an unfriendly firmware update.
However, I agree that the risks to individuals and their freedoms stemming from these technologies outweigh the benefits in most cases.
The better question then is why the actual f** can an OTA firmware update touch anything in the steering or powertrain of the car, or why do I even need a computer that's connected to anything, and one which does more than just make sure I get the right amount of fuel and spark, or why on earth do people tolerate this sort of insanity.
If a malicious update can be pushed because of some failure in the signature verification checks (which already exist), what makes you think the threat actor won’t have access to signing keys?
This is not what attestation is even seeking to solve.
It's interesting there's no remote attestation the other way around, making sure the server is not doing something to your data that you didn't approve of.
There is. Signal uses it, for example. https://signal.org/blog/building-faster-oram/
For another example, IntegriCloud: https://secure.integricloud.com/
I am quite conflicted here. On one hand I understand the need for it (offsite colo servers is the best example). Basic level of evil maid resistance is also a nice to have on personal machines. On the other hand we have all the things you listed.
I personally don't think this product matters all that much for now. These types of tech are not oppressive by themselves, only when they are being demanded by an adversary. The ability of the adversary to demand it is a function of how widespread the capability is, and there aren't going to be enough Linux clients for this to start infringing on the rights of the general public just yet.
A bigger concern is all the efforts aimed at imposing integrity checks on platforms like the Web. That will eventually force users to make a choice between being denied essential services and accepting these demands.
I also think AI will substantially curtail the effect of many of these anti-user efforts. For example, a bot can be programmed to automate the use of a secure phone while being controlled from a user-controlled device, cheat in games, etc.
> On one hand I understand the need for it (offsite colo servers is the best example).
Great example of proving something to your own organization. Mullvad is probably the most trusted VPN provider and they do this! But this is not a power that should be exposed to regular applications, or we end up with a dystopian future where you are not allowed to use your own computer.
On the other side, Mullvad is looking at remote attestation so that users can verify their servers: https://news.ycombinator.com/item?id=29903695
> * Secure Boot (vendor-keyed deployments)
I wish this myth would die at this point.
Secure Boot allows you to enroll your own keys. This is part of the spec, and there are no shipped firmwares that prevent you from going through this process.
Android lets you put your own signed keys in on certain phones. For now.
The banking apps still won't trust them, though.
To add a quote from Lennart himself:
"The OS configuration and state (i.e. /etc/ and /var/) must be encrypted, and authenticated before they are used. The encryption key should be bound to the TPM device; i.e system data should be locked to a security concept belonging to the system, not the user."
Your system will not belong to you anymore. Just as it is with Android.
Banks do this because they have made their own requirement that the mobile device is a trust root that can authenticate the user. There are better, limited-purpose devices that can do this, but they are not popular/ubiquitous like smartphones, so here we are.
The oppressive part of this scheme is that Google's integrity check only passes for _their_ keys, which form a chain of trust through the TEE/TPM, through the bootloader, and finally through the system image. Crucially, the only part banks should care about is the TEE and some secure storage, but Google provides an easy attestation scheme only for the entire hardware/software environment, not just the secure hardware bit that already lives in your phone and can't be phished.
It would be freaking cool if someone could turn your TPM into a Yubikey and have it be useful for you and your bank without having to verify the entire system firmware, bootloader and operating system.
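The core of what a bank actually needs from such a token is unphishable challenge–response: a key that never leaves the secure hardware signs a fresh nonce. A toy sketch of that protocol (pure software here, no real TPM; class names are made up, and a real device would use an asymmetric key pair so the bank only ever holds the public half):

```python
import hashlib
import hmac
import secrets


class ToyToken:
    """Stands in for a TPM/TEE-resident key: the secret never leaves here."""

    def __init__(self, secret: bytes) -> None:
        self._secret = secret  # non-exportable in real hardware

    def respond(self, challenge: bytes) -> bytes:
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()


class Bank:
    """Holds only the enrolled secret; never sees or cares what OS the host runs."""

    def __init__(self, enrolled_secret: bytes) -> None:
        self._secret = enrolled_secret

    def challenge(self) -> bytes:
        return secrets.token_bytes(16)  # fresh nonce: replayed answers fail

    def verify(self, challenge: bytes, response: bytes) -> bool:
        expected = hmac.new(self._secret, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)


# Enrollment: the same secret ends up in both places (asymmetric keys in real HW).
secret = secrets.token_bytes(32)
token, bank = ToyToken(secret), Bank(secret)

nonce = bank.challenge()
assert bank.verify(nonce, token.respond(nonce))         # legit device passes
assert not bank.verify(nonce, token.respond(b"other"))  # wrong challenge fails
```

Note that nothing in this exchange depends on the firmware, bootloader, or OS around the token, which is exactly the point being made above: the attestation of the whole software stack is extra.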
Then work with the bank to prove the signer is trustworthy.
> This is part of the spec, and there are no shipped firmwares that prevents you from going through this process.
Microsoft required that users be able to enroll their own keys on x86. On ARM, they used to mandate that users could not enroll their own keys. That they later changed this does not erase the past. Also, I've anecdotally heard claims of buggy implementations that do in fact prevent users from changing secure boot settings.
I wish the myth of the spec would die at this point.
Many motherboards' Secure Boot implementations violate the supposed standard and do not allow you to invalidate pre-loaded keys you don't approve of.
It's interesting how quickly the OSS movement went from "No, no, we just want to include companies in the Free Software Movement" to "Oh, don't worry, it's ok if companies with shareholders that are not accountable to the community have a complete monopoly on OSS, and decide what direction it takes"
FOSS was imagined as a brotherhood of hackers, sharing code back and forth to build a utopian code commons that provided freedom to build anything. It stayed firmly in the realm of the imaginary because, in the real world, everybody wants somebody else to foot the bill or do the work. Corporations stepped up once they figured out how to profit off of FOSS and everyone else was content to free ride off of the output because it meant they didn't have to lift a finger. The people who actually do the work are naturally in the driver's seat.
This perspective is astonishingly historically ignorant, and ignores how "Open Source Software" was a deliberate political movement to simultaneously neuter the non-company-friendly goals of FOSS while simultaneously providing a competing (and politically distracting) movement that deliberately courted companies.
The Free Software movement was successful enough that by 1997 it was garnering a lot of international community support and manpower. Eric S. Raymond published CatB in response to these successes, partly with a goal of "celebrating its successes" — sendmail, gcc, perl, and Linux were all popular projects with a huge number of collaborators by this point — and partly with a goal of reframing the Free Software movement such that it effectively neuters the political basis (i.e. the four freedoms, etc.) in a company-friendly way. It's very easy to note when reading the book, how it consistently celebrates the successes of Free Software in a company friendly way, deliberately to make it appealing to companies. Often being very explicit about its goals, e.g. "Don't give your workers good bonuses, because research shows that the better a ''hacker'' the less they care about money!".
A year later, internal memos leaked from Microsoft showed that management was indeed scared shitless about Linux, a movement they could neither completely Embrace, Extend, and Extinguish, nor practice Fear, Uncertainty, and Doubt on, because the community that built it was too strong and too dedicated. Management foresaw that it was only a matter of time until Linux was a very strong competitor — even if that's taken 20 years, they were decently accurate in their fears, and, to be honest, part of why it's taken 30 years for Linux to catch up is deliberate actions by Microsoft introducing and adopting technologies that would stymie the Free Software movement's ability to adapt.
systemd solved/improved a bunch of things for Linux, but now the plan seems to be to replace package management with image-based whole-distro A/B swaps, and to have signed unified kernel images.
This basically will remove or significantly encumber user control over their system, such that any modification will make you lose your "signed" status and ... boom! goodbye accessing the internet without an ID.
Poettering until recently worked for Microsoft; they want to turn Linux into an appliance just like Windows, no longer a general-purpose OS. The transition is still far from over on Windows, but look at Android and how the Google Play Services dependency/choke-hold has played out.
I'm sure I'll get many downvotes, but despite some hyperbole this is the trajectory.
> the plan seems to be to replace package management with image-based whole-distro A/B swaps
The plan is probably to have that as an alternative for the niche uses where that is appropriate.
The majority of this thread seems to have slid down that slippery slope, jumping directly to the conclusion that the attestation mechanism will be mandatory on every Linux machine in the world and you won't be able to run anything without it. Even if that were a goal for Amutable as a company, it's unfeasible given the breadth of distributions and non-corporate-affiliated developers that would need to cooperate for it to happen.
Nobody says that you will not have alternatives. What people are saying is that if you're using those alternatives, you won't be able to watch videos online or access your bank account.
Eventually you will not be able to block ads.
> Nobody says that you will not have alternatives
Maybe you want to reread through this thread.
> Eventually you will not be able to block ads.
That's so far down the slippery slope and with so many other things that need to go wrong that I'm not worried and I'm willing to be the one to get "told you so" if it happens.
Immutable, signed systems do not intrinsically conflict with hackability. See this blog post of Lennart's[0] and systemd's ParticleOS meta-distro[1].
I do agree that these technologies can be abused. But system integrity is also a prerequisite for security; this isn't like Digital "Rights" Management, which is unequivocally a bad thing that only advances evil interests. (Widevine should never have been made a thing in Firefox, imo.)
So I think what's most productive here is to build immutable, signable systems that can preserve user freedom, and then use social and political means to further guarantee those freedoms. For instance a requirement that owning a device means being able to provision your own keys. Bans on certain attestation schemes. Etc. (I empathize with anyone who would be cynical about those particular possibilities though.)
[0] https://0pointer.net/blog/fitting-everything-together.html
Linux is nowadays mostly sponsored by big corporations. They have different goals and different ways of doing things. For probably its first 10 years, Linux was driven by enthusiasts and was therefore a lean system. Something like systemd is typical corporate output: due to its complexity it would have died long before finding adoption, but with enterprise money it is possible. Try developing for the Linux Bluetooth/Audio/D-Bus combo: the complexity drives you crazy, because all this stuff was made for (and financed by) the corporate needs of the automotive industry. Simplicity is never a goal in these big companies.
But then Linux wouldn't be where it is without the business side paying for the developers. There is no such thing as a free lunch...
> this basically will remove or significantly encumber user control over their system, such that any modification will make you lose your "signed" status and ... boom! goodbye accessing the internet without an id
Yeah. I'm pretty sure it requires a very specific psychological profile to decide to work on such a user-hostile project while post-fact rationalizing that it's "for good".
All I can say is I'm not surprised that Poettering is involved in such a user-hostile attack on free computing.
P.S: I don't care about the downvotes, you shouldn't either.
Does this guy do anything that is user-friendly and in line with the open-source ethos of freedom and user control? In all this shit-show of Microsoft shoving AI down the throats of its users, I was happy to be firmly in the Linux camp for many, many years. And along come these kinds of people to shit on that parade too.
P.S: Upvoted you. I don't care about downvotes either.
Exciting!
It sounds like you want to achieve system transparency, but I don't see any clear mention of reproducible builds or transparency logs anywhere.
I have followed systemd's efforts into Secure Boot and TPM use with great interest. It has become increasingly clear that you are heading in a very similar direction to these projects:
- Hal Finney's transparent server
- Keylime
- System Transparency
- Project Oak
- Apple Private Cloud Compute
- Moxie's Confer.to
I still remember Jason introducing me to Lennart at FOSDEM in 2020, and we had a short conversation about System Transparency.
I'd love to meet up at FOSDEM. Email me at fredrik@mullvad.net.
Edit: Here we are six years later, and I'm pretty sure we'll eventually replace a lot of things we built with things that the systemd community has now built. On a related note, I think you should consider using Sigsum as your transparency log. :)
Edit2: For anyone interested, here's a recent lightning talk I did that explains the concept that all project above are striving towards, and likely Amutable as well: https://www.youtube.com/watch?v=Lo0gxBWwwQE
Hi, I'm David, founding product lead.
Our entire team will be at FOSDEM, and we'd be thrilled to meet more of the Mullvad team. Protecting systems like yours is core to us. We want to understand how we put the right roots of trust and observability into your hands.
Edit: I've reached out privately by email for next steps, as you requested.
Hi David. Great! I actually wasn't planning on going due to other things, but this is worth re-arranging my schedule a bit. See you later this week. Please email me your contact details.
As I mentioned above, we've followed systemd's development in recent years with great interest, as well as that of some other projects. When I started(*) the System Transparency project it was very much a research project.
Today, almost seven years later, I think there's a great opportunity for us to reduce our maintenance burden by re-architecting on top of systemd and a few other things, so we can focus elsewhere. There's still a lot of work to do on standardizing transparency building blocks, the witness ecosystem(**), and building an authentication mechanism for system transparency that weaves it all together.
I'm more than happy to share my notes with you. Best case you build exactly what we want. Then we don't have to do it. :)
I'm super far from an expert on this, but it NEEDS reproducible builds, right? You need to start from a known good, trusted state - otherwise you cannot trust any new system states. You also need it for updates.
Well, it comes down to what trust assumptions you're OK with. Reproducible builds reduce the trust you need to place in the build environment, but you still need to ensure authenticity of the source somehow. Verified boot, measured boot, reproducible builds, local/remote attestation, and transparency logging provide different things. Combined, they form the possibility of a sort of authentication mechanism between a server and client. But all of the concepts are useful by themselves.
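A transparency log, in miniature, is just an append-only structure whose head commits to everything appended before it, so a release published once cannot later be silently swapped. A toy hash-chain sketch (real logs like Sigsum use Merkle trees, so they can also serve compact inclusion and consistency proofs; the entry strings below are made up):

```python
import hashlib


class ToyLog:
    """Append-only log: the head hash commits to every prior entry."""

    EMPTY = b"\x00" * 32  # sentinel head for an empty log

    def __init__(self) -> None:
        self.entries: list[bytes] = []
        self.head = self.EMPTY

    def append(self, entry: bytes) -> bytes:
        self.entries.append(entry)
        # Chain: new head = H(old head || H(entry)).
        self.head = hashlib.sha256(
            self.head + hashlib.sha256(entry).digest()
        ).digest()
        return self.head

    @staticmethod
    def replay(entries: list[bytes]) -> bytes:
        """What an independent monitor computes from the published entries."""
        head = ToyLog.EMPTY
        for e in entries:
            head = hashlib.sha256(head + hashlib.sha256(e).digest()).digest()
        return head


log = ToyLog()
for release in [b"os-image v1 sha256:aaaa", b"os-image v2 sha256:bbbb"]:
    log.append(release)

# A monitor replaying the published entries must reach the same head;
# retroactively altering v1 changes the head and is caught.
assert ToyLog.replay(log.entries) == log.head
assert ToyLog.replay([b"os-image v1 TAMPERED", log.entries[1]]) != log.head
```

This is also why transparency logging complements reproducible builds: the log pins *which* artifact was published, and reproducibility lets anyone check that the artifact matches the source.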
Ah, good old remote attestation. Always works out brilliantly.
I have this fond memory of that Notary in Germany who did a remote attestation of me being with him in the same room, voting on a shareholder resolution.
While I was currently traveling on the other side of the planet.
This great concept that totally will not blow up the planet has been proudly brought to you by Ze Germans.
No matter what your intentions are: It WILL be abused and it WILL blow up. Stop this and do something useful.
[While systemd had been a nightmare for years, these days its actually pretty good, especially if you disable the "oh, and it can ALSO create perfect eggs benedict and make you a virgin again while booting up the system!" part of it. So, no bad feelings here. Also, I am German. Also: Insert list of history books here.]
No no, let him get distracted by it; the one thing that happened after he got bored with PulseAudio is that PulseAudio started getting better.
What is the endgame here? Obviously "heightened security" in some kind of sense, but to what end and what mechanisms? What is the scope of the work? Is this work meant to secure forges and upstream development processes via more rigid identity verification, or package manager and userspace-level runtime restrictions like code signing? Will there be a push to integrate this work into distributions, organizations, or the kernel itself? Is hardware within the scope of this work, and to what degree?
The website itself is rather vague in its stated goals and mechanisms.
I suspect the endgame is confidential computing for distributed systems. If you are running high value workloads like LLMs in untrusted environments you need to verify integrity. Right now guaranteeing that the compute context hasn't been tampered with is still very hard to orchestrate.
That endgame has so far been quite unreachable. TEE.fail is the latest in a long sequence of "whoever touches the hardware can still attack you".
https://news.ycombinator.com/item?id=45743756
https://arstechnica.com/security/2025/09/intel-and-amd-trust...
No, the endgame is that a small handful of entities or a consortium will effectively "own" Linux because they'll be the only "trusted" systems. Welcome to locked-down "Linux".
You'll be free to run your own Linux, but don't expect it to work outside of niche uses.
Personally, this is interesting to me because there needs to be a way for a hardware token providing an identity to interact with a device-and-software combination that ensures there is no tampering between the user who owns the identity and the end result of the computation.
A concrete example of that is electronic ballots, which is a topic I often bump heads with the rest of HN about, where a hardware identity token (an electronic ID provided by the state) can be used to participate in official ballots, while both the citizen and the state can have some assurance that there was nothing interceding between them in a malicious way.
Does that make sense?
Entities other than me being able to control what runs on the device I physically possess is absolutely not acceptable in any way. Screw your clients, screw your shareholders, and screw you.
Assuming you're using systemd, you already gave up control over your system. The road to hell was already paved. Now, you would have to go out of your way to retain control.
In the great scheme of things, this period where systemd was intentionally designed and developed and funded to hurt your autonomy but seemed temporarily innocuous will be a rounding error.
Nah man, you are FUDding. systemd might have some poor design choices and arrogant maintainers, but at least I can drop it at any time and my bank wouldn't freak out about it. This one… it's a whole other level.
I don't think Mr. Poettering was brought in by accident; maybe his decade of contributions making sure systemd services can be manipulated by a supervisor (in the case of WSL and MS) is a valuable asset. systemd doesn't even need to change much to become the devil itself; it just has to merge upstream changes already consolidated over the past 5 years or so... But logically it's safe, because for this to become a problem systemd would have to be adopted by the majority of distributions and its maintainers would have to concede to the pressure of big corps and such... oh, wait.
Do you plan to sell this technology to laptop makers so their laptops will only run the OS they came with?
Not all. The ones that ship Linux preinstalled and with support don't.
I hope you are mistaken. It's embarrassing how far behind in security the desktop Linux ecosystem is.
Do you really think Laptop makers would buy a whole company to figure out how to remove that option?
I think https://0pointer.net/blog/authenticated-boot-and-disk-encryp... is a much better explanation of the motivation behind this straight from the horse's mouth. It does a really good job of motivating the need for this in a way that explains why you as the end user would desire such features.
To me this looks bad on so many levels. I hate it immediately.
One good news is that maybe LP will get less involved in systemd.
"The OS configuration and state (i.e. /etc/ and /var/) must be encrypted, and authenticated before they are used. The encryption key should be bound to the TPM device; i.e system data should be locked to a security concept belonging to the system, not the user."
See Android; or, where you no longer own your device, and if the company decides, you no longer own your data or access to it.
https://0pointer.net/blog/authenticated-boot-and-disk-encryp...
Yes, system data should be locked to the system with a TPM. That way your system can refuse to boot if it's been modified to steal your user secrets.
I mentioned it somewhere else in the thread, and btw, I'm not affiliated with the company, this is just my charitable interpretation of their intentions: this is not for requiring _every_ consumer linux device to have attestation, but for specific devices that are needed for niche purposes to have a method to use an open OS stack while being capable of attestation.
I really hope this would be geared towards clients being able to verify the server state or just general server related usecases, instead of trying to replicate SafetyNet-style corporate dystopia on the desktop.
>Amutable is based out of Berlin, Germany.
Probably obvious from the surnames, but this is the first time I've seen an EU company pop up on Hacker News that could be mistaken for a Californian one. Nice to see that ambition.
I understand systemd is controversial — that can be debated endlessly — but the executive and engineering teams look very capable. It will be interesting to see where this goes.
Hello Chris,
I am glad to see these efforts are now under an independent firm rather than being directed by Microsoft.
What is the ownership structure like? Where/who have you received funding from, and what is the plan for ongoing monetization of your work?
Would you ever sell the company to Microsoft, Google, or Amazon?
Thanks.
> Would you ever sell the company to Microsoft, Google, or Amazon?
No matter what the founders say, the answer to this question is always yes.
> Where/who have you received funding from
I don't think you will ever get a response to that
I agree with you - but considering what they want to implement and what it can be used for there are probably investors that might not want to be outed (this early). Kinda paranoid I admit, but history has shown that stuff like this WILL be misused.
Lennart will be involved with at least three events at FOSDEM this coming weekend. The talks seem unrelated at first glance, but maybe there will be an opportunity to learn more about his new endeavor.
https://fosdem.org/2026/schedule/speaker/lennart_poettering/
Also see http://amutable.com/events which lists a talk at Open Confidential Computing Conference (Berlin, March)
Remote attestation requires a great deal of trust... I know this comment is likely to be down-voted, but I can't think of a Lennart Poettering project that didn't try to extend, centralize, and conglomerate Linux, with disastrous results in the short term, and less innovation, flexibility, and functionality in the long term: trading the strengths of Unix systems for the goal of making them more "Microsoft"-like.
Remote attestation requires a great deal of trust, and I simply don't have it when it comes to this leadership team.
"We are building cryptographically verifiable integrity into Linux systems. Every system starts in a verified state and stays trusted over time."
What does this mean? Why would anyone want this? Can you explain this to me like I'm five years old?
Your computer will come with a signed operating system. If you modify the operating system, your computer will not boot. If you try to install a different operating system, your computer will not boot.
> If you try to install a different operating system, your computer will not boot.
That does not follow. That would only very specifically happen when all of these are true:
1. Secure Boot cannot be disabled
2. You cannot provision your own Secure Boot keys
3. Your desired operating system is not signed by the computer's trusted Secure Boot keys
"Starting in a verified state and stay[ing] trusted over time" sounds more like using measured boot. Which is basically its own thing and most certainly does not preclude booting whatever OS you choose.
Although if your comment was meant in a cynical way rather than approaching things technically, then I don't think my reply helps much.
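To make the measured boot point concrete: the core mechanism is just a hash chain. Each boot component gets measured (hashed), and the measurement is "extended" into a PCR register, so the final value commits to everything that ran, in order. A toy sketch of the extend operation (a real TPM does this in hardware over raw bytes; here the hex strings are hashed directly, and the file names are made up):

```shell
# Toy model of a TPM PCR "extend": new = SHA256(old || measurement).
# This is only to show why the final value depends on every measured
# component and their order -- not how to talk to an actual TPM.
pcr=$(printf '0%.0s' $(seq 64))        # PCR starts as 64 hex zeros
extend() {
  local m
  m=$(sha256sum "$1" | cut -d' ' -f1)  # measure one boot component
  pcr=$(printf '%s' "$pcr$m" | sha256sum | cut -d' ' -f1)
}
printf 'bootloader' > stage1
printf 'kernel'     > stage2
extend stage1
extend stage2
echo "$pcr"   # any change to stage1 or stage2 changes this value
```

Nothing here stops you from booting whatever you like; the register just ends up with a different value, which is what an attestation check (or a sealed disk-encryption key) would notice.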
How do you plan to handle the confused deputy problem? [1]
Microsoft has fully embraced Linux now; it's time to move to the next step.
Why on earth would somebody make a company with one of the most reviled programmers on earth? Everyone knows that everything he touches turns to shit.
Hi Chris,
One of the most grating pain points of the early versions of systemd was a general lack of humility, some would say rank arrogance, displayed by the project lead and his orbiters. Today systemd is in a state of "not great, not terrible", but it was (and in some circles still is) notorious for breaking people's Linux installs, their workflows, and generally just causing a lot of headaches. The systemd project leads responded mostly with Apple-style "you're holding it wrong" sneers.
It's not immediately clear to me what exactly Amutable will be implementing, but it smells a lot like some sort of DRM, and my immediate reaction is that this is something that Big Tech wants but that users don't.
My question is this: Has Lennart's attitude changed, or can linux users expect more of the same paternalism as some new technology is pushed on us whether we like it or not?
As someone who's lost many hours troubleshooting systemd failures, I would like an answer to this question, too.
You won't believe how many hours we lost troubleshooting SysV init and Upstart issues. systemd is so much better in every way: reliable parallel init with dependencies, proper handling of double forking, much easier service hardening (systemd-analyze security), proper timer handling (yay, no more cron), proper temporary file/directory handling, centralized logs, etc.
It improves on just about every level compared to what came before. And no, nothing is perfect, and you sometimes have to troubleshoot it.
"In every way"
About ten years ago I took a three day cross-country Amtrak trip where I wanted to work on some data analysis that used mysql for its backend. It was a great venue for that sort of work because the lack of train-internet was wonderful to keep me focused. The data I was working with was about 20GB of parking ticket data. The data took a while to process over SQL which gave me the chance to check out the world unfolding outside of the train.
At some point, mysql (well, mariadb) got into a weird state after an unclean shutdown that put itself into recovery mode where upon startup it had to do some disk-intensive cleanup. Thing is -- systemd has a default setting (that's not readily documented, nor sufficiently described in its logs when the behavior happens) that halts the service startup after 30 seconds to try again. On loop.
My troubleshooting attempts were unsuccessful. And since I deleted the original csv files to save disk space, I wasn't able to even poke at the CSV files through python or whatnot.
So instead of doing the analysis I wanted to do on the train, I had to wait until I got to the end of the line to fix it. Sure enough, it was some default 30s timeout that's not explicitly mentioned nor commented out like many services do.
So, saying that things are "much better in every way" really falls on deaf ears and is reminiscent of the systemd devs' dismissive/arrogant behavior that many folk are frustrated about.
There’s a reason why Devuan (a non systemd Debian) exists. Don’t want to get into a massive argument, but there are legitimate reasons for some to go in a different direction.
The problem is not systemd vs SysV et al, the problem is systemd spreading like a cancer throughout the entire operating system.
Also trying to use systemd with podman is frustrating as hell. You just cannot run a system service using podman as a non-root user and have it work correctly.
Here are a few examples of problems systemd has caused me:
System shutdown/reboot is now unreliable. Sometimes it will be just as quick as it was before systemd arrived, but other times, systemd will decide that something isn't to its liking, and block shutdown for somewhere between 30 seconds and 10 minutes, waiting for something that will never happen. The thing in question might be different from one session to the next, and from one systemd version to the next; I can spend hours or days tracking down the process/mount/service in question and finding a workaround, only to have systemd hang on something else the next day. It offers no manual skip option, so unless I happen to be working on a host with systemd's timeouts reconfigured to reduce this problem, I'm stuck with either forcing a power-off or having my time wasted.
Something about systemd's meddling with cgroups broke the lxc control commands a few years back. To work around the problem, I have to replace every such command I use with something like `systemd-run --quiet --user --scope --property=Delegate=yes <command>`. That's a PITA that I'm unlikely to ever remember (or want to type) so I effectively cannot manage containers interactively without helper scripts any more. It's also a new systemd dependency, so those helper scripts now also need checks for cgroup version and systemd presence, and a different code path depending on the result. Making matters worse, that systemd-run command occasionally fails even when I do everything "right". What was once simple and easy is now complex and unreliable.
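The helper-script workaround described above can at least be collapsed into one small wrapper so the systemd/cgroup check lives in a single place. This is a hypothetical helper, not anything systemd or lxc ships:

```shell
# Hypothetical wrapper: run a container command inside a delegated
# systemd scope when systemd manages the cgroup tree, or run it
# directly otherwise. "$@" is whatever lxc command you'd have typed.
run_in_scope() {
  if command -v systemd-run >/dev/null 2>&1 && [ -d /run/systemd/system ]; then
    systemd-run --quiet --user --scope --property=Delegate=yes "$@"
  else
    "$@"
  fi
}
```

Usage would then be `run_in_scope lxc-attach -n mycontainer`, with the same fallback behavior everywhere.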
At some point, Lennart unilaterally decided that all machines accessed over a network must have a domain name. Subsequently, every machine running a distro that had migrated to systemd-resolved was suddenly unable to resolve its hostname-only peers on the LAN, despite the DNS server handling them just fine. Finding the problem, figuring out the cause, and reconfiguring around it wasn't the end of the world, but it did waste more of my time. Repeating that experience once or twice more when systemd behavior changed again and again eventually drove me to a policy of ripping out systemd-resolved entirely on any new installation. (Which, of course, takes more time.) I think this behavior may have been rolled back by now, but sadly, I'll never get my time back.
There are more examples, but I'm tired of re-living them and don't really want to write a book. I hope these few are enough to convey my point:
Systemd has been a net negative in my experience. It has made my life markedly worse, without bringing anything I needed. Based on conversations, comments, and bug reports I've seen over the years, I get the impression that many others have had a similar experience, but don't bother speaking up about it any more, because they're tired of being dismissed, ignored, or shouted down, just as I am.
I would welcome a reliable, minimal, non-invasive, dependency-based init. Systemd is not it.
anything that keeps him away from systemd is a good thing.
systemd kept him away from pulseaudio and whoever is/was maintaining that after him was doing a good job of fixing it.
I'll ask the dumb question sorry!
Who is this for / what problem does it solve?
I guess security? Or maybe reproducability?
I thought it was how to plug the user freedom hole. Profits are leaking because users can leave the slop ecosystem and install something that respects their freedom. It's been solved on mobile devices and it needs to be solved for desktops.
All vague hand waving at this point and not much to talk about. We'll have to wait and see what they deliver, how it works and the business model to judge how useful it will be.
Immutability means you can't touch or change some parts of the system without great effort (e.g. macOS SIP).
Atomicity means you can track every change, and every change is so small that it affects only one thing and can be traced, replayed, or rolled back. It's like going from A to B and being able to return to A (or go to B again) in a deterministic manner.
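That A-to-B picture maps directly onto how image-based systems switch state: build the new tree off to the side, then swap a single symlink with an atomic rename. A minimal sketch (the directory names are made up):

```shell
# Two complete system states; "current" always points at exactly one.
mkdir -p state-A state-B
echo "config v1" > state-A/config
echo "config v2" > state-B/config
ln -sfn state-A current                # start at A
# Atomic flip to B: rename() replaces "current" in one step, so a
# reader never observes a half-updated tree.
ln -sfn state-B current.tmp && mv -T current.tmp current
cat current/config                     # now reads state-B's config
# "Rollback" is just flipping the same symlink back to A.
ln -sfn state-A current.tmp && mv -T current.tmp current
```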
Hopefully he will leave systemd alone and stop closing bugs he doesn't understand now
It starts from there, then systemd takes over and carries the flag forward.
See the "features" list from systemd 257/258 [0].
So is LP leaving Microsoft, or has he already left?
>We are building cryptographically verifiable integrity into Linux systems
I wonder what that means? It could be a good thing, but I tend to think it could be a privacy nightmare, depending on who controls the keys.
Verifiable to who? Some remote third party that isn't me? The hell would I want that?
https://0pointer.net/blog/authenticated-boot-and-disk-encryp...
You. The money quote about the current state of Linux security:
> In fact, right now, your data is probably more secure if stored on current ChromeOS, Android, Windows or MacOS devices, than it is on typical Linux distributions.
Say what you want about systemd the project but they're the only ones moving foundational Linux security forward, no one else even has the ambition to try. The hardening tools they've brought to Linux are so far ahead of everything else it's not even funny.
This is basically propaganda for the war on general purpose computing. My user data is less safe on a Windows device, because Microsoft has full access to that device and they are extremely untrustworthy. On my Linux device, I choose the software to install.
> Microsoft
the guys that copy your bitlocker keys in the clear
Considering that (for example) your data on ChromeOS is automatically copied to a server run by Google, who are legally compelled to provide a copy to the government when subject to a FISA order, it is unclear what Poettering's threat model is here. Handwringing about secure boot is ludicrous when somebody already has a remote backdoor, which all of the cited operating systems do. Frankly, the assertion of such a naked counterfactual says a lot more about Poettering than it does about Linux security.
Just an assumption here, but the project appears to be about the methodology to verify the install. Who holds the keys is an entirely different matter.
The events page includes a talk titled "Remote Attestation of Immutable Operating Systems built on systemd", which is a bit of a clue.
I'm sure this company is more focused on the enterprise angle, but I wonder if the buildout of support for remote attestation could eventually resolve the Linux gaming vs. anti-cheat stalemate. At least for those willing to use a "blessed" kernel provided by Valve or whoever.
The road to hell is paved with good intentions.
If it exists, somebody will use it and eventually force it, and I don't think gaming, especially games requiring anti-cheat, is worth that risk.
If that means Linux will not be able to overtake Windows' market share, that's OK. At least the "year of the Linux desktop" memes will still be funny.
Only by creating a new stalemate between essential liberty and a little temporary security — anticheat doesn't protect you from DMA cheating.
> resolve the Linux gaming vs. anti-cheat stalemate
It will.
Then just a bit later no movies for you unless you are running a blessed distro. Then Chrome will start reporting to websites that you are this weird guy with a dangerous unlocked distro, so no banking for you. Maybe no government services as well because obviously you are a hacker. Why would you run an unlocked linux if you were not?
A rust-vmm-based environment that verifies/authenticates an image before running it? An immutable VM (no FS, root dropped after setting up networking, no devices or only curated ones), a 'micro'-VM built on systemd? A VMM that captures the running kernel's code/memory mapping before handing off to userland and periodically checks it hasn't changed? Anything else on the state of the art of immutable/integrity-checked VMs?
Sounds like kernel mode DRM or some similarly unwanted bullshit.
We all know who controls the keys. It's the first party who puts their hands on the device.
> Sounds like kernel mode DRM or some similarly unwanted bullshit.
Look, I hate systemd just as much as the next guy - but how are you getting "DRM" out of this?
"cryptographically verifiable integrity" is a euphemism for tivoization/Treacherous Computing. See, e.g., https://www.gnu.org/philosophy/can-you-trust.en.html
As the immediate responder to this comment, I claim to be the next guy. I love systemd.
I don't like a few pieces, or Mr. Lennart's attitude toward some bugs/obvious flaws, but it's by far better than old SysV or really any alternative we have.
Doing complex flows like "run an app that loads keys from a remote server to unlock an encrypted partition" is far easier under systemd, and it has a dependency system robust enough to trigger that mount automatically if an app needing it starts.
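As a sketch of that flow (all unit names, paths, and the fetch script here are invented for illustration): a oneshot service fetches the key, and the mount unit pulls it in, so starting anything that needs the mount starts the whole chain. Written to the current directory here; they would really live in /etc/systemd/system/:

```shell
cat > fetch-luks-key.service <<'EOF'
[Unit]
Description=Fetch LUKS key from remote server (illustrative)
Before=data.mount

[Service]
Type=oneshot
RemainAfterExit=yes
# Invented helper script that fetches the key and opens the volume.
ExecStart=/usr/local/bin/fetch-key.sh
EOF

cat > data.mount <<'EOF'
[Unit]
# Requires/After mean "mount /data" implies "fetch the key first".
Requires=fetch-luks-key.service
After=fetch-luks-key.service

[Mount]
What=/dev/mapper/data
Where=/data
EOF
```

Any service declaring `RequiresMountsFor=/data` then triggers the unlock automatically when it starts.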
Coming from software supply chain, I am excited to see such a cracked team handle this problem and I wish we talked more about this in FOSS land.
I see the use case for servers targeted by malicious actors. A penetration test on a hardened system with secure boot and binary verification would be much harder.
For individuals, IMO the risk mostly comes from software they want to run (install scripts or supply chain attacks). So if the end user is in control of what gets signed, I don't see much benefit. Unless you force users to use an app store...
Why have the responses to the post from the CEO been moved to their own top-level posts? Also, why are replies disabled for the CEO post?
My only experience with Linux secure boot so far.... I wasn't even aware that it was secure booted. And I needed to run something (I think it was the Displaylink driver) that needs to jam itself into the kernel. And the convoluted process to do it failed (it's packaged for Ubuntu but I was installing it on a slightly outdated Fedora system).
What, this part is only needed for secure boot? I'm not sec... oh. So go back to the UEFI settings, turn secure boot off, problem solved. I usually also turn off SELinux right after install.
So I'm an old greybeard who likes to have full control. Less secure. But at least I get the choice. Hopefully I continue to do so. The notion of not being able to access online banking services or other things that require account login, without running on a "fully attested" system does worry me.
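For what it's worth, the less convoluted fix for out-of-tree modules like the DisplayLink one is to enroll a Machine Owner Key (MOK) and sign the module with it, instead of disabling Secure Boot. A sketch below; the sign-file location and the module path vary by distro, and the enrollment steps are shown as comments since they need root plus a reboot through the firmware's MOK manager:

```shell
# Generate a signing key and DER certificate for out-of-tree modules.
openssl req -new -x509 -newkey rsa:2048 -nodes -days 36500 \
    -subj "/CN=Local module signing/" \
    -keyout MOK.priv -outform DER -out MOK.der
# Enrollment and signing (sketched only; need root and a reboot):
#   sudo mokutil --import MOK.der      # sets a one-time password
#   (reboot; approve the key in the blue MOK manager screen)
#   sudo "$(find /usr/src -name sign-file | head -1)" sha256 \
#       MOK.priv MOK.der evdi.ko       # evdi = DisplayLink's module
```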