Lennart Poettering, Christian Brauner founded a new company
(amutable.com)
372 points by hornedhob 5 days ago
Well I was wondering when the war on general computing and computer ownership would be carried into the heart of the open source ecosystems.
Sure, there are sensible things that could be done with this. But given the background of the people involved, the fact that this is yet another clear profit-first gathering makes me incredibly pessimistic.
This pessimism is made worse by reading the answers of the founders here in this thread: typical corporate talk. And most importantly: preventing the very real dangers involved is clearly not a main goal; it is instead brushed off with empty platitudes like "I've been a FOSS guy my entire adult life...." instead of describing or considering actual preventive measures. And even if the claim were true and the founders had a real love for the hacker spirit, there is obviously nothing stopping them from selling to the usual suspects and golden-parachuting out.
I was really struggling not to make this just another snarky, sarcastic comment, but it is exhausting. It is exhausting to see the hatred some have for people just owning their hardware. So sorry, "don't worry, we're your friends" just doesn't cut it to make me come at this with a positive attitude.
The benefits are few, the potential to do a lot of harm is large. And the people involved clearly have the network and connections to make this an instrument of user-hostility.
I do sort of wonder if there’s room in my life for a small attested device. Like, I could actually see a little room for my bank to say “we don’t know what other programs are running on your device so we can’t actually take full responsibility for transactions originating from your device,” and if I look at it from the bank’s point of view that doesn’t seem unreasonable.
Of course, we’ll see if anybody is actually engaging with this idea in good faith when it all gets rolled out. Because the bank has full end-to-end control over the device, authentication will be fully their responsibility and the (basically bullshit in the first place) excuse of “your identity was stolen,” will become not-a-thing.
Obviously I would not pay for such a device (and will always have a general purpose computer that runs my own software), but if the bank or Netflix want to send me a locked down terminal to act as a portal to their services, I guess I would be fine with using it to access (just) their services.
I suggested this as a possible solution in another HN thread a while back, but along the lines of "If a bank wants me to have a secure, locked down terminal to do business with them, then they should be the ones forking it over, not commanding control of my owned personal device."
It would quickly get out of hand if every online service started to do the same though. But, if remote device attestation continues to be pushed and we continue to have less and less control and ownership over our devices, I definitely see a world where I now carry two phones. One running something like GrapheneOS, connected to my own self-hosted services, and a separate "approved" phone to interact with public and essential services as they require crap like play integrity, etc.
But at the end of the day, I still fail to see why this is even a need. Governments, banks, and other entities have been providing services over the web for decades at this point with little issue. Why are we catering to tech illiteracy (by restricting ownership) instead of promoting tech education and encouraging people to both learn and, importantly, take responsibility for their own actions and the consequences of those actions?
"Someone fell for a scam and drained their bank account" isn't a valid reason to start locking down everyone's devices.
I remember my parents doing online banking authenticated with smart cards. Over 20 years ago. Today the same bank requires an iOS or Play Integrity device (for individuals at least; their gated business banking is a separate set of services and I don't know what they offer there).
This is not a question of missing tech.
> I suggested this as a possible solution in another HN thread a while back, but along the lines of "If a bank wants me to have a secure, locked down terminal to do business with them, then they should be the ones forking it over, not commanding control of my owned personal device."
Most banks already do that. The secure, locked down terminals are called ATMs and they are generally placed at assorted convenient locations in most cities.
Yeah, to some extent I just wanted to think about where the boundary ought to be. I somewhat suspect the bank or Netflix won’t be willing to send me a device of theirs to act as their representative in my pocket. But it is basically the only time a reasonable person should consider using such a device. Anybody paying to buy Netflix or the bank a device is basically being scammed or ripped off.
Why should I need a separate device? Doesn't a hardware security token suffice? I wouldn't even mind bringing my own but my bank doesn't accept them last I checked. (Do any of them?)
If the bank can't be bothered to either implement support for U2F or else clearly articulate why U2F isn't sufficient then they don't have a valid position. Anything else they say on the matter should be disregarded.
> if the bank or Netflix want to send me a locked down terminal to act as a portal to their services, I guess I would be fine with using it to access (just) their services
They would only do it to assert more control over you and, in Netflix's case, force more ads on you.
It is why I never use any company's apps.
If they make it a requirement, I will just close my account.
The bank thing is a smoke screen.
This entire shit storm is 100% driven by the music, film, and tv industries, who are desperate to eke out a few more millions in profit from the latest Marvel snoozefest (or whatever), and who tried to argue with a straight face that they were owed more than triple the entire global GDP [0].
These people are the enemy. They do not care about computing freedom. They don't care about you or me at all. They only care about increasing profits, and they're using the threat of locking people out of Netflix via HDCP and TPM in order to force remote attestation on everyone.
I don't know what the average age on HN is, but I came up in the 90s when "fuck corporations" and "information wants to be free" still formed a large part of the zeitgeist, and it's absolutely infuriating to see people like the founders in TFA actively building things that will measurably make things worse for everyone except the C-suite class. So much for "hacker spirit".
[0] https://globalnews.ca/news/11026906/music-industry-limewire-...
Also worth remembering that around 2010, the music and film industry associations of America were claiming entitlement to $50 billion annually in piracy-related losses beyond what could be accounted for in direct lost revenue (which _might_ have been as much as 10 billion, or 1/6th of their claim).
These guys pathologically have had a chip on their shoulder since Napster.
HN is for the kind of hacker who makes the next Uber or AirBNB. It's strongly aligned with the interests of corporate shareholders.
Yeah, as I am reading the landing page, the direction seems clear. It sucks, because as an individual there is not much one can do, and there is no consensus that it is a bad thing ( and even if there was, how to counter it ). Honestly, there are times I feel lucky to be as dumb as I am. At least I don't have the same responsibility for my output as people who create foundational tech and code.
Yup
Poettering is a well-known Linux saboteur, along with Red Hat. Without RH pushing his trash, he is not really that big of a threat.
Just like de Icaza, another saboteur, ran off to MS. That is the tell-tale sign for people not convinced that either person's work in FOSS existed to cause damage.
No, this is not a snarky, sarcastic comment. Trust Amutable at your own peril.
My tinfoil hat theory is that devices like HDDs will be locked and only work on "attested" systems that actively monitor the files. This will be pushed by the media industry to combat piracy, then opened up for para-law enforcement like Palantir.
Then GPU and CPU makers will hop on and lock their devices to promote paid Linux like Red Hat, or offer "premium support" to unlock your GPU for Linux for a monthly fee.
They'll say "if you are a Linux enthusiast then go tinker with arm and risc on an SD card"
> [T]he war on general computing and computer ownership [...] It is exhausting to see the hatred some have for people just owning their hardware.
The integrity of a system being verified/verifiable doesn't imply that the owner of the system doesn't get to control it.
This sort of e2e attestation seems really useful for enterprise or public infrastructure. Like, it'd be great to know that the ATMs or transit systems in my city had this level of system integrity.
Your argument correctly points out that attestation tech can be used to restrict software freedom, but it also assumes that this company is actively pursuing those use cases. I don't think that is a given.
At the end of the day, as long as the owner of the hardware gets to control the keys, this seems like fantastic tech.
> Your argument correctly points out that attestation tech can be used to restrict software freedom, but it also assumes that this company is actively pursuing those use cases. I don't think that is a given.
Once it's out there and normalized, the individual engineers don't get to control how it is used. They never do.
Unless Lennart Poettering uses remote attestation to verify who is attesting to whom.
System integrity also ends at the border of the system. The entire ecosystem of ATM skimmers demonstrates this-- the software and hardware are still 100% sanctioned, they're just hidden beneath a shim in the card slot and a stick-on keypad module.
I generally agree with the concept of "if you want me to use a pre-approved terminal, you supply it." I'd think this opens up a world of better possibilities. Right now, the app-centric bank/media company/whatever has to build apps that are compatible with 82 bazillion different devices, and then deal with the attestation tech support issues. Conversely, if they provide a custom terminal, it might only need to deal with a handful of devices, and they could design it to function optimally for the single use case.
You want PCIe-6? Cool well that only runs on Asus G-series with AI, and is locked to attested devices because the performance is so high that bad code can literally destroy it. So for safety, we only run trusted drivers and because they must be signed, you have to use Redhat Premium at a monthly cost of $129. But you get automatic updates.
Do you want the control systems of the subway to get modified by a malicious actor? What about dam releases? Heat pumps in apartment buildings? Robotaxis? Payroll systems? Banks?
Immutability is a huge security feature, with tons of real world applications for good.
The fact that mega corps can abuse consumers is a separate issue. We should solve that with regulation. Don't forsake all the good that this tech can do just because Asus or Google want to infringe on your software freedoms. Frankly, these mega corps are going to infringe on your rights regardless, whether or not Amutable exists as a business.
Don't throw the baby out with the bath water.
> At the end of the day, as long as the owner of the hardware gets to control the keys, this seems like fantastic tech.
The problem is that there are powerful corporate and government interests who would love nothing more than to prevent users from controlling the keys for their own computers, and they can make their dream come true simply by passing a law.
It may be the case that certain users want to ensure that their computers are only running their code. But the same technologies can also be used to ensure that their computers are only running someone else's code, locking users out from their own devices.
That's like saying we shouldn't build anything that can be used for good if it can also be used for evil.
By that logic, we should just turn off the internet. Too much potential for evil there.
More seriously, the argument being presented seems to just be "attestation tech has been used for evil in the past, therefore all attestation tech is bad," which is obviously an unsound argument. A sound argument would have to show that attestation tech is _inherently_ bad, and I've already provided examples that I think effectively counter that. I can provide more if needed.
I get that we want to prevent attestation tech from being used for evil, but that's a regulatory problem, not a technical one. You make this point by framing the evil parties as "corporate and government interests."
Don't get me wrong, I am fully against anything that limits the freedoms of the person that owns the device. I just don't see how any of this is a valid argument that Amutable's mission is bad/immoral/invalid.
Or maybe another argument that's perhaps more aligned with the FOSS ideology: if I want e2e attestation of the software stack on my own devices, isn't this a good thing for me?
Remote attestation only works because your CPU's secure enclave has a private key burned (fused) into it at the factory. It is then provisioned with a digital certificate for its public key by the manufacturer.
Every time you perform an attestation, the public key (and certificate) is divulged, which makes it a unique identifier, and one that can be traced to the point of sale - and when buying a used device, a point of resale, as the new owner can be linked to the old one.
They make an effort to increase privacy by using intermediaries to convert the identifier to an ephemeral one, and use the ephemeral identifier as the attestation key.
This does not change the fact that if the party you are attesting to gets together with the intermediary they will unmask you. If they log the attestations and the EK->AIK conversions, the database can be used to unmask you in the future.
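The EK-to-AIK unmasking risk being described can be sketched as a toy model (pure Python; all identifiers are hypothetical, and the hash-based "key derivation" stands in for the real certificate machinery):

```python
import hashlib

# Toy model: the privacy intermediary converts a permanent endorsement
# key (EK) into an ephemeral attestation key (AIK) per session, but
# logs the conversion.

def make_aik(ek: str, session: int) -> str:
    # Ephemeral identifier; looks unlinkable to a lone verifier.
    return hashlib.sha256(f"{ek}:{session}".encode()).hexdigest()

ca_log = {}          # intermediary's record of EK -> AIK conversions
verifier_log = []    # verifier's record of which AIKs it has seen

ek = "EK-fused-at-factory-0042"   # traceable to the point of sale
for session in range(3):
    aik = make_aik(ek, session)
    ca_log[aik] = ek              # intermediary remembers the mapping
    verifier_log.append(aik)      # verifier only ever sees the AIK

# The verifier alone sees three apparently unrelated identities...
assert len(set(verifier_log)) == 3

# ...but if verifier and intermediary pool their logs (or the logs
# leak), every session unmasks to the factory-fused identity.
assert {ca_log[aik] for aik in verifier_log} == {ek}
```

This is only a simulation of the logging concern, not of any real protocol; the point is that ephemeral identifiers protect you only as long as the conversion log stays out of the verifier's hands.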
Also note that nothing can prevent you from forging attestations if you source a private-public key pair and a valid certificate, either by extracting them from a compromised device or with help from an insider at the factory. DRM systems tend to be separate from the remote attestation ones but the principles are virtually identical. Some pirate content producers do their deeds with compromised DRM private keys.
> People dislike cash for some strange reason
In my case it is because I would never have the right amount with me, in the right denominations. Google Pay always has this covered.
Also, you need to remember to take one more thing with you, and refill it occasionally. Unlike fuel, you do not know how much you will need, or when.
It can get lost or destroyed, and is not (usually) replaceable.
I am French, currently in the US. If I need to change 100 USD into small denominations, I will need to go to the bank, and they will hopefully do that for me. Or not. Or not without some official paper from someone.
Ah yes, and I am in the US and the Euro is not an accepted currency here. So I need to take my 100 € to a bank and hope I can get 119.39 USD. In the right denominations.
What will I do with the 34.78 USD left when I am back home? I have a chest of money from all over the world. I showed it once to my kids when they were young, told a bit about the world and then forgot about it.
Money also weighs quite a lot. And when it does not weigh much it gets lost or thrown away with some other papers. Except if it is neatly folded in a wallet, which I will forget.
I do not care about being traced when going to the supermarket. If I need to do untraceable stuff I will get money from the ATM. Ah crap, they will trace me there.
So the only solution is to get my salary in cash, which is forbidden in France. Or take out small amounts from time to time. Which I will forget, and I have better things to do.
Cash sucks.
Sure, if we go cashless and terrible things happen (cyberwar, solar flare, software issues) then we are screwed. But either the situation unscrews itself, or we will have much, much, much bigger issues than money -- we will need to go full survival mode, apocalypse movies-style.
Anonymous-attestation protocols are well known in cryptography, and some are standardized: https://en.wikipedia.org/wiki/Direct_Anonymous_Attestation
> Anonymous-attestation protocols are well known in cryptography, and some are standardized: https://en.wikipedia.org/wiki/Direct_Anonymous_Attestation
Which does exactly what I said. Fully zero-knowledge attestation isn't practical, as a single compromised key would give rise to a forgery service that could serve everyone (with full anonymity there is no way to blocklist the compromised key).
The solution first adopted by the TCG (TPM specification v1.1) required a trusted third party, namely a privacy certificate authority (privacy CA). Each TPM has an embedded RSA key pair called an Endorsement Key (EK) which the privacy CA is assumed to know. In order to attest, the TPM generates a second RSA key pair called an Attestation Identity Key (AIK). It sends the public AIK, signed by the EK, to the privacy CA, who checks its validity and issues a certificate for the AIK. (For this to work, either a) the privacy CA must know the TPM's public EK a priori, or b) the TPM's manufacturer must have provided an endorsement certificate.) The host/TPM is now able to authenticate itself with respect to the certificate.

This approach permits two possibilities for detecting rogue TPMs: firstly, the privacy CA should maintain a list of TPMs known to be rogue, identified by their EK, and reject requests from them; secondly, if a privacy CA receives too many requests from a particular TPM, it may reject them and blocklist the TPM's EK. The number of permitted requests should be subject to a risk management exercise.

This solution is problematic since the privacy CA must take part in every transaction and thus must provide high availability whilst remaining secure. Furthermore, privacy requirements may be violated if the privacy CA and verifier collude. Although the latter issue can probably be resolved using blind signatures, the first remains.
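A rough sketch of that privacy-CA flow, including the rogue list and rate limiting (toy Python model; strings stand in for RSA keys and X.509 certificates, and the class and method names are illustrative):

```python
# Toy sketch of the TPM v1.1 privacy-CA flow: the TPM asks the CA to
# certify a fresh AIK; the CA checks the EK against its known list,
# rejects blocklisted TPMs, and rate-limits certificate requests.

class PrivacyCA:
    def __init__(self, known_eks, max_requests=2):
        self.known_eks = set(known_eks)   # EKs provisioned by manufacturers
        self.rogue_eks = set()            # blocklist of compromised TPMs
        self.requests = {}                # per-EK request counter
        self.max_requests = max_requests  # from a risk management exercise

    def certify_aik(self, ek, aik):
        if ek not in self.known_eks or ek in self.rogue_eks:
            return None                   # unknown or known-rogue TPM
        self.requests[ek] = self.requests.get(ek, 0) + 1
        if self.requests[ek] > self.max_requests:
            self.rogue_eks.add(ek)        # too many requests: assume a leak
            return None
        return f"cert({aik})"             # stand-in for a signed certificate

ca = PrivacyCA(known_eks={"EK-1"})
assert ca.certify_aik("EK-1", "AIK-a") == "cert(AIK-a)"
assert ca.certify_aik("EK-unknown", "AIK-b") is None
ca.certify_aik("EK-1", "AIK-c")                  # second allowed request
assert ca.certify_aik("EK-1", "AIK-d") is None   # limit tripped, EK blocklisted
```

The sketch also makes the availability problem visible: every single attestation goes through `certify_aik`, so the CA is both a single point of failure and a party that sees every EK-to-AIK conversion.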
AFAIK no one uses blind signatures. It would enable the formation of commercial attestation farms.
Apple uses Blind Signatures for attestation. It's how they avoid captchas at CloudFlare and Fastly in their Private Relay product.
I don't think that a 100% anonymous attestation protocol is what most people need and want.
It would be sufficient to be able to freely choose who you trust as proxy for your attestations *and* the ability to modify that choice at any point later (i.e. there should be some interoperability). That can be your Google/Apple/Samsung ecosystem, your local government, a company operating in whatever jurisdiction you are comfortable with, etc.
Most businesses do not need origin attestation; they need history attestation.
I.e. attestation from the point when they buy the device from a trusted source and initialize it.
But what's it attesting? Their tagline "Every system starts in a verified state and stays trusted over time" should be "Every system starts in a verified state with 8,000 yet-to-be-discovered vulns and stays in that vulnerable state over time". The figure is made up, but see for example https://tuxcare.com/blog/the-linux-kernel-cve-flood-continue.... So what you're attesting is that all the bugs are still present, not that the system is actually secure.
It's a privacy consideration. If you desire to juggle multiple private profiles on a single device extreme care needs to be taken to ensure that at most one profile (the one tied to your real identity) has access to either attestation or DRM. Or better yet, have both permanently disabled.
Hardware fingerprinting in general is a difficult thing to protect from - and in an active probing scenario where two apps try to determine if they are on the same device it's all but impossible. But having a tattletale chip in your CPU an API call away doesn't make the problem easier. Especially when it squawks manufacturer traceable serials.
Remote attestation requires collusion with an intermediary at least, DRM such as Widevine has no intermediaries. You expose your HWID (Widevine public key & cert) directly to the license server of which there are many and under the control of various entities (Google does need to authorize them with certificates). And this is done via API, so any app in collusion with any license server can start acquiring traceable smartphone serials.
Using Widevine for this purpose breaks Google's ToS but you would need to catch an app doing it (and also intercept the license server's certificate) and then prove it which may be all but impossible as an app doing it could just have a remote code execution "vulnerability" and request Widevine license requests in a targeted or infrequent fashion. Note that any RCE exploit in any app would also allow this with no privilege escalation.
For most individuals it usually doesn't matter. It might matter if you have an adversary, e.g. you are a journalist crossing borders, a researcher in a sanctioned country, or an organization trying to avoid cross-tenant linkage.
Remote attestation shifts trust from user-controlled software to manufacturer-controlled hardware identity.
It's a gun with a serial number. The Fast and Furious gunwalking scandal of the Obama years was traced and proven with this kind of thing.
The scandal you cited was that guns controlled by the federal government don't have any obvious reasonable path to being owned by criminals; there isn't an obvious reason for the guns to have left the possession of the government in the first place.
There's not really an equivalent here for a computer owned by an individual because it's totally normal for someone to sell or dispose of a computer, and no one expects someone to be responsible for who else might get their hands on it at that point. If you prove a criminal owns a computer that I owned before, then what? Prosecution for failing to protect my computer from thieves, or for reselling it, or gifting it to a neighbor or family friend? Shifting the trust doesn't matter if what gets exposed isn't actually damaging in any way, and that's what the parent comment is asking about.
The first two examples you give seem to be about an unscrupulous government punishing someone for owning a computer that they consider tainted, but it honestly doesn't seem that believable that a government who would do that would require a burden of proof so high as to require cryptographic attestation to decide on something like that. I don't have a rebuttal for "an organization trying to avoid cross-tenant linkage" though because I'm not sure I even understand what it means: an example would probably be helpful.
I assume the use case here is mostly for backend infrastructure rather than consumer devices. You want to verify that a machine has booted a specific signed image before you release secrets like database keys to it. If you can't attest to the boot state remotely, you don't really know if the node is safe to process sensitive data.
I'm confused. People talking about remote attestation which I thought was used for stuff like SGX. A system in an otherwise untrusted state loads a blob of software into an enclave and attests to that fact.
Whereas the state of the system as a whole immediately after it boots can be attested with secure boot and a TPM sealed secret. No manufacturer keys involved (at least AFAIK).
I'm not actually clear which this is. Are they doing something special for runtime integrity? How are you even supposed to confirm that a system hasn't been compromised? I thought the only realistic way to have any confidence was to reboot it.
This seems like the kind of technology that could make the problem described in https://www.gnu.org/philosophy/can-you-trust.en.html a lot worse. Do you have any plans for making sure it doesn't get used for that?
I'm Aleksa, one of the founding engineers. We will share more about this in the coming months but this is not the direction nor intention of what we are working on. The models we have in mind for attestation are very much based on users having full control of their keys. This is not just a matter of user freedom, in practice being able to do this is far more preferable for enterprises with strict security controls.
I've been a FOSS guy my entire adult life, I wouldn't put my name to something that would enable the kinds of issues you describe.
Thanks for the clarification and to be clear, I don't doubt your personal intent or FOSS background. The concern isn't bad actors at the start, it's how projects evolve once they matter.
History is pretty consistent here:
WhatsApp: privacy-first, founders with principles, both left once monetization and policy pressure kicked in.
Google: 'Don’t be evil' didn’t disappear by accident — it became incompatible with scale, revenue, and government relationships.
Facebook/Meta: years of apologies and "we'll do better," yet incentives never changed.
Mobile OS attestation (iOS / Android): sold as security, later became enforcement and gatekeeping.
Ruby on Rails ecosystem: strong opinions, benevolent control, then repeated governance, security, and dependency chaos once it became critical infrastructure. Good intentions didn't prevent fragility, lock-in, or downstream breakage.
Common failure modes:
Enterprise customers demand guarantees - policy creeps in.
Governments demand compliance - exceptions appear.
Liability enters the picture - defaults shift to "safe for the company."
Revenue depends on trust decisions - neutrality erodes.
Core maintainers lose leverage - architecture hardens around control.
Even if keys are user-controlled today, the key question is architectural: Can this system resist those pressures long-term, or does it merely promise to?
Most systems that can become centralized eventually do, not because engineers change, but because incentives do. That’s why skepticism here isn't personal — it's based on pattern recognition.
I genuinely hope this breaks the cycle. History just suggests it's much harder than it looks.
Can you (or someone) please tell what’s the point, for a regular GNU/Linux user, of having this thing you folks are working on?
I can understand corporate use case - the person with access to the machine is not its owner, and corporation may want to ensure their property works the way they expect it to be. Not something I care about, personally.
But when it’s a person using their own property, I don’t quite get the practical value of attestation. It’s not a security mechanism anymore (protecting a person from themselves is an odd goal), and it has significant abuse potential. That happened to mobile, and the outcome was that users were “protected” from themselves, that is - in less politically correct words - denied effective control over their personal property, as larger entities exercised their power and gated access to what became de-facto commonplace commodities by forcing them to surrender any rights. Paired with the awareness gap, the effects were disastrous, and not just for personal compute.
So, what’s the point and what’s the value?
The value is being able to easily and robustly verify that my device hasn't been compromised. Binding disk encryption keys to the TPM such that I don't need to enter a password but an adversary still can't get at the contents without a zero day.
Of course you can already do the above with secure boot coupled with a CPU that implements an fTPM. So I can't speak to the value of this project specifically, only build and boot integrity in general. For example I have no idea what they mean by the bullet "runtime integrity".
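The sealing idea in this comment can be sketched as a toy model (pure Python; a real setup would use e.g. systemd-cryptenroll or tpm2-tools against an actual TPM, and PCR semantics are heavily simplified here):

```python
import hashlib

# Toy model of measured boot + sealing: each boot component is hashed
# into a PCR (new = SHA256(old || SHA256(component))), and a secret is
# "sealed" so it is released only when the PCR matches the value
# recorded at sealing time.

def extend(pcr: bytes, component: bytes) -> bytes:
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

def boot(components):
    pcr = b"\x00" * 32                    # PCRs start zeroed at reset
    for c in components:
        pcr = extend(pcr, c)
    return pcr

good_chain = [b"firmware-v1", b"bootloader-v1", b"kernel-v1"]
sealed = {"pcr": boot(good_chain), "secret": b"disk-encryption-key"}

def unseal(sealed, current_pcr):
    return sealed["secret"] if current_pcr == sealed["pcr"] else None

# The unmodified boot chain releases the key; a tampered kernel does not.
assert unseal(sealed, boot(good_chain)) == b"disk-encryption-key"
assert unseal(sealed, boot([b"firmware-v1", b"bootloader-v1", b"evil"])) is None
```

This captures why the password can be skipped: the key is gated on boot-state measurements rather than on something the user types, so a modified boot chain simply never sees it.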
https://attestation.app/about For mobiles, it helps make tampering obvious.
https://doc.qubes-os.org/en/latest/user/security-in-qubes/an... For laptops, it helps make tampering obvious. (a different attestation scheme with smaller scope however)
This might not be useful to you personally, however.
Laptops can already do TPM-based boot verification with FLOSS firmware (coreboot with Heads). It works well with Qubes btw, and is recommended by the developers: https://forum.qubes-os.org/t/qubes-certified-novacustom-v54-...
The "founding engineers" behind Facebook and Twitter probably didn't set out to destroy civil discourse and democracy, yet here we are.
Anyway, "full control over your keys" isn't the issue, it's the way that normalization of this kind of attestation will enable corporations and governments to infringe on traditional freedoms and privacy. People in an autocratic state "have full control over" their identity papers, too.
> I've been a FOSS guy my entire adult life, I wouldn't put my name to something that would enable the kinds of issues you describe.
Until you get acquired, receive a golden parachute and use it when realizing that the new direction does not align with your views anymore.
But, granted, if all you do is FOSS then you will anyway have a hard time keeping evil actors from using your tech for evil things. Might as well get some money out of it, if they actually dump money on you.
I am aware of that, my (personal) view is that DRM is a social issue caused by modes of behaviour and the existence or non-existence of technical measures cannot fix or avoid that problem.
A lot of the concerns in this thread center on TPMs, but TPMs are really more akin to very limited HSMs that are actually under the user's control (I gave a longer explanation in a sibling comment but TPMs fundamentally trust the data given to them when doing PCR extensions -- the way that consumer hardware is fundamentally built and the way TPMs are deployed is not useful for physical "attacks" by the device owner).
Yes, you can imagine DRM schemes that make use of them but you can also imagine equally bad DRM schemes that do not use them. DRM schemes have been deployed for decades (including "lovely" examples like the Sony rootkit from the 2000s[1], and all of the stuff going on even today with South Korean banks[2]). I think using TPMs (and other security measures) for something useful to users is a good thing -- the same goes for cryptography (which is also used for DRM but I posit most people wouldn't argue that we should eschew all cryptography because of the existence of DRM).
[1]: https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootk... [2]: https://palant.info/2023/01/02/south-koreas-online-security-...
This whole discussion is a perfect example of what Upton Sinclair said, "It is difficult to get a man to understand something, when his salary depends on his not understanding it."
A rational and intelligent engineer cannot possibly believe that he'll be able to control what a technology is used for after he creates it, unless his salary depends on him not understanding it.
So far, that's a slick way to say not really. You are vague where it counts, and surely you have a better idea of the direction than you say.
Attestation of what to whom for which purpose? Which freedom does it allow users to control their keys, how does it square with remote attestation and the wishes of enterprise users?
I'm really not trying to be slick, but I think it's quite difficult to convince people about anything concrete (such as precisely how this model is fundamentally different to models such as the Secure Boot PKI scheme and thus will not provide a mechanism to allow a non-owner of a device to restrict what runs on your machine) without providing a concrete implementation and design documents to back up what I'm saying. People are rightfully skeptical about this stuff, so any kind of explanation needs to be very thorough.
As an aside, it is a bit amusing to me that an initial announcement about a new company working on Linux systems caused the vast majority of people to discuss the impact on personal computers (and games!) rather than servers. I guess we finally have arrived at the fabled "Year of the Linux Desktop" in 2026, though this isn't quite how I expected to find out.
> Attestation of what, to whom, for which purpose? What freedom does it leave users to control their keys, and how does it square with remote attestation and the wishes of enterprise users?
We do have answers for these questions, and a lot of the necessary components exist already (lots of FOSS people have been working on problems in this space for a while). The problem is that there is still a missing ~20% (not an actual estimate) that we are building now, and the whole story doesn't make sense without it. I don't like it when people announce vapourware, so I'm really just trying not to contribute to that problem by describing a system that is not yet fully built, though I do understand that it comes off as being evasive. It will be much easier to discuss all of this once we start releasing things, and I think that very theoretical technical discussions can often be quite unproductive.
In general, I will say that there are a lot of unfortunate misunderstandings about TPMs that lead people to assume their only use is as a mechanism for restricting users. This is really not the case: TPMs by themselves are actually more akin to very limited HSMs, with a handful of features that can (cooperatively with firmware and operating systems) be used to attest to some aspects of the system state. They are also fundamentally under the user's control, completely unlike the PKI scheme used by Secure Boot and similar systems. In fact, TPMs are really not a useful mechanism for protecting against someone with physical access to the machine -- they have to trust that the hashes they are given to extend into PCRs are legitimate, and on most systems the data is even provided over an insecure data line. This is why the security of locked-down systems like the Xbox One[1] doesn't really depend on them directly, and why such systems don't use them at all in the way they are used on consumer hardware. They are only really useful for protecting against third-party software-based attacks, which is something users actually want!
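For readers unfamiliar with PCRs, the reason a TPM can attest to boot state without anyone being able to "set" that state is the extend-only register design. A toy Python sketch of the semantics (this models only the hash chaining, not the TPM wire protocol, the event log, or any real PCR bank layout):

```python
import hashlib

def extend_pcr(pcr: bytes, measurement: bytes) -> bytes:
    # A PCR can never be written directly, only "extended":
    #   new_pcr = SHA-256(old_pcr || SHA-256(measurement))
    # so the final value commits to every measurement and their order.
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

# PCRs start zeroed at reset; each boot stage measures the next.
pcr = bytes(32)
for stage in (b"firmware", b"bootloader", b"kernel", b"initrd"):
    pcr = extend_pcr(pcr, stage)

print(pcr.hex())
```

Because each value folds in the previous one, software later in the boot chain cannot erase or reorder earlier measurements, only append new ones. But, as noted above, the TPM has to trust that the digests it is handed are honest in the first place.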
All of the comments about DRM obviously come from very legitimate concerns about user freedoms, but my views on this are a little too long to fit in a HN comment -- in short, I think that technological measures cannot fix a social problem and the history of DRM schemes shows that the absence of technological measures cannot prevent a social problem from forming either. It's also not as if TPMs haven't been around for decades at this point.
>I think that technological measures cannot fix a social problem
The absence of the technological measures used to implement societal problems absolutely does help, though. Just look at social media.
I fear the outlaw evil maid or other hypothetical attackers (good old scare-based sales tactics) much less than already powerful entities (enterprises, states) lawfully encroaching on my devices using your technology. So, I don't care about "misunderstandings" of the TPM or whatever other wall of text you are spewing to divert attention.
Thanks, this would be helpful. I will follow on by recommending that you always make it a point to note how user freedom will be preserved, without using obfuscating corpo-speak or assuming that users don’t know what they want, when planning or releasing products. If you can maintain this approach then you should be able to maintain a good working relationship with the community. If you fight the community you will burn a lot of goodwill and will have to spend resources on PR. And there is only so much that PR can do!
Better security is good in theory, as long as the user maintains control and the security is on the user end. The last thing we need is required ID linked attestation for accessing websites or something similar.
that’s great that you’ll let users have their own certificates and all, but the way this will be used is by corporations to lock us into approved Linux distributions. Linux will be effectively owned by Red Hat and Microsoft, the signing authorities.
It will be railroaded through in the same way that systemd was railroaded onto us.
> but the way this will be used is by corporations to lock us into approved Linux distributions. Linux will be effectively owned by Red Hat and Microsoft, the signing authorities.
This is the intent of Poettering and Brauner.
> but the way this will be used is by corporations to lock us into approved Linux distributions. Linux will be effectively owned by Red Hat and Microsoft, the signing authorities.
This is basically true today with Secure Boot on modern hardware (at least in the default configuration -- Microsoft's soft-power policies for device manufacturers actually require that you can change this on modern machines). This is bad, but it is bad because platform vendors decide which default keys are trusted for Secure Boot by default, and there is no clean automated mechanism to enroll your own keys programmatically (at least, without depending on the Microsoft key -- shim does let you do this programmatically with the MOK).
The set of default keys ended up being only Microsoft (some argue this is because of direct pressure from Microsoft, but this would've happened for almost all hardware regardless and is a far more complicated story), but in order to permit people to run other operating systems on modern machines Microsoft signed up to being a CA for every EFI binary in the universe. Red Hat then controls which distro keys are trusted by the shim binary Microsoft signs[1].
This system ended up centralised because the platform vendor (not the device owner) fundamentally controls the default trusted key set and is what caused the whole nightmare of the Microsoft Secure Boot keys and rh-boot signing of shim. Getting into the business of being a CA for every binary in the world is a very bad idea, even if you are purely selfish and don't care about user freedoms (and it even makes Secure Boot less useful of a protection mechanism because it means that machines where users only want to trust Microsoft also necessarily trust Linux and every other EFI binary they sign -- there is no user-controlled segmentation of trust, which is the classic CA/PKI problem). I don't personally know how the Secure Boot / UEFI people at Microsoft feel about this, but I wouldn't be surprised if they also dislike the situation we are all in today.
Basically none of these issues actually apply to TPMs, which are more akin to limited HSMs where the keys and policies are all fundamentally user-controlled in a programmatic way. It also doesn't apply to what we are building either, but we need to finish building it before I can prove that to you.
What was it that the Google founders said about not adding advertisements to Google search?
> The models we have in mind for attestation are very much based on users having full control of their keys.
If user control of keys becomes the linchpin for retaining full control over one's own computer, doesn't it become easy for a lobby or government to exert control by banning user-controlled keys? Today, such interest groups would need to ban Linux altogether to achieve such a result.
Thanks for the reassurance, the first ray of sunshine in this otherwise rather alarming thread. Your words ring true.
It would be a lot more reassuring if we knew what the business model actually was, or indeed anything else at all about this. I remain somewhat confused as to the purpose of this announcement when no actual information seems to be forthcoming. The negative reactions seen here were quite predictable, given the sensitive topic and the little information we do have.
> The models we have in mind for attestation are very much based on users having full control of their keys.
FOR NOW. Policies and laws always change. Corporations and governments somehow always find ways to work against their people, in ways which are not immediately obvious to the masses. Once they have a taste of this there's no going back.
Please have a hard and honest think on whether you should actually build this thing. Because once you do, the genie is out and there's no going back.
This WILL be used to infringe on individual freedoms.
The only question is WHEN? And your answer to that appears to be 'Not for the time being'.
This is extremely bad logic. The technology of enforcing trusted software has no inherent value, good or ill; its value depends entirely on expected usage. Anything that is substantially open will be used according to the values of its users, not according to your values, so we ought instead to consider their values, not yours.
Suppose a fascist state wanted to identify potential agitators by scanning all communication for telltale indications. It could require this technology in all trusted environments, and require such an environment to bank, connect to an ISP, or use Netflix.
One could even imagine a completely benign usage that only identified actual wrongdoing, alongside another that profiled based almost entirely on anti-regime sentiment or reasonable discontent.
The good users would argue that the only problem with the technology is its misuse but without the underlying technology such misuse is impossible.
One can imagine two entirely different parallel universes, one in which a few great powers went the wrong way, enabled in part by trusted computing and by pervasive surveillance: AI doing the massive, boring work of analyzing a glut of ordinary behaviour and communication, with tech and law ensuring that the surveillance is carried out.
Even those not misusing the tech may find themselves worse off in such a world.
Why again should we trust this technology just because you are a good person?
half of the founders of this thing come from Microsoft. I suppose this makes the answer to your question obvious.
My thoughts exactly. We're probably witnessing the beginning of the end of linux users being able to run their own kernels. Soon:
- your bank won't let you log in from an "insecure" device.
- you won't be able to play videos on an "insecure" device.
- you won't be able to play video games on an "insecure" device.
And so on, and so forth.
Unfortunately the parent commenter is completely right.
The attestation portion of those systems is happening on locked down devices, and if you gain ownership of the devices they no longer attest themselves.
This is the curse of the duopoly of iOS and Android.
BankID in Sweden will only run with one of these devices, they used to offer a card system but getting one seems to be impossible these days. So you're really stuck with a mobile device as your primary means of identification for banking and such.
There's a reason that general-purpose computers are locked to 720p on Netflix and Disney+, yet Apple TVs are not.
Torrenting is becoming more popular again. The alternative to being allowed to pay to watch on an "insecure" device isn't switching to an attested device, it's to stop paying for the content at all. Games industry, same thing (or just play the good older games, the new ones suck anyway).
Finances, just pay everything by cheque or physical pennies. Fight back. Starve the tyrants to death where you can, force the tyrants to incur additional costs and inefficiencies where you can't.
Is the joke here that all of those things have already been happening for a while now?
that's a silver lining
the anti-user attestation will at least be full of security holes, and likely won't work at all
Dunno about the others, but Poettering has proven himself able to deliver software against the grain.
Please don't bring attestation to common Linux distributions. This technology, by essence, moves trust to a third party distinct of the user. I don't see how it can be useful in any way to end users like most of us here. Its use by corporations has already caused too much damage and exclusion in the mobile landscape, and I don't want folks like us becoming pariahs in our own world, just because we want machines we bought to be ours...
A silver lining is that it would likely be attempted via systemd. This may finally be enough to kick off a fork, and get rid of all the silly parts of it.
To anyone thinking that's not possible: we already switched inits to systemd. And communities being persnickety saw MariaDB replace MySQL everywhere, LibreOffice replace OpenOffice, and so on.
All the recent pushiness by a certain zealotish Italian Debian maintainer only helps this case. Trying to degrade Debian into a clone of Red Hat is uncouth.
> A silver lining is that it would likely be attempted via systemd. This may finally be enough to kick off a fork, and get rid of all the silly parts of it.
This misunderstands why systemd succeeded. It included several design decisions aimed at easing distribution maintainers' burdens, thus making adoption attractive to the same people that would approve this adoption.
If a systemd fork differentiates on not having attestation and getting rid of an unspecified set of "all the silly parts", how would they entice distro maintainers to adopt it? Elaborating what is meant by "silly parts" would be needed to answer that question.
Attestation is a critical feature for many hardware companies (e.g. IoT, robotics), and they struggle to find security engineers with expertise in this area (disclaimer: I used to work as an operating system engineer and security engineer). Many distros are designed not only for desktop users but also for industrial use. If distros shipped standardized packages in this area, it would help those companies a lot.
This is the problem with Linux in general. It's far too infiltrated by our adversaries in the big tech industry.
Look at all the kernel patch submissions. 90% are not users but big tech drones. Look at the Linux foundation board. It's the who's who of big tech.
This is why I moved to the BSDs. Linux started as a grassroots project but turned commercial; the BSDs started commercial but are hardly still used as such and are mostly user-driven now (yes, there are a few exceptions like Netflix, Netgate, iX, etc., but nothing on the scale of Huawei, Amazon, etc.)
Linux has been majority developed by large tech companies for the last 20+ years. If not for them, it would not be anywhere close to where it is today. You may not like this fact, but it's not really a new development nor something that can be described as infiltration. At the end of the day, maintaining software without being paid to do so is not generally sustainable.
> This is why I moved to the BSDs. Linux started as a grassroots project but turned commercial
Thanks, this may be the key takeaway from this discussion for me
As a complete guess, I would say that 90% of Linux systems are run by "big tech drones". And also by small companies using technology.
Open source operating systems are not a zero sum game. Yes there is a certain gravitational pull from all the work contributed by the big companies. If you aren't contributing "for-hire", then you choose what you want to work on, and what you want to use.
> Attestation is a critical feature for many H/W companies
Like John Deere. Read about how they use that sort of thing
Why shouldn't they use the kernel, systemd, and a few core utilities? Why reinvent the wheel? There's nothing requiring them to pull in a typical desktop userspace.
Because different tasks require different trade-offs, and Linux has only one set of trade-offs. You cannot make a good universal tool. It is like a Leatherman: good enough to fix up your bike on the side of the road, not so good in a proper workshop.
You say: reinvent the wheel.
I say: use a pickup truck for every task, from farming to racing to commuting to moving goods across a continent. Is it possible? Of course. Is it a good idea? I don't think so.
All cars are the same if you squint enough: wheels, an engine, some frame, some controls, none of which are very different even between an F1 car and an 18-wheel truck.
How are you defining "general-purpose OS"? Are you saying IoT and robotics shouldn't use a Linux kernel at all? Or just not your general purpose distros? I would be interested to hear more of your logic here, since it seems like using the same FOSS operating system across various uses provides a lot of value to everyone.
I think that I want at least a hard real-time OS in any computer that can move physical objects. The Linux kernel cannot be it: a hard RTOS cannot have virtual memory (page-table walks are unpredictable on a TLB miss), and many other mechanisms that are desirable in a desktop/server OS are ill-suited to an RTOS. The scheduler must be tuned differently, I/O must be done differently. It is not only «this process has RT priority, don't preempt it», it is the design of the whole kernel.
Better, this OS would be formally verified (like seL4). But I understand that it is a pipe dream. Heck, even an RTOS is a pipe dream.
About IoT: this word means nothing. Is a connected TV IoT? I have no problem with Linux inside it. My lightbulb that can be turned on and off via ZigBee? Why do I need Linux there? My battery-powered weather station (because I cannot put 220V wiring in the backyard)? Better not; I need as low-power a solution as possible.
To be honest, I think even using one kernel for different servers is technically wrong, because an RDBMS, a file server, and a computational node need very different priorities in kernel tuning too. I prefer the network stack of FreeBSD, the file-server capabilities (native ZFS & Co.) of Solaris, the transaction processing of Tandem/HPE NonStop OS, and the Wayland/GPU/desktop support of Linux. But everything bar Linux is effectively dead. And Linux is only «good enough» at everything, mediocre.
I understand value of unification, but as engineer I'm sad.
I agree but it's difficult to argue against it. There is just so much you get for free by starting with a Linux distro as your base. Developing against alternatives is very expensive and developing something new is even more expensive. The best we can hope for is that someone with deep pockets invests in good alternatives that everyone can benefit from.
I'm not too big in this field, but didn't many of those same IoT companies and the like struggle with packages becoming dependent on Poettering's work, since they often needed much smaller/minimal distros?
I don't think this is generally true. If you are running Linux in your stack, your device probably is investing in 1GiB+ RAM and 2GiB+ of flash storage. systemd et al are not a problem at that point. Running a UI will end up being considerably more costly.
I work on embedded devices, fairly powerful ones to be fair, and I think systemd is really great, useful software. There's a ton of stuff I can do quite easily with systemd that would take a ton of effort to do reliably with sysvinit.
It's definitely pretty opinionated, and I frequently have to explain to people why "After=" doesn't mean "Wants=", but the result is way more robust than any alternative I'm familiar with.
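For anyone who hasn't hit this: the distinction matters because `After=` is pure ordering while `Wants=` is an actual dependency. A minimal hypothetical unit fragment (service names invented for the example):

```ini
# sensor.service (hypothetical example)
[Unit]
# Wants= pulls broker.service into the startup transaction...
Wants=broker.service
# ...and After= additionally delays this unit until broker.service has started.
# After= on its own would NOT cause broker.service to be started at all.
After=broker.service

[Service]
ExecStart=/usr/bin/sensor-daemon
```

With only `After=`, the ordering applies just when both units happen to be scheduled in the same transaction, which is why the two directives are almost always used together.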
If you're on a system so constrained that running systemd is a burden, you are probably already using something like buildroot/yocto and have a high degree of control about what init system you use.
Please do, I disagree with this commenter.
You already trust third parties, but there is no reason why that third party can't be the very same entity publishing the distribution. The role corporations play in attestation for the devices you speak of can be filled by an open-source developer; it doesn't need to require a paid certificate, just a trusted one. Furthermore, attestation should be optional at the hardware level, allowing you to build distros that don't use it, though distros should use it by default, as they see fit of course.
I think what people are frustrated with is the heavy-handedness of the approach, the lack of opt-out, and the corporate-centric feel of it all. My suggestion would be not to take the systemd approach. There is no reason why attestation-related features can't be turned on or off at install time, much like disk encryption. I find it unfortunate that even something like Secure Boot isn't configurable at install time, with custom certs, distro certs, or certs generated at install time.
Being against a feature that benefits regular users is not good, it is more constructive to talk about what the FOSS way of implementing a feature might be. Just because Google and Apple did it a certain way, it doesn't mean that's the only way of doing it.
Whoever uses this seeks to ensure a certain kind of behavior on a machine they typically don't own (in the legal sense of it). So of course you can make it optional. But then software that depends on it, like your banking Electron app or your Steam game, will refuse to run... so as the user, you don't really have a choice.
I would love to use that technology to do reverse attestation, and require the server that handles my personal data to behave a certain way, like obeying the privacy policy terms of the EULA and not using my data to train LLMs if I so opted out. Something tells me that's not going to happen...
See the latest "MS just divulged disk encryption keys to govt" news to see why this is a horrid idea.
I’m skeptical about the push toward third-party hardware attestation for Linux kernels. Handing kernel trust to external companies feels like repeating mistakes we’ve already seen with iOS and Android, where security mechanisms slowly turned into control mechanisms.
Centralized trust: Hardware attestation run by third parties creates a single point of trust (and failure). If one vendor controls what’s “trusted,” Linux loses one of its core properties: decentralization. This is a fundamental shift in the threat model.
Misaligned incentives: These companies don’t just care about security. They have financial, legal, and political incentives. Over time, that usually means monetization, compliance pressure, and policy enforcement creeping into what started as a “security feature.”
Black boxes: Most attestation systems are opaque. Users can’t easily audit what’s being measured, what data is emitted, or how decisions are made. This runs counter to the open, inspectable nature of Linux security today.
Expanded attack surface: Adding external hardware, firmware, and vendor services increases complexity and creates new supply-chain and implementation risks. If the attestation authority is compromised, the blast radius is massive.
Loss of user control: Once attestation becomes required (or “strongly encouraged”), users lose the ability to fully control their own systems. Custom kernels, experimental builds, or unconventional setups risk being treated as “untrusted” by default.
Vendor lock-in: Proprietary attestation stacks make switching vendors difficult. If a company disappears, changes terms, or decides your setup is unsupported, you’re stuck. Fragmentation across vendors also becomes likely.
Privacy and tracking: Remote attestation often involves sending unique or semi-unique device signals to external services. Even if not intended for tracking, the capability is there, and history shows it eventually gets used.
Potential for abuse: Attestation enables blacklisting. Whether for business, legal, or political reasons, third parties gain the power to decide what software or hardware is acceptable. That’s a dangerous lever to hand over.
Harder incident response: If something goes wrong inside a proprietary attestation system, users and distro maintainers may have little visibility or ability to respond independently.
Hi, Chris here, CEO @ Amutable. We are very excited about this. Happy to answer questions.