cyphar 5 days ago

I'm Aleksa, one of the founding engineers. We will share more about this in the coming months, but this is neither the direction nor the intention of what we are working on. The models we have in mind for attestation are very much based on users having full control of their keys. This is not just a matter of user freedom; in practice, being able to do this is far preferable for enterprises with strict security controls.

I've been a FOSS guy my entire adult life, I wouldn't put my name to something that would enable the kinds of issues you describe.

ingohelpinger 5 days ago

Thanks for the clarification, and to be clear, I don't doubt your personal intent or FOSS background. The concern isn't bad actors at the start; it's how projects evolve once they matter.

History is pretty consistent here:

WhatsApp: privacy-first, founders with principles, both left once monetization and policy pressure kicked in.

Google: 'Don’t be evil' didn’t disappear by accident — it became incompatible with scale, revenue, and government relationships.

Facebook/Meta: years of apologies and "we'll do better," yet incentives never changed.

Mobile OS attestation (iOS / Android): sold as security, later became enforcement and gatekeeping.

Ruby on Rails ecosystem: strong opinions, benevolent control, then repeated governance, security, and dependency chaos once it became critical infrastructure. Good intentions didn't prevent fragility, lock-in, or downstream breakage.

Common failure modes:

Enterprise customers demand guarantees - policy creeps in.

Governments demand compliance - exceptions appear.

Liability enters the picture - defaults shift to "safe for the company."

Revenue depends on trust decisions - neutrality erodes.

Core maintainers lose leverage - architecture hardens around control.

Even if keys are user-controlled today, the key question is architectural: Can this system resist those pressures long-term, or does it merely promise to?

Most systems that can become centralized eventually do, not because engineers change, but because incentives do. That’s why skepticism here isn't personal — it's based on pattern recognition.

I genuinely hope this breaks the cycle. History just suggests it's much harder than it looks.

drdaeman 5 days ago

Can you (or someone) please explain what’s the point, for a regular GNU/Linux user, of having this thing you folks are working on?

I can understand the corporate use case - the person with access to the machine is not its owner, and the corporation may want to ensure their property works the way they expect it to. Not something I care about, personally.

But when it’s a person using their own property, I don’t quite get the practical value of attestation. It’s not a security mechanism anymore (protecting a person from themselves is an odd goal), and it has significant abuse potential. That happened to mobile, and the outcome was that users were “protected” from themselves, that is - in less politically correct words - denied effective control over their personal property, as larger entities exercised their power and gated access to what became de-facto commonplace commodities by forcing users to surrender any rights. Paired with the awareness gap, the effects were disastrous, and not just for personal compute.

So, what’s the point and what’s the value?

  • fc417fc802 4 days ago

    The value is being able to easily and robustly verify that my device hasn't been compromised: binding disk encryption keys to the TPM such that I don't need to enter a password, but an adversary still can't get at the contents without a zero day.

    Of course you can already do the above with secure boot coupled with a CPU that implements an fTPM. So I can't speak to the value of this project specifically, only build and boot integrity in general. For example I have no idea what they mean by the bullet "runtime integrity".
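
    Roughly, the binding works like the sketch below. This is my own simulated illustration of the sealing logic, not any real TPM API (real TPMs do this in hardware via commands like TPM2_PolicyPCR, and tools like systemd-cryptenroll arrange it for LUKS volumes):

    ```python
    import hashlib

    # Hypothetical model of TPM sealing: a secret (e.g. a disk encryption
    # key) is bound to expected PCR values and released only when the
    # machine's current PCR state matches the policy recorded at seal time.

    def pcr_policy_digest(pcr_values: dict[int, bytes]) -> bytes:
        """Condense a set of PCR index/value pairs into one policy digest."""
        h = hashlib.sha256()
        for index in sorted(pcr_values):
            h.update(index.to_bytes(4, "big") + pcr_values[index])
        return h.digest()

    def seal(secret: bytes, expected_pcrs: dict[int, bytes]) -> tuple[bytes, bytes]:
        # A real TPM keeps the secret inside the chip; this only models
        # the policy comparison.
        return secret, pcr_policy_digest(expected_pcrs)

    def unseal(sealed: tuple[bytes, bytes], current_pcrs: dict[int, bytes]) -> bytes:
        secret, policy = sealed
        if pcr_policy_digest(current_pcrs) != policy:
            raise PermissionError("PCR state differs from the sealed policy")
        return secret
    ```

    If the boot chain was tampered with, the PCR values differ, the policy check fails, and the disk key is never released - no password on the happy path, nothing released on the unhappy one.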

    • NekkoDroid 4 days ago

      > For example I have no idea what they mean by the bullet "runtime integrity".

      This is, for example, dm-verity (e.g. `/usr/` is an erofs partition with matching dm-verity). Lennart always talks about having files be either RW (backed by encryption) or RX (backed by kernel signature verification).
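
      A minimal sketch of the dm-verity idea (my own simplification; real dm-verity uses a multi-level Merkle tree, and the root hash is typically trusted via the signed kernel command line or a signature):

      ```python
      import hashlib

      BLOCK_SIZE = 4096

      # Sketch: a read-only image is covered by per-block hashes whose
      # combined root is trusted out-of-band. Each block is checked on
      # read, so offline tampering is detected the moment data is used.

      def build_tree(image: bytes) -> tuple[list[bytes], bytes]:
          blocks = [image[i:i + BLOCK_SIZE] for i in range(0, len(image), BLOCK_SIZE)]
          leaves = [hashlib.sha256(b).digest() for b in blocks]
          root = hashlib.sha256(b"".join(leaves)).digest()
          return leaves, root

      def read_block(image: bytes, index: int, leaves: list[bytes], root: bytes) -> bytes:
          # The leaf hashes live on the (untrusted) disk, so re-check
          # them against the trusted root before trusting any one leaf.
          if hashlib.sha256(b"".join(leaves)).digest() != root:
              raise IOError("hash tree does not match the trusted root")
          block = image[index * BLOCK_SIZE:(index + 1) * BLOCK_SIZE]
          if hashlib.sha256(block).digest() != leaves[index]:
              raise IOError(f"block {index} failed verification")
          return block
      ```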

    • drdaeman 4 days ago

      I don’t think attestation can provide such guarantees. To the best of my understanding, it won’t protect from any RCE, and it won’t protect from malicious updates to configuration files. It won’t let me run arbitrary binaries (putting a nail in the coffin of any local development), or if it will - it would be temporary security theater (as attackers would reuse the same processes to sign their malware). IDSes are sufficient for this purpose, without the negative side effects.

      And that’s why I said “not a security mechanism”. Attestation is for protecting against actors with local hardware access. I have FDE and door locks for that already.

      • fc417fc802 4 days ago

        I think all of that comes down to what precisely you're attesting? So I'm not actually clear what we're talking about here.

        Given secure boot and a TPM you can remotely attest, using your own keys, that the system booted up to a known good state. What exactly that means though depends entirely on what you configured the image to contain.

        > it won’t protect from malicious updates to configuration files

        It will if you include the verified correct state of the relevant config file in a Merkle tree.

        > It won’t let me run arbitrary binaries (putting a nail to any local development), or if it will - it would be a temporary security theater (as attackers would reuse the same processes to sign their malware).

        Shouldn't it permit running arbitrary binaries that you have signed? That places the root of trust with the build environment.

        Now if you attempt to compile binaries and then sign them on the production system, yeah, that would open you up to attack (if we assume a process has been compromised at runtime). But wasn't that already the case? Ideally the production system should never be used to sign anything. (Some combination of SGX, TPM, and SEV might be an exception to that, but I don't know enough to say.)

        > Attestation is for protecting against actors with local hardware access. I have FDE and door locks for that already.

        If you remotely boot a box sitting in a rack on the other side of the world how can you be sure it hasn't been compromised? However you go about confirming it, isn't that what attestation is?
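
        Concretely, the verifier side of that looks something like this sketch (simulated: a real TPM quote is an asymmetric signature by an attestation key resident in the TPM; here an HMAC with a placeholder shared secret stands in for the signature to keep the example small):

        ```python
        import hashlib
        import hmac

        # Illustration only: ATTESTATION_KEY stands in for a real TPM
        # attestation key. The verifier sends a fresh nonce (so old
        # quotes can't be replayed), the machine returns its PCR values
        # plus a "quote" over them, and the verifier compares against
        # golden values recorded from a known-good boot.

        ATTESTATION_KEY = b"placeholder-shared-secret"

        def quote(pcrs: dict[int, bytes], nonce: bytes) -> bytes:
            h = hmac.new(ATTESTATION_KEY, nonce, hashlib.sha256)
            for index in sorted(pcrs):
                h.update(index.to_bytes(4, "big") + pcrs[index])
            return h.digest()

        def verify(pcrs: dict[int, bytes], nonce: bytes, mac: bytes,
                   golden: dict[int, bytes]) -> None:
            if not hmac.compare_digest(quote(pcrs, nonce), mac):
                raise ValueError("quote was not produced by the expected key")
            if pcrs != golden:
                raise ValueError("box booted something other than the known-good image")
        ```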

    • giant_loser a day ago

      > The value is being able to easily and robustly verify that my device hasn't been compromised.

      That is impossible.

      "secure" devices get silently tampered with everyday.

      You can never guarantee that.

  • its-summertime 5 days ago

    https://attestation.app/about For mobiles, it helps make tampering obvious.

    https://doc.qubes-os.org/en/latest/user/security-in-qubes/an... For laptops, it helps make tampering obvious. (a different attestation scheme with smaller scope however)

    This might not be useful to you personally, however.

repstosb 5 days ago

The "founding engineers" behind Facebook and Twitter probably didn't set out to destroy civil discourse and democracy, yet here we are.

Anyway, "full control over your keys" isn't the issue, it's the way that normalization of this kind of attestation will enable corporations and governments to infringe on traditional freedoms and privacy. People in an autocratic state "have full control over" their identity papers, too.

teiferer 5 days ago

> I've been a FOSS guy my entire adult life, I wouldn't put my name to something that would enable the kinds of issues you describe.

Until you get acquired, receive a golden parachute, and use it when you realize that the new direction no longer aligns with your views.

But, granted, if all you do is FOSS then you will anyway have a hard time keeping evil actors from using your tech for evil things. Might as well get some money out of it, if they actually dump money on you.

  • cyphar 4 days ago

    I am aware of that; my (personal) view is that DRM is a social issue caused by modes of behaviour, and the existence or non-existence of technical measures cannot fix or avoid that problem.

    A lot of the concerns in this thread center on TPMs, but TPMs are really more akin to very limited HSMs that are actually under the user's control (I gave a longer explanation in a sibling comment, but TPMs fundamentally trust the data given to them when doing PCR extensions -- the way consumer hardware is fundamentally built and the way TPMs are deployed means they are not useful for physical "attacks" by the device owner).

    Yes, you can imagine DRM schemes that make use of them but you can also imagine equally bad DRM schemes that do not use them. DRM schemes have been deployed for decades (including "lovely" examples like the Sony rootkit from the 2000s[1], and all of the stuff going on even today with South Korean banks[2]). I think using TPMs (and other security measures) for something useful to users is a good thing -- the same goes for cryptography (which is also used for DRM but I posit most people wouldn't argue that we should eschew all cryptography because of the existence of DRM).

    [1]: https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootk... [2]: https://palant.info/2023/01/02/south-koreas-online-security-...

  • mikkupikku 5 days ago

    This whole discussion is a perfect example of what Upton Sinclair said, "It is difficult to get a man to understand something, when his salary depends on his not understanding it."

    A rational and intelligent engineer cannot possibly believe that he'll be able to control what a technology is used for after he creates it, unless his salary depends on him not understanding it.

  • faust201 5 days ago

    You could level this sort of insinuation at anyone. Including you.

    The argument should be technical.

    • teiferer 5 days ago

      Insinuation? As a software dev they don't have any agency over whether or by whom they get acquired. Their decision will be whether to leave if things change for the worse, and that's very much understandable (and arguably the ethical thing to do).

    • seanhunter 5 days ago

      That's a perfectly valid objection to this proposal. You only have to look at what happened to Hashicorp to see the risk.

      • faust201 16 hours ago

        How can anyone promise that? Will you promise your current employer that you will never leave the job?

        • seanhunter 10 hours ago

          No, but I can promise to my current employer that me leaving my job won’t be a critical problem.

          It’s less of an issue in the case of a normal job than in an open source project where often the commitment of particular founding individuals to the long-term future of the project is a big part of people’s decision to use or not use that tech in their solutions. Here, given that “Trusted computing” can potentially lock you out of devices you have bought, it’s important for people to be able to judge the risk of getting “legal ransomware”d if the trusted computing base ends up depending on a proprietary component that they can’t back out of.

          That said, there is absolutely zero chance that I use this (systemd is already enough Poettering software for me in this lifetime) so I’m not personally affected either way.

    • majewsky 5 days ago

      > You could tell this sort of insinuation to anyone. Including you.

      Yes. You correctly stated the important point.

    • pseudalopex 4 days ago

      > Argument should be technical.

      Yes. Aleksa made no technical argument.

ahartmetz 5 days ago

So far, that's a slick way of saying "not really". You are vague where it counts, and surely you have a better idea of the direction than you let on.

Attestation of what, to whom, for which purpose? In what sense do users control their keys, and how does that square with remote attestation and the wishes of enterprise users?

  • cyphar 4 days ago

    I'm really not trying to be slick, but I think it's quite difficult to convince people of anything concrete (such as precisely how this model is fundamentally different from models such as the Secure Boot PKI scheme, and thus will not provide a mechanism for a non-owner of a device to restrict what runs on your machine) without providing a concrete implementation and design documents to back up what I'm saying. People are rightfully skeptical about this stuff, so any kind of explanation needs to be very thorough.

    As an aside, it is a bit amusing to me that an initial announcement about a new company working on Linux systems caused the vast majority of people to discuss the impact on personal computers (and games!) rather than servers. I guess we finally have arrived at the fabled "Year of the Linux Desktop" in 2026, though this isn't quite how I expected to find out.

    > Attestation of what to whom for which purpose? Which freedom does it allow users to control their keys, how does it square with remote attestation and the wishes of enterprise users?

    We do have answers to these questions, and a lot of the necessary components exist already (lots of FOSS people have been working on problems in this space for a while). The problem is that there is still a missing ~20% (not an actual estimate) that we are building now, and the whole story doesn't make sense without it. I don't like it when people announce vapourware, so I'm trying not to contribute to that problem by describing a system that is not yet fully built, though I do understand that this comes off as evasive. It will be much easier to discuss all of this once we start releasing things, and I think very theoretical technical discussions can often be quite unproductive.

    In general, I will say that there are a lot of unfortunate misunderstandings about TPMs that lead people to assume their only use is as a mechanism for restricting users. This is really not the case: TPMs by themselves are actually more akin to very limited HSMs with a handful of features that can (cooperatively with firmware and operating systems) be used to attest to some aspects of the system state. They are also fundamentally under the user's control, completely unlike the PKI scheme used by Secure Boot and similar systems. In fact, TPMs are really not a useful mechanism for protecting against someone with physical access to the machine -- they have to trust that the hashes they are given to extend into PCRs are legitimate, and on most systems the data is even provided over an insecure data line. This is why the security of locked-down systems like the Xbox One[1] doesn't really depend on them directly and doesn't use them at all in the way that they are used on consumer hardware. They are only really useful at protecting against third-party software-based attacks, which is something users actually want!
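
    To make the "trust the hashes they are given" point concrete, here is a minimal model of a PCR extend (my own sketch of the semantics of TPM2_PCR_Extend, not real TPM code):

    ```python
    import hashlib

    # A PCR can only be extended, never set: the new value is the hash
    # of the old value concatenated with the supplied digest. Note the
    # TPM has no way to check that the digest honestly describes what
    # actually ran; it trusts the caller (firmware, bootloader, kernel).
    # This is why PCRs defend against software tampering in the boot
    # chain, not against someone on the physical bus.

    def pcr_extend(pcr: bytes, measurement_digest: bytes) -> bytes:
        return hashlib.sha256(pcr + measurement_digest).digest()

    pcr = bytes(32)  # PCRs start zeroed at reset
    for component in (b"firmware", b"bootloader", b"kernel"):
        pcr = pcr_extend(pcr, hashlib.sha256(component).digest())
    ```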

    All of the comments about DRM obviously come from very legitimate concerns about user freedoms, but my views on this are a little too long to fit in a HN comment -- in short, I think that technological measures cannot fix a social problem and the history of DRM schemes shows that the absence of technological measures cannot prevent a social problem from forming either. It's also not as if TPMs haven't been around for decades at this point.

    [1]: https://www.youtube.com/watch?v=U7VwtOrwceo

    • ahartmetz 4 days ago

      > I think that technological measures cannot fix a social problem

      The absence of technological measures that can be used to implement societal problems totally does help, though. Just look at social media.

      I fear the outlaw evil maid or other hypothetical attackers (good old scare-based sales tactics) much less than I fear already powerful entities (enterprises, states) lawfully encroaching on my devices using your technology. So I don't care about "misunderstandings" of the TPM or whatever other wall of text you are spewing to divert attention.

iamnothere 5 days ago

Thanks, this would be helpful. I will follow on by recommending that, when planning or releasing products, you always make it a point to note how user freedom will be preserved, without using obfuscating corpo-speak or assuming that users don’t know what they want. If you can maintain this approach then you should be able to maintain a good working relationship with the community. If you fight the community you will burn a lot of goodwill and will have to spend resources on PR. And there is only so much that PR can do!

Better security is good in theory, as long as the user maintains control and the security is on the user end. The last thing we need is required ID linked attestation for accessing websites or something similar.

LooseMarmoset 5 days ago

that’s great that you’ll let users have their own certificates and all, but the way this will be used is by corporations to lock us into approved Linux distributions. Linux will be effectively owned by RedHat and Microsoft, the signing authority.

it will be railroaded through in the same way that systemd was railroaded onto us.

  • giant_loser a day ago

    > but the way this will be used is by corporations to lock us into approved Linux distributions. Linux will be effectively owned by RedHat and Microsoft, the signing authority.

    This is the intent of Poettering and Brauner.

  • cyphar 4 days ago

    > but the way this will be used is by corporations to lock us into approved Linux distributions. Linux will be effectively owned by RedHat and Microsoft, the signing authority.

    This is basically true today with Secure Boot on modern hardware (at least in the default configuration -- Microsoft's soft-power policies for device manufacturers actually require that you can change this on modern machines). This is bad, but it is bad because platform vendors decide which keys are trusted for Secure Boot by default and there is no clean automated mechanism to enroll your own keys programmatically (at least, without depending on the Microsoft key -- shim does let you do this programmatically with the MOK).

    The set of default keys ended up being only Microsoft (some argue this is because of direct pressure from Microsoft, but this would've happened for almost all hardware regardless and is a far more complicated story), but in order to permit people to run other operating systems on modern machines Microsoft signed up to being a CA for every EFI binary in the universe. Red Hat then controls which distro keys are trusted by the shim binary Microsoft signs[1].

    This system ended up centralised because the platform vendor (not the device owner) fundamentally controls the default trusted key set, and that is what caused the whole nightmare of the Microsoft Secure Boot keys and rhboot signing of shim. Getting into the business of being a CA for every binary in the world is a very bad idea, even if you are purely selfish and don't care about user freedoms (and it even makes Secure Boot less useful as a protection mechanism, because machines where users only want to trust Microsoft also necessarily trust Linux and every other EFI binary Microsoft signs -- there is no user-controlled segmentation of trust, which is the classic CA/PKI problem). I don't personally know how the Secure Boot / UEFI people at Microsoft feel about this, but I wouldn't be surprised if they also dislike the situation we are all in today.

    Basically none of these issues actually apply to TPMs, which are more akin to limited HSMs where the keys and policies are all fundamentally user-controlled in a programmatic way. It also doesn't apply to what we are building either, but we need to finish building it before I can prove that to you.

    [1]: https://github.com/rhboot/shim-review

5d41402abc4b 5 days ago

What was it that the Google founders said about not adding advertisements to Google search?

dTal 5 days ago

Thanks for the reassurance, the first ray of sunshine in this otherwise rather alarming thread. Your words ring true.

It would be a lot more reassuring if we knew what the business model actually was, or indeed anything else at all about this. I remain somewhat confused as to the purpose of this announcement when no actual information seems to be forthcoming. The negative reactions seen here were quite predictable, given the sensitive topic and the little information we do have.

curt15 5 days ago

> The models we have in mind for attestation are very much based on users having full control of their keys.

If user control of keys becomes the linchpin for retaining full control over one's own computer, doesn't it become easy for a lobby or government to exert control by banning user-controlled keys? Today, such interest groups would need to ban Linux altogether to achieve such a result.

inetknght 5 days ago

Can I build my own kernel and still use software that wants attestation?

  • surajrmal 5 days ago

    Do you have a way to tell the software to trust your kernel? If so, yes. Things like the web show how we can achieve distributed trust.

    • account42 5 days ago

      "Trust" has become such an orwellian word in tech.

    • cferry 4 days ago

      That's the thing. I can only provide a piece of software with the guarantee that it can run on my OS. It can trust my kernel to let it run, but shouldn't expect anything more. The vendor is free to run code whose integrity it wants to guarantee on its own infrastructure; but whatever reaches my machine _may_ at best run as the vendor intends.

wooptoo 4 days ago

> The models we have in mind for attestation are very much based on users having full control of their keys.

FOR NOW. Policies and laws always change. Corporations and governments somehow always find ways to work against their people, in ways which are not immediately obvious to the masses. Once they have a taste of this there's no going back.

Please have a hard and honest think about whether you should actually build this thing. Because once you do, the genie is out and there's no going back.

This WILL be used to infringe on individual freedoms.

The only question is WHEN? And your answer to that appears to be 'Not for the time being'.

account42 5 days ago

> I've been a FOSS guy my entire adult life, I wouldn't put my name to something that would enable the kinds of issues you describe.

The road to hell is paved with good intentions.

endgame 5 days ago

That's not the intention, but how do you stop it from being the effect?

trelane 5 days ago

Glad to hear it! I am not surprised given the names and the fact you're at FOSDEM.

qmr 5 days ago

What engineering discipline?

PE or EIT?

michaelmrose 5 days ago

This is extremely bad logic. The technology of enforcing trusted software has no inherent value; it is good or ill depending entirely on expected usage. Anything that is substantially open will be used according to the values of its users, not according to your values, so we ought to consider their values, not yours.

Suppose you wanted to identify potential agitators by scanning all communication for indications of dissent. In a fascist state, one could require this technology in all trusted environments and require such an environment to bank, connect to an ISP, or use Netflix.

One could even imagine a completely benign usage which only identified actual wrongdoing, alongside another which profiled based almost entirely on anti-regime sentiment or reasonable discontent.

The good users would argue that the only problem with the technology is its misuse but without the underlying technology such misuse is impossible.

One can imagine two entirely different parallel universes, one in which a few great powers went the wrong way, enabled in part by trusted computing and by pervasive surveillance: AI doing the massive and boring work of analyzing a glut of ordinary behaviour and communication, plus the tech and law to ensure said surveillance is carried out.

Even those not misusing the tech may find themselves worse off in such a world.

Why again should we trust this technology just because you are a good person?

  • michaelmrose 5 days ago

    TLDR: We already know how this will be misused - to take away not people's freedom to run their own software stack, but their freedom to dissent against fascism. It's immoral to build even with the best intentions.

quotemstr 5 days ago

You're providing mechanism, not policy. It's amazing how many people think they can forestall policies they dislike by trying to reject mechanisms that enable them. It's never, ever worked. I'm glad there are going to be more mechanisms in the world.