Comment by mcpherrinm a day ago

The downgrade attacks on TLS only really arise from client behaviour: on failing to negotiate a newer version, the client retries the connection without offering it.

This was necessary to bypass various broken server-side implementations and broken middleboxes, but it wasn’t necessarily a flaw in TLS itself.

But having learned from the way this issue held back 1.2 deployment, TLS 1.3 goes out of its way to look very similar to 1.2 on the wire.

matthewdgreen 10 hours ago

This isn't really accurate historically. TLS has both ciphersuite and version negotiation. Logjam (2015) [1] was a downgrade attack on the former that's now fixed, but is an extension of an attack that was first noticed way back in 1996 [2]. Similar problems occurred with the FREAK attack, though that was actually a client vulnerability. TLS 1.3 goes out of its way to fix all of this using a better negotiation mechanism, and by reducing agility.

[1] https://en.wikipedia.org/wiki/Logjam_(computer_security) [2] https://www.usenix.org/legacy/publications/library/proceedin...

ekr____ a day ago

Moreover, there's not really much in the way of choices here. If you don't have this kind of automatic version negotiation then it's essentially impossible to deploy a new version.

  • upofadown 14 hours ago

    Well you can, but that would require a higher level of political skill than normally exists for such things. What would have to happen is that almost everyone would have to agree on the new version and then implement it. Once implementation was sufficiently widespread, you would have a switchover day.

    The big risk with such an approach is that you could implement something, then the politics could fail and you would end up with nothing.

    The big downside of negotiation is that no one ever has to commit to anything, so everything is possible. In the case of TLS, that seems to have led to endless bikeshedding, which has created a standard with so many options that it is hardly a standard anymore. The only part that has to be truly standard is the negotiation scheme.

    • ekr____ 13 hours ago

      This seems like a truly unreasonable level of political skill for nearly any setting. We're talking about changing every endpoint in the Internet, including those which can no longer be upgraded. I struggle to think of any entity or set of entities which could plausibly do that.

      Moreover, even in the best case scenario this means that you don't get the benefits of deployment for years if not decades. Even 7 years out, TLS 1.3 is well below 100% deployment. To take a specific example here: we want to deploy PQ ciphers ASAP to prevent harvest-and-decrypt attacks. Why should this wait for 100% deployment?

      > The big downside of negotiation is that no one ever has to commit to anything, so everything is possible. In the case of TLS, that seems to have led to endless bikeshedding, which has created a standard with so many options that it is hardly a standard anymore. The only part that has to be truly standard is the negotiation scheme.

      I don't think this is really that accurate, especially on the Web. The options actually in wide use are fairly narrow.

      TLS is used in a lot of different settings, so it's unsurprising that there are a lot of options to cover those settings. TLS 1.3 did manage to reduce those quite a bit, however.

      • Sophira 13 hours ago

        > This seems like a truly unreasonable level of political skill for nearly any setting. We're talking about changing every endpoint in the Internet, including those which can no longer be upgraded. I struggle to think of any entity or set of entities which could plausibly do that.

        Case in point: IPv6 adoption. There's no interoperability or negotiation between it and IPv4 (at least, not in any way that matters), which has led to the mess we're in today.

    • drob518 12 hours ago

      That’s a great theory but in practice such a “flag day” almost never happens. The last time the internet went through such a change was January 1, 1983, when the ARPANET switched from NCP to the newly designed TCP/IP. People want to do something similar on February 1, 2030, to remove IPv4 and switch totally to IPv6, but I give it a 50/50 chance of success, and IPv6 is already about 30 years old. See https://ipv4flagday.net/

      • upofadown 10 hours ago

        You don't have to have everyone switch over on the same day as in your example. Once it is decreed that implementations are widespread enough, everyone can switch over to the introduced thing gradually. The "flag day" is when it is decreed that implementations no longer have to support some previously widely used method. Support for that method would then gradually disappear, unless there was some associated cryptographic emergency that could not be dealt with without changing the standard.

        • ekr____ 9 hours ago

          Well, this is basically what we do, except that we try to negotiate to the highest version during the period before the flag day. This is far more practical for three reasons:

          1. You actually get benefit during the transition period because you get to use the new version.

          2. You get to test the new version at scale, which often reveals issues, as it did with TLS 1.3. It also makes it much easier to measure deployment because you can see what is actually negotiated.

          3. Generally, implementations are very risk averse and so aren't willing to disable older versions until there is basically universal deployment, so it takes the pressure off of this decision.

    • freeone3000 14 hours ago

      > The big risk with such an approach is that you could implement something, then the politics could fail and you would end up with nothing.

      They learned the lesson of IPv6 here.

    • thayne 8 hours ago

      > that seems to have led to endless bikeshedding, which has created a standard with so many options that it is hardly a standard anymore

      Part of the motivation of TLS 1.3 was to mitigate that. It removed a lot of options for negotiating the ciphersuite.

  • pcthrowaway 20 hours ago

    You could deploy a new version, you'd just have older clients unable to connect to servers implementing the newer versions.

    It wouldn't have been insane to rename https to httpt or something after TLS 1.2 and screw backwards compatibility (yes I realize the 's' stands for secure, not 'ssl', but httpt would have still worked as "HTTP with TLS")

    • josephg 19 hours ago

      > It wouldn't have been insane to rename https to httpt or something after TLS 1.2 and screw backwards compatibility

      That would have been at least a little bit insane, since then web links would be embedding the protocol version number. As a result, we'd need to keep old versions of TLS around indefinitely to make sure old URLs still work.

      I wish we could go the other way - and make http:// implicitly use TLS when TLS is available. Having http://.../x and https://.../x be able to resolve to different resources was a huge mistake.

      • cpach 17 hours ago

        Regarding your last paragraph: Isn’t that pretty much solved thanks to HSTS preload? A non-technical author of a small recipe blog might not know how to set it up, but a bank ought to have staff (and auditors) who take care of stuff like that.
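
        For reference, the whole policy is a single response header (the value below is illustrative, with preload opting the site into browsers' built-in lists):

          Strict-Transport-Security: max-age=31536000; includeSubDomains; preload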

      • pcthrowaway 7 hours ago

        > As a result, we'd need to keep old versions of TLS around indefinitely to make sure old URLs still work

        Wouldn't we be able to just redirect https->httpt like http requests do right now?

        Sure it'd be a tiny bit more overhead for servers, but no different than what we already experienced moving away from unencrypted http

    • ekr____ 13 hours ago

      This has a number of negative downstream effects.

      First, recall that links are very often inter-site, so the consequence would be that even when a server upgraded to TLS 1.2, clients would still try to connect with TLS 1.1 because they were using the wrong kind of link. This would delay deployment. By contrast, today when the server upgrades, new clients upgrade as well.

      Second, in the Web security model, the Origin of a resource (e.g., the context in which the JS runs) is based on scheme/host/port, so httpt would be a different origin from https. Consider what happens if the incoming link is https and internal links are httpt: now different pages are different origins for the same site.

      These considerations are so important that when QUIC was developed, the IETF decided that QUIC would also be an https URL (it helps that IETF QUIC's cryptographic handshake is TLS 1.3).

    • tgma 18 hours ago

      TLS is one of the best success stories of widely applied security with great UX. It would be nowhere as successful with that attitude.

    • account42 16 hours ago

      Yes it would absolutely have been insane.

    • immibis 11 hours ago

      You mean like the way we use h2:// everywhere now? Oh wait, we don't.

  • Dylan16807 21 hours ago

    Depends on what you mean by "this kind", because you want a way to detect attacker-forced downgrades, and that used to be missing.

frollogaston 21 hours ago

If a protocol is widely used wrongly, I consider it a flaw in the protocol. But overall, SSL standardization has gone decently well. I always bring it up as a good example to contrast with XMPP as a bad example.

  • mcpherrinm 21 hours ago

    Well, my only real point is that it’s not the version negotiation in TLS that’s broken. It’s the workaround for intolerance of newer versions that had downgrade attacks.

    Fortunately that’s all behind us now, and transitioning from 1.2 to 1.3 is going much smoother than 1.0 to 1.2 went.

    • tialaramex 16 hours ago

      One of the big differences was in attitude. The TLS 1.3 anti-downgrade feature was not compatible with some popular middlebox products. Google told people too bad, either your vendor fixes it (most shipped free bug fixes for this issue, presumably "encouraged" by the resulting customer anger) or you can't run Chrome once this temporary fudge goes away in a year's time.

      Previously (in earlier protocol versions) nobody stood up to the crap middleboxes even though it's bad for all normal users.

      • drob518 12 hours ago

        The service providers were the worst offenders here because they wanted to be the MITM, able to look at the data and “add value” to their networks somehow. Moving to TLS 1.3 took a lot of that away from them, and it was only Google’s market power that could break them.

        • frollogaston 5 hours ago

          A similar thing has been happening with email sender authentication, with Gmail and other big providers enforcing standards like DMARC.

      • adgjlsfhk1 6 hours ago

        Any chance that can be used to undo lots of the ossification that made QUIC a UDP-based hack rather than its own layer-4 protocol?

  • meepmorp 12 hours ago

    > I always bring it up as a good example to contrast with XMPP as a bad example.

    Could you expand a bit here? Do you just mean how extensions to the protocol are handled, etc., or the overall process and involved parties?

    • frollogaston 7 hours ago

      XMPP is too loose. Easiest comparison is security alone. XMPP auth and encryption are complicated, and they're optional for each of c2s, s2c, s2s (setting aside e2e). Clients and servers will quietly do the wrong thing if not configured exactly right. Email has similar problems, so bad that entire companies exist just to help set up stuff like DMARC, but that's a simpler app than instant messaging. The rest of the XMPP feature set is also super loose. Clients and servers never agree on what extensions to implement, even for very basic things like chat rooms. I really tried to like it before giving up.

      Edit: https://wiki.xmpp.org/web/Securing_XMPP

      SSL is appropriately strict. Auth and encryption, both c2s and s2c, go together. They were a bit lax on upgrades in the past, but as another comment said, Google just said you fix your stuff or else Chrome will show a very scary banner on your website. Yes you can skip it or force special things like auth without encryption, but it's impossible to do by accident.

sjducb 18 hours ago

Man in the middle interfering with TLS handshakes?

The handshake is unencrypted so you can modify the messages to make it look like the server only supports broken ciphers. Then the man in the middle can read all of the encrypted data because it was badly encrypted.

A surprising number of servers still support broken ciphers due to legacy uses or incompetence.
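
As a rough sketch of the client-side defense (Python's standard ssl module; example.com is just a placeholder host), a client can refuse to negotiate below TLS 1.2, so a stripped-down offer fails loudly instead of silently downgrading:

    import socket
    import ssl

    # A client that refuses anything below TLS 1.2: if an attacker (or a
    # misconfigured server) offers only weaker parameters, the handshake
    # fails instead of quietly falling back.
    ctx = ssl.create_default_context()           # verifies certs and hostname
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2

    with socket.create_connection(("example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            print(tls.version(), tls.cipher())   # e.g. TLSv1.3 and its suite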

  • ekr____ 12 hours ago

    Yes, this is a seriously difficult problem with only partial solutions.

    The basic math of any kind of negotiation is that you need the minimum set of cryptographic parameters supported by both sides to be secure enough to resist downgrade. This is too small a space to support a complete accounting of the situation, but roughly:

    - In pre-TLS 1.3 versions of TLS, the Finished message was intended to provide secure negotiation as long as the weakest joint key exchange was secure, even if the weakest joint record protection algorithm was insecure, because the Finished provides integrity for the handshake outside of the record layer.

    - In TLS 1.3, the negotiation messages are also signed by the server, which is intended to protect negotiation as long as the weakest joint signature algorithm is secure. This is (I believe) the best you can do with a client and server which have never talked to each other, because if the signature algorithm is insecure, the attacker can just impersonate the server directly.

    - TLS 1.3 also includes a mechanism intended to protect against TLS 1.3 -> TLS 1.2 downgrade, as long as the TLS 1.2 cipher suite involves server signing (as a practical matter, this means ECDHE). Briefly, the idea is to use a sentinel value in the random nonces, which are signed even in TLS 1.2 (https://www.rfc-editor.org/rfc/rfc8446#section-4.1.3); a rough sketch of the client-side check follows below.
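
    A minimal sketch of that check, assuming the 32-byte ServerHello random has already been parsed out (the sentinel bytes are from RFC 8446; check_downgrade is an illustrative name):

      # Downgrade sentinels from RFC 8446, section 4.1.3. A TLS 1.3 server
      # that negotiates an older version places one of these values in the
      # last 8 bytes of ServerHello.random; the random is signed in (EC)DHE
      # suites, so an attacker cannot strip the marker.
      DOWNGRADE_TLS12 = bytes.fromhex("444f574e47524401")           # "DOWNGRD\x01"
      DOWNGRADE_TLS11_OR_BELOW = bytes.fromhex("444f574e47524400")  # "DOWNGRD\x00"

      def check_downgrade(server_random: bytes, negotiated_version: str) -> None:
          # A client that offered TLS 1.3 but was negotiated down must abort
          # on seeing either sentinel.
          if negotiated_version != "TLSv1.3" and server_random[-8:] in (
                  DOWNGRADE_TLS12, DOWNGRADE_TLS11_OR_BELOW):
              raise ConnectionError("downgrade detected: aborting handshake")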

  • mcpherrinm 12 hours ago

    No: while the handshake is unencrypted, it is authenticated. An attacker can’t modify it.

    What an attacker can do is block handshakes with parameters they don’t like. Some clients would retry a new handshake with an older TLS version, because they’d take the silence to mean that the server has broken negotiation.
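
    The problematic pattern looked roughly like this (a sketch only; connect_with_version is a hypothetical helper, and real clients buried this logic inside their TLS stacks):

      # The insecure fallback dance: an attacker who merely blocks the
      # first handshake gets the client to "voluntarily" retry with an
      # older, weaker protocol version.
      def connect_with_fallback(host):
          for version in ("TLS 1.2", "TLS 1.1", "TLS 1.0"):
              try:
                  # Silence or a reset on a newer version was read as
                  # "this server has broken negotiation".
                  return connect_with_version(host, version)
              except (TimeoutError, ConnectionError):
                  continue  # fall back to the next older version
          raise ConnectionError("no TLS version worked")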

    • mcpherrinm 5 hours ago

      well, unless both client and server have sufficiently weak crypto enabled that an attacker can break it during the handshake.

      Then an attacker can force both sides onto the weak crypto, break it, and sit in the middle. Also not really relevant today.

  • kevincox 17 hours ago

    You could encrypt the handshake that you received with the server's certificate and send it back. Then if it doesn't match what the server thought it sent, it aborts the handshake. As long as the server's cert isn't broken, this would detect a munged handshake; and if the server's cert is broken, you have no root of trust to start the connection in the first place.

    • sjducb 13 hours ago

      How do you agree on a protocol to encrypt the message that agrees on the protocol?

      This is the message that returns a list of supported ciphers and key exchange protocols. There’s no data in this first packet.

      Alice: I’d like to connect.
      Bob: Sure, here is a list of protocols we could use:

      You modify Bob’s message so that Bob only suggests insecure protocols.

      You might be proposing that Alice asks Trent for Bob’s public key … But that’s not how TLS works.

      • lxgr 7 hours ago

        Bob's list of supported protocols is an input into the (authenticated) final handshake message, and that authentication failing will prevent the connection from being considered successfully established.

        If the "negotiated" cipher suite is weak enough to allow real-time impersonation of Bob, though, pre-1.3 versions are still vulnerable; that's another reason not to keep insecure cipher suites around in a TLS config.

    • dotancohen 14 hours ago

      The fine man in the middle could still intercept that.