martinralbrecht 16 hours ago

WhatsApp's end-to-end encryption has been independently investigated: https://kclpure.kcl.ac.uk/ws/files/324396471/whatsapp.pdf

Full version here: https://eprint.iacr.org/2025/794.pdf

We didn't review the entire source code, only the cryptographic core. That said, the main issue we found was that the WhatsApp servers ultimately decide who is and isn't in a particular chat. Dan Goodin wrote about it here: https://arstechnica.com/security/2025/05/whatsapp-provides-n...

vpShane 13 hours ago

> We didn't review the entire source code

And you don't see the issue with that? Facebook was bypassing security measures on mobile by sending data to itself on localhost using WebSockets and WebRTC.

https://cybersecuritynews.com/track-android-users-covertly/

An audit that says 'they can't read it cryptographically' doesn't mean much when the app itself can read it, and the app sends data in all directions. Push notifications can be used to read messages.

  • miduil 13 hours ago

    > Push notifications can be used to read messages.

    Are you trying to imply that WhatsApp is bypassing E2E encryption through push notifications?

    Unless something has changed, this table highlights that both Signal and WhatsApp are using a "Push-to-Sync" technique to notify about new messages.

    https://crysp.petsymposium.org/popets/2024/popets-2024-0151....

    • itsthecourier 12 hours ago

      Push-to-Sync. We observed 8 apps employ a push-to-sync strategy to prevent privacy leakage to Google via FCM. In this mitigation strategy, apps send an empty (or almost empty) push notification to FCM. Some apps, such as Signal, send a push notification with no data (aside from the fields that Google sets; see Figure 4). Other apps may send an identifier (including, in some cases, a phone number). This push notification tells the app to query the app server for data, the data is retrieved securely by the app, and then a push notification is populated on the client side with the unencrypted data. In these cases, the only metadata that FCM receives is that the user received some message or messages, and when that push notification was issued. Achieving this requires sending an additional network request to the app server to fetch the data and keeping track of identifiers used to correlate the push notification received on the user device with the message on the app server.
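
      A minimal sketch, in Python, of what that client-side flow could look like. The function names and the in-memory "app server" here are hypothetical stand-ins for illustration, not WhatsApp's or Signal's actual code:

        # Hypothetical stand-in for the app server that the client queries
        # over its own authenticated, end-to-end encrypted channel.
        APP_SERVER = {"msg-42": {"sender": "alice", "body": "hello"}}

        def fetch_from_app_server(msg_id: str) -> dict:
            # In a real app this is an authenticated request to the app's own
            # backend; here it is simulated with a local dict.
            return APP_SERVER[msg_id]

        def show_local_notification(message: dict) -> None:
            # The plaintext only ever exists on the device.
            print(f"{message['sender']}: {message['body']}")

        def on_push_received(push_payload: dict) -> None:
            # The push delivered via FCM is (almost) empty: no message content,
            # at most an opaque identifier to correlate with the app server.
            msg_id = push_payload.get("id")
            message = fetch_from_app_server(msg_id)   # the extra round trip
            show_local_notification(message)          # populated on-device

        # FCM only ever learns that "something" arrived, and when.
        on_push_received({"id": "msg-42"})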

      • chaps 8 hours ago

        Is that not still incredibly vulnerable to timing attacks?

        • xethos 6 hours ago

          Maybe I'm misinterpreting what you mean, but without a notification when a message is sent, what would you correlate a message-received notification with?

    • fasbiner 9 hours ago

      Nothing changed, but many people struggle to understand their own degree of relative ignorance and overvalue high-level details that are leaky abstractions, making the consequentially dissimilar look superficially similar.

cookiengineer 12 hours ago

Why did you not mention that the WhatsApp APK, even on devices where it was not installed via Google Play, loads Google Tag Manager's scripts?

It is reproducibly loaded in each chat, and a MITM firewall can confirm that. I don't know why the focus of audits like these is always on a specific part of the app, or only on the cryptography, and not on the overall behavior of what is leaked and transferred over the wire, or on potential side-channel or bypass attacks.

Transport encryption is useless if the client afterwards copies the plaintext of the messages to another server, or, say, to an online translation service.

  • tptacek 12 hours ago

    There's a whole section, early in the analysis Albrecht posted, that surfaces these concerns.

morshu9001 12 hours ago

They also decide what public key is associated with a phone number, right? Unless you verify in person.

  • NoahZuniga 10 hours ago

    That's protected cryptographically with key transparency. Anyone can check what the currently published keys for a user are, and be sure they get the same value as any other user. Specifically, your WhatsApp client checks that these keys are the right ones.
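
    A rough sketch of the core idea behind that check, a Merkle-tree inclusion proof as used in Certificate-Transparency-style logs. This is illustrative only and not WhatsApp's actual key transparency implementation:

      import hashlib

      def leaf_hash(data: bytes) -> bytes:
          # Domain-separated hashing, RFC 6962 style.
          return hashlib.sha256(b"\x00" + data).digest()

      def node_hash(left: bytes, right: bytes) -> bytes:
          return hashlib.sha256(b"\x01" + left + right).digest()

      def verify_inclusion(entry: bytes, proof: list, root: bytes) -> bool:
          # `entry` is e.g. a (user id, published public key) record; `proof`
          # is a list of (sibling_hash, sibling_is_left) pairs from leaf to root.
          h = leaf_hash(entry)
          for sibling, sibling_is_left in proof:
              h = node_hash(sibling, h) if sibling_is_left else node_hash(h, sibling)
          return h == root

      # Tiny two-leaf example. Two clients that hold the same signed root and
      # both get a valid inclusion proof for alice's record have necessarily
      # seen the same key for alice; a server-side key swap changes the root.
      alice = b"alice:pubkey-A"
      bob = b"bob:pubkey-B"
      root = node_hash(leaf_hash(alice), leaf_hash(bob))
      assert verify_inclusion(alice, [(leaf_hash(bob), False)], root)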

    • morshu9001 8 hours ago

      Even if your client is asking other clients to verify, what if everyone is served the same wrong key for a particular user WhatsApp has chosen to spoof?

some_furry 16 hours ago

Thank you for actually evaluating the technology as implemented instead of speculating wildly about what Facebook can do based on vibes.

  • chaps 13 hours ago

    Unfortunately a lot of investigations start out as speculation/vibes before they turn into an actual evaluation. And getting past speculation/vibes can take a lot of effort and political/social/professional capital before even starting.

    • lazide 4 hours ago

      Well yeah. If they had solid evidence at the start, why would they need an investigation?

Jamesbeam 10 hours ago

Hello Professor Albrecht,

thank you for your work.

I've been looking for this everywhere for the past few days, but I couldn't find any official information regarding the use of https://signal.org/docs/specifications/pqxdh/ in the Signal protocol version that WhatsApp is currently using.

Do you have any information on whether the protocol version they currently use provides post-quantum forward secrecy and SPQR, or are the current E2EE chats vulnerable to harvest-now, decrypt-later attacks?

Thanks for your time.

uoaei 11 hours ago

Can they control private keys and do replay attacks?

  • maqp 11 hours ago

    The Signal protocol prevents replay attacks because every message is encrypted with a new key: either the next hash-ratchet key, or the next key with fresh entropy mixed in via the next DH shared secret.

    Private keys, probably not. WhatsApp is E2EE, meaning your device generates the private key with the OS's CSPRNG. (Like I also said above,) exfiltration of signing keys might allow a MITM, but that's still possible to detect, e.g. if you RE the client and spot the code that does it.
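
    A toy sketch of the symmetric (hash) ratchet part of that, in Python. The constants and structure are simplified from the Double Ratchet design, so treat it as illustration rather than the real thing:

      import hmac, hashlib

      def kdf(chain_key: bytes, label: bytes) -> bytes:
          return hmac.new(chain_key, label, hashlib.sha256).digest()

      class SendingChain:
          def __init__(self, chain_key: bytes):
              self.chain_key = chain_key

          def next_message_key(self) -> bytes:
              # Derive a one-time message key, then ratchet the chain key
              # forward and forget the old one (forward secrecy).
              message_key = kdf(self.chain_key, b"\x01")
              self.chain_key = kdf(self.chain_key, b"\x02")
              return message_key

      chain = SendingChain(b"\x00" * 32)
      k1 = chain.next_message_key()
      k2 = chain.next_message_key()
      # Every message gets a fresh key; a replayed ciphertext under k1 can't be
      # accepted as a new message because k1 is used once and then deleted.
      assert k1 != k2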

    • TurdF3rguson 8 hours ago

      Wouldn't ratchet keys prevent a MITM too? In other words, if a MITM has your keys and decrypts your message, then your keys are out of sync from that point on. Or do I misunderstand that?

digdigdag 13 hours ago

> We didn't review the entire source code

Then it's not fully investigated. That should put any assessments to rest.

  • 3rodents 13 hours ago

    By that standard, it can never be verified because what is running and what is reviewed could be different. Reviewing relevant elements is as meaningful as reviewing all the source code.

    • giancarlostoro 12 hours ago

      Or they could even take out the backdoor code and then put it back in after review.

      • hedora 9 hours ago

        This is why Signal supports reproducible builds.

        • pdpi 7 hours ago

          In this day and age, in a world with Docker and dev containers and such, it's kind of shocking that reproducible builds aren't table stakes.
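
          As a sketch of what "reproducible" buys you: build the app yourself from the published source and compare it with the binary actually shipped to users. The file paths below are made up, and real APK verification typically diffs the archive contents (ignoring signatures) rather than comparing raw file hashes:

            import hashlib, sys

            def digest(path: str) -> str:
                with open(path, "rb") as f:
                    return hashlib.sha256(f.read()).hexdigest()

            official = digest(sys.argv[1])  # e.g. the APK pulled from the store
            local = digest(sys.argv[2])     # e.g. your own from-source build

            # If the build is reproducible, any injected backdoor shows up
            # as a mismatch between the two artifacts.
            print("match" if official == local else "MISMATCH - investigate")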

    • dangus 2 hours ago

      Let’s be real: the standard is “Do we trust Meta?”

      I don’t, and don’t see how it could possibly be construed to be logical to trust them.

      I definitely trust a non-profit open source alternative a whole lot more. Perception can be different than reality but that’s what we’ve got to work with.

  • ghurtado 12 hours ago

    I have to assume you have never worked on security cataloging of third party dependencies on a large code base.

    Because if you had, you would realize how ridiculous it is to state that app security can't be assessed until you have read 100% of the code.

    That's like saying "well, we don't know how many other houses in the city might be on fire, so we should let this one burn until we know for sure"

    • fasbiner 9 hours ago

      What you are saying is empirically false. A change in a single line of executed code (sometimes even a single character!) can be the difference between a secure and an insecure system.

      This must mean that you have been paid not to understand these things. Or perhaps you would be punished at work if you internalized reality and spoke up. In either case, I don't think your personal emotional landscape should take precedence over things that have been proven and are trivial to demonstrate.
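
      As a toy illustration of the single-character point (this has nothing to do with WhatsApp's code; the domain names are made up):

        def trusted_host(hostname: str) -> bool:
            # Intended check: only subdomains of example.com.
            return hostname.endswith(".example.com")

        def trusted_host_buggy(hostname: str) -> bool:
            # One missing character (the leading dot) also accepts an
            # attacker-controlled domain that merely ends in the string.
            return hostname.endswith("example.com")

        assert trusted_host("chat.example.com")
        assert not trusted_host("evilexample.com")
        assert trusted_host_buggy("evilexample.com")  # the one-character bug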

      • JasonADrury 35 minutes ago

        > Change in a single line of executed code (sometimes even a single character!) can be the difference between a secure and non-secure system.

        This is kind of pointless; nobody is going to audit every single instruction in the Linux kernel or any other complex software product.

    • jokersarewild 10 hours ago

      It sounds like your salary has depended on believing that a partial audit is worthwhile even in the case where the client itself is the actual adversary.

      • charcircuit 9 hours ago

        Except Meta is not an adversary. They are aligned with people who want private messaging.

  • Barrin92 12 hours ago

    As long as the client-side encryption has been audited, which to my understanding is the case, it doesn't matter. That is literally the point of encryption: communication across adversarial channels. Unless you think Facebook has broken the laws of mathematics, it's impossible for them to decrypt the content of messages without the users' private keys.

    • maqp 12 hours ago

      Well, the thing is, the key-exfiltration code would probably reside outside the TCB. It's not particularly hard to have some function grab the signing keys and send them to the server. Then you can impersonate the user in a MITM. That exfiltration is one-time, and it's quite hard to recover from.

      I'd much rather not put blind faith in WhatsApp doing the right thing, and instead just use Signal, so I can verify myself that its key management is doing only what it should.

      Speculating over the correctness of the E2EE implementation isn't that productive; the metadata leak we know Meta takes full advantage of is already enough reason to stick to proper platforms like Signal.

      • jcgl 11 hours ago

        > That exfiltration is one-time and it's quite hard to recover from.

        Not quite true with Signal's double ratchet though, right? Because keys are routinely getting rolled, you have to continuously exfiltrate the new keys.

        • maqp 11 hours ago

          No, I said signing keys. If you're doing a MITM all the time (because there's no alternative path to route ciphertexts), you get to generate all those double-ratchet keys yourself. And then you have a separate ratchet for the other peer in the opposite direction.

          Last time I checked, WhatsApp features no fingerprint-change warnings by default, so users will not even notice if you MITM them. The attack I described is for situations where the two users would enable the non-blocking key-change warnings and try to compare the fingerprints.

          Not saying this attack happens, by any means. Just that it is theoretically possible and leaves the smallest trail. Which is why it helps that on Signal you can verify it's not exfiltrating your identity keys.
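
          For what it's worth, the fingerprint comparison that second paragraph refers to boils down to something like the sketch below. It's a simplified illustration, not Signal's or WhatsApp's actual safety-number derivation:

            import hashlib

            def fingerprint(my_identity_key: bytes, their_identity_key: bytes) -> str:
                # Hash both identity public keys in a fixed order so both parties
                # compute the same string and can compare it out of band.
                material = b"".join(sorted([my_identity_key, their_identity_key]))
                digest = hashlib.sha256(material).hexdigest()
                return " ".join(digest[i:i + 5] for i in range(0, 30, 5))

            alice_key, bob_key = b"A" * 32, b"B" * 32
            mitm_key = b"M" * 32

            # Both sides compute the same value when they really share keys...
            assert fingerprint(alice_key, bob_key) == fingerprint(bob_key, alice_key)
            # ...but if a MITM substitutes its own key towards each side, the two
            # fingerprints no longer match. Only helps if users actually compare.
            assert fingerprint(alice_key, mitm_key) != fingerprint(bob_key, mitm_key)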

      • subw00f 12 hours ago

        Not that I trust Facebook or anything but wouldn’t a motivated investigator be able to find this key exfiltration “function” or code by now? Unless there is some remote code execution flow going on.

    • hn_throwaway_99 12 hours ago

      The issue is what the client app does with the information after it is decrypted. As Snowden remarked after he released his trove, encryption works, and it's not like the NSA or anyone else has some super-secret decoder ring. The problem is that endpoint security is borderline atrocious and an obvious Achilles heel: the information has to be decrypted in order to display it to the end user, so that's a much easier attack vector than trying to break the encryption itself.

      So the point other commenters are making is that you can verify all you want that the encryption is robust and secure, but that doesn't mean the app can't just send a copy of the info to a server somewhere after it has been decrypted.