Comment by swiftcoder 4 hours ago

I do not think this matches anyone's mental model of what "end-to-end encrypted" should look like for a conversation between me and what is ostensibly my own computer.

If you promise end-to-end encryption, and later it turns out your employees have been reading my chat transcripts...

butvacuum 2 hours ago

I'm not sure how you can call ChatGPT "ostensibly my own computer" when it's primarily a website.

And honestly, E2EE's strict definition (messages between user 1 and user 2 cannot be decrypted by the message platform)... is unambiguously possible for ChatGPT. It's just utterly pointless when user 2 happens to also be the message platform.

If you message support for $chat_platform (if there is such a thing), do you expect them to be unable to read the messages?

It's still a disingenuous use of the term. And, if TFA is anything like multiple other providers, it's going to be "oh, the video is E2EE. But the 5 fps 'non-sensitive' 512x512px preview isn't."
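
For concreteness, a minimal sketch of that strict definition, assuming PyNaCl (any authenticated public-key scheme would do): the platform only ever relays ciphertext it cannot decrypt, which stops mattering the moment "user 2" is the platform itself.

    from nacl.public import PrivateKey, Box

    user1_key = PrivateKey.generate()
    user2_key = PrivateKey.generate()  # if "user 2" *is* the platform, this buys you nothing

    # user 1 encrypts to user 2's public key
    ciphertext = Box(user1_key, user2_key.public_key).encrypt(b"hello")

    # the platform relays `ciphertext` but holds neither private key, so it cannot read it

    # user 2 decrypts with their own private key
    plaintext = Box(user2_key, user1_key.public_key).decrypt(ciphertext)
    assert plaintext == b"hello"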

  • therealpygon an hour ago

    > it's primarily a website … unambiguously possible[sic] for chatGPT … happens to also be the message platform

    I assume you mean impossible, and in either case that's not quite accurate. The "end" is the specific AI model you wish to communicate with, not the platform. You're suggesting they are one and the same, but they are not, and Google proves that with their own secure LLM offering.

    But I’m 100% with you on it being a disingenuous use.

    • butvacuum 9 minutes ago

      No, no typo: the problem with ChatGPT is that the third party who would be attesting that's how it works is just the second party.

      I'm not familiar with the referenced Google secure LLM, but offhand, if it's TEE-based, Google would be publishing auditable/signed images and Intel/AMD would be the third party attesting that's what's actually running. TEEs are way out of my expertise though, and there are a ton of places and ways for it to break down.
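
      Roughly, the check that matters looks like the toy sketch below (the keys and names are simulated stand-ins, not a real Intel/AMD or Google API; Python's cryptography package is used purely for the simulation): the client pins the hardware vendor's key, verifies the vendor-signed attestation report, and confirms the reported measurement matches the provider's published, auditable image before trusting the enclave with anything.

          import hashlib, json
          from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

          # --- simulated setup (stand-ins for the real parties) ---
          vendor_key = Ed25519PrivateKey.generate()   # stands in for the CPU vendor's attestation key
          vendor_pub = vendor_key.public_key()        # what the client would have pinned
          published_image = b"auditable, signed model-serving image"
          published_hash = hashlib.sha256(published_image).hexdigest()

          # the "TEE" measures what is actually running; the vendor key signs the report
          report = json.dumps({"measurement": hashlib.sha256(published_image).hexdigest()}).encode()
          signature = vendor_key.sign(report)

          # --- client-side check ---
          def attestation_ok(report: bytes, signature: bytes) -> bool:
              vendor_pub.verify(signature, report)    # raises InvalidSignature if tampered with
              return json.loads(report)["measurement"] == published_hash

          assert attestation_ok(report, signature)    # only then send the prompt to the enclave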