Comment by mstank
Couldn't a human just use an LLM browser extension / script to answer that quickly? This is a really interesting non-trivial problem.
I'd expect humans can just pass real images through Gemini to get the watermark added, and similarly pass real text through an LLM while asking for no changes. Then you can say, truthfully, that the text came out of an LLM.
At least for image generation, Google and maybe others put a watermark in each image. Text would be harder: you can't even do printer-style steganography or canary traps, because all the models and the checker would need some sort of shared scheme. https://deepmind.google/models/synthid/
You could have every provider fingerprint each message and host an API that attests whether a given text came from them. I doubt the companies would want to do that, though.
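A minimal sketch of that fingerprint-and-attest idea, assuming the simplest possible scheme (the provider hashes every generated message and exposes an exact-match lookup; all names here are hypothetical, no provider actually offers this):

```python
import hashlib

class ProviderAttestation:
    """Hypothetical provider-side service: record a fingerprint of every
    generated message, then answer 'did this exact text come from us?'."""

    def __init__(self):
        # In reality this would be a large persistent store, not a set.
        self._fingerprints = set()

    def record_generation(self, text: str) -> None:
        # Called internally each time the model emits a message.
        self._fingerprints.add(hashlib.sha256(text.encode("utf-8")).hexdigest())

    def attest(self, text: str) -> bool:
        # Public endpoint: True only for texts this provider generated.
        return hashlib.sha256(text.encode("utf-8")).hexdigest() in self._fingerprints

provider = ProviderAttestation()
provider.record_generation("Model output here.")
print(provider.attest("Model output here."))   # True
print(provider.attest("Human-edited text."))   # False
```

Note the obvious weakness: exact-match hashing fails the moment a single character is edited, which is part of why text attestation is so much harder than image watermarking.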