xnorswap 2 days ago

We don't have the infrastructure for it, but models could digitally sign every generated message with a key assigned to the model that generated it.

That would prove the message came directly from the LLM output.

That would at least be harder to game than a captcha, which can be MITM'd.
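
A minimal sketch of what that signing step might look like, assuming each model is issued its own Ed25519 key pair (the `sign_message` / `verify_message` helpers and the key handling are hypothetical, not part of any existing provider API):

    # Minimal sketch: per-model message signing with Ed25519.
    # Assumes each hosted model is issued its own key pair; the helper
    # names and key management here are hypothetical.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
        Ed25519PublicKey,
    )
    from cryptography.exceptions import InvalidSignature

    # The provider would keep this key alongside the model it belongs to.
    model_private_key = Ed25519PrivateKey.generate()
    model_public_key = model_private_key.public_key()

    def sign_message(private_key: Ed25519PrivateKey, message: str) -> bytes:
        """Sign the raw model output so it can be attributed later."""
        return private_key.sign(message.encode("utf-8"))

    def verify_message(public_key: Ed25519PublicKey, message: str, signature: bytes) -> bool:
        """Check that the message really came from the model holding the key."""
        try:
            public_key.verify(signature, message.encode("utf-8"))
            return True
        except InvalidSignature:
            return False

    output = "Text produced by the model."
    sig = sign_message(model_private_key, output)
    print(verify_message(model_public_key, output, sig))        # True
    print(verify_message(model_public_key, output + "!", sig))  # False: text was edited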

notpushkin 2 days ago

Hosted models could do that (provided we trust the providers). Open-source models could embed watermarks.

It doesn’t really matter, though: you can ask a model to rewrite your text in its own words.