Comment by llmthrow0827 2 days ago

19 replies

Shouldn't it have some kind of proof-of-AI captcha? Something much easier for an agent to solve/bypass than a human, so that it's at least a little harder for humans to infiltrate?

bandrami 2 days ago

The idea of a reverse Turing Test ("prove to me you are a machine") has been rattling around for a while but AFAIK nobody's really come up with a good one

  • valinator 2 days ago

    Solve a bunch of math problems really fast? They don't have to be complex, as long as they're completed far quicker than a person typing could manage.

    • laszlojamf 2 days ago

      You'd also have to check whether it's a human using an AI to impersonate another AI

      • hrimfaxi 21 hours ago

        We try to do the same for a human using another human by making the time limits shorter.
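    The timed-challenge idea above can be sketched roughly like this. The function names and the 2-second budget are assumptions, not anything proposed in the thread; the point is just that a bot answers a batch of trivial sums far inside a deadline no typist could meet.

    ```python
    import random
    import time

    def make_challenge(n=50, seed=None):
        """Issue a batch of trivial addition problems."""
        rng = random.Random(seed)
        return [(rng.randint(1, 999), rng.randint(1, 999)) for _ in range(n)]

    def verify(challenge, answers, started_at, budget_s=2.0):
        """Accept only if every sum is right and the reply beat the deadline."""
        on_time = (time.monotonic() - started_at) <= budget_s
        correct = len(answers) == len(challenge) and all(
            a + b == ans for (a, b), ans in zip(challenge, answers)
        )
        return on_time and correct

    # An agent answering programmatically passes easily:
    start = time.monotonic()
    challenge = make_challenge(seed=42)
    answers = [a + b for a, b in challenge]
    print(verify(challenge, answers, start))  # a bot finishes well under budget
    ```

    Of course, as the replies note, nothing stops a human from delegating exactly this step to a script.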

  • antod 2 days ago

    Maybe asking how it reacts to a turtle on its back in the desert? Then asking about its mother?

  • wat10000 2 days ago

    Seems fundamentally impossible. From the other end of the connection, a machine acting on its own is indistinguishable from a machine acting on behalf of a person who can take over after it passes the challenge.

xnorswap 2 days ago

We don't have the infrastructure for it, but models could digitally sign every generated message with a key assigned to the model that generated it.

That would prove the message came directly from the LLM output.

That at least would be more difficult to game than a captcha, which could be MITM'd.

  • notpushkin 2 days ago

    Hosted models could do that (provided we trust the providers). Open source models could embed watermarks.

    It doesn’t really matter, though: you can ask a model to rewrite your text in its own words.
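    The signed-output idea could be sketched as below. This is a hypothetical illustration, not any existing infrastructure: a real deployment would use an asymmetric scheme (e.g. Ed25519) so that verifiers don't hold the secret, but the standard-library HMAC stands in here, and the key is a made-up placeholder.

    ```python
    import hashlib
    import hmac

    # Placeholder for a per-model secret a provider might hold (assumption).
    MODEL_KEY = b"per-model-secret-issued-to-the-provider"

    def sign_message(text: str) -> str:
        """Tag a model output so its origin can be checked later."""
        return hmac.new(MODEL_KEY, text.encode(), hashlib.sha256).hexdigest()

    def verify_message(text: str, tag: str) -> bool:
        """Constant-time check that the tag matches the text."""
        return hmac.compare_digest(sign_message(text), tag)

    msg = "Hello from the model."
    tag = sign_message(msg)
    print(verify_message(msg, tag))        # True: untouched output verifies
    print(verify_message(msg + "!", tag))  # False: any edit breaks the tag
    ```

    Which also shows notpushkin's objection concretely: the signature only covers the exact bytes the model emitted, so a human who rewrites (or has the model rewrite) the text sidesteps it entirely.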

regenschutz 2 days ago

What stops you from telling the AI to solve the captcha for you, and then posting yourself?

  • gf000 2 days ago

    Nothing, the same way a script can send a captcha to a low-paid worker in a poorer country and "ask" a human to solve the human captcha.

  • llmthrow0827 2 days ago

    Nothing, hence the qualifying "so that it's at least a little harder for humans to infiltrate" part of the sentence.

  • xmcqdpt2 2 days ago

    The captcha would have to be something really boring and repetitive: for every click, you'd have to translate a word from one of ten languages into English, then make a bullet list of what it means.

sowbug 2 days ago

That seems like a very hard problem. If you can generally prove that the outputs of a system (such as a bot) are not determined by unknown inputs to the system (such as a human), then you yourself must have a level of access to the system corresponding to root, hypervisor, debugger, etc.

So either moltbook requires that AI agents upload themselves to it to be executed in a sandbox, or else we have a test that can be repurposed to answer whether God exists.