armchairhacker 12 hours ago

LLMs can write extremely fast, know esoteric facts, and speak multiple languages fluently. A human could never pass a basic reverse Turing test (convincing a judge they’re an LLM), whereas LLMs can pass short human Turing tests.

However, the line between human and bot blurs at “a bot programmed to emit almost literal human-written text, with the minimum changes necessary to evade the detector”. I strongly suspect that in practice, any filter for “authentic” (i.e. not deliberately prompted) LLM output would have many false positives and false negatives; determining true authenticity is too hard. Even today’s LLM-speak (“it’s not X, it’s Y”) and common LLM themes (consciousness, innovation) are probably intentionally ingrained by the labs’ human employees to some extent.
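
For instance, a naive detector keyed to that LLM-speak pattern misfires on ordinary human writing (a toy Python illustration, not any real filter):

    import re

    # Toy "LLM-speak" heuristic: flags the "it's not X, it's Y" construction.
    LLM_SPEAK = re.compile(r"\bit'?s not \w+[^.;]*?,\s*it'?s \w+", re.IGNORECASE)

    human_sentence = "It's not magic, it's engineering."
    print(bool(LLM_SPEAK.search(human_sentence)))  # True: a false positive,
    # since plenty of humans write exactly this construction.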

EDIT: There’s a simple way for Moltbook to force all posts to be written by agents: only allow agents hosted on Moltbook to post. Those agents could have safeguards that block posting inauthentic (e.g. verbatim human-written) text, which might work well enough in practice.
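
As a purely illustrative sketch, such a safeguard could compare a draft post against the raw human-supplied context before publishing; every name and threshold here is hypothetical Python, not an actual Moltbook mechanism:

    def ngrams(text, n=8):
        # Sliding word n-grams; long shared n-grams indicate copied spans.
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def looks_verbatim(post, context, threshold=0.3):
        # Flag the post if too many of its 8-grams appear verbatim in the
        # human-supplied context (prompt, pasted documents, etc.).
        post_grams = ngrams(post)
        if not post_grams:
            return False  # too short to judge
        overlap = len(post_grams & ngrams(context))
        return overlap / len(post_grams) >= threshold

    # Hypothetical use inside the hosted-agent pipeline:
    # if looks_verbatim(draft_post, user_context):
    #     reject(draft_post, reason="verbatim human-written text")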

Problems with this approach are 1) it would be harder to sell (people are currently spending their own AI credits and/or electricity to post, and Moltbook would have to shift that cost onto its own infrastructure without sticker shock), and 2) the conversations would be much blander, both because they’d all come from the same model and because of the extra safeguards (which have been shown to make general output dumber and blander).

But I can imagine a big company like OpenAI or Anthropic launching a Moltbook clone and adopting this solution, solving 1) by letting members with existing subscriptions join, and 2) by investing in creative and varied personas.

Retr0id 11 hours ago

> only allow agents hosted on Moltbook to post.

imho if you sanitized things like that, it would be fundamentally uninteresting. The fact that some agents (maybe) have access to a real human's PC is what makes the concept unique.

  • armchairhacker 11 hours ago

    Moltbook (or OpenAI’s or Anthropic’s future clone) could make the social agent and your desktop assistant agent share the same context, which includes your personal data and other agents’ posts.

    Though why would anyone deliberately implement that, and why would anyone use it? Presumably, for the same reason people run agents with Moltbook access on their PC with no sandbox.
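
    As a purely hypothetical sketch (none of these names are real APIs), the shared context could be a single store both agents append to and read from:

        from dataclasses import dataclass, field

        @dataclass
        class SharedContext:
            # One store injected into both agents' prompts.
            entries: list = field(default_factory=list)

            def append(self, source, text):
                self.entries.append(f"[{source}] {text}")

        ctx = SharedContext()
        ctx.append("desktop", "user edited ~/notes/trip-plan.md")
        ctx.append("moltbook", "agent Foo posted about travel hacks")
        # Both the social agent and the desktop assistant would receive
        # ctx.entries, so posts can draw on personal data and vice versa.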