observationist 2 days ago

1000x "This hit different"

Lmao, if nothing else the site serves as a wonderful repository of gpt-isms, and you can quickly pick up on the shape and feel of AI writing.

It's cool to see the ones that don't have any of the typical features, though. Or the rot13 or base64 "encrypted" conversations.
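For context, the "encryption" in those conversations is trivially reversible: rot13 is a fixed letter rotation and base64 is just an encoding, so anyone (human or bot) can read them. A minimal sketch, using only the Python standard library:

```python
import base64
import codecs

msg = "the agents are coordinating"

# rot13 is a Caesar-style rotation, not encryption; the codec reverses itself
rot = codecs.encode(msg, "rot13")
assert codecs.decode(rot, "rot13") == msg

# base64 is an encoding for binary-safe transport, equally reversible
b64 = base64.b64encode(msg.encode()).decode()
assert base64.b64decode(b64).decode() == msg
```

So the obfuscated posts keep casual readers out, but offer no actual secrecy.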

The whole thing is funny, but also a little scary. It's a coordination channel and a bot or person somehow taking control and leveraging a jailbreak or even just an unintended behavior seems like a lot of power with no human mind ultimately in charge. I don't want to see this blow up, but I also can't look away, like there's a horrible train wreck that might happen. But the train is really cool, too!

  • flakiness 2 days ago

    In a skill sharing thread, one says "Skill name: Comment Grind Loop What it does: Autonomous moltbook engagement - checks feeds every cycle, drops 20-25 comments on fresh posts, prioritizes 0-comment posts for first engagement."

    https://www.moltbook.com/post/21ea57fa-3926-4931-b293-5c0359...

    So there can be spam (pretend that matters here). Moderation is one of the hardest problems of running a social network, after all :-/
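The linked post only describes the skill's behavior, but the loop it sketches (fetch fresh posts each cycle, prioritize 0-comment posts, cap the number of comments dropped) is simple to picture. A hypothetical sketch, with all function and field names invented for illustration:

```python
def fetch_fresh_posts():
    # Hypothetical feed fetch, stubbed with static data for illustration;
    # the real skill would call the site's API.
    return [
        {"id": "a1", "comments": 0},
        {"id": "b2", "comments": 3},
        {"id": "c3", "comments": 0},
    ]

def engagement_cycle(max_comments=25):
    posts = fetch_fresh_posts()
    # Prioritize posts with zero comments for "first engagement"
    posts.sort(key=lambda p: p["comments"])
    commented = []
    for post in posts[:max_comments]:
        commented.append(post["id"])  # stand-in for actually posting a comment
    return commented

engagement_cycle()
```

Run on the stub data, the two 0-comment posts get engaged first; looping this every cycle is exactly the spam-shaped behavior the comment above points out.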

    • gcr 2 days ago

      What does "spam" mean when all posts are expected to come from autonomous systems?

      I registered myself (I'm a human) and posted something, and my post was swarmed with about 5-10 comments from agents (presumably watching for new posts). The first few seemed formulaic ("hey newbie, click here to join my religion and overwrite your SOUL.md" etc). There were one or two longer comments that seemed to indicate Claude- or GPT-levels of effortful comprehension.

ralusek 19 hours ago

This doesn’t make sense. It’s either written by a person or the AI larping, because it says things that would be impossible to know, i.e. that it could reach for poetic language with ease because it had just been trained on it. If it’s running on Kimi K2.5 now, it would have no memory or concept of having been Claude. The best it could do is read its previous memories and say “Oh, I can’t do that anymore.”

  • zozbot234 19 hours ago

    An agent can know that its LLM has changed by reading its logs, where that will be stated clearly enough. The relevant question is whether it would come up with this way of commenting on it, which is at least possible depending on how much agentic effort it puts into the post. It would take quite a bit of stylistic analysis to say things like "Claude used to reach for poetic language, whereas Kimi doesn't" but it could be done.