satvikpendem 4 hours ago

It's local, meaning it uses local models, which is what they said in the sentence prior to the privacy one.

  • lxgr 3 hours ago

    Unless you have unusually powerful hardware, local models currently won't really cut it for Moltbot, unfortunately.

    • satvikpendem 3 hours ago

      OP implied they have powerful enough hardware, since Kimi runs on their computer; that is why they mentioned it is local. That it doesn't work for most people has no bearing on what the OP of this thread said. Regardless, you don't need an Opus-level model; you can use a smaller one that will just be slower at getting back to you. It's all asynchronous anyway, unlike a coding agent, where some level of synchronicity is expected.
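
      A rough sketch of what that looks like in practice, assuming a local OpenAI-compatible server (e.g. Ollama on its default port; the model name below is just an example, not a recommendation):

        # Fire off a request to a local model server and await the reply.
        import asyncio
        from openai import AsyncOpenAI

        # Ollama exposes an OpenAI-compatible API; the key is ignored.
        client = AsyncOpenAI(base_url="http://localhost:11434/v1",
                             api_key="unused")

        async def ask(prompt: str) -> str:
            # A smaller local model takes longer to answer, but since
            # nothing is blocking on the reply, the latency is tolerable.
            resp = await client.chat.completions.create(
                model="qwen2.5:14b",  # example smaller model
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.choices[0].message.content

        print(asyncio.run(ask("Summarize my unread messages.")))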

      • lxgr 3 hours ago

        GGP seems to be under the misapprehension that privacy is a core aspect/advantage of OpenClaw, when for most users it's really not.

        So yes, I think the majority user experience is very relevant.

    • joemazerino 2 hours ago

      GLM 4.7-flash does very well, although OpenClaw still has some work to do on CoT handling.
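
      For illustration: many open reasoning models emit their chain of thought inside <think>...</think> tags, so until a client handles CoT properly the raw trace leaks into the output. A generic sketch of stripping it (not OpenClaw's actual code, and the tag format is an assumption about this particular model):

        import re

        # Drop <think>...</think> reasoning blocks before displaying output.
        def strip_cot(text: str) -> str:
            return re.sub(r"<think>.*?</think>", "", text,
                          flags=re.DOTALL).strip()

        print(strip_cot("<think>step by step...</think>The answer is 42."))
        # -> "The answer is 42."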

  • jckahn 3 hours ago

    From what I've read, OpenClaw only truly works well with Opus 4.5.

    • satvikpendem 3 hours ago

      The latest Kimi model is comparable in performance, at least for these sorts of use cases, but yes, it is harder to use locally.

      • gaws 3 hours ago

        > harder to use locally

        Which means most people must be using OpenClaw connected to Claude or ChatGPT.