Comment by gaws
What privacy? If you're using ChatGPT or Claude, your chats are still logged.
OP implied they have powerful enough hardware, since Kimi runs on their computer; that's why they mentioned it's local. That it doesn't work for most people has no bearing on what the OP of this thread said. Regardless, you don't need an Opus-level model; you can use a smaller one that will just be slower at getting back to you. It's all asynchronous anyway, unlike a coding agent, where some level of synchronicity is expected.
GLM 4.7-flash does very well, although OpenClaw still has some work to do on CoT handling.
The latest Kimi model is comparable in performance, at least for these sorts of use cases, but yes, it is harder to run locally.
It's local, meaning it uses local models, which is what they said in the sentence prior to the privacy one.