Comment by oceanplexian a day ago

Yes, I’m running it with a minimal set of plugins.

When I’m driving or out, I can ask Siri to send an iMessage to Clawdbot, something like “Can you find out if anything is playing at the local concert venue, and figure out how much 2 tickets would cost?”, and a few minutes later it will give me a few options. It even surprised me by researching the different seats and recommending a cheaper one, along with free activities that weekend as an alternative.

Basically: This is the product that Apple and Google were unable to build despite having billions of dollars and thousands of engineers because it’s a threat to their business model.

It also runs on my own computer, and the latest frontier open source models are able to drive it (Kimi, etc). The future is going to be locally hosted and ad free and there’s nothing Big Tech can do about it. It’s glorious.

Nextgrid 12 hours ago

> This is the product that Apple and Google were unable to build

It's not that they're unable to build it, it's that their businesses are built on "engagement" and wasting human time. A bot "engaging" with the ads and wasting its time would signal the end of their business model.

game_the0ry 17 hours ago

> It also runs on my own computer, and the latest frontier open source models are able to drive it (Kimi, etc). The future is going to be locally hosted and ad free and there’s nothing Big Tech can do about it. It’s glorious.

After messing with openclaw on an old 2018 Windows laptop running WSL2 that I was about to recycle, I am coming to the same conclusion, and the paradigm shift is blowing my mind. A tinkerer's paradise.

The future is glorious indeed.

  • lxgr 8 hours ago

    Same here. I like tinkering with my Home Assistant setup and small web server running miscellaneous projects on my Raspberry Pi, but I hate having to debug it from my phone when it all falls over while I'm not near my computer.

    Being able to chat with somebody that has a working understanding of a Unix environment and can execute tasks like "figure out why Caddy is crash looping and propose solutions" for a few dollars per month is a dream come true.

    I'm not actually using OpenClaw for that just yet, though; something about exposing my full Unix environment to OpenAI or Anthropic just seems wrong, both in terms of privacy and dependency. The former could probably be solved with some redacting and permission-enforcing filter between the agent and the OS, but the latter needs powerful local models. (I'll only allow my Unix devops skills to start getting rusty once I can run an Opus 4.5 equivalent agent on sub-$5000 hardware :)
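A redacting, permission-enforcing filter like the one imagined above could be sketched minimally as follows. This is a hypothetical illustration, not any real OpenClaw feature: `ALLOWED`, `permitted`, and `redact` are made-up names, and a real filter would need far more careful rules.

```python
# Hypothetical sketch of a filter sitting between an LLM agent and the OS:
# an allowlist decides which commands may run, and a redaction pass masks
# anything credential-shaped before output goes back to the hosted model.
import re
import shlex

ALLOWED = {"ls", "cat", "systemctl", "journalctl", "caddy"}  # illustrative allowlist
SECRET = re.compile(r"(api[_-]?key|token|password)\s*[=:]\s*\S+", re.IGNORECASE)

def permitted(command: str) -> bool:
    """Allow a command only if its first word is on the allowlist."""
    try:
        argv = shlex.split(command)
    except ValueError:  # unbalanced quotes etc. -> refuse
        return False
    return bool(argv) and argv[0] in ALLOWED

def redact(output: str) -> str:
    """Mask anything that looks like a credential before the model sees it."""
    return SECRET.sub(r"\1=[REDACTED]", output)
```

Under this sketch, `permitted("journalctl -u caddy -n 50")` passes while `permitted("rm -rf /")` is refused, and `redact("api_key=abc123")` masks the value before it leaves the machine.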

someguyiguess 2 hours ago

I don’t get it. You can do that with the Claude app or ChatGPT too. What’s the value add?

Edit: oh I see. It’s local. So privacy. Quite a good value add actually.

mrdependable 19 hours ago

How are you running Kimi locally?

  • ineedasername 13 hours ago

    Quantized, heavily, with everything possible offloaded to sysram. You can just barely run it this way on consumer hardware with 16 to 24 GB of VRAM and 256 GB of sysram. Before the spike in prices you could build such a system for about $2,500, but the RAM alone probably adds another $2k onto that now. Nvidia DGX boxes and similar setups with 256 GB of unified RAM can probably manage it more slowly, at ~1-2 tokens per second. Unsloth has the quantized models.

    I've tested Kimi, though I don't quite have the headroom for it at home, and I don't yet see a significant enough difference between it and the Qwen 3 models that run on more modest setups. I get a highly usable 50 tokens per second out of the A3B instruct, which fits into 16 GB of VRAM with enough left over not to choke Netflix and other browser tasks. It performs on par with what I ask of Haiku in Claude Code, and does better as my own tweaking improves alongside the ever-better tooling that comes out almost weekly.
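The memory split described above works out as rough back-of-the-envelope arithmetic. The parameter counts and bits-per-weight below are my own ballpark assumptions for a Kimi-class ~1T-parameter MoE and a ~30B Qwen 3 A3B-class model, not official figures:

```python
def model_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate in-memory size of quantized weights in GB (decimal)."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# ~1T-parameter MoE at an aggressive ~2-bit quant:
# ~250 GB of weights, hence the 256 GB sysram + offload setup above.
kimi_gb = model_size_gb(1000, 2.0)

# ~30B model at ~4-bit: ~15 GB of weights, which is why it can sit
# in a 16 GB card (KV cache and activations eat into the remainder).
qwen_gb = model_size_gb(30, 4.0)
```

The point of the sketch is just that aggressive quantization, not raw compute, is what pulls a trillion-parameter model into reach of a 256 GB consumer box.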

antonvs 18 hours ago

> The future is going to be locally hosted and ad free and there’s nothing Big Tech can do about it.

I wouldn't be so certain of that. Someone is paying to train and create these models. Ultimately, the money to do that is going to have to come from somewhere.

  • Krutonium 13 hours ago

    Good news! The models are done, and you can download them for free. Even if work on them stopped this moment, they're finished and usable right now, and won't get any worse over time.

    • wappieslurkz 7 hours ago

      They wouldn't get any worse, but I assume they'd fall behind really fast.