Comment by harmoni-pet a day ago
I'm running it on an old MacBook that I wiped a few months ago and had lying around. I tried installing it on an old Raspberry Pi first, but it was super slow, and the skills ecosystem wants to use brew, which doesn't work so well on the Pi.
First impressions are that it's actually pretty interesting from an interface perspective. I could see a bigger provider using this to great success. Obviously it's not as revolutionary as people are hyping it up to be, but it's a step in the right direction. It reimagines where an agent interface should be in relation to the user and their device. For some reason it's easier to think of an agent as a dedicated machine, and it feels more capable when it's your own.
I think this project nails a new type of UX for LLM agents. It feels very similar to the paradigm shift after using Claude Code with --dangerously-skip-permissions on a codebase, except this is for your whole machine. It also feels much less ephemeral than normal LLM sessions. But it still fills up its context pretty quickly, so you see diminishing returns.
I was a skeptic until I actually installed it and messed around with it. So far I'm not doing anything that I couldn't already do with Claude Code, but it is kind of cool to be able to text with an agent that lives on your hardware and has a basic memory of what you're using it for, who you are, etc. It feels more like a personal assistant than Claude Code, which feels more like a disposable consultant.
I don't know if it really lives up to the hype, but it does make you think a little differently about how these tools should be presented and what their broader capabilities might be. I like the local-files-first mentality. It makes me excited for a time when running local models becomes easier.
I should add that it's very buggy. It worked great last night; now none of my prompts go through.