Comment by c4pt0r 3 days ago
Paired with programming tools like Claude Code, it could be a low-cost/open-source replacement for Sonnet
This doesn’t change the VRAM usage, only the compute requirements.
It does not have to be VRAM; it can be system RAM, or weights streamed from SSD storage. Reportedly, the latter approach achieves around 1 token per second on machines with 64 GB of system RAM.
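That figure is plausible as a back-of-envelope: streamed inference is bandwidth-bound, since each token has to touch roughly all of the active weights once. A sketch in Python, where every number (active parameter count, quantization, cache hit rate, bandwidths) is an assumption for illustration, not a measurement:

```python
# Back-of-envelope: streamed-weights inference is bandwidth-bound, so
# tokens/sec ~= effective read bandwidth / bytes of active weights.
# All numbers below are illustrative assumptions, not measurements.

active_params   = 32e9   # MoE: ~32B active params per token
bytes_per_param = 0.5    # ~4-bit quantization
ram_hit_rate    = 0.9    # assumed fraction of expert reads served from
                         # the page cache (64 GB RAM holds hot experts)
ssd_bw          = 3e9    # assumed sustained NVMe read, bytes/sec
ram_bw          = 40e9   # assumed system RAM read bandwidth, bytes/sec

bytes_per_tok = active_params * bytes_per_param   # ~16 GB touched/token
sec_per_tok = (bytes_per_tok * ram_hit_rate / ram_bw
               + bytes_per_tok * (1 - ram_hit_rate) / ssd_bw)
print(f"~{1 / sec_per_tok:.1f} tokens/sec")  # ~1.1 with these numbers
```

The SSD term dominates: even a 10% page-cache miss rate costs about half a second per token at these bandwidths, which is why the reported rate hovers around 1 token/sec rather than the RAM-only figure.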
R1 (and K2) is MoE, whereas Llama 3 is a dense model family. MoE actually makes these models practical to run on cheaper hardware. DeepSeek R1 is more comfortable for me to run than Llama 3 70B for exactly that reason: when the model spills out of the GPU, you take a large performance hit, and that hit is much smaller when only a fraction of the weights are active per token.
If you need to spill into CPU inference, you would much rather be multiplying a different ~32B subset of the weights for every token than the same full 70B (or more) every time, simply because the computation takes so long on CPU.
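To put rough numbers on that, here is a crude sketch comparing per-token compute, assuming a forward pass costs about 2 FLOPs per weight actually used; the CPU throughput figure is an assumption:

```python
# Crude per-token compute comparison for CPU inference.
# Assumption: a forward pass costs ~2 FLOPs per parameter touched.

cpu_flops    = 400e9   # assumed sustained CPU throughput, FLOP/s
dense_params = 70e9    # Llama 3 70B: every weight used for every token
moe_active   = 32e9    # K2-style MoE: ~32B active weights per token

for name, params in [("dense 70B", dense_params),
                     ("MoE, 32B active", moe_active)]:
    sec = 2 * params / cpu_flops
    print(f"{name}: ~{sec:.2f} s/token (~{1 / sec:.1f} tok/s)")
```

In practice CPU inference is usually memory-bandwidth bound rather than FLOP bound, but the conclusion is the same either way: per-token cost scales with the weights touched per token, so ~32B active comes out roughly 2x cheaper than a dense 70B.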
Here's a neat-looking project that lets you use other models with Claude Code: https://github.com/musistudio/claude-code-router

I found it while looking for reports on the best agents to use with K2. The usual suspects, like Cline and its forks, Aider, and Zed, should be interesting to test with K2 as well.
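Whichever agent you pick, K2 is served through OpenAI-compatible APIs, so most of these tools only need a base URL and a model name. A minimal smoke test before wiring it into an agent; the endpoint and model id below are assumptions, so substitute your provider's values (or a local llama.cpp/vLLM server):

```python
# Quick check that a K2 endpoint answers, using the standard openai
# client against an OpenAI-compatible server. The base_url and model
# id are assumptions - replace them with your provider's values.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.moonshot.ai/v1",  # assumed provider endpoint
    api_key="sk-...",                       # your API key
)

resp = client.chat.completions.create(
    model="kimi-k2",  # assumed model id; list models to confirm
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp.choices[0].message.content)
```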