Comment by jonwinstanley 7 days ago
Has anyone had any joy using a local model? Or is it still too slow?
On something like an M4 MacBook Pro, can local models replace the connection to OpenAI/Anthropic?
For advanced autocomplete (not full code generation, though they can do that too), basic planning, looking things up instead of web search, review & summary, even one-shotting smaller scripts, the 32B Q4 models have proved very good for me (24 GB VRAM, RTX 3090). All the usual LLM caveats still apply, of course. Note that setting up a local LLM in Cursor is a pain because they don't support localhost; ngrok, or a VPS plus a reverse SSH tunnel, gets around that though.
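
Roughly, the workaround looks like this (a sketch, assuming an OpenAI-compatible local server such as Ollama on its default port 11434 and a VPS you control; hostnames and ports are placeholders):

    # Option 1: ngrok gives the local server a public URL
    ngrok http 11434

    # Option 2: reverse SSH tunnel, so port 8000 on the VPS forwards to the local machine
    # (the VPS needs GatewayPorts enabled in sshd_config to accept outside connections)
    ssh -N -R 0.0.0.0:8000:localhost:11434 user@your-vps

The idea being that Cursor's custom OpenAI base URL setting then points at the resulting public URL instead of localhost.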