Comment by EagnaIonat 6 hours ago

Tried out the Ollama version and it's insanely fast with really good results for a 1.9GB size. It's supposed to have a 1M context window; I'd be interested to see where the speed goes then.
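
For anyone who wants to try the same thing, here's a minimal sketch using the Ollama Python client. The "granite4" model tag is an assumption on my part; check the Ollama library page for the exact tag you pulled.

    # pip install ollama -- assumes a running Ollama server with the model already pulled
    import ollama

    # Model tag is assumed; substitute whatever `ollama pull` gave you.
    response = ollama.chat(
        model="granite4",
        messages=[{"role": "user", "content": "Summarise the Granite 4 architecture in two sentences."}],
    )
    print(response["message"]["content"])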

No Mamba in the Ollama version though.

Flere-Imsaho 5 hours ago

(I've only just started running local LLMs, so excuse the dumb question.)

Would Granite run with llama.cpp and use Mamba?

  • RossBencina 5 hours ago

    Last I checked Ollama inference is based on llama.cpp so either Ollama has not caught up yet, or the answer is no.

    EDIT: Looks like Granite 4 hybrid architecture support was added to llama.cpp back in May: https://github.com/ggml-org/llama.cpp/pull/13550
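
    If you'd rather skip Ollama and hit that llama.cpp support directly, a rough sketch with llama-cpp-python (which wraps llama.cpp) would look like the following. The GGUF filename is hypothetical; point it at whichever Granite 4 quant you actually downloaded.

        # pip install llama-cpp-python
        from llama_cpp import Llama

        # Hypothetical filename; use the path to your own Granite 4 GGUF.
        llm = Llama(model_path="./granite-4.0-h-tiny-Q4_K_M.gguf", n_ctx=8192)

        out = llm("Explain what a hybrid Mamba/transformer layer stack is.", max_tokens=128)
        print(out["choices"][0]["text"])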

    • magicalhippo 4 hours ago

      > Last I checked Ollama inference is based on llama.cpp

      Yes and no. They've written their own "engine" using GGML libraries directly, but fall back to llama.cpp for models the new engine doesn't yet support.