Comment by RossBencina 13 hours ago
Last I checked, Ollama inference is based on llama.cpp, so either Ollama has not caught up yet or the answer is no.
EDIT: Looks like Granite 4 hybrid architecture support was added to llama.cpp back in May: https://github.com/ggml-org/llama.cpp/pull/13550
> Last I checked Ollama inference is based on llama.cpp
Yes and no. They've written their own "engine" using the GGML libraries directly, but they fall back to llama.cpp for models the new engine doesn't yet support.
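
To illustrate that split, here's a minimal, hypothetical sketch in Go (Ollama's language) of what that kind of routing looks like. The names (`pickRunner`, `supportedByNewEngine`) and the architecture list are made up for illustration and aren't taken from Ollama's actual source.

```go
// Hypothetical sketch of the dispatch pattern described above:
// use the new GGML-based engine when it supports the model's
// architecture, otherwise fall back to the llama.cpp runner.
package main

import (
	"errors"
	"fmt"
)

// Runner abstracts whatever actually executes inference for a loaded model.
type Runner interface {
	Name() string
}

type newEngineRunner struct{} // stand-in for the GGML-direct engine
type llamaCppRunner struct{}  // stand-in for the llama.cpp fallback

func (newEngineRunner) Name() string { return "new engine (GGML direct)" }
func (llamaCppRunner) Name() string  { return "llama.cpp fallback" }

// supportedByNewEngine is a placeholder for whatever capability check
// the real project performs against the model's architecture string.
var supportedByNewEngine = map[string]bool{
	"llama":  true,
	"gemma3": true,
	// "granitehybrid": false, // e.g. an architecture not yet ported
}

// pickRunner routes a model by architecture name.
func pickRunner(arch string) (Runner, error) {
	if arch == "" {
		return nil, errors.New("unknown architecture")
	}
	if supportedByNewEngine[arch] {
		return newEngineRunner{}, nil
	}
	return llamaCppRunner{}, nil
}

func main() {
	for _, arch := range []string{"llama", "granitehybrid"} {
		r, err := pickRunner(arch)
		if err != nil {
			fmt.Println(arch, "error:", err)
			continue
		}
		fmt.Printf("%-14s -> %s\n", arch, r.Name())
	}
}
```

So "is it supported" can depend on which of the two paths a given model ends up on, not just on what llama.cpp itself can do.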