Comment by magicalhippo 12 hours ago

> Last I checked Ollama inference is based on llama.cpp

Yes and no. They've written their own "engine" that uses the GGML libraries directly, but they fall back to llama.cpp for models the new engine doesn't yet support.
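
Conceptually the dispatch is a per-model capability check: if the new engine handles the model's architecture, use it; otherwise hand off to the llama.cpp runner. A rough Go sketch of that pattern (all names here are invented for illustration; the real check lives in Ollama's model-loading code):

```go
package main

import "fmt"

// Runner abstracts over the two backends. Hypothetical type,
// not Ollama's actual interface.
type Runner interface {
	Name() string
}

type ggmlEngine struct{}    // the new engine built on GGML directly
type llamaCppRunner struct{} // the llama.cpp fallback

func (ggmlEngine) Name() string     { return "new GGML engine" }
func (llamaCppRunner) Name() string { return "llama.cpp runner" }

// newEngineSupports stands in for whatever per-architecture
// capability check Ollama actually performs.
func newEngineSupports(arch string) bool {
	supported := map[string]bool{"llama": true, "gemma3": true}
	return supported[arch]
}

// pickRunner prefers the new engine, falling back to llama.cpp.
func pickRunner(arch string) Runner {
	if newEngineSupports(arch) {
		return ggmlEngine{}
	}
	return llamaCppRunner{}
}

func main() {
	for _, arch := range []string{"llama", "some-unsupported-arch"} {
		fmt.Printf("%s -> %s\n", arch, pickRunner(arch).Name())
	}
}
```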