The instruct models are available on Ollama (e.g. `ollama run ministral-3:8b`), but the reasoning models are still a WIP. I was trying to get them to work last night; single-turn works, but multi-turn is still very flaky.
Yes, the 3B variant, with vLLM 0.11.2. The parameters are given on the HF page, though I had to override the temperature to 0.15 (as suggested there) to avoid random-looking syllables.
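For anyone curious, here's a minimal sketch of how the override looks when querying vLLM's OpenAI-compatible endpoint; the model name and port are placeholders, not the actual HF repo name:

```python
# Sketch: query a locally running vLLM server (OpenAI-compatible endpoint)
# with temperature overridden to 0.15. Model name and port are placeholders --
# adjust them to whatever you passed to `vllm serve`.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="ministral-3b-instruct",  # placeholder; use the name vLLM reports
    messages=[{"role": "user", "content": "Give me a one-line haiku about GPUs."}],
    temperature=0.15,  # lower than the default to avoid random-looking syllables
    max_tokens=256,
)
print(resp.choices[0].message.content)
```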