Comment by pornel
The Nano model is 3.2B parameters at 4-bit quantization. That's quite small compared to what you get from hosted chatbots, and even compared to open-weights models runnable on desktops.
It's still cool to have something like this available locally, but don't expect reasoning capabilities. At this size it's going to be naive and prone to hallucinations, more like a natural-language regex and a word-association game.
To me, the big win for these small local models isn't knowledge (I'll leave that to the large hosted models) but a natural-language interface that can dispatch to tool calls and summarize the results. That's where they have the opportunity to shine. You're totally right that they're going to be awful for knowledge.
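For what it's worth, the dispatch pattern I mean looks roughly like this. A toy sketch: the tool names, the JSON shape, and the model output are all made up, and the model itself is stubbed out, since the point is just that the model picks a tool rather than answering from its own knowledge:

```python
import json

# Hypothetical tool registry: the small local model's only job is to pick
# a tool and fill in its arguments; the tools do the actual work.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
    "search_notes": lambda query: f"3 notes matching '{query}'",
}

def dispatch(model_output: str) -> str:
    """Parse a JSON tool call emitted by the model and run the matching tool."""
    call = json.loads(model_output)
    tool = TOOLS.get(call["tool"])
    if tool is None:
        return f"Unknown tool: {call['tool']}"
    return tool(**call["args"])

# Stand-in for what a small model might emit after reading
# "what's the weather in Oslo?" -- no world knowledge required.
model_output = '{"tool": "get_weather", "args": {"city": "Oslo"}}'
print(dispatch(model_output))  # -> Sunny in Oslo
```

A 3B-class model doesn't need to know anything about Oslo to emit that JSON; it only needs to map intent to a tool name and arguments, which is a much easier job than recall.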