Comment by druskacik 19 hours ago

This is my experience as well. Mistral models may not be the best according to benchmarks, and I don't use them for personal chats or coding, but for simple tasks with a pre-defined scope (such as categorization, summarization, etc.) they're my go-to choice. I use mistral-small with the batch API, and it's probably the most cost-effective option out there.
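For what it's worth, batch workloads like this usually mean preparing a JSONL file with one request per line and submitting it as a job. Here's a minimal sketch of building such a file for a categorization task. The line schema (`custom_id` plus a `body` holding a chat-completion request) is an assumption based on common batch-API conventions, and `mistral-small-latest` is just a plausible model name; check the official Mistral docs for the exact format.

```python
import json

# Assumed categories for the illustration; any label set works.
CATEGORIES = ["billing", "bug report", "feature request", "other"]

def make_batch_lines(texts, model="mistral-small-latest"):
    """Build one JSONL line per input text, each wrapping a
    chat-completion request body (schema assumed, not verified)."""
    lines = []
    for i, text in enumerate(texts):
        body = {
            "model": model,
            "messages": [
                {"role": "system",
                 "content": "Classify the user message into exactly one of: "
                            + ", ".join(CATEGORIES)
                            + ". Reply with the category name only."},
                {"role": "user", "content": text},
            ],
            "temperature": 0,  # deterministic output suits classification
        }
        lines.append(json.dumps({"custom_id": str(i), "body": body}))
    return lines

if __name__ == "__main__":
    lines = make_batch_lines(["My invoice is wrong", "App crashes on launch"])
    with open("batch_input.jsonl", "w") as f:
        f.write("\n".join(lines))
```

The appeal of the batch route is exactly the commenter's point: for fixed-scope jobs you trade latency for a lower per-token price, and a file like this is trivial to regenerate.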

leobg 3 hours ago

Did you compare it to gemini-2.0-flash-lite?

  • leobg an hour ago

    Answering my own question:

    Artificial Analysis [0] ranks them close in price (both $0.30 per 1M tokens) and intelligence (27 vs. 29 for gemini/mistral), but ranks gemini-2.0-flash-lite higher on speed (189 tokens/s vs. 130).

    So they should be interchangeable. Looking forward to testing this.

    [0] https://artificialanalysis.ai/?models=o3%2Cgemini-2-5-pro%2C...