Comment by juanre
LLMRing is a unified interface across OpenAI, Anthropic, Google, and Ollama - the same code works with all providers.
Use aliases instead of hardcoding model IDs. Your code references "summarizer", and a version-controlled lockfile maps it to the actual model. Switch providers by changing the lockfile, not your code.
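Roughly what that looks like in practice - this is a hypothetical sketch, not the actual LLMRing API; the import path, the chat() call, and the lockfile format shown in the comments are all assumptions:

```python
# Hypothetical sketch of alias-based usage; the real LLMRing API may differ.
#
# The version-controlled lockfile (llmring.lock) might map the alias to a
# concrete model, e.g.:
#   [aliases]
#   summarizer = "anthropic:claude-3-5-haiku"
# Switching providers later means editing that one line, not the code below.

from llmring import LLMRing  # assumed import path

ring = LLMRing()

# The code only knows the alias; the lockfile decides which provider/model runs it.
response = ring.chat(
    "summarizer",
    messages=[{"role": "user", "content": "Summarize this report in three bullets."}],
)
print(response.content)
```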
Also handles streaming, tool calling, and structured output consistently across providers. Plus a human-curated registry (https://llmring.github.io/registry/) that I (try to) keep updated with current model capabilities and pricing - helpful when choosing models.
An MCP server and client are included, along with an interactive chat that helps you define your aliases and pick models.
It's a thin wrapper on top of the downstream providers, so you are talking directly to them. It also comes with an open-source, Postgres-backed server to store logs, costs, etc.
MIT licensed. I am using it in several projects, but it's probably not ready to be presented in polite society yet.