Comment by TulioKBR
*AI Model Support:*
Currently configured for Perplexity, Claude, and Groq (all production-ready). We're building a provider-agnostic abstraction layer (an AIProviderFactory pattern) that will support Gemini 2.5 Pro, Claude Sonnet 4.5, and others; the architecture lets us add new providers without touching the core generation pipeline.
*Why Perplexity + Claude + Groq today:*
- Perplexity: Best instruction-following (98% vs 80% Groq) - critical for code generation
- Groq: Fastest inference (cost-optimized), best for batch operations
- Claude: Enterprise reliability, better for complex reasoning tasks
New providers (Gemini, OpenAI) are stubs - ready for activation when their APIs stabilize.
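To give a feel for the factory, here's a minimal sketch of the registry-based pattern - the names and signatures are illustrative, not our actual interfaces:

```typescript
// Simplified sketch of the AIProviderFactory pattern (names illustrative).
interface AIProvider {
  generate(prompt: string): Promise<string>;
}

class PerplexityProvider implements AIProvider {
  async generate(prompt: string): Promise<string> {
    // The real adapter would call Perplexity's API here.
    throw new Error("API call not shown in this sketch");
  }
}

// Stub provider: registered in the factory, but gated until activation.
class GeminiProvider implements AIProvider {
  async generate(prompt: string): Promise<string> {
    throw new Error("GeminiProvider is a stub - not yet activated");
  }
}

class AIProviderFactory {
  private static registry = new Map<string, () => AIProvider>([
    ["perplexity", () => new PerplexityProvider()],
    ["gemini", () => new GeminiProvider()],
  ]);

  static create(name: string): AIProvider {
    const make = AIProviderFactory.registry.get(name);
    if (!make) throw new Error(`Unknown AI provider: ${name}`);
    return make();
  }
}

// The core generation pipeline only ever sees the AIProvider interface:
const provider = AIProviderFactory.create(process.env.AI_PROVIDER ?? "perplexity");
```

Adding a provider is one new class plus one registry entry; the pipeline itself never changes.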
*Database Flexibility:*
We're backend-agnostic by design. We currently ship PostgreSQL + MongoDB, but the persistence layer is abstracted (rough sketch after the list):
- *Supported now*: PostgreSQL, MongoDB, Redis (caching)
- *Planned*: Firebase Realtime/Firestore, Supabase, PlanetScale, Neon
- *Coming*: DynamoDB, Datastore, Cosmos
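To make that concrete, here's roughly what the abstraction looks like - a minimal sketch where `ProjectStore` and the adapter names are illustrative stand-ins for our actual code:

```typescript
// Simplified sketch of the persistence abstraction (names illustrative).
interface ProjectStore {
  save(id: string, data: Record<string, unknown>): Promise<void>;
  load(id: string): Promise<Record<string, unknown> | null>;
}

class PostgresStore implements ProjectStore {
  async save(id: string, data: Record<string, unknown>): Promise<void> {
    // Upsert via a Postgres driver (e.g. pg) would go here.
  }
  async load(id: string): Promise<Record<string, unknown> | null> {
    // SELECT by primary key would go here.
    return null;
  }
}

class MongoStore implements ProjectStore {
  async save(id: string, data: Record<string, unknown>): Promise<void> {
    // collection.updateOne({ _id: id }, { $set: data }, { upsert: true })
  }
  async load(id: string): Promise<Record<string, unknown> | null> {
    // collection.findOne({ _id: id })
    return null;
  }
}
```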
Firebase support: we have adapters ready but haven't prioritized them, because most enterprise customers need PostgreSQL for compliance and audit logging. Firebase Firestore is on the roadmap for Q1.
*The key insight:* Our code generation doesn't depend on DB choice. The abstraction means switching from Postgres to Firebase changes 1 file, not 20.
Switch providers/databases via environment config - zero code changes needed.
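Continuing the illustrative `ProjectStore` sketch above, the selection itself is just config (`DB_BACKEND` is a hypothetical env var name, not necessarily ours):

```typescript
// Illustrative: backend chosen purely from environment config.
const stores: Record<string, () => ProjectStore> = {
  postgres: () => new PostgresStore(),
  mongo: () => new MongoStore(),
};

const backend = process.env.DB_BACKEND ?? "postgres";
const makeStore = stores[backend];
if (!makeStore) throw new Error(`Unknown DB backend: ${backend}`);
const store = makeStore();
// Adding Firebase later: one new adapter class, one new registry entry.
```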
Thanks for your prompt reply. I have to say I'm a little confused as to why you're excluding the two best code models, Gemini and Sonnet 4.5, from your stack. Is there something I'm missing?