The dartantic_ai package ships with 10 built-in providers. Flutter apps
that use Gemini through Firebase AI Logic can
add the optional dartantic_firebase_ai
package and register it (see Firebase AI below).
Additional OpenAI-compatible endpoints are available via the openai_compat.dart
example.
## Provider Capabilities
| Provider | Default Model | Default Embedding Model | Capabilities | Notes |
|---|---|---|---|---|
| OpenAI | gpt-4o | text-embedding-3-small | Chat, Embeddings, Vision, Tools, Streaming | Full feature support |
| OpenAI Responses | gpt-5 | text-embedding-3-small | Chat, Embeddings, Vision, Tools, Streaming, Thinking | Includes built-in server-side tools |
| Anthropic | claude-sonnet-4-0 | - | Chat, Vision, Tools, Streaming, Thinking | Extended thinking with token budgets |
| Google | gemini-2.5-flash | text-embedding-004 | Chat, Embeddings, Vision, Tools, Streaming, Thinking | Extended thinking with dynamic budgets |
| Firebase AI | gemini-2.5-flash | - | Chat, Vision, Tools, Streaming, Thinking, Media | Gemini via Firebase SDK (Flutter only) |
| Mistral | mistral-small-latest | mistral-embed | Chat, Embeddings, Tools, Streaming | European servers |
| Cohere | command-r-08-2024 | embed-v4.0 | Chat, Embeddings, Tools, Streaming | RAG-optimized |
| Ollama | qwen2.5:7b-instruct | - | Chat, Tools, Streaming | Local models only |
| OpenRouter | google/gemini-2.5-flash | - | Chat, Vision, Tools, Streaming | Multi-model gateway |
| xAI | grok-4-1-fast-non-reasoning | - | Chat, Vision, Tools, Streaming | OpenAI-compatible chat completions; no embeddings or temperature in Dartantic |
| xAI Responses | grok-4-1-fast-non-reasoning | - | Chat, Vision, Tools, Streaming, Thinking, Server-side tools, Media | Stateful Responses API; default image grok-imagine-image, video grok-imagine-video |
Other OpenAI-compatible providers are covered in the openai_compat.dart
example. You can still use them by registering them with
Agent.providerFactories.
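As a sketch, registering an extra OpenAI-compatible endpoint might look like the following. The map-style registration, the `baseUrl`/`apiKey` constructor parameters, and the `together` name are all assumptions; the openai_compat.dart example shows the canonical pattern.

```dart
import 'dart:io';

import 'package:dartantic_ai/dartantic_ai.dart';

void main() {
  // Hypothetical registration of a third-party OpenAI-compatible service.
  // 'together', the map-style API, and the constructor arguments are
  // illustrative only -- consult openai_compat.dart for the real signature.
  Agent.providerFactories['together'] = () => OpenAIProvider(
        baseUrl: Uri.parse('https://api.together.xyz/v1'),
        apiKey: Platform.environment['TOGETHER_API_KEY'],
      );

  // Once registered, the prefix works like any built-in provider.
  final agent = Agent('together:meta-llama/Llama-3-70b');
}
```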
## Provider Configuration
| Provider | Provider Prefix | Aliases | API Key | Provider Type |
|---|---|---|---|---|
| OpenAI | openai | - | OPENAI_API_KEY | OpenAIProvider |
| OpenAI Responses | openai-responses | - | OPENAI_API_KEY | OpenAIResponsesProvider |
| Anthropic | anthropic | claude | ANTHROPIC_API_KEY | AnthropicProvider |
| Google | google | gemini, googleai | GEMINI_API_KEY | GoogleProvider |
| Firebase AI | firebase | - | None (Firebase) | FirebaseAIProvider |
| Mistral | mistral | - | MISTRAL_API_KEY | MistralProvider |
| Cohere | cohere | - | COHERE_API_KEY | CohereProvider |
| Ollama | ollama | - | None (local) | OllamaProvider |
| OpenRouter | openrouter | - | OPENROUTER_API_KEY | OpenAIProvider |
| xAI | xai | grok | XAI_API_KEY | XAIProvider |
| xAI Responses | xai-responses | grok-responses | XAI_API_KEY | XAIResponsesProvider |
## Firebase AI (Flutter add-on)
Use dartantic_firebase_ai when
you want Gemini through the Firebase SDK (Firebase AI Logic), including App Check
and Firebase Auth on the Vertex AI path. Pure Dart apps should keep using
GoogleProvider with GEMINI_API_KEY instead.
Add the package, call Firebase.initializeApp(), then register factories for
each backend you need. The provider name is firebase_ai; model strings use the
aliases below.
| Backend | Prefix (alias) | Authentication / setup |
|---|---|---|
| Google AI (Gemini Developer API via Firebase) | firebase-google | Google AI API key in Firebase console; lighter setup |
| Vertex AI (through Firebase) | firebase-vertex | Full Firebase project, billing; optional App Check / Auth |
See the dartantic_firebase_ai README
for dependency versions and setup.
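Putting those steps together, a minimal Flutter-side sketch might look like this. The factory-registration shape and the `FirebaseAIProvider` constructor arguments are assumptions; the dartantic_firebase_ai README shows the exact API.

```dart
import 'package:dartantic_ai/dartantic_ai.dart';
import 'package:dartantic_firebase_ai/dartantic_firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';

Future<void> main() async {
  // Firebase must be initialized before any Firebase AI calls.
  await Firebase.initializeApp();

  // Register one factory per backend you need, keyed by the aliases from
  // the table above. The registration call itself is an assumption --
  // check the dartantic_firebase_ai README for the real signature.
  Agent.providerFactories['firebase-google'] = () => FirebaseAIProvider();

  // Model strings then use the registered alias as the prefix.
  final agent = Agent('firebase-google:gemini-2.5-flash');
  final result = await agent.send('Hello from Firebase AI!');
}
```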
## Setup
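For a pure Dart project, setup typically means adding the package and exporting the API key for your chosen provider (variable names come from the configuration table above):

```shell
dart pub add dartantic_ai          # or: flutter pub add dartantic_ai
export OPENAI_API_KEY=...          # variable depends on the provider you use
```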
## Usage
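A minimal usage sketch, assuming the `Agent` string constructor and `send` method as shown in the package's examples (the `result.output` field name is an assumption):

```dart
import 'package:dartantic_ai/dartantic_ai.dart';

Future<void> main() async {
  // Model strings are 'provider:model'; a bare provider name or alias
  // (e.g. 'claude') falls back to that provider's default model.
  final agent = Agent('openai:gpt-4o');
  final result = await agent.send('Why is the sky blue?');
  print(result.output);
}
```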
## Find Providers
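A sketch of enumerating the built-in providers at runtime. `Providers.all` and the `name`/`aliases` fields are assumptions about the registry API; check the package docs for the actual names.

```dart
import 'package:dartantic_ai/dartantic_ai.dart';

void main() {
  // 'Providers.all' is assumed to be a registry of the built-in providers;
  // each entry exposes its name and aliases (see the configuration table).
  for (final provider in Providers.all) {
    print('${provider.name} (aliases: ${provider.aliases})');
  }
}
```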
## Custom Config
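Custom configuration generally means constructing a provider yourself instead of relying on the defaults. The constructor parameters and the `Agent.forProvider` call below are assumptions; the provider class names come from the configuration table above.

```dart
import 'dart:io';

import 'package:dartantic_ai/dartantic_ai.dart';

void main() {
  // Point an OpenAI-compatible provider at a custom endpoint with an
  // explicit key instead of the default environment variable.
  // 'apiKey' and 'baseUrl' parameter names are assumptions.
  final provider = OpenAIProvider(
    apiKey: Platform.environment['MY_PROXY_KEY'],
    baseUrl: Uri.parse('https://llm-proxy.example.com/v1'),
  );

  // 'Agent.forProvider' is an assumed constructor for wiring a
  // pre-configured provider into an Agent.
  final agent = Agent.forProvider(provider, chatModelName: 'gpt-4o');
}
```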
## Custom Headers
All providers support custom HTTP headers for enterprise scenarios like authentication proxies, request tracing, or compliance logging:
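A sketch of attaching custom headers, assuming a `headers` parameter on the provider constructor (the exact injection point is an assumption; check the package docs):

```dart
import 'package:dartantic_ai/dartantic_ai.dart';

void main() {
  // 'headers' as a constructor parameter is an assumption; these headers
  // would be sent with every request the provider makes.
  final provider = OpenAIProvider(
    headers: {
      'X-Request-ID': 'trace-1234',
      'Proxy-Authorization': 'Bearer internal-token',
    },
  );
}
```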
## Check Capabilities

Use Provider.listModels() for runtime capability discovery:
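For example, listing a provider's models and their kinds might look like this. `Provider.listModels()` is from the docs above; `Providers.openai` and the `name`/`kinds` fields on the returned model objects are assumptions.

```dart
import 'package:dartantic_ai/dartantic_ai.dart';

Future<void> main() async {
  // Query the provider for its models at runtime; each entry reports
  // its ModelKind set (chat, embeddings, image, ...).
  final models = await Providers.openai.listModels();
  for (final model in models) {
    print('${model.name}: ${model.kinds}');
  }
}
```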
## Model Kinds
Models are categorized by their kind via ModelKind (a model may report multiple
kinds):

- `chat` - Chat / completion models
- `embeddings` - Text embedding models
- `media` - Unified media generation (images, documents, audio, etc.)
- `image` - Image generation or vision models
- `video` - Video generation models
- `audio` - Audio processing (e.g. speech-to-text)
- `tts` - Text-to-speech
- `countTokens` - Token counting
- `other` - Other specialized model types
## Typed Output With Tools

The typedOutputWithTools capability indicates a provider can handle both
function calling and structured JSON output in the same request:
| Provider | Support | Implementation |
|---|---|---|
| OpenAI | ✅ | Native response_format |
| OpenAI Responses | ✅ | Native response_format |
| Anthropic | ✅ | return_result tool |
| Google | ✅ | Double agent orchestrator |
| OpenRouter | ✅ | OpenAI-compatible |
| xAI | ✅ | OpenAI-compatible |
| xAI Responses | ✅ | Native response_format (Responses API) |
| Ollama | ❌ | Coming soon |
| Cohere | ❌ | API limitation |
For providers not listed here, see the openai_compat.dart example.
Anthropic’s approach uses the return_result tool pattern: a special tool
that lets the model return a structured JSON response. The Anthropic provider
handles this automatically.
Google’s approach uses a transparent two-phase workflow: Phase 1 executes
tools, Phase 2 requests structured output. The Google provider handles this
automatically.
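From the caller's side, both patterns look the same: one request that combines tools with a typed output. The sketch below assumes `tools` and `outputSchema` parameters on the Agent API; the parameter names and schema shape are illustrative, not confirmed by the package.

```dart
import 'package:dartantic_ai/dartantic_ai.dart';

Future<void> main() async {
  // Illustrative JSON schema for the structured answer.
  final weatherSchema = {
    'type': 'object',
    'properties': {
      'city': {'type': 'string'},
      'tempC': {'type': 'number'},
    },
    'required': ['city', 'tempC'],
  };

  // 'tools' and 'outputSchema' are assumed parameter names. On providers
  // where typedOutputWithTools is true, the agent can call tools and still
  // return JSON matching the schema in the same request.
  final agent = Agent(
    'openai:gpt-4o',
    tools: [/* your Tool definitions */],
  );
  final result = await agent.send(
    'Look up the weather in Paris and answer as JSON.',
    outputSchema: weatherSchema,
  );
}
```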

