Documentation Index
Fetch the complete documentation index at: https://docs.dartantic.ai/llms.txt
Use this file to discover all available pages before exploring further.
The dartantic_ai package ships 10 built-in providers. For Flutter apps
that use Gemini through Firebase AI Logic,
add the optional dartantic_firebase_ai
package and register it (see Firebase AI below).
Additional OpenAI-compatible endpoints are available via the openai_compat.dart
example.
Provider Capabilities
| Provider | Default Model | Default Embedding Model | Capabilities | Notes |
|---|---|---|---|---|
| OpenAI | gpt-4o | text-embedding-3-small | Chat, Embeddings, Vision, Tools, Streaming | Full feature support |
| OpenAI Responses | gpt-5 | text-embedding-3-small | Chat, Embeddings, Vision, Tools, Streaming, Thinking | Includes built-in server-side tools |
| Anthropic | claude-sonnet-4-0 | - | Chat, Vision, Tools, Streaming, Thinking | Extended thinking with token budgets |
| Google | gemini-2.5-flash | text-embedding-004 | Chat, Embeddings, Vision, Tools, Streaming, Thinking | Extended thinking with dynamic budgets |
| Firebase AI | gemini-2.5-flash | - | Chat, Vision, Tools, Streaming, Thinking, Media | Gemini via Firebase SDK (Flutter only) |
| Mistral | mistral-small-latest | mistral-embed | Chat, Embeddings, Tools, Streaming | European servers |
| Cohere | command-r-08-2024 | embed-v4.0 | Chat, Embeddings, Tools, Streaming | RAG-optimized |
| Ollama | qwen2.5:7b-instruct | - | Chat, Tools, Streaming | Local models only |
| OpenRouter | google/gemini-2.5-flash | - | Chat, Vision, Tools, Streaming | Multi-model gateway |
| xAI | grok-4-1-fast-non-reasoning | - | Chat, Vision, Tools, Streaming | OpenAI-compatible chat completions; no embeddings or temperature in Dartantic |
| xAI Responses | grok-4-1-fast-non-reasoning | - | Chat, Vision, Tools, Streaming, Thinking, Server-side tools, Media | Stateful Responses API; default image grok-imagine-image, video grok-imagine-video |
Note: Together AI, Google-OpenAI, and Ollama-OpenAI have been moved to the
openai_compat.dart example. You can still use them by registering them with
Agent.providerFactories.
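As a sketch of that registration pattern, built only from the `Agent.providerFactories` and `OpenAIProvider(apiKey:, baseUrl:)` APIs shown elsewhere on this page — the Together AI base URL, env var, and model name here are illustrative assumptions, not copied from openai_compat.dart:

```dart
import 'dart:io';

import 'package:dartantic_ai/dartantic_ai.dart';

void main() {
  // Illustrative: route an OpenAI-compatible endpoint (here, a hypothetical
  // Together AI setup) through the stock OpenAIProvider.
  Agent.providerFactories['together'] = () => OpenAIProvider(
        apiKey: Platform.environment['TOGETHER_API_KEY'] ?? '',
        baseUrl: Uri.parse('https://api.together.xyz/v1'),
      );

  // The registered prefix now works in model strings.
  final agent = Agent('together:meta-llama/Llama-3.3-70B-Instruct-Turbo');
}
```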
Provider Configuration
| Provider | Provider Prefix | Aliases | API Key | Provider Type |
|---|---|---|---|---|
| OpenAI | openai | - | OPENAI_API_KEY | OpenAIProvider |
| OpenAI Responses | openai-responses | - | OPENAI_API_KEY | OpenAIResponsesProvider |
| Anthropic | anthropic | claude | ANTHROPIC_API_KEY | AnthropicProvider |
| Google | google | gemini, googleai | GEMINI_API_KEY | GoogleProvider |
| Firebase AI | firebase | - | None (Firebase) | FirebaseAIProvider |
| Mistral | mistral | - | MISTRAL_API_KEY | MistralProvider |
| Cohere | cohere | - | COHERE_API_KEY | CohereProvider |
| Ollama | ollama | - | None (local) | OllamaProvider |
| OpenRouter | openrouter | - | OPENROUTER_API_KEY | OpenAIProvider |
| xAI | xai | grok | XAI_API_KEY | XAIProvider |
| xAI Responses | xai-responses | grok-responses | XAI_API_KEY | XAIResponsesProvider |
Firebase AI (Flutter add-on)
Use dartantic_firebase_ai when
you want Gemini through the Firebase SDK (Firebase AI Logic), including App Check
and Firebase Auth on the Vertex AI path. Pure Dart apps should keep using
GoogleProvider with GEMINI_API_KEY instead.
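A minimal sketch of that pure Dart path, assuming GEMINI_API_KEY is set in the environment and the `send`/`output` API from the broader dartantic docs:

```dart
import 'package:dartantic_ai/dartantic_ai.dart';

// Pure Dart: the built-in Google provider, authenticated via GEMINI_API_KEY.
final agent = Agent('google:gemini-2.5-flash');
final result = await agent.send('Hello!');
print(result.output);
```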
Add the package, call Firebase.initializeApp(), then register a factory for
each backend you need. Model strings use the prefixes below.
| Backend | Prefix (alias) | Authentication / setup |
|---|---|---|
| Google AI (Gemini Developer API via Firebase) | firebase-google | Google AI API key in Firebase console; lighter setup |
| Vertex AI (through Firebase) | firebase-vertex | Full Firebase project, billing; optional App Check / Auth |
```dart
import 'package:dartantic_ai/dartantic_ai.dart';
import 'package:dartantic_firebase_ai/dartantic_firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';

await Firebase.initializeApp();

// Register a factory per backend.
Agent.providerFactories['firebase-google'] = () =>
    FirebaseAIProvider(backend: FirebaseAIBackend.googleAI);
Agent.providerFactories['firebase-vertex'] = () =>
    FirebaseAIProvider(backend: FirebaseAIBackend.vertexAI);

// Model strings use the registered prefixes.
final dev = Agent('firebase-google:gemini-2.5-flash');
final prod = Agent('firebase-vertex:gemini-2.5-flash');
```
Platforms: iOS, Android, macOS, and Web (Flutter only). Supports chat,
streaming, tools, thinking, structured output, and media generation; no
embeddings model. See the
dartantic_firebase_ai README
for dependency versions and setup.
Setup
```bash
# Set API keys
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GEMINI_API_KEY="..."
export XAI_API_KEY="..."
```
Usage
```dart
// Basic
Agent('openai');

// With model
Agent('anthropic:claude-3-5-sonnet');

// Chat + embeddings
Agent('openai?chat=gpt-4o&embeddings=text-embedding-3-large');

// Extended thinking (OpenAI Responses, xAI Responses, Anthropic, Google)
Agent('anthropic:claude-sonnet-4-5', enableThinking: true);

// Server-side tools (OpenAI Responses)
Agent('openai-responses:gpt-5');

// xAI Grok — OpenAI-compatible chat completions
Agent('xai');
Agent('grok:grok-4-1-fast-non-reasoning');

// xAI Grok — Responses API (thinking, server-side tools, media models)
Agent('xai-responses');
Agent('xai-responses:grok-4-1-fast-reasoning', enableThinking: true);
```
Find Providers
```dart
// All providers
Agent.allProviders;

// By name (returns a fresh instance)
Agent.createProvider('claude'); // → anthropic provider

// For runtime capability discovery, use Provider.listModels()
final provider = Agent.createProvider('openai');
await for (final model in provider.listModels()) {
  print('${model.name}: ${model.kinds}'); // Shows chat, embeddings, media, etc.
}
```
Custom Config
```dart
final provider = OpenAIProvider(
  apiKey: 'key',
  baseUrl: Uri.parse('https://custom.api.com/v1'),
);
final agent = Agent.forProvider(provider);
```
All providers support custom HTTP headers for enterprise scenarios like
authentication proxies, request tracing, or compliance logging:
```dart
final provider = GoogleProvider(
  apiKey: apiKey,
  headers: {
    'X-Request-ID': requestId,
    'X-Tenant-ID': tenantId,
  },
);
final agent = Agent.forProvider(provider);
```
Custom headers flow through to all API calls and can override internal headers
when needed. This works consistently across OpenAI, Google, Anthropic, Mistral,
Ollama, and xAI providers.
Check Capabilities
Use Provider.listModels() for runtime capability discovery:
```dart
// List models and their capabilities
final provider = Agent.createProvider('openai');
await for (final model in provider.listModels()) {
  print('${model.name}: ${model.kinds}');
  // Output: gpt-4o: {chat}, text-embedding-3-small: {embeddings}
}

// Check whether any of the provider's models supports embeddings
final models = await provider.listModels().toList();
final hasEmbeddings = models.any((m) => m.kinds.contains(ModelKind.embeddings));
if (hasEmbeddings) {
  final agent = Agent('openai');
  final embed = await agent.embedQuery('test');
} else {
  print('Provider does not support embeddings');
}
```
Model Kinds
Models are categorized by kind via `ModelKind` (a model may report multiple
kinds):

- `chat` - Chat / completion models
- `embeddings` - Text embedding models
- `media` - Unified media generation (images, documents, audio, etc.)
- `image` - Image generation or vision models
- `video` - Video generation models
- `audio` - Audio processing (e.g. speech-to-text)
- `tts` - Text-to-speech
- `countTokens` - Token counting
- `other` - Other specialized model types
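For example, the kinds above can filter a provider's model catalog; this sketch combines the `listModels()` stream shown elsewhere on this page with Dart's standard `Stream.where`:

```dart
import 'package:dartantic_ai/dartantic_ai.dart';

// Keep only models that report the `embeddings` kind.
final provider = Agent.createProvider('openai');
final embeddingModels = await provider
    .listModels()
    .where((m) => m.kinds.contains(ModelKind.embeddings))
    .toList();
print(embeddingModels.map((m) => m.name).join('\n'));
```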
The typedOutputWithTools capability indicates a provider can handle both
function calling and structured JSON output in the same request:
| Provider | Support | Implementation |
|---|---|---|
| OpenAI | ✅ | Native response_format |
| OpenAI Responses | ✅ | Native response_format |
| Anthropic | ✅ | return_result tool |
| Google | ✅ | Double agent orchestrator |
| OpenRouter | ✅ | OpenAI-compatible |
| xAI | ✅ | OpenAI-compatible |
| xAI Responses | ✅ | Native response_format (Responses API) |
| Ollama | ❌ | Coming soon |
| Cohere | ❌ | API limitation |
Together AI, Google-OpenAI, and Ollama-OpenAI also support typed output with tools
when registered via the openai_compat.dart example.
Anthropic uses the return_result tool pattern: a special tool through which
the model returns its structured JSON response. The Anthropic provider handles
this automatically.
Google uses a transparent two-phase workflow: Phase 1 executes tools, Phase 2
requests structured output. The Google provider handles this automatically.
List Models
```dart
// List all models from a provider
final provider = Agent.createProvider('openai');
await for (final model in provider.listModels()) {
  final status = model.stable ? 'stable' : 'preview';
  print('${model.name}: ${model.displayName} [$status]');
}
// Example output (names and display labels are illustrative):
// gpt-4-0613: GPT-4 0613 [stable]
// gpt-4: GPT-4 [stable]
// gpt-3.5-turbo: GPT-3.5 Turbo [stable]
```
Examples