The dartantic_ai package ships 10 built-in providers. For Flutter apps that use Gemini through Firebase AI Logic, add the optional dartantic_firebase_ai package and register it (see Firebase AI below). Additional OpenAI-compatible endpoints are available via the openai_compat.dart example.

Provider Capabilities

| Provider | Default Model | Default Embedding Model | Capabilities | Notes |
| --- | --- | --- | --- | --- |
| OpenAI | gpt-4o | text-embedding-3-small | Chat, Embeddings, Vision, Tools, Streaming | Full feature support |
| OpenAI Responses | gpt-5 | text-embedding-3-small | Chat, Embeddings, Vision, Tools, Streaming, Thinking | Includes built-in server-side tools |
| Anthropic | claude-sonnet-4-0 | - | Chat, Vision, Tools, Streaming, Thinking | Extended thinking with token budgets |
| Google | gemini-2.5-flash | text-embedding-004 | Chat, Embeddings, Vision, Tools, Streaming, Thinking | Extended thinking with dynamic budgets |
| Firebase AI | gemini-2.5-flash | - | Chat, Vision, Tools, Streaming, Thinking, Media | Gemini via Firebase SDK (Flutter only) |
| Mistral | mistral-small-latest | mistral-embed | Chat, Embeddings, Tools, Streaming | European servers |
| Cohere | command-r-08-2024 | embed-v4.0 | Chat, Embeddings, Tools, Streaming | RAG-optimized |
| Ollama | qwen2.5:7b-instruct | - | Chat, Tools, Streaming | Local models only |
| OpenRouter | google/gemini-2.5-flash | - | Chat, Vision, Tools, Streaming | Multi-model gateway |
| xAI | grok-4-1-fast-non-reasoning | - | Chat, Vision, Tools, Streaming | OpenAI-compatible chat completions; no embeddings or temperature in Dartantic |
| xAI Responses | grok-4-1-fast-non-reasoning | - | Chat, Vision, Tools, Streaming, Thinking, Server-side tools, Media | Stateful Responses API; default image grok-imagine-image, video grok-imagine-video |
Note: Together AI, Google-OpenAI, and Ollama-OpenAI have been moved to the openai_compat.dart example. You can still use them by registering them with Agent.providerFactories.
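Registering one of these moved providers takes a single factory entry. A hedged sketch, reusing the `apiKey`/`baseUrl` parameters shown in the Custom Config section below; the Together AI endpoint URL and model name here are illustrative — see openai_compat.dart for the exact configuration:

```dart
import 'dart:io';

import 'package:dartantic_ai/dartantic_ai.dart';

void main() {
  // Register a Together AI factory under a custom 'together' prefix.
  // Any OpenAI-compatible endpoint can be wired up the same way.
  Agent.providerFactories['together'] = () => OpenAIProvider(
        apiKey: Platform.environment['TOGETHER_API_KEY']!,
        baseUrl: Uri.parse('https://api.together.xyz/v1'),
      );

  // The custom prefix now works in model strings like any built-in one.
  final agent = Agent('together:meta-llama/Llama-3.3-70B-Instruct-Turbo');
}
```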

Provider Configuration

| Provider | Provider Prefix | Aliases | API Key | Provider Type |
| --- | --- | --- | --- | --- |
| OpenAI | openai | - | OPENAI_API_KEY | OpenAIProvider |
| OpenAI Responses | openai-responses | - | OPENAI_API_KEY | OpenAIResponsesProvider |
| Anthropic | anthropic | claude | ANTHROPIC_API_KEY | AnthropicProvider |
| Google | google | gemini, googleai | GEMINI_API_KEY | GoogleProvider |
| Firebase AI | firebase | - | None (Firebase) | FirebaseAIProvider |
| Mistral | mistral | - | MISTRAL_API_KEY | MistralProvider |
| Cohere | cohere | - | COHERE_API_KEY | CohereProvider |
| Ollama | ollama | - | None (local) | OllamaProvider |
| OpenRouter | openrouter | - | OPENROUTER_API_KEY | OpenAIProvider |
| xAI | xai | grok | XAI_API_KEY | XAIProvider |
| xAI Responses | xai-responses | grok-responses | XAI_API_KEY | XAIResponsesProvider |
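An alias is interchangeable with its provider prefix in model strings (constructing an Agent assumes the corresponding API key is set in the environment). For example:

```dart
import 'package:dartantic_ai/dartantic_ai.dart';

void main() {
  // 'claude' is an alias for the anthropic prefix, and 'gemini'
  // for google, so each pair resolves to the same provider.
  final a1 = Agent('anthropic:claude-sonnet-4-0');
  final a2 = Agent('claude:claude-sonnet-4-0');

  final g1 = Agent('google:gemini-2.5-flash');
  final g2 = Agent('gemini:gemini-2.5-flash');
}
```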

Firebase AI (Flutter add-on)

Use dartantic_firebase_ai when you want Gemini through the Firebase SDK (Firebase AI Logic), including App Check and Firebase Auth on the Vertex AI path. Pure Dart apps should keep using GoogleProvider with GEMINI_API_KEY instead. Add the package, call Firebase.initializeApp(), then register factories for each backend you need. Provider name is firebase_ai; model strings use the aliases below.
| Backend | Prefix (alias) | Authentication / setup |
| --- | --- | --- |
| Google AI (Gemini Developer API via Firebase) | firebase-google | Google AI API key in Firebase console; lighter setup |
| Vertex AI (through Firebase) | firebase-vertex | Full Firebase project, billing; optional App Check / Auth |
```dart
import 'package:dartantic_ai/dartantic_ai.dart';
import 'package:dartantic_firebase_ai/dartantic_firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';

await Firebase.initializeApp();

Agent.providerFactories['firebase-google'] = () =>
    FirebaseAIProvider(backend: FirebaseAIBackend.googleAI);
Agent.providerFactories['firebase-vertex'] = () =>
    FirebaseAIProvider(backend: FirebaseAIBackend.vertexAI);

final dev = Agent('firebase-google:gemini-2.5-flash');
final prod = Agent('firebase-vertex:gemini-2.5-flash');
```
Platforms: iOS, Android, macOS, and Web (Flutter only). Supports chat, streaming, tools, thinking, structured output, and media generation; no embeddings model. See the dartantic_firebase_ai README for dependency versions and setup.

Setup

```sh
# Set API keys
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GEMINI_API_KEY="..."
export XAI_API_KEY="..."
```
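Providers read these variables from the environment, so a CLI app can fail fast before constructing an agent. A small pre-flight check in plain Dart (no dartantic APIs involved):

```dart
import 'dart:io';

void main() {
  // Verify the key for the provider you intend to use is present
  // before any agent is constructed.
  final key = Platform.environment['OPENAI_API_KEY'];
  if (key == null || key.isEmpty) {
    stderr.writeln('OPENAI_API_KEY is not set');
    exitCode = 1;
    return;
  }
  print('OpenAI key found (${key.length} chars)');
}
```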

Usage

```dart
// Basic
Agent('openai');

// With model
Agent('anthropic:claude-3-5-sonnet');

// Chat + embeddings
Agent('openai?chat=gpt-4o&embeddings=text-embedding-3-large');

// Extended thinking (OpenAI Responses, xAI Responses, Anthropic, Google)
Agent('anthropic:claude-sonnet-4-5', enableThinking: true);

// Server-side tools (OpenAI Responses)
Agent('openai-responses:gpt-5');

// xAI Grok — OpenAI-compatible chat completions
Agent('xai');
Agent('grok:grok-4-1-fast-non-reasoning');

// xAI Grok — Responses API (thinking, server-side tools, media models)
Agent('xai-responses');
Agent('xai-responses:grok-4-1-fast-reasoning', enableThinking: true);
```

Find Providers

```dart
// All providers
Agent.allProviders

// By name (returns fresh instance)
Agent.createProvider('claude') // → anthropic provider

// For runtime capability discovery, use Provider.listModels()
final provider = Agent.createProvider('openai');
await for (final model in provider.listModels()) {
  print('${model.name}: ${model.kinds}'); // Shows chat, embeddings, media, etc.
}
```

Custom Config

```dart
final provider = OpenAIProvider(
  apiKey: 'key',
  baseUrl: Uri.parse('https://custom.api.com/v1'),
);
Agent.forProvider(provider);
```

Custom Headers

All providers support custom HTTP headers for enterprise scenarios like authentication proxies, request tracing, or compliance logging:
```dart
final provider = GoogleProvider(
  apiKey: apiKey,
  headers: {
    'X-Request-ID': requestId,
    'X-Tenant-ID': tenantId,
  },
);

final agent = Agent.forProvider(provider);
```
Custom headers flow through to all API calls and can override internal headers when needed. This works consistently across OpenAI, Google, Anthropic, Mistral, Ollama, and xAI providers.

Check Capabilities

Use Provider.listModels() for runtime capability discovery:
```dart
// List models and their capabilities
final provider = Agent.createProvider('openai');
await for (final model in provider.listModels()) {
  print('${model.name}: ${model.kinds}');
  // Output: gpt-4o: {chat}, text-embedding-3-small: {embeddings}
}

// Check if a specific model supports embeddings
final models = await provider.listModels().toList();
final hasEmbeddings = models.any((m) => m.kinds.contains(ModelKind.embeddings));
if (hasEmbeddings) {
  final agent = Agent('openai');
  final embed = await agent.embedQuery('test');
} else {
  print('Provider does not support embeddings');
}
```

Model Kinds

Models are categorized by their kind via ModelKind (a model may report multiple kinds):
  • chat - Chat / completion models
  • embeddings - Text embedding models
  • media - Unified media generation (images, documents, audio, etc.)
  • image - Image generation or vision models
  • video - Video generation models
  • audio - Audio processing (e.g. speech-to-text)
  • tts - Text-to-speech
  • countTokens - Token counting
  • other - Other specialized model types
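Since a model may report several kinds, filtering is a membership test on `kinds`. For example, to list only the embeddings-capable models using the `listModels()` API shown above:

```dart
import 'package:dartantic_ai/dartantic_ai.dart';

Future<void> main() async {
  final provider = Agent.createProvider('openai');

  // Check membership rather than equality: a model may report
  // multiple kinds (e.g. both chat and countTokens).
  final embeddingModels = await provider
      .listModels()
      .where((m) => m.kinds.contains(ModelKind.embeddings))
      .toList();

  for (final m in embeddingModels) {
    print(m.name);
  }
}
```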

Typed Output With Tools

The typedOutputWithTools capability indicates a provider can handle both function calling and structured JSON output in the same request:
| Provider | Support | Implementation |
| --- | --- | --- |
| OpenAI | ✅ | Native response_format |
| OpenAI Responses | ✅ | Native response_format |
| Anthropic | ✅ | return_result tool |
| Google | ✅ | Double agent orchestrator |
| OpenRouter | ✅ | OpenAI-compatible |
| xAI | ✅ | OpenAI-compatible |
| xAI Responses | ✅ | Native response_format (Responses API) |
| Ollama | ❌ | Coming soon |
| Cohere | ❌ | API limitation |
Together AI, Google-OpenAI, and Ollama-OpenAI also support typed output with tools when registered via the openai_compat.dart example. Anthropic uses the return_result tool pattern: a special tool the model calls to return a structured JSON response, handled automatically by the Anthropic provider. Google uses a transparent two-phase workflow, handled automatically by the Google provider: phase 1 executes tools, phase 2 requests structured output.

List Models

```dart
// List all models from a provider
final provider = Agent.createProvider('openai');
await for (final model in provider.listModels()) {
  final status = model.stable ? 'stable' : 'preview';
  print('${model.name}: ${model.displayName} [$status]');
}

// Example output:
// - openai:gpt-4-0613  (chat)
// - openai:gpt-4  (chat)
// - openai:gpt-3.5-turbo  (chat)
```

Examples