Out-of-the-box support for 8 built-in providers, with more available via the openai_compat.dart example.

Provider Capabilities

| Provider | Default Model | Default Embedding Model | Capabilities | Notes |
|---|---|---|---|---|
| OpenAI | gpt-4o | text-embedding-3-small | Chat, Embeddings, Vision, Tools, Streaming | Full feature support |
| OpenAI Responses | gpt-5 | text-embedding-3-small | Chat, Embeddings, Vision, Tools, Streaming, Thinking | Includes built-in server-side tools |
| Anthropic | claude-sonnet-4-0 | - | Chat, Vision, Tools, Streaming, Thinking | Extended thinking with token budgets |
| Google | gemini-2.5-flash | text-embedding-004 | Chat, Embeddings, Vision, Tools, Streaming, Thinking | Extended thinking with dynamic budgets |
| Mistral | mistral-large-latest | mistral-embed | Chat, Embeddings, Tools, Streaming | European servers |
| Cohere | command-r-plus | embed-english-v3.0 | Chat, Embeddings, Tools, Streaming | RAG-optimized |
| Ollama | llama3.2:latest | - | Chat, Tools, Streaming | Local models only |
| OpenRouter | google/gemini-2.5-flash | - | Chat, Vision, Tools, Streaming | Multi-model gateway |
Note: Together AI, Google-OpenAI, and Ollama-OpenAI have been moved to the openai_compat.dart example. You can still use them by registering them with Agent.providerFactories.

Provider Configuration

| Provider | Provider Prefix | Aliases | API Key | Provider Type |
|---|---|---|---|---|
| OpenAI | openai | - | OPENAI_API_KEY | OpenAIProvider |
| OpenAI Responses | openai-responses | - | OPENAI_API_KEY | OpenAIResponsesProvider |
| Anthropic | anthropic | claude | ANTHROPIC_API_KEY | AnthropicProvider |
| Google | google | gemini, googleai | GEMINI_API_KEY | GoogleProvider |
| Mistral | mistral | - | MISTRAL_API_KEY | MistralProvider |
| Cohere | cohere | - | COHERE_API_KEY | CohereProvider |
| Ollama | ollama | - | None (local) | OllamaProvider |
| OpenRouter | openrouter | - | OPENROUTER_API_KEY | OpenAIProvider |

Setup

```sh
# Set API keys
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GEMINI_API_KEY="..."
```
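
Providers read their keys from the environment variables listed in the Provider Configuration table above. Before constructing an agent, you can sanity-check that the keys are actually visible to the Dart process via `Platform.environment` (a small sketch, not part of the library):

```dart
import 'dart:io';

void main() {
  // These names come from the API Key column of the configuration table.
  for (final key in ['OPENAI_API_KEY', 'ANTHROPIC_API_KEY', 'GEMINI_API_KEY']) {
    final isSet = Platform.environment.containsKey(key);
    print('$key: ${isSet ? 'set' : 'missing'}');
  }
}
```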

Usage

```dart
// Basic
Agent('openai');

// With model
Agent('anthropic:claude-3-5-sonnet');

// Chat + embeddings
Agent('openai?chat=gpt-4o&embeddings=text-embedding-3-large');

// Extended thinking (OpenAI Responses, Anthropic, Google)
Agent('anthropic:claude-sonnet-4-5', enableThinking: true);

// Server-side tools (OpenAI Responses)
Agent('openai-responses:gpt-5');
```
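
The `provider?chat=...&embeddings=...` form is ordinary URI query syntax, so the pieces decompose predictably. A quick sketch using Dart's built-in `Uri` (illustrative only, not the library's actual parser):

```dart
void main() {
  // Model strings use URI-style syntax: provider[?chat=...&embeddings=...]
  final uri = Uri.parse('openai?chat=gpt-4o&embeddings=text-embedding-3-large');
  print(uri.path);                          // openai
  print(uri.queryParameters['chat']);       // gpt-4o
  print(uri.queryParameters['embeddings']); // text-embedding-3-large
}
```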

Find Providers

```dart
// All providers
Agent.allProviders;

// By name (returns a fresh instance)
Agent.createProvider('claude'); // → anthropic provider

// For runtime capability discovery, use Provider.listModels()
final provider = Agent.createProvider('openai');
await for (final model in provider.listModels()) {
  print('${model.name}: ${model.kinds}'); // Shows chat, embeddings, media, etc.
}
```

Custom Config

```dart
final provider = OpenAIProvider(
  apiKey: 'key',
  baseUrl: Uri.parse('https://custom.api.com/v1'),
);
Agent.forProvider(provider);
```
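
The same pattern covers any OpenAI-compatible endpoint. For instance, the configuration table above shows OpenRouter is backed by OpenAIProvider, so a custom base URL is all it takes (a sketch; verify the base URL against OpenRouter's own docs):

```dart
import 'dart:io';

// OpenRouter speaks the OpenAI wire protocol, so reuse OpenAIProvider
// with OpenRouter's API base and key.
final provider = OpenAIProvider(
  apiKey: Platform.environment['OPENROUTER_API_KEY']!,
  baseUrl: Uri.parse('https://openrouter.ai/api/v1'),
);
final agent = Agent.forProvider(provider);
```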

Custom Headers

All providers support custom HTTP headers for enterprise scenarios like authentication proxies, request tracing, or compliance logging:
```dart
final provider = GoogleProvider(
  apiKey: apiKey,
  headers: {
    'X-Request-ID': requestId,
    'X-Tenant-ID': tenantId,
  },
);

final agent = Agent.forProvider(provider);
```
Custom headers flow through to all API calls and can override internal headers when needed. This works consistently across OpenAI, Google, Anthropic, Mistral, and Ollama.

Check Capabilities

Use Provider.listModels() for runtime capability discovery:
```dart
// List models and their capabilities
final provider = Agent.createProvider('openai');
await for (final model in provider.listModels()) {
  print('${model.name}: ${model.kinds}');
  // Output: gpt-4o: {chat}, text-embedding-3-small: {embeddings}
}

// Check whether the provider offers any embeddings model
final models = await provider.listModels().toList();
final hasEmbeddings = models.any((m) => m.kinds.contains(ModelKind.embeddings));
if (hasEmbeddings) {
  final agent = Agent('openai');
  final embedding = await agent.embedQuery('test');
} else {
  print('Provider does not support embeddings');
}
```

Model Kinds

Models are categorized by their kind via ModelKind:
  • chat - Chat conversations
  • embeddings - Vector embeddings
  • media - Media generation (images, documents, etc.)
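
Filtering a model catalog by kind is then a plain set-membership test. A self-contained sketch with stand-in data (the enum and catalog here are hypothetical mirrors of the categories and model names above, not the library's types):

```dart
// Hypothetical stand-in mirroring the ModelKind categories above.
enum ModelKind { chat, embeddings, media }

void main() {
  // Stand-in catalog: model name -> set of kinds it supports.
  final catalog = {
    'gpt-4o': {ModelKind.chat},
    'text-embedding-3-small': {ModelKind.embeddings},
  };

  // Keep only the chat-capable models.
  final chatModels = catalog.entries
      .where((e) => e.value.contains(ModelKind.chat))
      .map((e) => e.key)
      .toList();
  print(chatModels); // [gpt-4o]
}
```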

Typed Output With Tools

The typedOutputWithTools capability indicates a provider can handle both function calling and structured JSON output in the same request:
| Provider | Support | Implementation |
|---|---|---|
| OpenAI | Yes | Native response_format |
| OpenAI Responses | Yes | Native response_format |
| Anthropic | Yes | return_result tool |
| Google | Yes | Double agent orchestrator |
| OpenRouter | Yes | OpenAI-compatible |
| Ollama | Not yet | Coming soon |
| Cohere | No | API limitation |
Together AI, Google-OpenAI, and Ollama-OpenAI also support typed output with tools when registered via the openai_compat.dart example. Anthropic's approach uses the return_result tool pattern: a special tool through which the model returns its structured JSON response, handled automatically by the Anthropic provider. Google's approach uses a transparent two-phase workflow: phase 1 executes tools, phase 2 requests structured output; this is handled automatically by the Google provider.

List Models

```dart
// List all models from a provider
final provider = Agent.createProvider('openai');
await for (final model in provider.listModels()) {
  final status = model.stable ? 'stable' : 'preview';
  print('${model.name}: ${model.displayName} [$status]');
}

// Example output:
// - openai:gpt-4-0613  (chat)
// - openai:gpt-4  (chat)
// - openai:gpt-3.5-turbo  (chat)
```

Examples