The dartantic_ai package is an agent framework inspired by pydantic-ai and designed to make building client and server-side apps in Dart with generative AI easier and more fun!

Why Dartantic?

Dartantic was born out of frustration with not being able to easily use generative AI in my Dart and Flutter apps without doing things differently for each model I chose and each type of app I was building, i.e. GUI, CLI, or server-side. It's all Dart, so why can't I use all the models with a single API across all of my apps?

As an example of the kind of app I wanted to build, consider CalPal, a Flutter app that uses Dartantic to build an agentic workflow for managing a user's calendar.

[Screenshot: CalPal in action]

In about 300 LOC, CalPal is able to figure out the events in my schedule from an English phrase. To do this, it first determines the local date and time to understand what "today" even means, then uses that result against the Zapier MCP server connected to my Google Calendar. That multi-step tool usage is built into Dartantic, and it's what makes it an "agentic" framework. Oh, and just for fun, I asked CalPal to add a calendar event based on a picture of the events at my local pool. I can't imagine the person-years of effort that would've been required to build this without generative AI, but I couldn't rest until I had that kind of power in all my Dart and Flutter apps. Combine that with pydantic-ai for inspiration and Dartantic was born. Enjoy!

What is Dartantic AI?

One API across many providers, out of the box:
  • Agentic Multi-Step Tool Calling - Let your AI agents autonomously chain tool calls together to solve multi-step problems without human intervention
  • Multiple Providers Out of the Box - OpenAI, OpenAI Responses, Google, Anthropic, Mistral, Cohere, Ollama, and more
  • OpenAI Compatibility - Access thousands of additional providers via the OpenAI-compatible API that nearly every modern LLM provider implements
  • Streaming Output - Real-time response generation
  • Typed Outputs and Tool Calling - Uses Dart types and JSON serialization
  • Multimedia Input - Process text, images, and files
  • Media Generation - Stream images, PDFs, and other artifacts from OpenAI Responses, Google Gemini (Nano Banana), and Anthropic code execution
  • Embeddings - Vector generation and semantic search
  • Model Reasoning ("Thinking") - Extended reasoning support across OpenAI Responses, Anthropic, and Google
  • Provider-Hosted Server-Side Tools - Web search, file search, image generation, and code interpreter via OpenAI Responses, Anthropic, and Google
  • MCP Support - Model Context Protocol server integration
  • Provider Switching - Switch between AI providers mid-conversation
  • Production Ready - Built-in logging, error handling, and retries
  • Extensible - Easily add custom providers and tools of your own or from your favorite MCP servers
Switch providers with one line of code. All models. Single API. Enjoy!
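As a sketch of what "one line" means in practice, an agent is created from a provider name, optionally pinned to a specific model with a provider:model string (the model names below are illustrative, not a complete list):

```dart
import 'package:dartantic_ai/dartantic_ai.dart';

// Use a provider's default model:
final claude = Agent('anthropic');

// Or pin a specific model with a provider:model string:
final gpt = Agent('openai:gpt-4o');
```

Everything else in your code stays the same; only the string changes.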

Installation

dependencies:
  dartantic_ai: ^VERSION
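Rather than editing pubspec.yaml by hand, the standard Dart tooling can add the dependency and resolve the latest version for you:

```shell
dart pub add dartantic_ai
```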

Quick Examples

Basic Chat

import 'package:dartantic_ai/dartantic_ai.dart';

final agent = Agent('claude'); // or 'openai', 'gemini', etc.
final result = await agent.send('Hello!');
print(result.output);

Streaming

import 'dart:io'; // for stdout

await for (final chunk in agent.sendStream('Tell me a story')) {
  stdout.write(chunk.output);
}

Tools

final weatherTool = Tool(
  name: 'get_weather',
  description: 'Get weather for a location',
  inputSchema: JsonSchema.object({
    'location': JsonSchema.string(),
  }),
  onCall: (args) async => {'temp': 72, 'condition': 'sunny'},
);

final agent = Agent('openai', tools: [weatherTool]);
// The agent calls get_weather automatically and folds the
// result into its answer.
final result = await agent.send('Weather in Seattle?');
print(result.output);

Embeddings

final agent = Agent('openai');
final embed = await agent.embedQuery('Hello world');
print(embed.embeddings.length); // 1536
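Embedding vectors are typically compared with cosine similarity for semantic search. A minimal pure-Dart helper (not part of dartantic_ai; you would feed it the vectors returned by embedQuery) might look like:

```dart
import 'dart:math';

/// Cosine similarity between two equal-length embedding vectors:
/// dot(a, b) / (|a| * |b|). Returns 1.0 for identical directions,
/// 0.0 for orthogonal vectors.
double cosineSimilarity(List<double> a, List<double> b) {
  var dot = 0.0, normA = 0.0, normB = 0.0;
  for (var i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (sqrt(normA) * sqrt(normB));
}
```

Rank documents by similarity to a query embedding to build a simple semantic search over your own data.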

Multi-Provider Conversations

final history = <ChatMessage>[];

// Start with Gemini
final gemini = Agent('google');
final result1 = await gemini.send('Hi, I\'m Alice', history: history);
history.addAll(result1.messages);

// Switch to Claude
final claude = Agent('anthropic');
final result2 = await claude.send('What\'s my name?', history: history);
print(result2.output); // "Your name is Alice"

Examples

See complete working examples:

Next Steps