An agent contains configuration: which provider and model to use, which settings, which tools, and so on. However, an agent carries no context (that is, memory) from one prompt to the next. That context comes from the message history, which you manage yourself and send to the agent along with each prompt.

Basic Example

Each response from an Agent.sendXxx method includes a list of new messages, which you can accumulate into the history of the conversation.
import 'package:dartantic_ai/dartantic_ai.dart';

final agent = Agent('openai');
final history = <ChatMessage>[];

// First turn
var result = await agent.send('My name is Alice', history: history);
history.addAll(result.messages);

// Second turn - remembers
result = await agent.send("What's my name?", history: history);
history.addAll(result.messages);
print(result.output); // "Your name is Alice"
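
After these two turns, the history holds the user and model messages from both exchanges. A quick sanity check is to print each message's role (this assumes ChatMessage exposes a role field; adjust to the API of the version you're using):

// Inspect what you've accumulated. Assumes ChatMessage has a `role`
// field; check against your version of the library.
for (final message in history) {
  print(message.role); // e.g. user, model, user, model
}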

Streaming Example

When using streaming, you can accumulate the messages as they arrive.
import 'dart:io';

import 'package:dartantic_ai/dartantic_ai.dart';

final agent = Agent('openai');
final history = <ChatMessage>[];

const prompt1 = 'My name is Alice.';
await for (final chunk in agent.sendStream(prompt1, history: history)) {
  stdout.write(chunk.output);
  history.addAll(chunk.messages);
}
stdout.writeln();

const prompt2 = 'What is my name?';
await for (final chunk in agent.sendStream(prompt2, history: history)) {
  stdout.write(chunk.output);
  history.addAll(chunk.messages);
}
stdout.writeln();
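
If you stream in several places, this accumulate-as-you-go pattern is easy to factor out. A minimal sketch, using only the API shown above; the sendAndRecord name is ours, not part of the library:

// Hypothetical helper: streams a prompt, echoes the output, and records
// the new messages into the shared history.
Future<void> sendAndRecord(
  Agent agent,
  String prompt,
  List<ChatMessage> history,
) async {
  await for (final chunk in agent.sendStream(prompt, history: history)) {
    stdout.write(chunk.output);
    history.addAll(chunk.messages);
  }
  stdout.writeln();
}

With that in place, each turn becomes a one-liner: await sendAndRecord(agent, 'My name is Alice.', history);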

Multi-Provider Chat

You can pass a shared history to multiple agents, even across providers.
final history = <ChatMessage>[];

final gemini = Agent('google');
var r = await gemini.send('Summarize...', history: history);
print('Gemini: ${r.output}');
history.addAll(r.messages);

final claude = Agent('anthropic');
r = await claude.send('Analyze...', history: history);
print('Claude: ${r.output}');
history.addAll(r.messages);
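
Because the history is an ordinary list of ChatMessage objects, each agent sees the full conversation so far, regardless of which provider produced the earlier messages.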

With Tools

When an agent uses tools during a chat, the tool calls and their results are included in the history and provide context for the next request.
final agent = Agent('openai', tools: [weatherTool]);
final history = <ChatMessage>[];

// Tool calls and their results are recorded in the history
var r = await agent.send('Weather in NYC?', history: history);
print(r.output);
history.addAll(r.messages);

r = await agent.send('How about LA?', history: history);
print(r.output);
history.addAll(r.messages);
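
The weatherTool referenced above is assumed to be defined elsewhere. As a rough sketch of what such a tool might look like, note that the Tool constructor's exact parameter names and callback signature here are assumptions; check them against the library's documentation:

// Hypothetical tool definition. The constructor parameters and the
// onCall signature are assumptions, not the library's confirmed API.
final weatherTool = Tool(
  name: 'weather',
  description: 'Returns the current weather for a given city.',
  onCall: (input) async {
    final city = input['city'] as String;
    // A real implementation would call a weather service here.
    return {'city': city, 'conditions': 'sunny', 'tempF': 72};
  },
);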

Prompt Caching with Provider Sessions

The OpenAI Responses provider attaches metadata to each response message it produces so that you do not need to resend the full history; only the new messages since the last response go over the wire. This enables prompt caching and reduces network traffic. Your code doesn't change: you still pass messages around as normal. This even works across multiple providers; the OpenAI Responses provider can send along all of the messages since its last response, including request/response pairs from other providers. You can control this behavior with the store parameter of the OpenAIResponsesChatModelOptions class.
final agent = Agent(
  'openai-responses',
  chatModelOptions: const OpenAIResponsesChatModelOptions(store: true),
);
When store is true (the default), the provider caches the entire conversation server-side. Set store: false to force stateless behavior, in which the full history is sent with every request.
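
For example, to opt out of server-side storage:

final statelessAgent = Agent(
  'openai-responses',
  chatModelOptions: const OpenAIResponsesChatModelOptions(store: false),
);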
