Streaming Output
Watch your AI think in real time. It's like watching paint dry, but faster.
Basic Streaming
```dart
import 'dart:io';

final agent = Agent('openai');

// Like watching a typist with 200 WPM
await for (final chunk in agent.sendStream('Tell me your best joke')) {
  stdout.write(chunk.output);
}
// "Why don't scientists trust atoms?"
// (dramatic pause as it streams...)
// "Because they make up everything!"
```
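If you also need the complete response after streaming finishes, buffer the chunks as they arrive. A minimal sketch in plain Dart; the `collectStream` helper and the simulated stream are illustrative, not part of the library API:

```dart
import 'dart:async';
import 'dart:io';

// Print each chunk as it arrives while also keeping the full text.
Future<String> collectStream(Stream<String> chunks) async {
  final buffer = StringBuffer();
  await for (final chunk in chunks) {
    stdout.write(chunk); // show output in real time
    buffer.write(chunk); // keep the full text for later use
  }
  return buffer.toString();
}

Future<void> main() async {
  // Simulated chunks standing in for agent.sendStream(...) output.
  final full = await collectStream(Stream.fromIterable(['Hello, ', 'world!']));
  assert(full == 'Hello, world!');
}
```

The same pattern works inside the `await for` loops below: write each chunk to both `stdout` and a `StringBuffer`.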
With Messages
```dart
final agent = Agent('openai');
final history = [ChatMessage.system('You are a comedian.')];

await for (final chunk in agent.sendStream(
  'Tell me your best joke',
  history: history,
)) {
  // Stream the output
  stdout.write(chunk.output);
  // Add new messages to the history
  history.addAll(chunk.messages);
}
```
With Tools
```dart
// Tool calls stream alongside text
await for (final chunk in agent.sendStream(
  'Weather in NYC?',
  tools: [weatherTool],
)) {
  // Tool results integrated into stream
  stdout.write(chunk.output);
}
```
With Typed Output
```dart
// Stream structured JSON
await for (final chunk in agent.sendStream(
  'List 3 facts',
  outputSchema: factsSchema,
)) {
  stdout.write(chunk.output); // Streams JSON chunks
}
```
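Streamed structured output arrives as fragments, so the text is not valid JSON until the stream completes. A sketch of buffering and then decoding, using only `dart:convert`; the `assembleJson` helper and the simulated fragments are assumptions for illustration:

```dart
import 'dart:convert';

// Concatenate streamed fragments into the complete JSON text.
String assembleJson(Iterable<String> chunks) {
  final buffer = StringBuffer();
  for (final chunk in chunks) {
    buffer.write(chunk); // each chunk is a partial slice of the JSON
  }
  return buffer.toString();
}

void main() {
  // Simulated fragments standing in for chunk.output values.
  final fragments = ['{"facts": ["a"', ', "b", "c"]}'];
  final decoded =
      jsonDecode(assembleJson(fragments)) as Map<String, dynamic>;
  assert((decoded['facts'] as List).length == 3);
}
```

Decode only after the stream ends; calling `jsonDecode` on a partial fragment throws a `FormatException`.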
Usage Tracking
```dart
var usage = const LanguageModelUsage();

await for (final chunk in agent.sendStream('Story time')) {
  stdout.write(chunk.output);
  if (chunk.usage.totalTokens != null) {
    usage = chunk.usage; // Final chunk contains usage
  }
}

print('\nTokens used: ${usage.totalTokens}');
```
Next Steps
- Tool Calling - Stream with tools
- Usage Tracking - Monitor streaming costs