Imagine that you’d like to extend the capabilities of your AI Agent with some new abilities. For example, out of the box, an LLM doesn’t know what time it is. If you ask it anyway, you’re not going to get a very good answer. If you want to teach the LLM to tell the time, you need to give it a tool.

Basic Example

You can define a tool with a name, description, and input schema. The input schema describes the JSON arguments the LLM passes to the tool. The onCall function runs when the tool is invoked, and whatever it returns is sent back to the LLM as the tool's result.
final timeTool = Tool(
  name: 'get_time',
  description: 'Get current time in a location',
  inputSchema: JsonSchema.create({
    'type': 'object',
    'properties': {
      'location': {'type': 'string'},
    },
    'required': ['location'],
  }),
  onCall: (args) async {
    // NOTE: insert actual timezone magic here
    return {
      'time': '3:42 PM',
      'location': args['location'],
    };
  },
);

final agent = Agent('openai', tools: [timeTool]);
final result = await agent.send("What time is it in New York?");
print(result.output); // "It's currently 3:42 PM in New York"
Tools are passed to the agent constructor and remain available to the agent for the duration of the conversation. Though not shown here, it's best practice to also list the tools and their metadata in the agent's system message. See System Messages for more information.
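
For example, here's a minimal sketch of what that might look like, assuming the Agent constructor accepts a system prompt parameter (the systemPrompt name here is illustrative; see System Messages for the exact API):
final agent = Agent(
  'openai',
  tools: [timeTool],
  // Illustrative parameter name; see System Messages for the real API.
  systemPrompt: 'You can use these tools:\n'
      '- get_time: Get current time in a location (requires: location)',
);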

Multiple Tools

You can pass multiple tools to the agent constructor; the model decides which ones to call and can invoke several in a single request.
final agent = Agent('anthropic', tools: [
  weatherTool,
  temperatureConverterTool,
  currentDateTimeTool,
]);

// Single request, multiple tool calls
final result = await agent.send(
  "What's the weather in Paris? Convert to Fahrenheit and tell me the time."
);
// Agent calls all three tools automatically
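
The agent handles the tool-call loop for you: it feeds each tool's result back to the model and keeps going until the model produces its final answer.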

Streaming Tool Calls

Tool calls also work with streaming: sendStream executes tools as the model requests them and streams the rest of the response as it arrives.

await for (final chunk in agent.sendStream(
  "Weather in NYC and Boston?",
)) {
  stdout.write(chunk.output); // Streams response as tools execute
}

Error Handling

If you throw an error in a tool, it's caught and returned to the LLM as an error message so the model can (ideally) handle it gracefully.
final tool = Tool(
  name: 'divide',
  description: 'Divide numbers',
  inputSchema: JsonSchema.create({
    'type': 'object',
    'properties': {
      'a': {'type': 'number'},
      'b': {'type': 'number'},
    },
    'required': ['a', 'b'],
  }),
  onCall: (args) async {
    if (args['b'] == 0) throw Exception('Cannot divide by zero');
    return {'result': args['a'] / args['b']};
  },
);
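
As a quick sketch of how this plays out (the model's exact wording will vary):
final agent = Agent('openai', tools: [tool]);
final result = await agent.send('What is 10 divided by 0?');
// The thrown message comes back as the tool result, so the model can
// reply with something like "Dividing by zero is undefined."
print(result.output);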

Automatic Schema Generation

Use json_serializable and soti_schema_plus to generate both the JSON parsing and the input schema from a single class, keeping your tools type-safe:
@SotiSchema()
@JsonSerializable()
class WeatherInput {
  WeatherInput({required this.city});
  
  /// The city to get weather for
  final String city;
  
  factory WeatherInput.fromJson(Map<String, dynamic> json) =>
      _$WeatherInputFromJson(json);
  
  @jsonSchema
  static Map<String, dynamic> get schemaMap => 
      _$WeatherInputSchemaMap;
}

// Use the generated schema
final weatherTool = Tool(
  name: 'get_weather',
  description: 'Get weather for a city',
  inputSchema: JsonSchema.create(WeatherInput.schemaMap),
  onCall: (args) async {
    final input = WeatherInput.fromJson(args);
    return {'temp': 72, 'city': input.city};
  },
);
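
The _$WeatherInputFromJson and _$WeatherInputSchemaMap helpers are produced by code generation, so run dart run build_runner build after changing the class to regenerate them.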
