Code Interpreter
Execute Python code directly in OpenAI's sandboxed environment
The Code Interpreter tool allows models to write and execute Python code in a secure, sandboxed environment managed by OpenAI. This is ideal for data analysis, mathematical computations, file processing, and generating visualizations.
Overview
Code Interpreter runs entirely on OpenAI's infrastructure:
- Automatic container management: OpenAI creates and manages containers automatically
- Python execution: Write and run Python code with common libraries pre-installed
- File handling: Create, read, and process files within the sandbox
- Visualization: Generate charts, graphs, and other visual outputs
Basic Usage
import 'package:dartantic_ai/dartantic_ai.dart';

final agent = Agent(
  'openai-responses:gpt-4o',
  chatModelOptions: const OpenAIResponsesChatOptions(
    serverSideTools: {OpenAIServerSideTool.codeInterpreter},
  ),
);

final response = await agent.send(
  'Calculate the first 20 prime numbers and create a visualization',
);
print(response.output); // The model's final answer, including its analysis
File Access
You can provide files for the code interpreter to work with:
final agent = Agent(
  'openai-responses:gpt-4o',
  chatModelOptions: OpenAIResponsesChatOptions(
    serverSideTools: const {OpenAIServerSideTool.codeInterpreter},
    codeInterpreterConfig: const CodeInterpreterConfig(
      files: ['file-abc123', 'file-def456'], // File IDs from OpenAI Files API
    ),
  ),
);
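If you don't already have file IDs, one way to obtain them is to upload files through the OpenAI Files API directly. The sketch below is not part of dartantic_ai: it assumes package:http is in your pubspec, reads OPENAI_API_KEY from the environment, and uses the 'assistants' purpose value (check the Files API reference for the purpose appropriate to your use case).

import 'dart:convert';
import 'dart:io';

import 'package:http/http.dart' as http;

/// Uploads a local file to the OpenAI Files API and returns its file ID.
/// Sketch only: assumes OPENAI_API_KEY is set and package:http is available.
Future<String> uploadFile(String path) async {
  final request = http.MultipartRequest(
    'POST',
    Uri.parse('https://api.openai.com/v1/files'),
  )
    ..headers['Authorization'] =
        'Bearer ${Platform.environment['OPENAI_API_KEY']}'
    ..fields['purpose'] = 'assistants' // assumption; pick the purpose you need
    ..files.add(await http.MultipartFile.fromPath('file', path));

  final streamed = await request.send();
  final body = jsonDecode(await streamed.stream.bytesToString());
  return body['id'] as String; // e.g. "file-abc123"
}

The returned ID can then be passed to CodeInterpreterConfig(files: [...]) as shown above.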
Streaming Events
Monitor code execution progress through metadata events:
await for (final chunk in agent.sendStream('Analyze this CSV data')) {
  if (chunk.output.isNotEmpty) print(chunk.output);

  final ci = chunk.metadata['code_interpreter'];
  if (ci != null) {
    final stage = ci['stage'];
    switch (stage) {
      case 'started':
        print('Starting code execution...');
        break;
      case 'in_progress':
        print('Running code...');
        break;
      case 'interpreting':
        print('Interpreting results...');
        break;
      case 'code_delta':
        print('Code: ${ci['data']['code']}');
        break;
      case 'completed':
        print('Execution completed');
        break;
    }
  }
}
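To capture the full program the model writes, you can accumulate the code_delta payloads. This is a sketch that assumes, as in the example above, that each code_delta event carries an incremental code fragment under data.code:

final codeBuffer = StringBuffer();

await for (final chunk in agent.sendStream('Analyze this CSV data')) {
  if (chunk.output.isNotEmpty) print(chunk.output);

  final ci = chunk.metadata['code_interpreter'];
  if (ci?['stage'] == 'code_delta') {
    // Assumes each delta is an incremental fragment of the generated Python.
    codeBuffer.write(ci['data']['code']);
  }
}

// The Python program the model executed, assembled from the deltas.
print('Generated code:\n$codeBuffer');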
Data Analysis
final response = await agent.send('''
Analyze this sales data:
- Calculate monthly trends
- Identify top performers
- Create a summary report with visualizations
''');
Mathematical Computations
final response = await agent.send('''
Solve this system of equations:
3x + 2y - z = 1
2x - 2y + 4z = -2
-x + 0.5y - z = 0
''');
File Processing
final response = await agent.send('''
Process the uploaded CSV file:
1. Clean missing data
2. Calculate statistics
3. Generate a cleaned output file
''');
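Output files written by the code interpreter live inside the container rather than in the chat response. At the time of writing, OpenAI exposes a Containers API for listing and downloading those files; the sketch below assumes those endpoint paths, package:http, and an OPENAI_API_KEY environment variable, and takes a container ID captured from streaming metadata (see Reusing Containers below). Verify the paths against the current OpenAI API reference before relying on this.

import 'dart:convert';
import 'dart:io';

import 'package:http/http.dart' as http;

/// Sketch: list the files a code interpreter run left in its container and
/// download the first one. Endpoint paths are an assumption based on the
/// OpenAI Containers API; confirm them in the API reference.
Future<void> downloadContainerFile(String containerId) async {
  final headers = {
    'Authorization': 'Bearer ${Platform.environment['OPENAI_API_KEY']}',
  };

  // List files in the container.
  final listResponse = await http.get(
    Uri.parse('https://api.openai.com/v1/containers/$containerId/files'),
    headers: headers,
  );
  final files = jsonDecode(listResponse.body)['data'] as List;
  if (files.isEmpty) return;

  // Download the first file's raw bytes to a local path (placeholder name).
  final fileId = files.first['id'];
  final contentResponse = await http.get(
    Uri.parse(
      'https://api.openai.com/v1/containers/$containerId/files/$fileId/content',
    ),
    headers: headers,
  );
  await File('output.csv').writeAsBytes(contentResponse.bodyBytes);
}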
Container Management
OpenAI automatically manages containers with these characteristics:
- Auto-creation: Containers are created on-demand when container: {"type": "auto"} is specified
- Session persistence: Containers remain active for 1 hour with a 30-minute idle timeout
- Isolation: Each session gets its own isolated environment
- Cost: $0.03 per container session
Reusing Containers
You can reuse a container from a previous session to maintain state and avoid recreation costs:
// First session - container is created automatically
final agent1 = Agent(
  'openai-responses:gpt-4o',
  chatModelOptions: const OpenAIResponsesChatOptions(
    serverSideTools: {OpenAIServerSideTool.codeInterpreter},
  ),
);

String? containerId;
await for (final chunk in agent1.sendStream('Create a dataset')) {
  final ci = chunk.metadata['code_interpreter'];
  if (ci?['data']?['container_id'] != null) {
    containerId = ci['data']['container_id'];
  }
}

// Second session - reuse the same container
final agent2 = Agent(
  'openai-responses:gpt-4o',
  chatModelOptions: OpenAIResponsesChatOptions(
    serverSideTools: const {OpenAIServerSideTool.codeInterpreter},
    codeInterpreterConfig: CodeInterpreterConfig(
      containerId: containerId, // Reuse previous container
    ),
  ),
);

// The dataset from the first session is still available
await agent2.send('Analyze the dataset we created earlier');
Available Libraries
The Python environment includes common data science and visualization libraries:
- NumPy, Pandas, SciPy
- Matplotlib, Seaborn, Plotly
- Scikit-learn
- SymPy for symbolic math
- Many other standard libraries
Limitations
- Code execution timeout: 2 minutes per execution
- Memory limit: 2GB per container
- No internet access from within code
- No system calls or shell commands
- Files created are temporary and cleared after session
Cost Considerations
- Token costs for input/output
- $0.03 per container session
- Additional charges for file storage if using uploaded files
Error Handling
The model automatically handles common errors and can retry or adjust code:
// The model will debug and fix code errors automatically
final response = await agent.send(
  'Read the CSV file and handle any encoding or format issues',
);
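Failures inside the sandbox are handled by the model itself, but request-level failures (network problems, authentication, rate limits, invalid configuration) still surface as Dart exceptions. A minimal sketch of guarding against those, using a generic catch because the exact exception types depend on your setup:

try {
  final response = await agent.send(
    'Read the CSV file and handle any encoding or format issues',
  );
  print(response.output);
} on Exception catch (e) {
  // Request-level failures end up here; Python errors inside the sandbox
  // are retried and fixed by the model instead.
  print('Request failed: $e');
}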
Complete Example
import 'dart:io';

import 'package:dartantic_ai/dartantic_ai.dart';

void main() async {
  final agent = Agent(
    'openai-responses:gpt-4o',
    chatModelOptions: const OpenAIResponsesChatOptions(
      serverSideTools: {OpenAIServerSideTool.codeInterpreter},
    ),
  );

  print('Running data analysis with Code Interpreter...\n');

  await for (final chunk in agent.sendStream('''
Generate a dataset of 100 random points following a sine wave with noise.
Then:
1. Fit a polynomial regression model
2. Calculate R-squared value
3. Create a visualization showing the data and fitted curve
4. Provide insights about the model fit
''')) {
    if (chunk.output.isNotEmpty) stdout.write(chunk.output);

    // Optional: Monitor execution stages
    final ci = chunk.metadata['code_interpreter'];
    if (ci?['stage'] == 'code_delta') {
      print('\n[Executing Python code...]');
    }
  }
}
See Also
- Server-Side Tools Overview
- File Search - Search through uploaded documents
- Web Search - Search the internet for current information
- Image Generation - Create images with DALL-E