## What can you do with it?

Use OpenAI models for general reasoning, chat, multimodal tasks, and advanced coding. Choose from various GPT models optimized for different performance and cost requirements. Both synchronous processing for immediate results and asynchronous processing for long-running tasks are supported.

## How to use it?
### Basic Command Structure
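The original command snippet is not reproduced on this page. As a minimal sketch, assuming the command takes a JSON-style payload whose keys follow the parameter list below (the exact key spellings are assumptions), a request looks roughly like this:

```python
# Minimal sketch of the command payload (assumed JSON-style fields).
# Only `prompt` is required; everything else is optional (see Parameters).
payload = {
    "prompt": "Summarize the attached report in three bullet points.",
    "model": "gpt-4.1",  # optional; this is also the default
}
```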
### Parameters
**Required:**

- `prompt` - Your instructions or questions

**Optional:**

- `model` - Specific GPT model to use (defaults to `gpt-4.1`)
- `system prompt` - Override the default system prompt
- `files` - File URLs to include in the request (see LLM File Type Support for supported formats)
- `tools` - Function tools for the model to use
- `tool choices` - Control how the model uses tools
- `async` - Set to `true` for long-running tasks that should process in the background
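Taken together, a request that exercises every documented parameter might look like the following sketch. The key spellings (`systemPrompt`, `toolChoice`) and the shape of the `tools` entries follow the OpenAI function-tool convention and are assumptions, not confirmed by this page; the URL is a placeholder.

```python
# Illustrative payload using every documented parameter (key spellings assumed).
payload = {
    "prompt": "Review the attached code and list potential bugs.",
    "model": "gpt-5",                                       # override the gpt-4.1 default
    "systemPrompt": "You are a strict code reviewer.",      # override the default system prompt
    "files": ["https://example.com/uploads/service.py"],    # placeholder file URL
    "tools": [                                               # OpenAI-style function tool (assumed shape)
        {
            "type": "function",
            "name": "file_bug",
            "description": "File a bug report",
            "parameters": {
                "type": "object",
                "properties": {"title": {"type": "string"}},
                "required": ["title"],
            },
        }
    ],
    "toolChoice": "auto",   # let the model decide whether to call the tool
    "async": False,         # run synchronously and return the result directly
}
```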
### Response Format

**Synchronous Response:**
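The original response example is not included here. As a rough, assumed sketch: a synchronous call returns the model output directly, while an asynchronous call returns the tracking fields named later on this page (`responseId`, `statusUrl`). All other field names below are hypothetical.

```python
# Hypothetical shape of a synchronous response (field names are assumptions).
sync_response = {
    "model": "gpt-4.1",
    "output": "Here are the three bullet points you asked for: ...",
}

# For async requests, the documented fields are responseId and statusUrl,
# returned immediately while processing continues in the background.
async_response = {
    "responseId": "resp_123",                              # hypothetical example value
    "statusUrl": "https://example.com/status/resp_123",    # placeholder URL
}
```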
## Examples

### Basic Usage
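The page's original example is not reproduced; as a minimal sketch, a basic request supplies only the required parameter:

```python
# Simplest possible request: just the required prompt.
payload = {
    "prompt": "Explain the difference between threads and processes in Python.",
}
```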
### Advanced Usage
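Again as an assumed sketch (key spellings and URLs are illustrative), an advanced request combines a model override, a custom system prompt, and input files:

```python
# Sketch: model override, custom system prompt, and file inputs.
payload = {
    "prompt": "Compare these two contracts and summarize the differences.",
    "model": "gpt-5",
    "systemPrompt": "You are a meticulous legal analyst. Cite clause numbers.",
    "files": [
        "https://example.com/uploads/contract_a.pdf",   # placeholder URLs
        "https://example.com/uploads/contract_b.pdf",
    ],
}
```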
### Specific Use Case
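The specific scenario from the original page is not available here. One plausible use case, sketched under the same assumptions, is structured extraction through a single function tool; the tool definition and the `toolChoice` syntax follow the OpenAI function-tool convention and are not confirmed by this page.

```python
# Sketch: constraining the model to answer through one function tool.
payload = {
    "prompt": "Extract the invoice number and total amount from the attached file.",
    "files": ["https://example.com/uploads/invoice.pdf"],   # placeholder URL
    "tools": [
        {
            "type": "function",
            "name": "record_invoice",
            "description": "Record an extracted invoice",
            "parameters": {
                "type": "object",
                "properties": {
                    "invoice_number": {"type": "string"},
                    "total": {"type": "number"},
                },
                "required": ["invoice_number", "total"],
            },
        }
    ],
    "toolChoice": {"type": "function", "name": "record_invoice"},   # assumed syntax
}
```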
### Async Processing
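Sketch of an async request: setting `async` to `true` makes the command return immediately with a `responseId` and `statusUrl` (see Notes below); everything else matches the synchronous form.

```python
# Sketch: long-running request handed off to background processing.
payload = {
    "prompt": "Produce a detailed refactoring plan for the attached repository archive.",
    "model": "gpt-5",
    "files": ["https://example.com/uploads/repo.zip"],   # placeholder URL
    "async": True,   # documented behavior: returns responseId + statusUrl immediately
}
```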
## Notes
**When to use Async:**

- Large document processing (multiple files or very large files)
- Complex coding projects that might take several minutes
- Batch analysis tasks
- Any task that might timeout with synchronous processing

**How Async behaves:**

- Returns immediately with a `responseId` for tracking
- Results are saved to the “Multimedia Artifact” file storage collection
- Use the `statusUrl` to check processing status (see the polling sketch below)
- Supports webhook notifications via the `triggerUrls` parameter
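A minimal sketch of tracking an async request, assuming the documented `statusUrl` can be polled with a plain GET and returns JSON; the status field names (`status`, `result`) are illustrative guesses, and the webhook mechanism via `triggerUrls` is only named, not specified, on this page.

```python
import time
import requests

def wait_for_result(status_url: str, poll_seconds: float = 10.0) -> dict:
    """Poll the documented statusUrl until the background task finishes.

    Assumes the status endpoint returns JSON with a completion flag;
    the exact field names ("status", "result") are illustrative guesses.
    """
    while True:
        status = requests.get(status_url, timeout=30).json()
        if status.get("status") in ("completed", "failed"):
            return status
        time.sleep(poll_seconds)

# Usage with the statusUrl returned by an async request:
# result = wait_for_result("https://example.com/status/resp_123")
```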
## Supported Models

Choose the appropriate OpenAI model based on your specific needs:

- `gpt-5` - Best model for coding and agentic tasks
- `gpt-5-nano` - Fastest, most cost-efficient GPT-5
- `gpt-5-mini` - Faster, cost-efficient GPT-5
- `gpt-4.1` (default) - Advanced coding, long context
- `gpt-4.1-mini` - Fast coding, scalable tasks
- `openai-deep-research` - Extensive web research, comprehensive reports