What can you do with it?

Use OpenAI models for general reasoning, chat, multimodal tasks, and advanced coding. Choose from various GPT models optimized for different performance and cost requirements. Supports both synchronous processing for immediate results and asynchronous processing for long-running tasks.

How to use it?

Basic Command Structure

/openai [prompt] [optional-parameters]

Parameters

Required:
  • prompt - Your instructions or questions
Optional:
  • model - Specific GPT model to use (defaults to gpt-4.1)
  • system prompt - Override the default system prompt
  • files - File URLs to include in the request (see LLM File Type Support for supported formats)
  • tools - Function tools for the model to use
  • tool choices - Control how the model uses tools
  • async - Set to true for long-running tasks that should process in the background

Response Format

Synchronous Response:
{
  "response": "Model's generated response",
  "format": "Response format (JSON/plaintext/markdown/HTML)",
  "metadata": {
    "model": "Model used",
    "tokens": "Token count"
  }
}
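The synchronous response is plain JSON, so the fields above can be read directly once the reply is parsed. A minimal Python sketch (the payload below is an illustrative sample matching the documented field names, not a real API response):

```python
import json

# Illustrative synchronous response (field names from the format above;
# the values are made up for demonstration).
raw = """
{
  "response": "Here is a draft marketing strategy...",
  "format": "markdown",
  "metadata": {
    "model": "gpt-4.1",
    "tokens": 512
  }
}
"""

data = json.loads(raw)
print(data["metadata"]["model"])  # which model handled the request
print(data["format"])             # tells you how to render the "response" field
```

Checking `format` before rendering matters because the same field may hold JSON, plaintext, markdown, or HTML.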
Asynchronous Response:
{
  "responseId": "resp_12345...",
  "status": "queued",
  "message": "Process started. Results will be saved to file storage when complete.",
  "statusUrl": "/llm/gpt/async/resp_12345...",
  "outputFileName": "openai-result-12345.txt"
}
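One way to consume the asynchronous flow is to poll the returned statusUrl until the task finishes. A minimal sketch, assuming the status endpoint returns JSON like the response above and that "completed"/"failed" are the terminal states (the fetch_status callable and those state names are assumptions for illustration, not a documented client API):

```python
import time

def poll_async_result(fetch_status, status_url, interval=5.0, max_attempts=60):
    """Poll an async status endpoint until it reports completion.

    fetch_status: callable taking a URL and returning the parsed status JSON.
    It is injected here so the sketch stays transport-agnostic; in practice it
    would wrap an HTTP GET against the statusUrl.
    """
    for _ in range(max_attempts):
        status = fetch_status(status_url)
        if status.get("status") == "completed":  # assumed terminal state
            return status
        if status.get("status") == "failed":     # assumed error state
            raise RuntimeError(f"async task failed: {status}")
        time.sleep(interval)
    raise TimeoutError(f"no result after {max_attempts} polls of {status_url}")

# Demo with a stubbed fetcher that reports "completed" on the third poll.
calls = {"n": 0}
def fake_fetch(url):
    calls["n"] += 1
    state = "completed" if calls["n"] >= 3 else "queued"
    return {"responseId": "resp_12345", "status": state,
            "outputFileName": "openai-result-12345.txt"}

result = poll_async_result(fake_fetch, "/llm/gpt/async/resp_12345", interval=0)
print(result["outputFileName"])  # file to retrieve from storage once done
```

Once the status reports completion, the result is read from the file named in outputFileName rather than from the status response itself.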

Examples

Basic Usage

/openai
prompt: Create a marketing strategy for a new product launch
Get a response from GPT for strategic planning and creative tasks.

Advanced Usage

/openai
prompt: Analyze this code and suggest optimizations
files: code_file.py
model: gpt-4.1
Use a specific model with file context for code analysis and optimization.

Specific Use Case

/openai
prompt: Summarize these customer reviews
model: gpt-4o-mini
Fast, cost-effective analysis using the mini model for simple tasks.

Async Processing

/openai
prompt: Analyze this large dataset and generate a comprehensive report
files: large_dataset.csv
model: gpt-4.1
async: true
Long-running task that processes in the background and saves results to file storage.

Notes

When to use Async:
  • Large document processing (multiple files or very large files)
  • Complex coding projects that might take several minutes
  • Batch analysis tasks
  • Any task that might timeout with synchronous processing
Async Processing:
  • Returns immediately with a responseId for tracking
  • Results are saved to the “Multimedia Artifact” file storage collection
  • Use the statusUrl to check processing status
  • Supports webhook notifications via triggerUrls parameter

Supported Models

Choose the appropriate OpenAI model based on your specific needs:
  • gpt-5 - Best model for coding and agentic tasks
  • gpt-5-nano - Fastest, most cost-efficient GPT-5
  • gpt-5-mini - Faster, cost-efficient GPT-5
  • gpt-4.1 (default) - Advanced coding, long context
  • gpt-4.1-mini - Fast coding, scalable tasks
  • openai-deep-research - Extensive web research, comprehensive reports