The /groq command enables you to interact with Groq’s high-speed language models. Perfect for:

  • Fast text generation
  • Document analysis
  • Complex reasoning tasks
  • Multi-file processing
  • JSON-structured outputs

Basic Usage

Use the command to interact with Groq models:

/groq analyze this document and summarize key points
/groq use model llama-3.3-70b to write a technical report
/groq process these files and extract data as JSON
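A command like the ones above has to be split into an optional model selection and the remaining prompt. The sketch below shows one plausible way to do that; the parsing rules, function name, and regular expression are assumptions for illustration, not the command's actual implementation.

```python
import re

# Documented default model for /groq (see "Available Models" below).
DEFAULT_MODEL = "llama-4-maverick-17b"

def parse_groq_command(text: str) -> dict:
    """Split '/groq use model X to ...' into a model name and a prompt.

    Hypothetical parser: if the body starts with 'use [model] <name>',
    that name is selected; otherwise the documented default applies.
    """
    body = text.removeprefix("/groq").strip()
    match = re.match(r"use (?:model )?(\S+)\s+(?:to\s+)?(.*)", body)
    if match:
        return {"model": match.group(1), "prompt": match.group(2)}
    return {"model": DEFAULT_MODEL, "prompt": body}
```

For example, `parse_groq_command("/groq use model llama-3.3-70b to write a technical report")` would yield `llama-3.3-70b` as the model and the rest as the prompt, while a bare query falls back to the default model.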

Key Features

Model Selection

  • Multiple model options
  • Varying context lengths
  • Different speed tiers
  • Specialized capabilities
  • Automatic model selection

File Processing

  • Direct file URL handling
  • Multi-file support
  • No content retrieval needed
  • Efficient processing
  • Batch operations

Output Formats

  • JSON (default)
  • Plain text
  • Markdown
  • HTML
  • Custom formats
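The `format as <name>:` directive shown later in this document can be modeled as a simple prefix check against the known formats, falling back to the JSON default. This is a hedged sketch of one way such a directive might be recognized; the function and format set are assumptions based on this document, not the command's real parser.

```python
# Formats listed above; JSON is the documented default.
KNOWN_FORMATS = {"json", "plaintext", "markdown", "html"}

def extract_format(body: str) -> tuple:
    """Return (format, remaining prompt) from the text following /groq.

    Hypothetical: 'format as markdown: ...' selects markdown; anything
    else keeps the default JSON output.
    """
    if body.startswith("format as "):
        fmt, _, rest = body.removeprefix("format as ").partition(":")
        fmt = fmt.strip().lower()
        if fmt in KNOWN_FORMATS:
            return fmt, rest.strip()
    return "json", body
```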

Example Commands

Basic Query

/groq explain quantum computing in simple terms

With Model Selection

/groq use deepseek-r1-distill-llama-70b to solve this complex math problem

Process Files

/groq analyze these documents [file1.pdf, file2.txt] and compare their content

Custom Output Format

/groq format as markdown: create a table comparing these products

Long Context Task

/groq use llama-4-maverick-17b with 131k context to analyze this book

Available Models

Fast & Efficient

  • gemma2-9b-it: Very fast, 8K context
  • llama-3.1-8b-instant: Very fast, 131K context
  • llama-4-scout-17b: Very fast, 16K context

Balanced Performance

  • llama-3.3-70b-versatile: Fast, 32K context
  • mixtral-8x7b-32768: Fast, 32K context
  • llama-4-maverick-17b: Fast, 131K context (default)

Advanced Reasoning

  • deepseek-r1-distill-llama-70b: Complex problem solving, 8K context
  • llama-3.1-70b-versatile: Long context versatile tasks, 131K context

Model Selection Guide

Choose based on your needs:

  • Quick responses: Use instant or scout models
  • Long documents: Use maverick or llama-3.1 models
  • Complex reasoning: Use deepseek model
  • General purpose: Use versatile models
  • Default choice: llama-4-maverick-17b
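The guide above can be sketched as a small lookup table. The mapping below encodes this document's recommendations; the helper function and key names are illustrative assumptions, not part of the /groq command itself.

```python
# Assumed mapping from need to model, taken from the selection guide above.
MODEL_FOR_NEED = {
    "quick_response": "llama-3.1-8b-instant",
    "long_document": "llama-4-maverick-17b",
    "complex_reasoning": "deepseek-r1-distill-llama-70b",
    "general_purpose": "llama-3.3-70b-versatile",
}

def pick_model(need: str) -> str:
    """Fall back to the documented default when no need matches."""
    return MODEL_FOR_NEED.get(need, "llama-4-maverick-17b")
```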

Output Control

JSON Format (Default)

/groq extract product details from this description

Plain Text

/groq format as plaintext: write a story about robots

Markdown

/groq format as markdown: create documentation for this API

Custom System Prompt

/groq with prompt "You are a data scientist" analyze this dataset
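Under the hood, a custom system prompt presumably becomes the `system` message in an OpenAI-style chat request, which is the format the Groq API follows. The sketch below only builds the request body and sends nothing; the function name and default are assumptions for illustration.

```python
def build_payload(system_prompt: str, user_prompt: str,
                  model: str = "llama-4-maverick-17b") -> dict:
    """Assemble a chat.completions-style request body (not sent anywhere).

    The system message carries the custom prompt; the user message
    carries the task text that follows it in the /groq command.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }
```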

Tips

  • Default model handles most tasks well
  • Use specific models for specialized needs
  • JSON output is default for structured data
  • Include file URLs directly in commands
  • Specify format explicitly when needed