/embedded-groq | Type: Embedded | PCID required: No
Generate text at high speed using Groq-hosted open-source models, including Llama, Qwen, and Kimi.
Tools
| Tool | Description |
|---|---|
| embedded-groq_generate | Generate text using Groq models |
embedded-groq_generate
Generate text using a Groq-hosted open-source model at high speed. Supports system prompts and file analysis.

Parameters:

| Parameter | Type | Required | Description |
|---|---|---|---|
| model | enum | No | Model to use; defaults to llama-3.1-8b-instant |
| systemPrompt | string | No | System prompt to guide model behavior |
| userPrompt | string | Yes | User prompt or question |
| fileUrls | string[] | No | URLs of files to analyze |
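
For illustration, here is a minimal sketch of a request payload built from the parameter table above. The `GroqGenerateParams` interface name is an assumption for the example; only the field names, types, and default come from this documentation.

```typescript
// Request shape for embedded-groq_generate, derived from the parameter table.
// The interface name is illustrative; only the fields themselves are documented.
interface GroqGenerateParams {
  model?: string;        // optional; defaults to "llama-3.1-8b-instant"
  systemPrompt?: string; // optional system prompt to guide behavior
  userPrompt: string;    // required user prompt or question
  fileUrls?: string[];   // optional URLs of files to analyze
}

// Example payload overriding the default model and attaching a file.
const params: GroqGenerateParams = {
  model: "llama-3.3-70b-versatile",
  systemPrompt: "You are a concise technical assistant.",
  userPrompt: "Summarize the attached report in three bullet points.",
  fileUrls: ["https://example.com/report.pdf"],
};
```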
Output:

| Field | Type | Description |
|---|---|---|
| output | string or object | Generated text or structured output |
| metadata | object | Response metadata |
| metadata.model | string | Model used for generation |
| metadata.usage | object | Token usage statistics |
| metadata.usage.promptTokens | number | Number of input tokens |
| metadata.usage.completionTokens | number | Number of output tokens |
| metadata.usage.totalTokens | number | Total tokens used |
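
The response shape implied by the field table can be sketched as follows; the interface names are illustrative assumptions, while the field names and types mirror the table.

```typescript
// Response shape for embedded-groq_generate, mirroring the field table above.
// Interface names are assumptions for the sketch; fields come from the docs.
interface GroqUsage {
  promptTokens: number;     // number of input tokens
  completionTokens: number; // number of output tokens
  totalTokens: number;      // total tokens used
}

interface GroqMetadata {
  model: string;    // model used for generation
  usage: GroqUsage; // token usage statistics
}

interface GroqGenerateResult {
  output: string | object; // generated text or structured output
  metadata: GroqMetadata;  // response metadata
}
```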
Supported models:

| Model | Description |
|---|---|
| groq/compound | Groq compound model |
| groq/compound-mini | Groq compound model (mini) |
| llama-3.1-8b-instant | Llama 3.1 8B instant inference (default) |
| llama-3.3-70b-versatile | Llama 3.3 70B versatile |
| openai/gpt-oss-120b | GPT open-source 120B |
| openai/gpt-oss-20b | GPT open-source 20B |
| meta-llama/llama-guard-4-12b | Llama Guard 4 12B safety model |
| meta-llama/llama-4-maverick-17b-128e-instruct | Llama 4 Maverick 17B 128E instruct |
| meta-llama/llama-4-scout-17b-16e-instruct | Llama 4 Scout 17B 16E instruct |
| meta-llama/llama-prompt-guard-2-22m | Llama Prompt Guard 2 22M |
| meta-llama/llama-prompt-guard-2-86m | Llama Prompt Guard 2 86M |
| moonshotai/kimi-k2-instruct-0905 | Kimi K2 instruct |
| openai/gpt-oss-safeguard-20b | GPT open-source safeguard 20B |
| qwen/qwen3-32b | Qwen 3 32B |
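
For type-safe model selection, the documented identifiers above can be captured locally as a union type; this is a convenience sketch, not an API exposed by the connector.

```typescript
// Union of the documented model identifiers; purely a local convenience type.
type GroqModel =
  | "groq/compound"
  | "groq/compound-mini"
  | "llama-3.1-8b-instant" // default
  | "llama-3.3-70b-versatile"
  | "openai/gpt-oss-120b"
  | "openai/gpt-oss-20b"
  | "meta-llama/llama-guard-4-12b"
  | "meta-llama/llama-4-maverick-17b-128e-instruct"
  | "meta-llama/llama-4-scout-17b-16e-instruct"
  | "meta-llama/llama-prompt-guard-2-22m"
  | "meta-llama/llama-prompt-guard-2-86m"
  | "moonshotai/kimi-k2-instruct-0905"
  | "openai/gpt-oss-safeguard-20b"
  | "qwen/qwen3-32b";
```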

