/gemini | Type: Application | PCID required: Yes
Google AI models
## Tools
| Tool | Description |
|---|---|
| gemini_generate_content | Generate content using Google Gemini models |
| gemini_chat | Have a conversation using Gemini models |
| gemini_analyze_image | Analyze an image using Gemini vision capabilities |
| gemini_embed_content | Create embeddings for text using Gemini models |
| gemini_list_models | List available Gemini models |
| gemini_count_tokens | Count tokens in text for Gemini models |
| gemini_generate_code | Generate code using Gemini models optimized for programming |
| gemini_summarize_text | Summarize long text using Gemini models |
### gemini_generate_content

Generate content using Google Gemini models.

Parameters:

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| model | string | No | "gemini-2.5-flash" | Gemini model to use |
| prompt | string | Yes | — | Text prompt for content generation |
| temperature | number | No | — | Sampling temperature |
| topP | number | No | — | Nucleus sampling parameter |
| topK | number | No | — | Top-k sampling parameter |
| maxOutputTokens | number | No | — | Maximum tokens to generate |
| stopSequences | string[] | No | — | Stop sequences to end generation |
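As a worked example, the snippet below assembles a request for this tool as a plain dict. The field names and the "gemini-2.5-flash" default come from the table above; the flat key/value payload shape and the helper name are assumptions for illustration.

```python
# Hypothetical helper that assembles a gemini_generate_content payload.
# Field names and the default model come from the parameter table; the
# flat-dict request shape is an assumption.
def build_generate_content_request(prompt: str,
                                   model: str = "gemini-2.5-flash",
                                   **options) -> dict:
    if not prompt:
        raise ValueError("prompt is required")
    request = {"model": model, "prompt": prompt}
    # Optional sampling knobs are passed through only when supplied.
    for key in ("temperature", "topP", "topK", "maxOutputTokens", "stopSequences"):
        if key in options:
            request[key] = options[key]
    return request

req = build_generate_content_request(
    "Explain nucleus sampling in one sentence.",
    temperature=0.7,
    maxOutputTokens=128,
)
```

Only `prompt` is required; unset optional parameters are simply omitted rather than sent as nulls.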
### gemini_chat

Have a conversation using Gemini models.

Parameters:

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| model | string | No | "gemini-2.5-flash" | Gemini model to use |
| messages | object[] | Yes | — | Conversation history |
| temperature | number | No | — | Sampling temperature |
| maxOutputTokens | number | No | — | Maximum tokens to generate |
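A sketch of a chat payload. The table only specifies `messages` as `object[]`; the `{"role", "content"}` message shape shown here is an assumption for illustration.

```python
# Hypothetical gemini_chat payload; the role/content message shape is an
# assumption -- the table only documents messages as object[].
messages = [
    {"role": "user", "content": "What is top-k sampling?"},
    {"role": "model", "content": "It restricts sampling to the k most likely tokens."},
    {"role": "user", "content": "How does that differ from nucleus sampling?"},
]
chat_payload = {
    "model": "gemini-2.5-flash",  # table default
    "messages": messages,         # required: the full conversation history
    "maxOutputTokens": 512,
}
```

Note that the caller carries the whole history on every call; the tool itself is stateless as far as the table documents.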
### gemini_analyze_image

Analyze an image using Gemini vision capabilities.

Parameters:

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| model | string | No | "gemini-2.5-flash" | Gemini model with vision capabilities |
| imageUrl | string | No | — | URL of image to analyze |
| imageBase64 | string | No | — | Base64 encoded image data |
| prompt | string | Yes | — | Question or instruction about the image |
| temperature | number | No | — | Sampling temperature |
| maxOutputTokens | number | No | — | Maximum tokens to generate |
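Since `imageUrl` and `imageBase64` are each individually optional, a caller presumably supplies exactly one image source. A hedged sketch enforcing that reading (the helper name and the exactly-one rule are assumptions):

```python
import base64

# Hypothetical helper for gemini_analyze_image. The table marks both image
# parameters optional; requiring exactly one of them is an assumption.
def build_image_request(prompt: str, image_url=None, image_base64=None) -> dict:
    if (image_url is None) == (image_base64 is None):
        raise ValueError("supply exactly one of imageUrl or imageBase64")
    request = {"model": "gemini-2.5-flash", "prompt": prompt}
    if image_url is not None:
        request["imageUrl"] = image_url
    else:
        request["imageBase64"] = image_base64
    return request

raw = b"<png bytes>"  # placeholder; real use would read an image file
req = build_image_request(
    "Describe this image.",
    image_base64=base64.b64encode(raw).decode("ascii"),
)
```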
### gemini_embed_content

Create embeddings for text using Gemini models.

Parameters:

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| model | string | No | "text-embedding-004" | Gemini embedding model |
| content | string | Yes | — | Text content to embed |
| taskType | string | No | — | Task type for embedding optimization |
| title | string | No | — | Optional title for the content |
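An example payload. `RETRIEVAL_DOCUMENT` is one of the task types documented for the Gemini embedding API; whether this tool passes such values through unchanged is an assumption.

```python
# Hypothetical gemini_embed_content payload. "RETRIEVAL_DOCUMENT" is a task
# type documented for Gemini embeddings; pass-through here is assumed.
embed_payload = {
    "model": "text-embedding-004",  # table default
    "content": "Gemini is a family of multimodal models from Google.",
    "taskType": "RETRIEVAL_DOCUMENT",
    "title": "Model overview",  # title chiefly matters for document-retrieval embeddings
}
```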
### gemini_list_models

List available Gemini models.

### gemini_count_tokens

Count tokens in text for Gemini models.

Parameters:

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| model | string | No | "gemini-2.5-flash" | Gemini model for token counting |
| text | string | Yes | — | Text to count tokens for |
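One common use is checking a prompt against a context budget before generating. The `call_tool` dispatcher and the `totalTokens` response field below are hypothetical; only the request fields come from the table.

```python
# Sketch: gate a generation call on a token budget. `call_tool` stands in
# for this plugin's tool dispatcher, and `totalTokens` is an assumed
# response field name.
def fits_budget(call_tool, text: str, budget: int = 8192,
                model: str = "gemini-2.5-flash") -> bool:
    result = call_tool("gemini_count_tokens", {"model": model, "text": text})
    return result["totalTokens"] <= budget

# Usage with a stub dispatcher standing in for the real tool call:
stub = lambda name, params: {"totalTokens": len(params["text"].split())}
ok = fits_budget(stub, "a short prompt", budget=10)
```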
### gemini_generate_code

Generate code using Gemini models optimized for programming.

Parameters:

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| model | string | No | "gemini-2.5-flash" | Gemini model to use |
| prompt | string | Yes | — | Code generation prompt |
| language | string | No | — | Programming language (e.g., "python", "javascript") |
| temperature | number | No | — | Sampling temperature (keep low for more deterministic code) |
| maxOutputTokens | number | No | — | Maximum tokens to generate |
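An example payload that keeps the temperature low, as the table suggests for deterministic output. Field names follow the table; the flat payload shape is an assumption.

```python
# Hypothetical gemini_generate_code payload; field names follow the table.
code_payload = {
    "model": "gemini-2.5-flash",
    "prompt": "Write a function that reverses a singly linked list.",
    "language": "python",
    "temperature": 0.2,       # low temperature for more deterministic code
    "maxOutputTokens": 1024,
}
```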
### gemini_summarize_text

Summarize long text using Gemini models.

Parameters:

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| model | string | No | "gemini-2.5-flash" | Gemini model to use |
| text | string | Yes | — | Text to summarize |
| summaryLength | string | No | "medium" | Desired summary length |
| focusAreas | string[] | No | — | Specific areas to focus on in the summary |
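An example payload. The table only documents the "medium" default for `summaryLength`; other accepted values such as "short" are assumptions.

```python
# Hypothetical gemini_summarize_text payload. "short" as a summaryLength
# value is an assumption; the table only documents the "medium" default.
summary_payload = {
    "model": "gemini-2.5-flash",
    "text": "Long report text goes here...",
    "summaryLength": "short",
    "focusAreas": ["pricing", "limitations"],  # steer the summary's emphasis
}
```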

