What can you do with it?

Create batch jobs to process large datasets in parallel through queues. Each row in your CSV data becomes a separate work item that gets processed by your chosen workflow.

How to use it?

The batch command supports two main operations:

1. Start Batch Job

Create and start a new batch job to process data through queues.
/batch
queueId: d2j05n2rc75s713t0e5g
triggerId: d2j06dirc75s713t0e6g
csvData:
header1,header2,header3
value1,value2,value3
value4,value5,value6
With Input Grouping:
/batch
queueId: d2j05n2rc75s713t0e5g
triggerId: d2j06dirc75s713t0e6g
inputGroupSize: 2
csvData:
header1,header2,header3
value1,value2,value3
value4,value5,value6

2. Get Batch Status

Check the status of an existing batch job.
/batch get batch status
queueId: d2j05n2rc75s713t0e5g
batchId: d2j06dirc75s713t0e6g

Parameters

For Starting a Batch Job:

  • queueId: The queue where batch items will be placed (required)
  • triggerId: The workflow trigger to execute for each item (required)
  • csvData OR csvUrl: The data to process (required, choose one):
    • csvData: Inline CSV data with headers
    • csvUrl: URL pointing to a CSV file
  • inputGroupSize (optional): Controls how many CSV rows are sent to each workflow run (default: 1)
    • Size = 1: Each row becomes a separate run
    • Size > 1: Rows are grouped into arrays for batch processing
  • onDoneTriggerId (optional): Workflow to trigger when batch completes
  • onDoneWebhookUrl (optional): Webhook to call when batch completes
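
The effect of inputGroupSize can be sketched as simple chunking: rows are split into groups of at most that size, and each group becomes one workflow run. A minimal illustration in Python (the function name chunk_rows is hypothetical, not part of the batch API):

```python
import math

def chunk_rows(rows, input_group_size=1):
    """Split CSV rows into groups; each group becomes one workflow run."""
    return [rows[i:i + input_group_size]
            for i in range(0, len(rows), input_group_size)]

rows = ["r1", "r2", "r3", "r4", "r5"]

# inputGroupSize = 1: each row is its own run (5 runs)
assert chunk_rows(rows, 1) == [["r1"], ["r2"], ["r3"], ["r4"], ["r5"]]

# inputGroupSize = 2: rows grouped into arrays (3 runs; last group is smaller)
assert chunk_rows(rows, 2) == [["r1", "r2"], ["r3", "r4"], ["r5"]]

# The number of workflow runs is ceil(total_rows / input_group_size)
assert len(chunk_rows(rows, 2)) == math.ceil(len(rows) / 2)
```

This is why larger group sizes reduce the run count for large datasets: a 10,000-row CSV with inputGroupSize: 5 produces 2,000 runs instead of 10,000.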

For Getting Batch Status:

  • queueId: The queue containing the batch (required)
  • batchId: The ID of the batch to check (required)

Data Input Options

  1. Inline CSV Data: Paste CSV directly in the prompt
  2. CSV URL: Provide a URL to fetch CSV data
  3. Step Data: Reference CSV data from previous workflow steps

Response Format

Both operations return batch information for tracking progress. Starting a batch returns the newly created batch object, while getting batch status returns the current state of an existing batch. Individual items are processed asynchronously through the queue.

Sample Response: an array containing a single batch object with its batchId
[
  {
    "id": "d2ifnc5271is71tj7l7g",
    "queueId": "d2b8u6cugtkc76ihsnlg",
    "triggerId": "d2hvtcf1r8gs70d8vn00",
    "runsRequested": 3,
    "runsComplete": 2,
    "runsFailed": 0,
    "runsTimeout": 0,
    "status": "running", // Status values: pending, running, complete, canceled
    "onDoneBehavior": "trigger",
    "onDoneTriggerId": "d2j1us4gb29c710soirg",
    "onDoneWebhookUrl": null,
    "createdBy": "281933711191",
    "createdAt": "2025-08-19T22:33:20Z",
    "updatedAt": "2025-08-19T22:33:20Z"
  }
]
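
A status response like the one above can be used to compute progress client-side from the run counters. A hedged sketch (field names are taken from the sample response; the helper batch_progress is hypothetical):

```python
def batch_progress(batch):
    """Summarize a batch object returned by /batch get batch status."""
    requested = batch["runsRequested"]
    # Failed and timed-out runs are finished too; they just didn't succeed.
    finished = batch["runsComplete"] + batch["runsFailed"] + batch["runsTimeout"]
    return {
        "done": batch["status"] in ("complete", "canceled"),
        "finished": finished,
        "requested": requested,
        "percent": round(100 * finished / requested, 1) if requested else 0.0,
    }

sample = {
    "runsRequested": 3, "runsComplete": 2, "runsFailed": 0,
    "runsTimeout": 0, "status": "running",
}
print(batch_progress(sample))
# {'done': False, 'finished': 2, 'requested': 3, 'percent': 66.7}
```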

Examples

Starting a Batch Job

Simple batch with inline data:
/batch
queueId: d2j05n2rc75s713t0e5g
triggerId: d2j06dirc75s713t0e6g
csvData:
email,name,status
john@example.com,John Doe,active
jane@example.com,Jane Smith,pending

Getting Batch Status

Check the progress of a running batch:
/batch get batch status
queueId: d2j05n2rc75s713t0e5g
batchId: d2j06dirc75s713t0e6g

Advanced Batch Creation

Batch with URL and completion trigger:
/batch
queueId: d2k15m3sc86t824u1f6h
triggerId: d2k16ejsc86t824u1f7i
csvUrl: https://example.com/customers.csv
onDoneTriggerId: d2k2vt5hc3ad821tpjsh

Batch Processing with Input Grouping

Process multiple CSV rows together in each workflow run:
/batch
queueId: d2k15m3sc86t824u1f6h
triggerId: d2k16ejsc86t824u1f7i
inputGroupSize: 5
csvUrl: https://example.com/large-dataset.csv
onDoneTriggerId: d2k2vt5hc3ad821tpjsh

Specific Use Case

Processing product updates with webhook notification:
/batch
queueId: d2l18o4td97u935v2g7j
triggerId: d2l19gktd97u935v2g8k
onDoneWebhookUrl: https://api.example.com/batch-complete
csvData:
product_id,price,discount
SKU001,99.99,10
SKU002,149.99,15
SKU003,79.99,0
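
On the receiving side, onDoneWebhookUrl needs an HTTP endpoint that accepts the completion callback. A minimal sketch using Python's standard library; note this assumes the finished batch object is POSTed as a JSON body, which this document does not specify:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def parse_completion(body: bytes) -> dict:
    """Extract the fields we care about from the webhook body.

    Assumption: the finished batch object arrives as JSON; the exact
    payload shape is not documented here.
    """
    payload = json.loads(body or b"{}")
    return {"batchId": payload.get("id"), "status": payload.get("status")}

class BatchCompleteHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        info = parse_completion(self.rfile.read(length))
        print("batch finished:", info)
        self.send_response(200)  # a 2xx acknowledges receipt
        self.end_headers()

# To listen locally:
# HTTPServer(("", 8080), BatchCompleteHandler).serve_forever()
```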

Using Step Data

Reference data from previous steps:
/batch
queueId: d2m21p5ue0av046w3h8l
triggerId: d2m22hlue0av046w3h9m
get csvData or csvUrl from: ADD REFERENCE

What is sent to your Worker Workflow

While the batch is running, it invokes the worker workflow you specified once for every row (or group of rows), passing the data as that run's input. Here is what that input will look like:

Individual Processing (inputGroupSize = 1)

Each workflow run receives a single CSV row as a JSON object:
{
  "name": "John",
  "email": "john@example.com",
  "status": "active",
  "batchId": "batch-123"
}
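
A worker for this shape can treat the input as a flat dict of CSV column values plus the batchId metadata field. A hypothetical sketch (handle_row is only an illustration, not part of the product):

```python
def handle_row(input_data: dict) -> str:
    """Process one CSV row delivered as a flat JSON object."""
    batch_id = input_data.pop("batchId", None)  # metadata, not a CSV column
    # The remaining keys are the CSV headers for this row.
    return f"[{batch_id}] updating {input_data['email']} -> {input_data['status']}"

print(handle_row({
    "name": "John", "email": "john@example.com",
    "status": "active", "batchId": "batch-123",
}))  # [batch-123] updating john@example.com -> active
```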

Grouped Processing (inputGroupSize > 1)

Each workflow run receives multiple CSV rows in a structured format:
{
  "batch": true,
  "batchId": "batch-123",
  "batchIndex": 0,
  "totalRows": 4,
  "rows": [
    {
      "name": "John",
      "email": "john@example.com",
      "status": "active"
    },
    {
      "name": "Jane",
      "email": "jane@example.com",
      "status": "pending"
    }
  ]
}
Key Fields for Grouped Processing:
  • batch: Always true when using input grouping
  • batchId: Unique identifier for the entire batch
  • batchIndex: Index of this group within the batch (0, 1, 2, etc.)
  • totalRows: Total number of CSV rows in the entire batch (not just this group)
  • rows: Array containing the actual CSV data as JSON objects
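
A worker that must support both payload shapes can branch on the batch flag. A sketch using the field names shown above (handle_input is hypothetical):

```python
def handle_input(input_data: dict) -> list:
    """Process either payload shape: grouped (batch=True) or a single flat row."""
    if input_data.get("batch"):
        rows = input_data["rows"]    # list of row objects
        prefix = f"{input_data['batchId']}#{input_data['batchIndex']}"
    else:
        rows = [input_data]          # individual shape: one flat row object
        prefix = input_data.get("batchId", "")
    return [f"{prefix}: {row['email']}" for row in rows]

grouped = {
    "batch": True, "batchId": "batch-123", "batchIndex": 0, "totalRows": 4,
    "rows": [
        {"name": "John", "email": "john@example.com", "status": "active"},
        {"name": "Jane", "email": "jane@example.com", "status": "pending"},
    ],
}
print(handle_input(grouped))
# ['batch-123#0: john@example.com', 'batch-123#0: jane@example.com']
```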

Notes

  • Each CSV row becomes a separate queue item (when inputGroupSize = 1)
  • With inputGroupSize > 1, multiple rows are grouped together in each queue item
  • Headers are included with each item for context
  • Batch completion triggers fire after all items process
  • Monitor batch progress through queue dashboards
  • Failed items can be retried individually
  • Batches support thousands of items efficiently
  • Input grouping reduces the number of workflow runs for large datasets