/openai-data | Type: Application | PCID required: Yes
Tools
| Tool | Description |
|---|---|
openai_data_add_upload_part | Adds a Part to an Upload object. A Part represents a chunk of bytes from the file you are trying to upload. Each Part can be at most 64 MB, and you can add Parts until you hit the Upload maximum of 8 GB. It is possible to add multiple Parts in parallel. You can decide the intended order of the Parts when you complete the Upload. |
openai_data_cancel_batch | Cancels an in-progress batch. The batch will be in status cancelling for up to 10 minutes, before changing to cancelled, where it will have partial results (if any) available in the output file. |
openai_data_cancel_eval_run | Cancel an ongoing evaluation run. |
openai_data_cancel_fine_tuning_job | Immediately cancel a fine-tune job. |
openai_data_cancel_upload | Cancels the Upload. No Parts may be added after an Upload is cancelled. |
openai_data_cancel_vector_store_file_batch | Cancel a vector store file batch. This attempts to cancel the processing of files in this batch as soon as possible. |
openai_data_complete_upload | Completes the Upload. Within the returned Upload object, there is a nested File object that is ready to use in the rest of the platform. You can specify the order of the Parts by passing in an ordered list of the Part IDs. The number of bytes uploaded upon completion must match the number of bytes initially specified when creating the Upload object. No Parts may be added after an Upload is completed. |
openai_data_create_batch | Creates and executes a batch from an uploaded file of requests. |
openai_data_create_eval | Create the structure of an evaluation that can be used to test a model’s performance. An evaluation is a set of testing criteria and a datasource. After creating an evaluation, you can run it on different models and model parameters. We support several types of graders and datasources. For more information, see the Evals guide. |
openai_data_create_eval_run | Create a new evaluation run. This is the endpoint that will kick off grading. |
openai_data_create_file | Upload a file that can be used across various endpoints. Individual files can be up to 512 MB, and the size of all files uploaded by one organization can be up to 100 GB. The Assistants API supports files up to 2 million tokens and of specific file types. See the Assistants Tools guide for details. The Fine-tuning API only supports .jsonl files. The input also has certain required formats for fine-tuning chat or completions models. The Batch API only supports .jsonl files up to 200 MB in size. The input also has a specific required format. Please contact us if you need to increase these storage limits. |
openai_data_create_fine_tuning_checkpoint_permission | NOTE: Calling this endpoint requires an admin API key. This enables organization owners to share fine-tuned models with other projects in their organization. |
openai_data_create_fine_tuning_job | Creates a fine-tuning job which begins the process of creating a new model from a given dataset. Response includes details of the enqueued job including job status and the name of the fine-tuned models once complete. Learn more about fine-tuning. |
openai_data_create_upload | Creates an intermediate Upload object that you can add Parts to. Currently, an Upload can accept at most 8 GB in total and expires one hour after you create it. Once you complete the Upload, we will create a File object that contains all the parts you uploaded. This File is usable in the rest of our platform as a regular File object. For certain purpose values, the correct mime_type must be specified. Please refer to documentation for the supported MIME types for your use case. For guidance on the proper filename extensions for each purpose, please follow the documentation on creating a File. |
openai_data_create_vector_store | Create a vector store. |
openai_data_create_vector_store_file | Create a vector store file by attaching a File to a vector store. |
openai_data_create_vector_store_file_batch | Create a vector store file batch. |
openai_data_delete_eval | Delete an evaluation. |
openai_data_delete_eval_run | Delete an eval run. |
openai_data_delete_file | Delete a file. |
openai_data_delete_fine_tuning_checkpoint_permission | NOTE: This endpoint requires an admin API key. Organization owners can use this endpoint to delete a permission for a fine-tuned model checkpoint. |
openai_data_delete_vector_store | Delete a vector store. |
openai_data_delete_vector_store_file | Delete a vector store file. This will remove the file from the vector store but the file itself will not be deleted. To delete the file, use the delete file endpoint. |
openai_data_download_file | Returns the contents of the specified file. |
openai_data_get_eval | Get an evaluation by ID. |
openai_data_get_eval_run | Get an evaluation run by ID. |
openai_data_get_eval_run_output_item | Get an evaluation run output item by ID. |
openai_data_get_eval_run_output_items | Get a list of output items for an evaluation run. |
openai_data_get_eval_runs | Get a list of runs for an evaluation. |
openai_data_get_vector_store | Retrieves a vector store. |
openai_data_get_vector_store_file | Retrieves a vector store file. |
openai_data_get_vector_store_file_batch | Retrieves a vector store file batch. |
openai_data_list_batches | List your organization’s batches. |
openai_data_list_evals | List evaluations for a project. |
openai_data_list_files | Returns a list of files. |
openai_data_list_files_in_vector_store_batch | Returns a list of vector store files in a batch. |
openai_data_list_fine_tuning_checkpoint_permissions | NOTE: This endpoint requires an admin API key. Organization owners can use this endpoint to view all permissions for a fine-tuned model checkpoint. |
openai_data_list_fine_tuning_events | Get status updates for a fine-tuning job. |
openai_data_list_fine_tuning_job_checkpoints | List checkpoints for a fine-tuning job. |
openai_data_list_paginated_fine_tuning_jobs | List your organization’s fine-tuning jobs. |
openai_data_list_vector_store_files | Returns a list of vector store files. |
openai_data_list_vector_stores | Returns a list of vector stores. |
openai_data_modify_vector_store | Modifies a vector store. |
openai_data_retrieve_batch | Retrieves a batch. |
openai_data_retrieve_file | Returns information about a specific file. |
openai_data_retrieve_fine_tuning_job | Get info about a fine-tuning job. Learn more about fine-tuning. |
openai_data_retrieve_vector_store_file_content | Retrieve the parsed contents of a vector store file. |
openai_data_search_vector_store | Search a vector store for relevant chunks based on a query and file attributes filter. |
openai_data_update_eval | Update certain properties of an evaluation. |
openai_data_update_vector_store_file_attributes | Update attributes on a vector store file. |
openai_data_add_upload_part
Adds a Part to an Upload object. A Part represents a chunk of bytes from the file you are trying to upload. Each Part can be at most 64 MB, and you can add Parts until you hit the Upload maximum of 8 GB. It is possible to add multiple Parts in parallel. You can decide the intended order of the Parts when you complete the Upload.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
upload_id | string | Yes | — | The ID of the Upload. |
data | string | Yes | — | The chunk of bytes for this Part. |
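The 64 MB per-Part and 8 GB per-Upload limits above can be enforced client-side before any Parts are sent. A minimal sketch in pure Python; the helper name is illustrative and the limits are taken from the description above:

```python
# Split a byte payload into Upload Parts, enforcing the documented limits:
# each Part at most 64 MB, the whole Upload at most 8 GB.
MAX_PART_BYTES = 64 * 1024 * 1024           # 64 MB per Part
MAX_UPLOAD_BYTES = 8 * 1024 * 1024 * 1024   # 8 GB per Upload

def split_into_parts(data: bytes, part_size: int = MAX_PART_BYTES) -> list[bytes]:
    """Return the ordered chunks to send via openai_data_add_upload_part."""
    if part_size > MAX_PART_BYTES:
        raise ValueError("a Part may be at most 64 MB")
    if len(data) > MAX_UPLOAD_BYTES:
        raise ValueError("an Upload may be at most 8 GB in total")
    return [data[i:i + part_size] for i in range(0, len(data), part_size)]
```

The chunks may then be uploaded in parallel; keeping the list order lets you pass the resulting Part IDs to openai_data_complete_upload in the intended order.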
openai_data_cancel_batch
Cancels an in-progress batch. The batch will be in status cancelling for up to 10 minutes, before changing to cancelled, where it will have partial results (if any) available in the output file.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
batch_id | string | Yes | — | The ID of the batch to cancel. |
openai_data_cancel_eval_run
Cancel an ongoing evaluation run.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
eval_id | string | Yes | — | The ID of the evaluation whose run you want to cancel. |
run_id | string | Yes | — | The ID of the run to cancel. |
openai_data_cancel_fine_tuning_job
Immediately cancel a fine-tune job.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
fine_tuning_job_id | string | Yes | — | The ID of the fine-tuning job to cancel. |
openai_data_cancel_upload
Cancels the Upload. No Parts may be added after an Upload is cancelled.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
upload_id | string | Yes | — | The ID of the Upload. |
openai_data_cancel_vector_store_file_batch
Cancel a vector store file batch. This attempts to cancel the processing of files in this batch as soon as possible.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
vector_store_id | string | Yes | — | The ID of the vector store that the file batch belongs to. |
batch_id | string | Yes | — | The ID of the file batch to cancel. |
openai_data_complete_upload
Completes the Upload. Within the returned Upload object, there is a nested File object that is ready to use in the rest of the platform. You can specify the order of the Parts by passing in an ordered list of the Part IDs. The number of bytes uploaded upon completion must match the number of bytes initially specified when creating the Upload object. No Parts may be added after an Upload is completed.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
upload_id | string | Yes | — | The ID of the Upload. |
md5 | string | No | — | The optional md5 checksum for the file contents, used to verify that the bytes uploaded match what you expect. |
part_ids | string[] | Yes | — | The ordered list of Part IDs. |
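The optional md5 parameter lets the server verify integrity at completion time. Assuming the checksum covers the full file contents, i.e. the Parts concatenated in their intended order (the description above does not spell this out), a sketch:

```python
import hashlib

def file_md5(parts: list[bytes]) -> str:
    """MD5 hex digest of the full file: the Parts concatenated in the same
    order as the part_ids list passed to openai_data_complete_upload."""
    h = hashlib.md5()
    for chunk in parts:   # stream chunk by chunk instead of joining into one buffer
        h.update(chunk)
    return h.hexdigest()
```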
openai_data_create_batch
Creates and executes a batch from an uploaded file of requests.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
completion_window | string | Yes | — | The time frame within which the batch should be processed. Currently only 24h is supported. |
endpoint | string | Yes | — | The endpoint to be used for all requests in the batch. Currently /v1/responses, /v1/chat/completions, /v1/embeddings, and /v1/completions are supported. Note that /v1/embeddings batches are also restricted to a maximum of 50,000 embedding inputs across all requests in the batch. |
input_file_id | string | Yes | — | The ID of an uploaded file that contains requests for the new batch. See upload file for how to upload a file. Your input file must be formatted as a JSONL file, and must be uploaded with the purpose batch. The file can contain up to 50,000 requests, and can be up to 200 MB in size. |
metadata | object | No | — | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. |
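Each line of the input file is one JSON request object. A sketch of building a line for the /v1/chat/completions endpoint, following the Batch API's documented request shape (custom_id, method, url, body); the model name is only an example:

```python
import json

def batch_request_line(custom_id: str, model: str, user_message: str) -> str:
    """One JSONL line for a /v1/chat/completions batch input file."""
    request = {
        "custom_id": custom_id,          # unique per request; echoed back in the output file
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {
            "model": model,
            "messages": [{"role": "user", "content": user_message}],
        },
    }
    return json.dumps(request)
```

Upload the resulting .jsonl file with purpose batch, then pass the returned file ID as input_file_id.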
openai_data_create_eval
Create the structure of an evaluation that can be used to test a model’s performance. An evaluation is a set of testing criteria and a datasource. After creating an evaluation, you can run it on different models and model parameters. We support several types of graders and datasources. For more information, see the Evals guide.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
data_source_config | object | Yes | — | The configuration for the data source used for the evaluation runs. |
metadata | object | No | — | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. |
name | string | No | — | The name of the evaluation. |
testing_criteria | any[] | Yes | — | A list of graders for all eval runs in this group. |
openai_data_create_eval_run
Create a new evaluation run. This is the endpoint that will kick off grading.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
eval_id | string | Yes | — | The ID of the evaluation to create a run for. |
data_source | object | Yes | — | Details about the run’s data source. |
metadata | object | No | — | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. |
name | string | No | — | The name of the run. |
openai_data_create_file
Upload a file that can be used across various endpoints. Individual files can be up to 512 MB, and the size of all files uploaded by one organization can be up to 100 GB. The Assistants API supports files up to 2 million tokens and of specific file types. See the Assistants Tools guide for details. The Fine-tuning API only supports .jsonl files. The input also has certain required formats for fine-tuning chat or completions models. The Batch API only supports .jsonl files up to 200 MB in size. The input also has a specific required format. Please contact us if you need to increase these storage limits.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
file | string | Yes | — | The File object (not file name) to be uploaded. |
purpose | string | Yes | — | The intended purpose of the uploaded file. One of: - assistants: Used in the Assistants API - batch: Used in the Batch API - fine-tune: Used for fine-tuning - vision: Images used for vision fine-tuning - user_data: Flexible file type for any purpose - evals: Used for eval data sets |
openai_data_create_fine_tuning_checkpoint_permission
NOTE: Calling this endpoint requires an admin API key. This enables organization owners to share fine-tuned models with other projects in their organization.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
fine_tuned_model_checkpoint | string | Yes | — | The ID of the fine-tuned model checkpoint to create a permission for. |
project_ids | string[] | Yes | — | The project identifiers to grant access to. |
openai_data_create_fine_tuning_job
Creates a fine-tuning job which begins the process of creating a new model from a given dataset. Response includes details of the enqueued job including job status and the name of the fine-tuned models once complete. Learn more about fine-tuning.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
hyperparameters | object | No | — | The hyperparameters used for the fine-tuning job. This value is now deprecated in favor of method, and should be passed in under the method parameter. |
integrations | object[] | No | — | A list of integrations to enable for your fine-tuning job. |
metadata | object | No | — | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. |
method | object | No | — | The method used for fine-tuning. |
model | object | Yes | — | The name of the model to fine-tune. You can select one of the supported models. |
seed | integer | No | — | The seed controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results, but may differ in rare cases. If a seed is not specified, one will be generated for you. |
suffix | string | No | — | A string of up to 64 characters that will be added to your fine-tuned model name. For example, a suffix of “custom-model-name” would produce a model name like ft:gpt-4o-mini:openai:custom-model-name:7p4lURel. |
training_file | string | Yes | — | The ID of an uploaded file that contains training data. See upload file for how to upload a file. Your dataset must be formatted as a JSONL file. Additionally, you must upload your file with the purpose fine-tune. The contents of the file should differ depending on if the model uses the chat, completions format, or if the fine-tuning method uses the preference format. See the fine-tuning guide for more details. |
validation_file | string | No | — | The ID of an uploaded file that contains validation data. If you provide this file, the data is used to generate validation metrics periodically during fine-tuning. These metrics can be viewed in the fine-tuning results file. The same data should not be present in both train and validation files. Your dataset must be formatted as a JSONL file. You must upload your file with the purpose fine-tune. See the fine-tuning guide for more details. |
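For chat-format models, each line of the training_file (and validation_file) is a JSON object with a messages array. A minimal sketch of emitting one training example in the documented chat fine-tuning format; the helper name is illustrative:

```python
import json

def chat_training_line(system: str, user: str, assistant: str) -> str:
    """One JSONL line for a chat-format fine-tuning training file."""
    example = {
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
            {"role": "assistant", "content": assistant},  # the target completion
        ]
    }
    return json.dumps(example)
```

The file of such lines is uploaded with purpose fine-tune and its ID passed as training_file.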
openai_data_create_upload
Creates an intermediate Upload object that you can add Parts to. Currently, an Upload can accept at most 8 GB in total and expires one hour after you create it. Once you complete the Upload, we will create a File object that contains all the parts you uploaded. This File is usable in the rest of our platform as a regular File object. For certain purpose values, the correct mime_type must be specified. Please refer to documentation for the supported MIME types for your use case. For guidance on the proper filename extensions for each purpose, please follow the documentation on creating a File.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
bytes | integer | Yes | — | The number of bytes in the file you are uploading. |
filename | string | Yes | — | The name of the file to upload. |
mime_type | string | Yes | — | The MIME type of the file. This must fall within the supported MIME types for your file purpose. See the supported MIME types for assistants and vision. |
purpose | string | Yes | — | The intended purpose of the uploaded file. See the documentation on File purposes. |
openai_data_create_vector_store
Create a vector store.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
chunking_strategy | object | No | — | The chunking strategy used to chunk the file(s). If not set, will use the auto strategy. Only applicable if file_ids is non-empty. |
expires_after | object | No | — | The expiration policy for a vector store. |
file_ids | string[] | No | — | A list of File IDs that the vector store should use. Useful for tools like file_search that can access files. |
metadata | object | No | — | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. |
name | string | No | — | The name of the vector store. |
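The metadata constraints repeated throughout this reference (at most 16 pairs, keys up to 64 characters, string values up to 512 characters) can be checked before any call. A sketch for string-valued metadata; the function name is illustrative:

```python
def validate_metadata(metadata: dict[str, str]) -> None:
    """Raise ValueError if metadata violates the documented limits:
    at most 16 pairs, keys up to 64 characters, values up to 512."""
    if len(metadata) > 16:
        raise ValueError("metadata may contain at most 16 key-value pairs")
    for key, value in metadata.items():
        if len(key) > 64:
            raise ValueError(f"metadata key too long (>64 chars): {key!r}")
        if len(value) > 512:
            raise ValueError(f"metadata value too long (>512 chars) for key {key!r}")
```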
openai_data_create_vector_store_file
Create a vector store file by attaching a File to a vector store.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
vector_store_id | string | Yes | — | The ID of the vector store for which to create a File. |
attributes | object | No | — | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers. |
chunking_strategy | object | No | — | The chunking strategy used to chunk the file(s). If not set, will use the auto strategy. |
file_id | string | Yes | — | A File ID that the vector store should use. Useful for tools like file_search that can access files. |
openai_data_create_vector_store_file_batch
Create a vector store file batch.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
vector_store_id | string | Yes | — | The ID of the vector store for which to create a File Batch. |
attributes | object | No | — | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers. |
chunking_strategy | object | No | — | The chunking strategy used to chunk the file(s). If not set, will use the auto strategy. |
file_ids | string[] | Yes | — | A list of File IDs that the vector store should use. Useful for tools like file_search that can access files. |
openai_data_delete_eval
Delete an evaluation.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
eval_id | string | Yes | — | The ID of the evaluation to delete. |
openai_data_delete_eval_run
Delete an eval run.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
eval_id | string | Yes | — | The ID of the evaluation to delete the run from. |
run_id | string | Yes | — | The ID of the run to delete. |
openai_data_delete_file
Delete a file.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
file_id | string | Yes | — | The ID of the file to use for this request. |
openai_data_delete_fine_tuning_checkpoint_permission
NOTE: This endpoint requires an admin API key. Organization owners can use this endpoint to delete a permission for a fine-tuned model checkpoint.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
fine_tuned_model_checkpoint | string | Yes | — | The ID of the fine-tuned model checkpoint to delete a permission for. |
permission_id | string | Yes | — | The ID of the fine-tuned model checkpoint permission to delete. |
openai_data_delete_vector_store
Delete a vector store.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
vector_store_id | string | Yes | — | The ID of the vector store to delete. |
openai_data_delete_vector_store_file
Delete a vector store file. This will remove the file from the vector store but the file itself will not be deleted. To delete the file, use the delete file endpoint.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
vector_store_id | string | Yes | — | The ID of the vector store that the file belongs to. |
file_id | string | Yes | — | The ID of the file to delete. |
openai_data_download_file
Returns the contents of the specified file.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
file_id | string | Yes | — | The ID of the file to use for this request. |
openai_data_get_eval
Get an evaluation by ID.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
eval_id | string | Yes | — | The ID of the evaluation to retrieve. |
openai_data_get_eval_run
Get an evaluation run by ID.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
eval_id | string | Yes | — | The ID of the evaluation to retrieve runs for. |
run_id | string | Yes | — | The ID of the run to retrieve. |
openai_data_get_eval_run_output_item
Get an evaluation run output item by ID.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
eval_id | string | Yes | — | The ID of the evaluation to retrieve runs for. |
run_id | string | Yes | — | The ID of the run to retrieve. |
output_item_id | string | Yes | — | The ID of the output item to retrieve. |
openai_data_get_eval_run_output_items
Get a list of output items for an evaluation run.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
eval_id | string | Yes | — | The ID of the evaluation to retrieve runs for. |
run_id | string | Yes | — | The ID of the run to retrieve output items for. |
after | string | No | — | Identifier for the last output item from the previous pagination request. |
limit | integer | No | — | Number of output items to retrieve. |
status | string | No | — | Filter output items by status. Use failed to filter by failed output items or pass to filter by passed output items. |
order | string | No | — | Sort order for output items by timestamp. Use asc for ascending order or desc for descending order. Defaults to asc. |
openai_data_get_eval_runs
Get a list of runs for an evaluation.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
eval_id | string | Yes | — | The ID of the evaluation to retrieve runs for. |
after | string | No | — | Identifier for the last run from the previous pagination request. |
limit | integer | No | — | Number of runs to retrieve. |
order | string | No | — | Sort order for runs by timestamp. Use asc for ascending order or desc for descending order. Defaults to asc. |
status | string | No | — | Filter runs by status. One of queued | in_progress | failed | completed | canceled. |
openai_data_get_vector_store
Retrieves a vector store.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
vector_store_id | string | Yes | — | The ID of the vector store to retrieve. |
openai_data_get_vector_store_file
Retrieves a vector store file.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
vector_store_id | string | Yes | — | The ID of the vector store that the file belongs to. |
file_id | string | Yes | — | The ID of the file being retrieved. |
openai_data_get_vector_store_file_batch
Retrieves a vector store file batch.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
vector_store_id | string | Yes | — | The ID of the vector store that the file batch belongs to. |
batch_id | string | Yes | — | The ID of the file batch being retrieved. |
openai_data_list_batches
List your organization’s batches.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
after | string | No | — | A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list. |
limit | integer | No | — | A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20. |
openai_data_list_evals
List evaluations for a project.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
after | string | No | — | Identifier for the last eval from the previous pagination request. |
limit | integer | No | — | Number of evals to retrieve. |
order | string | No | — | Sort order for evals by timestamp. Use asc for ascending order or desc for descending order. |
order_by | string | No | — | Evals can be ordered by creation time or last updated time. Use created_at for creation time or updated_at for last updated time. |
openai_data_list_files
Returns a list of files.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
purpose | string | No | — | Only return files with the given purpose. |
limit | integer | No | — | A limit on the number of objects to be returned. Limit can range between 1 and 10,000, and the default is 10,000. |
order | string | No | — | Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order. |
after | string | No | — | A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list. |
openai_data_list_files_in_vector_store_batch
Returns a list of vector store files in a batch.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
vector_store_id | string | Yes | — | The ID of the vector store that the files belong to. |
batch_id | string | Yes | — | The ID of the file batch that the files belong to. |
limit | integer | No | — | A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20. |
order | string | No | — | Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order. |
after | string | No | — | A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list. |
before | string | No | — | A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, starting with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list. |
filter | string | No | — | Filter by file status. One of in_progress, completed, failed, cancelled. |
openai_data_list_fine_tuning_checkpoint_permissions
NOTE: This endpoint requires an admin API key. Organization owners can use this endpoint to view all permissions for a fine-tuned model checkpoint.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
fine_tuned_model_checkpoint | string | Yes | — | The ID of the fine-tuned model checkpoint to get permissions for. |
project_id | string | No | — | The ID of the project to get permissions for. |
after | string | No | — | Identifier for the last permission ID from the previous pagination request. |
limit | integer | No | — | Number of permissions to retrieve. |
order | string | No | — | The order in which to retrieve permissions. |
openai_data_list_fine_tuning_events
Get status updates for a fine-tuning job.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
fine_tuning_job_id | string | Yes | — | The ID of the fine-tuning job to get events for. |
after | string | No | — | Identifier for the last event from the previous pagination request. |
limit | integer | No | — | Number of events to retrieve. |
openai_data_list_fine_tuning_job_checkpoints
List checkpoints for a fine-tuning job.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
fine_tuning_job_id | string | Yes | — | The ID of the fine-tuning job to get checkpoints for. |
after | string | No | — | Identifier for the last checkpoint ID from the previous pagination request. |
limit | integer | No | — | Number of checkpoints to retrieve. |
openai_data_list_paginated_fine_tuning_jobs
List your organization’s fine-tuning jobs.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
after | string | No | — | Identifier for the last job from the previous pagination request. |
limit | integer | No | — | Number of fine-tuning jobs to retrieve. |
metadata | object | No | — | Optional metadata filter. To filter, use the syntax metadata[k]=v. Alternatively, set metadata=null to indicate no metadata. |
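The metadata[k]=v filter syntax maps directly onto URL query parameters. A sketch of building the query string, assuming the filter is sent as ordinary percent-encoded query parameters; the helper name is illustrative:

```python
from urllib.parse import urlencode

def jobs_query(limit=None, after=None, metadata=None):
    """Query string for listing fine-tuning jobs with a metadata[k]=v filter."""
    params = {}
    if limit is not None:
        params["limit"] = str(limit)
    if after is not None:
        params["after"] = after
    if metadata:
        for k, v in metadata.items():
            params[f"metadata[{k}]"] = v     # the documented metadata[k]=v syntax
    return urlencode(params)
```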
openai_data_list_vector_store_files
Returns a list of vector store files.
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
vector_store_id | string | Yes | — | The ID of the vector store that the files belong to. |
limit | integer | No | — | A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20. |
order | string | No | — | Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order. |
after | string | No | — | A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list. |
before | string | No | — | A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, starting with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list. |
filter | string | No | — | Filter by file status. One of in_progress, completed, failed, cancelled. |
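The `after` cursor pattern described above (pass the ID of the last object you received until no more pages remain) can be sketched as a pagination loop. `fetch_page` is a hypothetical callable standing in for the actual HTTP request, so the loop can be shown without a live API key; the cursor logic follows the parameter descriptions above.

```python
# Sketch: cursor pagination over a list endpoint such as
# GET /v1/vector_stores/{id}/files. `fetch_page` is injected and
# must return {"data": [...], "has_more": bool} like the list APIs.

def iter_vector_store_files(fetch_page, limit=100):
    """Yield file objects across all pages using the `after` cursor."""
    after = None
    while True:
        page = fetch_page(after=after, limit=limit)
        for item in page["data"]:
            yield item
        if not page.get("has_more"):
            break
        after = page["data"][-1]["id"]  # cursor = ID of the last object seen

# Demo with canned pages standing in for the HTTP call:
pages = {
    None: {"data": [{"id": "file_1"}, {"id": "file_2"}], "has_more": True},
    "file_2": {"data": [{"id": "file_3"}], "has_more": False},
}
ids = [f["id"] for f in
       iter_vector_store_files(lambda after, limit: pages[after], limit=2)]
print(ids)  # ['file_1', 'file_2', 'file_3']
```

The same loop applies to the other cursor-paginated tools here (fine-tuning events, checkpoints, jobs, vector stores), since they all share the `after`/`has_more` shape.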
openai_data_list_vector_stores
Returns a list of vector stores.

Parameters:

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| limit | integer | No | — | A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20. |
| order | string | No | — | Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order. |
| after | string | No | — | A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list. |
| before | string | No | — | A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, starting with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list. |
openai_data_modify_vector_store
Modifies a vector store.

Parameters:

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| vector_store_id | string | Yes | — | The ID of the vector store to modify. |
| expires_after | object | No | — | The expiration policy for the vector store. |
| metadata | object | No | — | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. |
| name | string | No | — | The name of the vector store. |
openai_data_retrieve_batch
Retrieves a batch.

Parameters:

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| batch_id | string | Yes | — | The ID of the batch to retrieve. |
openai_data_retrieve_file
Returns information about a specific file.

Parameters:

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| file_id | string | Yes | — | The ID of the file to use for this request. |
openai_data_retrieve_fine_tuning_job
Get info about a fine-tuning job.

Parameters:

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| fine_tuning_job_id | string | Yes | — | The ID of the fine-tuning job. |
openai_data_retrieve_vector_store_file_content
Retrieve the parsed contents of a vector store file.

Parameters:

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| vector_store_id | string | Yes | — | The ID of the vector store. |
| file_id | string | Yes | — | The ID of the file within the vector store. |
openai_data_search_vector_store
Search a vector store for relevant chunks based on a query and a file attributes filter.

Parameters:

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| vector_store_id | string | Yes | — | The ID of the vector store to search. |
| filters | object | No | — | A filter to apply based on file attributes. |
| max_num_results | integer | No | — | The maximum number of results to return. This number should be between 1 and 50 inclusive. |
| query | string[] | Yes | — | A query string for the search. |
| ranking_options | object | No | — | Ranking options for search. |
| rewrite_query | boolean | No | — | Whether to rewrite the natural language query for vector search. |
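A minimal sketch of assembling the search request body from these parameters, including the documented 1–50 bound on `max_num_results`. The endpoint path `POST /v1/vector_stores/{id}/search` is from the public OpenAI REST API; the function name is an illustrative helper, not part of any SDK.

```python
# Sketch: request body for POST /v1/vector_stores/{id}/search.
import json

def build_search_payload(query, filters=None, max_num_results=None,
                         ranking_options=None, rewrite_query=None):
    """Serialize the search parameters, omitting unset optional fields."""
    body = {"query": query}
    if filters is not None:
        body["filters"] = filters
    if max_num_results is not None:
        if not 1 <= max_num_results <= 50:  # documented range
            raise ValueError("max_num_results must be between 1 and 50")
        body["max_num_results"] = max_num_results
    if ranking_options is not None:
        body["ranking_options"] = ranking_options
    if rewrite_query is not None:
        body["rewrite_query"] = rewrite_query
    return json.dumps(body)
```

Validating the bound client-side avoids a round trip that would fail server-side anyway.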
openai_data_update_eval
Update certain properties of an evaluation.

Parameters:

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| eval_id | string | Yes | — | The ID of the evaluation to update. |
| metadata | object | No | — | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. |
| name | string | No | — | Rename the evaluation. |
openai_data_update_vector_store_file_attributes
Update attributes on a vector store file.

Parameters:

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| vector_store_id | string | Yes | — | The ID of the vector store the file belongs to. |
| file_id | string | Yes | — | The ID of the file whose attributes to update. |
| attributes | object | Yes | — | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers. |
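The attribute constraints above (at most 16 pairs; keys up to 64 characters; values that are strings up to 512 characters, booleans, or numbers) can be checked client-side before calling the tool. This validator is an illustrative helper under those documented limits, not part of any SDK.

```python
def validate_file_attributes(attrs):
    """Enforce the documented attribute limits client-side:
    at most 16 key-value pairs; keys are strings of <= 64 chars;
    values are strings of <= 512 chars, booleans, or numbers."""
    if len(attrs) > 16:
        raise ValueError("at most 16 key-value pairs are allowed")
    for key, value in attrs.items():
        if not isinstance(key, str) or len(key) > 64:
            raise ValueError(f"invalid key: {key!r}")
        if isinstance(value, str):
            if len(value) > 512:
                raise ValueError(f"string value too long for key {key!r}")
        elif not isinstance(value, (bool, int, float)):
            raise ValueError(f"unsupported value type for key {key!r}")
    return attrs

validate_file_attributes({"author": "kim", "draft": True, "version": 3})
```

Catching these violations before the request gives a clearer error than a generic 400 response.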

