Server path: /elevenlabs-audio | Type: Application | PCID required: Yes

Tools

| Tool | Description |
| --- | --- |
| elevenlabs_audio_compose_detailed | Compose Music With A Detailed Response |
| elevenlabs_audio_compose_plan | Generate Composition Plan |
| elevenlabs_audio_delete_speech_history_item | Delete History Item |
| elevenlabs_audio_delete_transcript_by_id | Delete Transcript By Id |
| elevenlabs_audio_download_speech_history_items | Download History Items |
| elevenlabs_audio_generate | Compose Music |
| elevenlabs_audio_get_full_from_speech_history_item | Get Audio From History Item |
| elevenlabs_audio_get_speech_history | List Generated Items |
| elevenlabs_audio_get_speech_history_item_by_id | Get History Item |
| elevenlabs_audio_get_transcript_by_id | Get Transcript By Id |
| elevenlabs_audio_isolation | Audio Isolation |
| elevenlabs_audio_isolation_stream | Audio Isolation Stream |
| elevenlabs_audio_separate_song_stems | Stem Separation |
| elevenlabs_audio_sound_generation | Sound Generation |
| elevenlabs_audio_speech_to_speech_full | Speech To Speech |
| elevenlabs_audio_speech_to_speech_stream | Speech To Speech Streaming |
| elevenlabs_audio_speech_to_text | Speech To Text |
| elevenlabs_audio_stream_compose | Stream Composed Music |
| elevenlabs_audio_text_to_dialogue | Text To Dialogue (Multi-Voice) |
| elevenlabs_audio_text_to_dialogue_full_with_timestamps | Text To Dialogue With Timestamps |
| elevenlabs_audio_text_to_dialogue_stream | Text To Dialogue (Multi-Voice) Streaming |
| elevenlabs_audio_text_to_dialogue_stream_with_timestamps | Text To Dialogue Streaming With Timestamps |
| elevenlabs_audio_text_to_speech_full | Text To Speech |
| elevenlabs_audio_text_to_speech_full_with_timestamps | Text To Speech With Timestamps |
| elevenlabs_audio_text_to_speech_stream | Text To Speech Streaming |
| elevenlabs_audio_text_to_speech_stream_with_timestamps | Text To Speech Streaming With Timestamps |
| elevenlabs_audio_upload_song | Upload Music |
| elevenlabs_audio_video_to_music | Video To Music |

elevenlabs_audio_compose_detailed

Compose Music With A Detailed Response Parameters:
| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| output_format | string | No | | Output format of the generated audio, formatted as codec_sample_rate_bitrate. For example, an MP3 with a 22.05kHz sample rate at 32kbps is represented as mp3_22050_32. MP3 with a 192kbps bitrate requires a Creator tier subscription or above; PCM with a 44.1kHz sample rate requires a Pro tier subscription or above. The μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs. |
| composition_plan | object | No | | A detailed composition plan to guide music generation. Cannot be used in conjunction with prompt. |
| finetune_id | object | No | | The ID of the finetune to use for the generation. |
| force_instrumental | boolean | No | | If true, guarantees that the generated song will be instrumental. If false, the song may or may not be instrumental depending on the prompt. Can only be used with prompt. |
| model_id | string | No | | The model to use for the generation. |
| music_length_ms | object | No | | The length of the song to generate in milliseconds. Used only in conjunction with prompt. Must be between 3000ms and 600000ms. Optional; if not provided, the model will choose a length based on the prompt. |
| music_prompt | object | No | | A music prompt. Deprecated; use composition_plan instead. |
| prompt | object | No | | A simple text prompt to generate a song from. Cannot be used in conjunction with composition_plan. |
| respect_sections_durations | boolean | No | | Controls how strictly section durations in the composition_plan are enforced. Only used with composition_plan. When true, the model precisely respects each section's duration_ms from the plan. When false, the model may adjust individual section durations, which generally improves generation quality and latency, while always preserving the total song duration from the plan. |
| seed | object | No | | Random seed to initialize the music generation process. Providing the same seed with the same parameters can help achieve more consistent results, but exact reproducibility is not guaranteed and outputs may change across system updates. Cannot be used in conjunction with prompt. |
| sign_with_c2pa | boolean | No | | Whether to sign the generated song with C2PA. Applicable only to MP3 files. |
| store_for_inpainting | boolean | No | | Whether to store the generated song for inpainting. Only available to enterprise clients with access to the inpainting feature. |
| use_phonetic_names | boolean | No | | If true, proper names in the prompt will be phonetically spelled in the lyrics for better pronunciation by the music model. The original names will be restored in word timestamps. |
| with_timestamps | boolean | No | | Whether to return the timestamps of the words in the generated song. |
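Because `prompt` and `composition_plan` are mutually exclusive, and `music_length_ms` only applies together with `prompt`, it can help to validate the argument combination before invoking the tool. A minimal sketch; the `build_compose_args` helper is hypothetical and not part of the server:

```python
def build_compose_args(prompt=None, composition_plan=None,
                       music_length_ms=None, output_format="mp3_22050_32"):
    """Build arguments for a compose call, enforcing documented constraints."""
    # Exactly one of prompt / composition_plan must be provided.
    if (prompt is None) == (composition_plan is None):
        raise ValueError("provide exactly one of prompt or composition_plan")
    args = {"output_format": output_format}
    if prompt is not None:
        args["prompt"] = prompt
        if music_length_ms is not None:
            # music_length_ms is only valid with prompt, within 3s-600s.
            if not 3000 <= music_length_ms <= 600000:
                raise ValueError("music_length_ms must be between 3000 and 600000")
            args["music_length_ms"] = music_length_ms
    else:
        args["composition_plan"] = composition_plan
    return args
```

The same checks apply to `elevenlabs_audio_generate` and `elevenlabs_audio_stream_compose`, which take the same mutually exclusive pair.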

elevenlabs_audio_compose_plan

Generate Composition Plan Parameters:
| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| model_id | string | No | | The model to use for the generation. |
| music_length_ms | object | No | | The length of the composition plan to generate in milliseconds. Must be between 3000ms and 600000ms. Optional; if not provided, the model will choose a length based on the prompt. |
| prompt | string | Yes | | A simple text prompt to compose a plan from. |
| source_composition_plan | object | No | | An optional composition plan to use as a source for the new composition plan. |

elevenlabs_audio_delete_speech_history_item

Delete History Item Parameters:
| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| history_item_id | string | Yes | | History item ID. Use GET https://api.elevenlabs.io/v1/history to retrieve a list of history items and their IDs. |

elevenlabs_audio_delete_transcript_by_id

Delete Transcript By Id Parameters:
| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| transcription_id | string | Yes | | The unique ID of the transcript to delete. |

elevenlabs_audio_download_speech_history_items

Download History Items Parameters:
| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| history_item_ids | any[] | Yes | | A list of history item IDs to download. You can get the IDs of history items and other metadata using the GET https://api.elevenlabs.io/v1/history endpoint. |
| output_format | object | No | | Output format to transcode the audio file; can be wav or default. |

elevenlabs_audio_generate

Compose Music Parameters:
| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| output_format | string | No | | Output format of the generated audio, formatted as codec_sample_rate_bitrate. For example, an MP3 with a 22.05kHz sample rate at 32kbps is represented as mp3_22050_32. MP3 with a 192kbps bitrate requires a Creator tier subscription or above; PCM with a 44.1kHz sample rate requires a Pro tier subscription or above. The μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs. |
| composition_plan | object | No | | A detailed composition plan to guide music generation. Cannot be used in conjunction with prompt. |
| finetune_id | object | No | | The ID of the finetune to use for the generation. |
| force_instrumental | boolean | No | | If true, guarantees that the generated song will be instrumental. If false, the song may or may not be instrumental depending on the prompt. Can only be used with prompt. |
| model_id | string | No | | The model to use for the generation. |
| music_length_ms | object | No | | The length of the song to generate in milliseconds. Used only in conjunction with prompt. Must be between 3000ms and 600000ms. Optional; if not provided, the model will choose a length based on the prompt. |
| music_prompt | object | No | | A music prompt. Deprecated; use composition_plan instead. |
| prompt | object | No | | A simple text prompt to generate a song from. Cannot be used in conjunction with composition_plan. |
| respect_sections_durations | boolean | No | | Controls how strictly section durations in the composition_plan are enforced. Only used with composition_plan. When true, the model precisely respects each section's duration_ms from the plan. When false, the model may adjust individual section durations, which generally improves generation quality and latency, while always preserving the total song duration from the plan. |
| seed | object | No | | Random seed to initialize the music generation process. Providing the same seed with the same parameters can help achieve more consistent results, but exact reproducibility is not guaranteed and outputs may change across system updates. Cannot be used in conjunction with prompt. |
| sign_with_c2pa | boolean | No | | Whether to sign the generated song with C2PA. Applicable only to MP3 files. |
| store_for_inpainting | boolean | No | | Whether to store the generated song for inpainting. Only available to enterprise clients with access to the inpainting feature. |
| use_phonetic_names | boolean | No | | If true, proper names in the prompt will be phonetically spelled in the lyrics for better pronunciation by the music model. The original names will be restored in word timestamps. |

elevenlabs_audio_get_full_from_speech_history_item

Get Audio From History Item Parameters:
| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| history_item_id | string | Yes | | History item ID. Use GET https://api.elevenlabs.io/v1/history to retrieve a list of history items and their IDs. |

elevenlabs_audio_get_speech_history

List Generated Items Parameters:
| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| page_size | integer | No | | How many history items to return at maximum. Cannot exceed 1000; defaults to 100. |
| start_after_history_item_id | object | No | | ID after which to start fetching; use this parameter to paginate across a large collection of history items. If not provided, history items are fetched starting from the most recently created one, ordered descending by creation date. |
| voice_id | object | No | | Voice ID to filter by. Use GET https://api.elevenlabs.io/v1/voices to retrieve a list of voices and their IDs. |
| model_id | object | No | | Model ID to filter history items by. |
| date_before_unix | object | No | | Unix timestamp to filter history items before this date (exclusive). |
| date_after_unix | object | No | | Unix timestamp to filter history items after this date (inclusive). |
| sort_direction | object | No | | Sort direction for the results. |
| search | object | No | | Search term used for filtering. |
| source | object | No | | Source of the generated history item. |
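The `start_after_history_item_id` cursor supports paging through a large history: fetch a page, then pass the last item's ID to fetch the next one, stopping when a page comes back empty. A sketch of that loop; `fetch_page` below is a stub standing in for the actual `elevenlabs_audio_get_speech_history` call:

```python
def fetch_page(page_size, start_after=None, _items=tuple(range(25))):
    """Stub for the history tool: returns up to page_size items after the cursor."""
    start = 0 if start_after is None else _items.index(start_after) + 1
    return [{"history_item_id": i} for i in _items[start:start + page_size]]

def iter_history(page_size=10):
    """Yield every history item by following the pagination cursor."""
    cursor = None
    while True:
        page = fetch_page(page_size, start_after=cursor)
        if not page:
            return
        yield from page
        cursor = page[-1]["history_item_id"]  # next request starts after this ID
```

With a real client, `fetch_page` would invoke the tool with `page_size` and `start_after_history_item_id`; the termination condition (empty page) is the same.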

elevenlabs_audio_get_speech_history_item_by_id

Get History Item Parameters:
| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| history_item_id | string | Yes | | History item ID. Use GET https://api.elevenlabs.io/v1/history to retrieve a list of history items and their IDs. |

elevenlabs_audio_get_transcript_by_id

Get Transcript By Id Parameters:
| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| transcription_id | string | Yes | | The unique ID of the transcript to retrieve. |

elevenlabs_audio_isolation

Audio Isolation Parameters:
| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| audio | string | Yes | | The audio file from which vocals/speech will be isolated. |
| file_format | object | No | | The format of the input audio. Options are 'pcm_s16le_16' or 'other'. For pcm_s16le_16, the input audio must be 16-bit PCM at a 16kHz sample rate, single channel (mono), and little-endian byte order. Latency will be lower than with passing an encoded waveform. |
| preview_b64 | object | No | | Optional preview image base64 for tracking this generation. |

elevenlabs_audio_isolation_stream

Audio Isolation Stream Parameters:
| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| audio | string | Yes | | The audio file from which vocals/speech will be isolated. |
| file_format | object | No | | The format of the input audio. Options are 'pcm_s16le_16' or 'other'. For pcm_s16le_16, the input audio must be 16-bit PCM at a 16kHz sample rate, single channel (mono), and little-endian byte order. Latency will be lower than with passing an encoded waveform. |

elevenlabs_audio_separate_song_stems

Stem Separation Parameters:
| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| output_format | string | No | | Output format of the generated audio, formatted as codec_sample_rate_bitrate. For example, an MP3 with a 22.05kHz sample rate at 32kbps is represented as mp3_22050_32. MP3 with a 192kbps bitrate requires a Creator tier subscription or above; PCM with a 44.1kHz sample rate requires a Pro tier subscription or above. The μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs. |
| file | string | Yes | | The audio file to separate into stems. |
| sign_with_c2pa | boolean | No | | Whether to sign the generated song with C2PA. Applicable only to MP3 files. |
| stem_variation_id | string | No | | The ID of the stem variation to use. |

elevenlabs_audio_sound_generation

Sound Generation Parameters:
| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| output_format | string | No | | Output format of the generated audio, formatted as codec_sample_rate_bitrate. For example, an MP3 with a 22.05kHz sample rate at 32kbps is represented as mp3_22050_32. MP3 with a 192kbps bitrate requires a Creator tier subscription or above; PCM with a 44.1kHz sample rate requires a Pro tier subscription or above. The μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs. |
| duration_seconds | object | No | | The duration of the generated sound in seconds. Must be at least 0.5 and at most 30. If set to None, the optimal duration is guessed from the prompt. Defaults to None. |
| loop | boolean | No | | Whether to create a sound effect that loops smoothly. Only available for the 'eleven_text_to_sound_v2' model. |
| model_id | string | No | | The model ID to use for the sound generation. |
| prompt_influence | object | No | | A higher prompt influence makes your generation follow the prompt more closely while also making generations less variable. Must be a value between 0 and 1. Defaults to 0.3. |
| text | string | Yes | | The text that will get converted into a sound effect. |
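The numeric constraints above (`duration_seconds` in 0.5–30, `prompt_influence` in 0–1) are easy to check before calling the tool. A minimal sketch; `sound_generation_args` is a hypothetical client-side helper, not part of the server:

```python
def sound_generation_args(text, duration_seconds=None, prompt_influence=0.3):
    """Assemble sound-generation arguments, enforcing documented ranges."""
    if duration_seconds is not None and not 0.5 <= duration_seconds <= 30:
        raise ValueError("duration_seconds must be between 0.5 and 30")
    if not 0.0 <= prompt_influence <= 1.0:
        raise ValueError("prompt_influence must be between 0 and 1")
    args = {"text": text, "prompt_influence": prompt_influence}
    if duration_seconds is not None:
        args["duration_seconds"] = duration_seconds
    # Omitting duration_seconds lets the model guess a duration from the prompt.
    return args
```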

elevenlabs_audio_speech_to_speech_full

Speech To Speech Parameters:
| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| voice_id | string | Yes | | Voice ID to be used. You can use https://api.elevenlabs.io/v1/voices to list all the available voices. |
| enable_logging | boolean | No | | When enable_logging is set to false, zero retention mode is used for the request. History features, including request stitching, are unavailable for such requests. Zero retention mode may only be used by enterprise customers. |
| optimize_streaming_latency | object | No | | You can turn on latency optimizations at some cost of quality. The best possible final latency varies by model. Possible values: 0 (default, no latency optimizations), 1 (normal, about 50% of the latency improvement of option 3), 2 (strong, about 75% of the latency improvement of option 3), 3 (max latency optimizations), 4 (max latency optimizations with the text normalizer turned off for even more latency savings; best latency, but can mispronounce e.g. numbers and dates). Defaults to None. |
| output_format | string | No | | Output format of the generated audio, formatted as codec_sample_rate_bitrate. For example, an MP3 with a 22.05kHz sample rate at 32kbps is represented as mp3_22050_32. MP3 with a 192kbps bitrate requires a Creator tier subscription or above; PCM with a 44.1kHz sample rate requires a Pro tier subscription or above. The μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs. |
| audio | string | Yes | | The audio file which holds the content and emotion that will control the generated speech. |
| file_format | object | No | | The format of the input audio. Options are 'pcm_s16le_16' or 'other'. For pcm_s16le_16, the input audio must be 16-bit PCM at a 16kHz sample rate, single channel (mono), and little-endian byte order. Latency will be lower than with passing an encoded waveform. |
| model_id | string | No | | Identifier of the model that will be used. You can query models using GET /v1/models. The model needs to support speech to speech; you can check this using the can_do_voice_conversion property. |
| remove_background_noise | boolean | No | | If set, removes the background noise from your audio input using our audio isolation model. Only applies to Voice Changer. |
| seed | object | No | | If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed. Must be an integer between 0 and 4294967295. |
| voice_settings | object | No | | Voice settings overriding stored settings for the given voice. They are applied only on the given request. Must be sent as a JSON-encoded string. |
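Note that `voice_settings` must be a JSON-encoded string, not a nested object, so it is worth serializing it explicitly. A sketch, with the caveat that the field names inside the settings object (`stability`, `similarity_boost`) are assumptions drawn from ElevenLabs' general voice-settings documentation, and the voice ID is a placeholder:

```python
import json

# Settings dict serialized to a string, as the parameter requires.
voice_settings = {"stability": 0.5, "similarity_boost": 0.75}

args = {
    "voice_id": "YOUR_VOICE_ID",            # placeholder, list voices via /v1/voices
    "audio": "input.wav",
    "voice_settings": json.dumps(voice_settings),  # string, not an object
}
```

Passing the raw dict instead of `json.dumps(...)` is a common mistake with this parameter.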

elevenlabs_audio_speech_to_speech_stream

Speech To Speech Streaming Parameters:
| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| voice_id | string | Yes | | Voice ID to be used. You can use https://api.elevenlabs.io/v1/voices to list all the available voices. |
| enable_logging | boolean | No | | When enable_logging is set to false, zero retention mode is used for the request. History features, including request stitching, are unavailable for such requests. Zero retention mode may only be used by enterprise customers. |
| optimize_streaming_latency | object | No | | You can turn on latency optimizations at some cost of quality. The best possible final latency varies by model. Possible values: 0 (default, no latency optimizations), 1 (normal, about 50% of the latency improvement of option 3), 2 (strong, about 75% of the latency improvement of option 3), 3 (max latency optimizations), 4 (max latency optimizations with the text normalizer turned off for even more latency savings; best latency, but can mispronounce e.g. numbers and dates). Defaults to None. |
| output_format | string | No | | Output format of the generated audio, formatted as codec_sample_rate_bitrate. For example, an MP3 with a 22.05kHz sample rate at 32kbps is represented as mp3_22050_32. MP3 with a 192kbps bitrate requires a Creator tier subscription or above; PCM with a 44.1kHz sample rate requires a Pro tier subscription or above. The μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs. |
| audio | string | Yes | | The audio file which holds the content and emotion that will control the generated speech. |
| file_format | object | No | | The format of the input audio. Options are 'pcm_s16le_16' or 'other'. For pcm_s16le_16, the input audio must be 16-bit PCM at a 16kHz sample rate, single channel (mono), and little-endian byte order. Latency will be lower than with passing an encoded waveform. |
| model_id | string | No | | Identifier of the model that will be used. You can query models using GET /v1/models. The model needs to support speech to speech; you can check this using the can_do_voice_conversion property. |
| remove_background_noise | boolean | No | | If set, removes the background noise from your audio input using our audio isolation model. Only applies to Voice Changer. |
| seed | object | No | | If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed. Must be an integer between 0 and 4294967295. |
| voice_settings | object | No | | Voice settings overriding stored settings for the given voice. They are applied only on the given request. Must be sent as a JSON-encoded string. |

elevenlabs_audio_speech_to_text

Speech To Text Parameters:
| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| enable_logging | boolean | No | | When enable_logging is set to false, zero retention mode is used for the request. Log and transcript storage features are unavailable for such requests. Zero retention mode may only be used by enterprise customers. |
| additional_formats | any[] | No | | Additional formats. |
| cloud_storage_url | object | No | | The HTTPS URL of the file to transcribe. Exactly one of the file or cloud_storage_url parameters must be provided. The file must be accessible via HTTPS and the file size must be less than 2GB. Any valid HTTPS URL is accepted, including URLs from cloud storage providers (AWS S3, Google Cloud Storage, Cloudflare R2, etc.), CDNs, or any other HTTPS source. URLs can be pre-signed or include authentication tokens in query parameters. |
| diarization_threshold | object | No | | Diarization threshold to apply during speaker diarization. A higher value lowers the chance of one speaker being diarized as two different speakers, but raises the chance of two different speakers being diarized as one (fewer total speakers predicted). A lower value does the opposite (more total speakers predicted). Can only be set when diarize=True and num_speakers=None. Defaults to None, in which case a threshold is chosen based on the model_id (usually 0.22). |
| diarize | boolean | No | | Whether to annotate which speaker is currently talking in the uploaded file. |
| entity_detection | object | No | | Detect entities in the transcript. Can be 'all' to detect all entities, a single entity type or category string, or a list of entity types/categories. Categories include 'pii', 'phi', 'pci', 'other', 'offensive_language'. When enabled, detected entities will be returned in the 'entities' field with their text, type, and character positions. Usage of this parameter will incur additional costs. |
| entity_redaction | object | No | | Redact entities from the transcript text. Accepts the same format as entity_detection: 'all', a category ('pii', 'phi'), or specific entity types. Must be a subset of entity_detection. When redaction is enabled, the entities field will not be returned. |
| entity_redaction_mode | string | No | | How to format redacted entities. 'redacted' replaces with {REDACTED}, 'entity_type' replaces with {ENTITY_TYPE}, 'enumerated_entity_type' replaces with {ENTITY_TYPE_N} where N enumerates each occurrence. Only used when entity_redaction is set. |
| file | object | No | | The file to transcribe (100ms minimum audio length). All major audio and video formats are supported. Exactly one of the file or cloud_storage_url parameters must be provided. The file size must be less than 3.0GB. |
| file_format | string | No | | The format of the input audio. Options are 'pcm_s16le_16' or 'other'. For pcm_s16le_16, the input audio must be 16-bit PCM at a 16kHz sample rate, single channel (mono), and little-endian byte order. Latency will be lower than with passing an encoded waveform. |
| keyterms | any[] | No | | A list of keyterms to bias the transcription towards. The keyterms are words or phrases you want the model to recognise more accurately. The number of keyterms cannot exceed 1000. The length of each keyterm must be less than 50 characters. Keyterms can contain at most 5 words (after normalisation). For example ["hello", "world", "technical term"]. Usage of this parameter will incur additional costs. When more than 100 keyterms are provided, a minimum billable duration of 20 seconds applies per request. |
| language_code | object | No | | An ISO-639-1 or ISO-639-3 language_code corresponding to the language of the audio file. Can sometimes improve transcription performance if known beforehand. Defaults to null, in which case the language is predicted automatically. |
| model_id | string | Yes | | The ID of the model to use for transcription. |
| no_verbatim | boolean | No | | If true, the transcription will exclude filler words, false starts, and non-speech sounds. Only supported with the scribe_v2 model. |
| num_speakers | object | No | | The maximum number of speakers talking in the uploaded file. Can help with predicting who speaks when. At most 32 speakers can be predicted. Defaults to null, in which case the number of speakers is set to the maximum value the model supports. |
| seed | object | No | | If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed. Must be an integer between 0 and 2147483647. |
| source_url | object | No | | The URL of an audio or video file to transcribe. Supports hosted video or audio files, YouTube video URLs, TikTok video URLs, and other video hosting services. |
| tag_audio_events | boolean | No | | Whether to tag audio events like (laughter), (footsteps), etc. in the transcription. |
| temperature | object | No | | Controls the randomness of the transcription output. Accepts values between 0.0 and 2.0, where higher values result in more diverse and less deterministic results. If omitted, a temperature based on the selected model is used, usually 0. |
| timestamps_granularity | string | No | | The granularity of the timestamps in the transcription. 'word' provides word-level timestamps and 'character' provides character-level timestamps per word. |
| use_multi_channel | boolean | No | | Whether the audio file contains multiple channels where each channel contains a single speaker. When enabled, each channel will be transcribed independently and the results will be combined. Each word in the response will include a 'channel_index' field indicating which channel it was spoken on. A maximum of 5 channels is supported. |
| webhook | boolean | No | | Whether to send the transcription result to configured speech-to-text webhooks. If set, the request will return early without the transcription, which will be delivered later via webhook. |
| webhook_id | object | No | | Optional specific webhook ID to send the transcription result to. Only valid when webhook is set to true. If not provided, the transcription will be sent to all configured speech-to-text webhooks. |
| webhook_metadata | object | No | | Optional metadata to be included in the webhook response. This should be a JSON string representing an object with a maximum depth of 2 levels and a maximum size of 16KB. Useful for tracking internal IDs, job references, or other contextual information. |
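Two of the constraints above are easy to get wrong: exactly one of `file` or `cloud_storage_url` must be set, and `keyterms` is limited to 1000 entries of fewer than 50 characters and at most 5 words each. A client-side sketch; `speech_to_text_args` is a hypothetical helper, and the model ID shown is only an example:

```python
def speech_to_text_args(model_id, file=None, cloud_storage_url=None, keyterms=None):
    """Assemble speech-to-text arguments, enforcing documented constraints."""
    # Exactly one audio source must be provided.
    if (file is None) == (cloud_storage_url is None):
        raise ValueError("provide exactly one of file or cloud_storage_url")
    if keyterms is not None:
        if len(keyterms) > 1000:
            raise ValueError("at most 1000 keyterms")
        for term in keyterms:
            if len(term) >= 50:
                raise ValueError(f"keyterm too long: {term!r}")
            if len(term.split()) > 5:
                raise ValueError(f"keyterm has more than 5 words: {term!r}")
    args = {"model_id": model_id}
    if file is not None:
        args["file"] = file
    else:
        args["cloud_storage_url"] = cloud_storage_url
    if keyterms:
        args["keyterms"] = keyterms
    return args
```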

elevenlabs_audio_stream_compose

Stream Composed Music Parameters:
| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| output_format | string | No | | Output format of the generated audio, formatted as codec_sample_rate_bitrate. For example, an MP3 with a 22.05kHz sample rate at 32kbps is represented as mp3_22050_32. MP3 with a 192kbps bitrate requires a Creator tier subscription or above; PCM with a 44.1kHz sample rate requires a Pro tier subscription or above. The μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs. |
| composition_plan | object | No | | A detailed composition plan to guide music generation. Cannot be used in conjunction with prompt. |
| finetune_id | object | No | | The ID of the finetune to use for the generation. |
| force_instrumental | boolean | No | | If true, guarantees that the generated song will be instrumental. If false, the song may or may not be instrumental depending on the prompt. Can only be used with prompt. |
| model_id | string | No | | The model to use for the generation. |
| music_length_ms | object | No | | The length of the song to generate in milliseconds. Used only in conjunction with prompt. Must be between 3000ms and 600000ms. Optional; if not provided, the model will choose a length based on the prompt. |
| music_prompt | object | No | | A music prompt. Deprecated; use composition_plan instead. |
| prompt | object | No | | A simple text prompt to generate a song from. Cannot be used in conjunction with composition_plan. |
| seed | object | No | | Random seed to initialize the music generation process. Providing the same seed with the same parameters can help achieve more consistent results, but exact reproducibility is not guaranteed and outputs may change across system updates. Cannot be used in conjunction with prompt. |
| store_for_inpainting | boolean | No | | Whether to store the generated song for inpainting. Only available to enterprise clients with access to the inpainting feature. |
| use_phonetic_names | boolean | No | | If true, proper names in the prompt will be phonetically spelled in the lyrics for better pronunciation by the music model. The original names will be restored in word timestamps. |

elevenlabs_audio_text_to_dialogue

Text To Dialogue (Multi-Voice) Parameters:
| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| output_format | object | No | | Output format of the generated audio, formatted as codec_sample_rate_bitrate. For example, an MP3 with a 22.05kHz sample rate at 32kbps is represented as mp3_22050_32. MP3 with a 192kbps bitrate requires a Creator tier subscription or above; PCM and WAV formats with a 44.1kHz sample rate require a Pro tier subscription or above. The μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs. |
| apply_text_normalization | string | No | | Controls text normalization with three modes: 'auto', 'on', and 'off'. When set to 'auto', the system automatically decides whether to apply text normalization (e.g., spelling out numbers). With 'on', text normalization is always applied; with 'off', it is skipped. |
| avatar_context | object | No | | Avatar context when this generation is made from the Avatars video editor. |
| inputs | any[] | Yes | | A list of dialogue inputs, each containing text and a voice ID which will be converted into speech. The maximum number of unique voice IDs is 10. |
| language_code | object | No | | Language code (ISO 639-1) used to enforce a language for the model and text normalization. If the model does not support the provided language code, an error will be returned. |
| model_id | string | No | | Identifier of the model that will be used. You can query models using GET /v1/models. The model needs to support text to speech; you can check this using the can_do_text_to_speech property. |
| pronunciation_dictionary_locators | object | No | | A list of pronunciation dictionary locators (id, version_id) to be applied to the text. They will be applied in order. You may have up to 3 locators per request. |
| seed | object | No | | If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed. Must be an integer between 0 and 4294967295. |
| settings | object | No | | Settings controlling the dialogue generation. |
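Each entry in `inputs` pairs a line of text with the voice that should speak it, and a single request may use at most 10 unique voice IDs. A small sketch of that shape and check; `validate_dialogue_inputs` is a hypothetical client-side helper, and the voice IDs are placeholders:

```python
def validate_dialogue_inputs(inputs):
    """Check the documented limit of 10 unique voice IDs per dialogue request."""
    voices = {item["voice_id"] for item in inputs}
    if len(voices) > 10:
        raise ValueError("at most 10 unique voice IDs per request")
    return inputs

# Example dialogue: two speakers alternating lines (placeholder voice IDs).
dialogue = validate_dialogue_inputs([
    {"text": "Did you finish the report?", "voice_id": "VOICE_A"},
    {"text": "Almost, I just need the numbers.", "voice_id": "VOICE_B"},
    {"text": "I'll send them over now.", "voice_id": "VOICE_A"},
])
```

The same `inputs` shape and limit apply to the timestamped and streaming dialogue variants below.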

elevenlabs_audio_text_to_dialogue_full_with_timestamps

Text To Dialogue With Timestamps Parameters:
| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| output_format | object | No | | Output format of the generated audio, formatted as codec_sample_rate_bitrate. For example, an MP3 with a 22.05kHz sample rate at 32kbps is represented as mp3_22050_32. MP3 with a 192kbps bitrate requires a Creator tier subscription or above; PCM and WAV formats with a 44.1kHz sample rate require a Pro tier subscription or above. The μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs. |
| apply_text_normalization | string | No | | Controls text normalization with three modes: 'auto', 'on', and 'off'. When set to 'auto', the system automatically decides whether to apply text normalization (e.g., spelling out numbers). With 'on', text normalization is always applied; with 'off', it is skipped. |
| inputs | any[] | Yes | | A list of dialogue inputs, each containing text and a voice ID which will be converted into speech. The maximum number of unique voice IDs is 10. |
| language_code | object | No | | Language code (ISO 639-1) used to enforce a language for the model and text normalization. If the model does not support the provided language code, an error will be returned. |
| model_id | string | No | | Identifier of the model that will be used. You can query models using GET /v1/models. The model needs to support text to speech; you can check this using the can_do_text_to_speech property. |
| pronunciation_dictionary_locators | object | No | | A list of pronunciation dictionary locators (id, version_id) to be applied to the text. They will be applied in order. You may have up to 3 locators per request. |
| seed | object | No | | If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed. Must be an integer between 0 and 4294967295. |
| settings | object | No | | Settings controlling the dialogue generation. |

elevenlabs_audio_text_to_dialogue_stream

Text To Dialogue (Multi-Voice) Streaming Parameters:
- output_format (string, optional): Output format of the generated audio, formatted as codec_sample_rate_bitrate; for example, an MP3 at a 22.05 kHz sample rate and 32 kbps bitrate is mp3_22050_32. MP3 at 192 kbps requires the Creator tier or above; PCM at 44.1 kHz requires the Pro tier or above. Note that the μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs.
- apply_text_normalization (string, optional): Controls text normalization with three modes: ‘auto’, ‘on’, and ‘off’. With ‘auto’, the system decides automatically whether to apply normalization (e.g., spelling out numbers); ‘on’ always applies it; ‘off’ skips it.
- avatar_context (object, optional): Avatar context when this generation is made from the Avatars video editor.
- inputs (any[], required): A list of dialogue inputs, each containing text and the voice ID that will speak it. At most 10 unique voice IDs per request.
- language_code (object, optional): Language code (ISO 639-1) used to enforce a language for the model and text normalization. If the model does not support the provided language code, an error is returned.
- model_id (string, optional): Identifier of the model to use; query available models via GET /v1/models. The model must support text to speech (check the can_do_text_to_speech property).
- pronunciation_dictionary_locators (object, optional): A list of pronunciation dictionary locators (id, version_id) applied to the text, in order. At most 3 locators per request.
- seed (object, optional): If specified, the system makes a best effort to sample deterministically, so repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed. Must be an integer between 0 and 4294967295.
- settings (object, optional): Settings controlling the dialogue generation.
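To make the inputs shape concrete, here is a minimal sketch of arguments for a text-to-dialogue call. The voice IDs are hypothetical placeholders, and the exact wire format depends on your MCP client; the sketch only mirrors the parameter names documented above.

```python
# Illustrative arguments for elevenlabs_audio_text_to_dialogue_stream.
# Voice IDs are hypothetical placeholders, not real ElevenLabs voices.
payload = {
    "inputs": [
        {"text": "Welcome to the show!", "voice_id": "VOICE_A"},
        {"text": "Thanks, great to be here.", "voice_id": "VOICE_B"},
    ],
    "output_format": "mp3_22050_32",      # codec_sample_rate_bitrate
    "apply_text_normalization": "auto",   # 'auto' | 'on' | 'off'
    "seed": 12345,                        # best-effort determinism, 0..4294967295
}

# The docs cap unique voice IDs at 10 per request; a quick client-side check:
unique_voices = {item["voice_id"] for item in payload["inputs"]}
assert len(unique_voices) <= 10
```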

elevenlabs_audio_text_to_dialogue_stream_with_timestamps

Text To Dialogue Streaming With Timestamps Parameters:
- output_format (string, optional): Output format of the generated audio, formatted as codec_sample_rate_bitrate; for example, an MP3 at a 22.05 kHz sample rate and 32 kbps bitrate is mp3_22050_32. MP3 at 192 kbps requires the Creator tier or above; PCM at 44.1 kHz requires the Pro tier or above. Note that the μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs.
- apply_text_normalization (string, optional): Controls text normalization with three modes: ‘auto’, ‘on’, and ‘off’. With ‘auto’, the system decides automatically whether to apply normalization (e.g., spelling out numbers); ‘on’ always applies it; ‘off’ skips it.
- inputs (any[], required): A list of dialogue inputs, each containing text and the voice ID that will speak it. At most 10 unique voice IDs per request.
- language_code (object, optional): Language code (ISO 639-1) used to enforce a language for the model and text normalization. If the model does not support the provided language code, an error is returned.
- model_id (string, optional): Identifier of the model to use; query available models via GET /v1/models. The model must support text to speech (check the can_do_text_to_speech property).
- pronunciation_dictionary_locators (object, optional): A list of pronunciation dictionary locators (id, version_id) applied to the text, in order. At most 3 locators per request.
- seed (object, optional): If specified, the system makes a best effort to sample deterministically, so repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed. Must be an integer between 0 and 4294967295.
- settings (object, optional): Settings controlling the dialogue generation.

elevenlabs_audio_text_to_speech_full

Text To Speech Parameters:
- voice_id (string, required): Voice ID to use; list all available voices via https://api.elevenlabs.io/v1/voices.
- enable_logging (boolean, optional): When set to false, zero retention mode is used for the request, so history features (including request stitching) are unavailable. Zero retention mode may only be used by enterprise customers.
- optimize_streaming_latency (object, optional): Turns on latency optimizations at some cost of quality; the best possible final latency varies by model. Possible values: 0 (default, no latency optimizations), 1 (normal, about 50% of the possible improvement of option 3), 2 (strong, about 75% of the possible improvement of option 3), 3 (max latency optimizations), 4 (max latency optimizations with the text normalizer also turned off for further savings; best latency, but may mispronounce e.g. numbers and dates). Defaults to None.
- output_format (string, optional): Output format of the generated audio, formatted as codec_sample_rate_bitrate; for example, an MP3 at a 22.05 kHz sample rate and 32 kbps bitrate is mp3_22050_32. MP3 at 192 kbps requires the Creator tier or above; PCM and WAV formats at 44.1 kHz require the Pro tier or above. Note that the μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs.
- apply_language_text_normalization (boolean, optional): Controls language text normalization, which helps with proper pronunciation in some supported languages. WARNING: this parameter can heavily increase request latency. Currently only supported for Japanese.
- apply_text_normalization (string, optional): Controls text normalization with three modes: ‘auto’, ‘on’, and ‘off’. With ‘auto’, the system decides automatically whether to apply normalization (e.g., spelling out numbers); ‘on’ always applies it; ‘off’ skips it.
- avatar_context (object, optional): Avatar context when this generation is made from the Avatars video editor.
- language_code (object, optional): Language code (ISO 639-1) used to enforce a language for the model and text normalization. If the model does not support the provided language code, an error is returned.
- model_id (string, optional): Identifier of the model to use; query available models via GET /v1/models. The model must support text to speech (check the can_do_text_to_speech property).
- next_request_ids (object, optional): A list of request_ids of samples that come after this generation; especially useful for maintaining speech continuity when regenerating a sample that had audio quality issues. For example, if you have generated 3 clips and want to improve clip 2, passing clip 3’s request_id in next_request_ids (and clip 1’s in previous_request_ids) helps maintain natural flow in the combined speech. Results are best when the same model is used across generations. If both next_text and next_request_ids are sent, next_text is ignored. At most 3 request_ids can be sent.
- next_text (object, optional): The text that comes after the text of the current request. Can be used to improve speech continuity when concatenating multiple generations, or to influence continuity in the current generation.
- previous_request_ids (object, optional): A list of request_ids of samples generated before this one. Can be used to improve speech continuity when splitting a large task into multiple requests. Results are best when the same model is used across generations. If both previous_text and previous_request_ids are sent, previous_text is ignored. At most 3 request_ids can be sent.
- previous_text (object, optional): The text that came before the text of the current request. Can be used to improve speech continuity when concatenating multiple generations, or to influence continuity in the current generation.
- pronunciation_dictionary_locators (object, optional): A list of pronunciation dictionary locators (id, version_id) applied to the text, in order. At most 3 locators per request.
- seed (object, optional): If specified, the system makes a best effort to sample deterministically, so repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed. Must be an integer between 0 and 4294967295.
- text (string, required): The text that will be converted into speech.
- use_pvc_as_ivc (boolean, optional): If true, the IVC version of the voice is used for generation instead of the PVC version. This is a temporary workaround for higher latency in PVC versions.
- voice_settings (object, optional): Voice settings overriding the stored settings for the given voice, applied only to this request.
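As a worked example of the request-stitching parameters, the sketch below builds arguments to regenerate the middle clip of three, passing the neighbors’ request IDs as described above. All IDs are hypothetical placeholders; real request IDs come from earlier generation responses.

```python
# Regenerating clip 2 of 3 with request stitching (illustrative only;
# voice_id, model_id, and the request IDs are hypothetical placeholders).
regenerate_clip_2 = {
    "voice_id": "VOICE_ID_PLACEHOLDER",
    "text": "This is the middle clip, regenerated.",
    "previous_request_ids": ["req_clip_1"],  # clip before this one, max 3 IDs
    "next_request_ids": ["req_clip_3"],      # clip after this one, max 3 IDs
    "model_id": "MODEL_ID_PLACEHOLDER",      # same model across clips works best
}

# Per the docs, next_text/previous_text would be ignored here,
# since the corresponding request-ID lists are supplied.
assert len(regenerate_clip_2["previous_request_ids"]) <= 3
assert len(regenerate_clip_2["next_request_ids"]) <= 3
```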

elevenlabs_audio_text_to_speech_full_with_timestamps

Text To Speech With Timestamps Parameters:
- voice_id (string, required): Voice ID to use; list all available voices via https://api.elevenlabs.io/v1/voices.
- enable_logging (boolean, optional): When set to false, zero retention mode is used for the request, so history features (including request stitching) are unavailable. Zero retention mode may only be used by enterprise customers.
- optimize_streaming_latency (object, optional): Turns on latency optimizations at some cost of quality; the best possible final latency varies by model. Possible values: 0 (default, no latency optimizations), 1 (normal, about 50% of the possible improvement of option 3), 2 (strong, about 75% of the possible improvement of option 3), 3 (max latency optimizations), 4 (max latency optimizations with the text normalizer also turned off for further savings; best latency, but may mispronounce e.g. numbers and dates). Defaults to None.
- output_format (string, optional): Output format of the generated audio, formatted as codec_sample_rate_bitrate; for example, an MP3 at a 22.05 kHz sample rate and 32 kbps bitrate is mp3_22050_32. MP3 at 192 kbps requires the Creator tier or above; PCM and WAV formats at 44.1 kHz require the Pro tier or above. Note that the μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs.
- apply_language_text_normalization (boolean, optional): Controls language text normalization, which helps with proper pronunciation in some supported languages. WARNING: this parameter can heavily increase request latency. Currently only supported for Japanese.
- apply_text_normalization (string, optional): Controls text normalization with three modes: ‘auto’, ‘on’, and ‘off’. With ‘auto’, the system decides automatically whether to apply normalization (e.g., spelling out numbers); ‘on’ always applies it; ‘off’ skips it.
- language_code (object, optional): Language code (ISO 639-1) used to enforce a language for the model and text normalization. If the model does not support the provided language code, an error is returned.
- model_id (string, optional): Identifier of the model to use; query available models via GET /v1/models. The model must support text to speech (check the can_do_text_to_speech property).
- next_request_ids (any[], optional): A list of request_ids of samples that come after this generation; especially useful for maintaining speech continuity when regenerating a sample that had audio quality issues. For example, if you have generated 3 clips and want to improve clip 2, passing clip 3’s request_id in next_request_ids (and clip 1’s in previous_request_ids) helps maintain natural flow in the combined speech. Results are best when the same model is used across generations. If both next_text and next_request_ids are sent, next_text is ignored. At most 3 request_ids can be sent.
- next_text (object, optional): The text that comes after the text of the current request. Can be used to improve speech continuity when concatenating multiple generations, or to influence continuity in the current generation.
- previous_request_ids (any[], optional): A list of request_ids of samples generated before this one. Can be used to improve speech continuity when splitting a large task into multiple requests. Results are best when the same model is used across generations. If both previous_text and previous_request_ids are sent, previous_text is ignored. At most 3 request_ids can be sent.
- previous_text (object, optional): The text that came before the text of the current request. Can be used to improve speech continuity when concatenating multiple generations, or to influence continuity in the current generation.
- pronunciation_dictionary_locators (any[], optional): A list of pronunciation dictionary locators (id, version_id) applied to the text, in order. At most 3 locators per request.
- seed (object, optional): If specified, the system makes a best effort to sample deterministically, so repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed. Must be an integer between 0 and 4294967295.
- text (string, required): The text that will be converted into speech.
- use_pvc_as_ivc (boolean, optional): If true, the IVC version of the voice is used for generation instead of the PVC version. This is a temporary workaround for higher latency in PVC versions.
- voice_settings (object, optional): Voice settings overriding the stored settings for the given voice, applied only to this request.

elevenlabs_audio_text_to_speech_stream

Text To Speech Streaming Parameters:
- voice_id (string, required): Voice ID to use; list all available voices via https://api.elevenlabs.io/v1/voices.
- enable_logging (boolean, optional): When set to false, zero retention mode is used for the request, so history features (including request stitching) are unavailable. Zero retention mode may only be used by enterprise customers.
- optimize_streaming_latency (object, optional): Turns on latency optimizations at some cost of quality; the best possible final latency varies by model. Possible values: 0 (default, no latency optimizations), 1 (normal, about 50% of the possible improvement of option 3), 2 (strong, about 75% of the possible improvement of option 3), 3 (max latency optimizations), 4 (max latency optimizations with the text normalizer also turned off for further savings; best latency, but may mispronounce e.g. numbers and dates). Defaults to None.
- output_format (string, optional): Output format of the generated audio, formatted as codec_sample_rate_bitrate; for example, an MP3 at a 22.05 kHz sample rate and 32 kbps bitrate is mp3_22050_32. MP3 at 192 kbps requires the Creator tier or above; PCM at 44.1 kHz requires the Pro tier or above. Note that the μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs.
- apply_language_text_normalization (boolean, optional): Controls language text normalization, which helps with proper pronunciation in some supported languages. WARNING: this parameter can heavily increase request latency. Currently only supported for Japanese.
- apply_text_normalization (string, optional): Controls text normalization with three modes: ‘auto’, ‘on’, and ‘off’. With ‘auto’, the system decides automatically whether to apply normalization (e.g., spelling out numbers); ‘on’ always applies it; ‘off’ skips it.
- avatar_context (object, optional): Avatar context when this generation is made from the Avatars video editor.
- language_code (object, optional): Language code (ISO 639-1) used to enforce a language for the model and text normalization. If the model does not support the provided language code, an error is returned.
- model_id (string, optional): Identifier of the model to use; query available models via GET /v1/models. The model must support text to speech (check the can_do_text_to_speech property).
- next_request_ids (object, optional): A list of request_ids of samples that come after this generation; especially useful for maintaining speech continuity when regenerating a sample that had audio quality issues. For example, if you have generated 3 clips and want to improve clip 2, passing clip 3’s request_id in next_request_ids (and clip 1’s in previous_request_ids) helps maintain natural flow in the combined speech. Results are best when the same model is used across generations. If both next_text and next_request_ids are sent, next_text is ignored. At most 3 request_ids can be sent.
- next_text (object, optional): The text that comes after the text of the current request. Can be used to improve speech continuity when concatenating multiple generations, or to influence continuity in the current generation.
- previous_request_ids (object, optional): A list of request_ids of samples generated before this one. Can be used to improve speech continuity when splitting a large task into multiple requests. Results are best when the same model is used across generations. If both previous_text and previous_request_ids are sent, previous_text is ignored. At most 3 request_ids can be sent.
- previous_text (object, optional): The text that came before the text of the current request. Can be used to improve speech continuity when concatenating multiple generations, or to influence continuity in the current generation.
- pronunciation_dictionary_locators (object, optional): A list of pronunciation dictionary locators (id, version_id) applied to the text, in order. At most 3 locators per request.
- seed (object, optional): If specified, the system makes a best effort to sample deterministically, so repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed. Must be an integer between 0 and 4294967295.
- text (string, required): The text that will be converted into speech.
- use_pvc_as_ivc (boolean, optional): If true, the IVC version of the voice is used for generation instead of the PVC version. This is a temporary workaround for higher latency in PVC versions.
- voice_settings (object, optional): Voice settings overriding the stored settings for the given voice, applied only to this request.
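The output_format string packs codec, sample rate, and bitrate into a single token. A small helper like the one below (a hypothetical client-side utility, not part of the API) can split it for local validation before sending a request:

```python
def parse_output_format(fmt: str) -> dict:
    """Split a codec_sample_rate_bitrate token such as 'mp3_22050_32'.

    Hypothetical helper for client-side checks; not part of the ElevenLabs API.
    Bitrate-less formats (e.g. PCM tokens like 'pcm_44100') omit the last field.
    """
    parts = fmt.split("_")
    parsed = {"codec": parts[0], "sample_rate_hz": int(parts[1])}
    if len(parts) > 2:
        parsed["bitrate_kbps"] = int(parts[2])
    return parsed

print(parse_output_format("mp3_22050_32"))
# {'codec': 'mp3', 'sample_rate_hz': 22050, 'bitrate_kbps': 32}
```

Such a check makes tier requirements easy to enforce locally, e.g. rejecting 192 kbps MP3 for accounts below the Creator tier before the request is sent.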

elevenlabs_audio_text_to_speech_stream_with_timestamps

Text To Speech Streaming With Timestamps Parameters:
- voice_id (string, required): Voice ID to use; list all available voices via https://api.elevenlabs.io/v1/voices.
- enable_logging (boolean, optional): When set to false, zero retention mode is used for the request, so history features (including request stitching) are unavailable. Zero retention mode may only be used by enterprise customers.
- optimize_streaming_latency (object, optional): Turns on latency optimizations at some cost of quality; the best possible final latency varies by model. Possible values: 0 (default, no latency optimizations), 1 (normal, about 50% of the possible improvement of option 3), 2 (strong, about 75% of the possible improvement of option 3), 3 (max latency optimizations), 4 (max latency optimizations with the text normalizer also turned off for further savings; best latency, but may mispronounce e.g. numbers and dates). Defaults to None.
- output_format (string, optional): Output format of the generated audio, formatted as codec_sample_rate_bitrate; for example, an MP3 at a 22.05 kHz sample rate and 32 kbps bitrate is mp3_22050_32. MP3 at 192 kbps requires the Creator tier or above; PCM at 44.1 kHz requires the Pro tier or above. Note that the μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs.
- apply_language_text_normalization (boolean, optional): Controls language text normalization, which helps with proper pronunciation in some supported languages. WARNING: this parameter can heavily increase request latency. Currently only supported for Japanese.
- apply_text_normalization (string, optional): Controls text normalization with three modes: ‘auto’, ‘on’, and ‘off’. With ‘auto’, the system decides automatically whether to apply normalization (e.g., spelling out numbers); ‘on’ always applies it; ‘off’ skips it.
- language_code (object, optional): Language code (ISO 639-1) used to enforce a language for the model and text normalization. If the model does not support the provided language code, an error is returned.
- model_id (string, optional): Identifier of the model to use; query available models via GET /v1/models. The model must support text to speech (check the can_do_text_to_speech property).
- next_request_ids (object, optional): A list of request_ids of samples that come after this generation; especially useful for maintaining speech continuity when regenerating a sample that had audio quality issues. For example, if you have generated 3 clips and want to improve clip 2, passing clip 3’s request_id in next_request_ids (and clip 1’s in previous_request_ids) helps maintain natural flow in the combined speech. Results are best when the same model is used across generations. If both next_text and next_request_ids are sent, next_text is ignored. At most 3 request_ids can be sent.
- next_text (object, optional): The text that comes after the text of the current request. Can be used to improve speech continuity when concatenating multiple generations, or to influence continuity in the current generation.
- previous_request_ids (object, optional): A list of request_ids of samples generated before this one. Can be used to improve speech continuity when splitting a large task into multiple requests. Results are best when the same model is used across generations. If both previous_text and previous_request_ids are sent, previous_text is ignored. At most 3 request_ids can be sent.
- previous_text (object, optional): The text that came before the text of the current request. Can be used to improve speech continuity when concatenating multiple generations, or to influence continuity in the current generation.
- pronunciation_dictionary_locators (object, optional): A list of pronunciation dictionary locators (id, version_id) applied to the text, in order. At most 3 locators per request.
- seed (object, optional): If specified, the system makes a best effort to sample deterministically, so repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed. Must be an integer between 0 and 4294967295.
- text (string, required): The text that will be converted into speech.
- use_pvc_as_ivc (boolean, optional): If true, the IVC version of the voice is used for generation instead of the PVC version. This is a temporary workaround for higher latency in PVC versions.
- voice_settings (object, optional): Voice settings overriding the stored settings for the given voice, applied only to this request.

elevenlabs_audio_upload_song

Upload Music Parameters:
- extract_composition_plan (boolean, optional): Whether to generate and return the composition plan for the uploaded song. If true, the response includes composition_plan, at the cost of increased latency.
- file (string, required): The audio file to upload.

elevenlabs_audio_video_to_music

Video To Music Parameters:
- output_format (string, optional): Output format of the generated audio, formatted as codec_sample_rate_bitrate; for example, an MP3 at a 22.05 kHz sample rate and 32 kbps bitrate is mp3_22050_32. MP3 at 192 kbps requires the Creator tier or above; PCM at 44.1 kHz requires the Pro tier or above. Note that the μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs.
- description (object, optional): Optional text description of the music you want. At most 1000 characters.
- sign_with_c2pa (boolean, optional): Whether to sign the generated song with C2PA. Applicable only to MP3 files.
- tags (any[], optional): Optional list of style tags (e.g., [‘upbeat’, ‘cinematic’]). At most 10 tags.
- videos (any[], required): One or more video files sent as a FormData array (multipart/form-data). They are combined into one video, in order. At most 10 videos, with the combined size limited to 200MB and a total length of up to 600 seconds. Combining multiple videos may significantly increase request duration; if possible, combine them beforehand.
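Given the hard caps listed above (10 videos, 200MB combined, 600 seconds, 1000-character description, 10 tags), a pre-flight validation sketch might look like the following. The file names and sizes are hypothetical stand-ins; real requests would attach actual video files via multipart/form-data.

```python
# Illustrative client-side checks for elevenlabs_audio_video_to_music.
# File names and sizes are hypothetical placeholders.
videos = [("intro.mp4", 80_000_000), ("outro.mp4", 60_000_000)]  # (name, bytes)
tags = ["upbeat", "cinematic"]
description = "Driving synth score that builds toward the final scene."

assert len(videos) <= 10, "at most 10 videos per request"
assert sum(size for _, size in videos) <= 200_000_000, "combined size over 200MB"
assert len(tags) <= 10, "at most 10 style tags"
assert len(description) <= 1000, "description limited to 1000 characters"
```

Checks like these fail fast locally, rather than paying the upload cost of up to 200MB of video only to receive a validation error from the server.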