Model ID: alibaba:qwen@3-tts-1.7b-base

Qwen3-TTS 1.7B Base

by Alibaba

Qwen3-TTS 1.7B Base is the foundation text-to-speech model from Alibaba's Qwen3-TTS family. It generates human-like speech across 10+ languages including Chinese, English, Japanese, Korean, and European languages. It supports voice cloning from a 3-second audio sample and achieves latency as low as 97ms for real-time applications.


API Options

Platform-level options for task execution and delivery.

taskType

string required value: audioInference

Identifier for the type of task being performed

taskUUID

string required UUID v4

UUID v4 identifier for tracking tasks and matching async responses. Must be unique per task.

outputType

string default: URL

Audio output type.

Allowed values: 3.

outputFormat

string default: MP3

File format for the generated audio.

Allowed values: 3.

webhookURL

string URI

Specifies a webhook URL where JSON responses will be sent via HTTP POST when generation tasks complete. For batch requests with multiple results, each completed item triggers a separate webhook call as it becomes available.

Learn more 1 resource

deliveryMethod

string default: sync

Determines how the API delivers task results.

Allowed values: 2.
  • sync: Returns complete results directly in the API response.
  • async: Returns an immediate acknowledgment with the task UUID. Poll for results using getResponse.
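As a minimal sketch, the helpers below assemble an async audioInference task and the follow-up getResponse poll. Field names come from this page; the flat task layout, the request envelope, the endpoint, and authentication are deployment-specific assumptions not shown here.

```python
import uuid

def build_tts_task(text: str, reference_audio: str, delivery: str = "async") -> dict:
    """Assemble an audioInference task from the options documented on this page.

    The placement of fields in one flat object is an assumption; check how
    your deployment expects tasks to be structured and submitted.
    """
    return {
        "taskType": "audioInference",
        "taskUUID": str(uuid.uuid4()),   # must be unique per task
        "deliveryMethod": delivery,       # "sync" or "async"
        "model": "alibaba:qwen@3-tts-1.7b-base",
        "inputs": {"audio": reference_audio},
        "speech": {"text": text, "voice": "clone"},
    }

def build_poll_task(task_uuid: str) -> dict:
    """Poll for an async result; the getResponse task shape is assumed
    from the getResponse mention in the deliveryMethod description."""
    return {"taskType": "getResponse", "taskUUID": task_uuid}
```

With deliveryMethod "sync" the same task returns its result in the API response and no polling is needed.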

uploadEndpoint

string URI

Specifies a URL where the generated content will be automatically uploaded using the HTTP PUT method. The raw binary data of the media file is sent directly as the request body. For secure uploads to cloud storage, use presigned URLs that include temporary authentication credentials.

Common use cases:

  • Cloud storage: Upload directly to S3 buckets, Google Cloud Storage, or Azure Blob Storage using presigned URLs.
  • CDN integration: Upload to content delivery networks for immediate distribution.
// S3 presigned URL for secure upload
https://your-bucket.s3.amazonaws.com/generated/speech.mp3?X-Amz-Signature=abc123&X-Amz-Expires=3600

// Google Cloud Storage presigned URL
https://storage.googleapis.com/your-bucket/speech.mp3?X-Goog-Signature=xyz789

// Custom storage endpoint
https://storage.example.com/uploads/generated-speech.mp3

The content data will be sent as the request body to the specified URL when generation is complete.
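The PUT delivery described above can be demonstrated against a throwaway local endpoint standing in for a presigned storage URL. This is a sketch of what the platform does on your behalf when generation completes; the URL and the stand-in audio bytes are placeholders.

```python
import http.server
import threading
import urllib.request

stored = {}  # path -> uploaded bytes

class PutHandler(http.server.BaseHTTPRequestHandler):
    """Accepts HTTP PUT and records the raw request body, like a storage bucket."""

    def do_PUT(self):
        length = int(self.headers.get("Content-Length", 0))
        stored[self.path] = self.rfile.read(length)  # raw binary body, no wrapping
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):  # silence default request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), PutHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Mirror the platform's behavior: PUT the raw media bytes to the upload URL.
audio_bytes = b"\xff\xfb\x90\x00"  # stand-in for generated MP3 data
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/generated/speech.mp3",
    data=audio_bytes,
    method="PUT",
)
urllib.request.urlopen(req)
server.shutdown()
```

Note that the body is the bare media file, not a JSON or multipart wrapper, which is why presigned URLs for plain object PUTs work unchanged.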

ttl

integer min: 60

Time-to-live (TTL) in seconds for generated content. Only applies when outputType is 'URL'.

includeCost

boolean default: false

Include task cost in the response.

numberResults

integer min: 1 max: 20 default: 1

Number of results to generate. Each result uses a different seed, producing variations of the same parameters.

Inputs

Input resources for the task (images, audio, etc.). These must be nested inside the inputs object.

inputs » audio

audio

string required

Reference audio for voice cloning (UUID or URL).

Generation Parameters

Core parameters for controlling the generated content.

model

string required value: alibaba:qwen@3-tts-1.7b-base

Identifier of the model to use for generation.


speech

object required

Settings for speech generation.

Properties: 4.
speech » text

text

string required max: 2000

Text to convert to speech.

speech » voice

voice

string required value: clone

Voice identifier to use. This model only accepts 'clone': the voice is cloned from the reference audio provided in inputs » audio.

speech » language

language

string min: 1 default: Auto

Language code for speech generation.

Allowed values: 11.
speech » speed

speed

float min: 0.25 max: 4 step: 0.01 default: 1

Playback speed of the generated speech.

Settings

Technical parameters to fine-tune the inference process. These must be nested inside the settings object.

settings » maxNewTokens

maxNewTokens

integer default: 2048

Maximum number of audio tokens to generate. Higher values allow longer audio output but increase the risk of the generation stalling.

settings » transcript

transcript

string

Transcript of the reference audio. Required for ICL mode, optional for x-vector-only mode.

settings » xVectorOnly

xVectorOnly

boolean default: false

If true, uses speaker embedding only (no transcript needed, lower similarity). If false, uses ICL mode (requires transcript, higher quality).
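Putting the parameters above together, here is a sketch of a complete audioInference task for this model. The reference audio URL and text are placeholders, and the surrounding request envelope (endpoint URL, authentication headers, whether tasks are wrapped in a JSON array) depends on your deployment and is not shown.

```python
import json
import uuid

# One complete task, composed from the parameters documented on this page.
task = {
    "taskType": "audioInference",
    "taskUUID": str(uuid.uuid4()),
    "outputType": "URL",
    "outputFormat": "MP3",
    "deliveryMethod": "sync",
    "includeCost": True,
    "numberResults": 1,
    "model": "alibaba:qwen@3-tts-1.7b-base",
    "inputs": {
        # Reference sample for voice cloning (placeholder URL).
        "audio": "https://example.com/reference-voice.wav"
    },
    "speech": {
        "text": "Hello! This voice was cloned from a short reference sample.",
        "voice": "clone",      # the only accepted value for this model
        "language": "Auto",
        "speed": 1.0,          # 0.25 to 4
    },
    "settings": {
        "maxNewTokens": 2048,
        # Transcript of the reference audio: required for ICL mode.
        "transcript": "Transcript of the reference audio goes here.",
        "xVectorOnly": False,  # False = ICL mode (higher quality)
    },
}

print(json.dumps(task, indent=2))
```

For speaker-embedding-only cloning, set settings.xVectorOnly to true and omit the transcript, trading some similarity for a simpler input.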