sync-3
sync-3 is a lip synchronization model that processes entire shots as a single generation rather than stitching independent segments. It builds a global understanding of the speaker across all frames, enabling consistent output on close-ups, extreme face angles, partially obscured faces, and obstructed mouths. The model preserves the original speaker's style, cadence, and emotional expression across 95+ languages.
API Options
Platform-level options for task execution and delivery.
taskType
string, required, value: videoInference
Identifier for the type of task being performed
taskUUID
string, required, UUID v4
UUID v4 identifier for tracking tasks and matching async responses. Must be unique per task.
outputType
string, default: URL
Video output type.
Allowed values: `URL`
outputFormat
string, default: MP4
Specifies the file format of the generated output. The available values depend on the task type and the specific model's capabilities.
- `MP4`: Widely supported video container (H.264), recommended for general use.
- `WEBM`: Optimized for web delivery.
- `MOV`: QuickTime format, common in professional workflows (Apple ecosystem).
outputQuality
integer, min: 20, max: 99, default: 95
Compression quality of the output. Higher values preserve quality but increase file size.
webhookURL
string, URI
Specifies a webhook URL where JSON responses will be sent via HTTP POST when generation tasks complete. For batch requests with multiple results, each completed item triggers a separate webhook call as it becomes available.
Learn more: Webhooks
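As a sketch, a receiver can be as small as the following Node http server. The JSON payload shape is not specified in this reference beyond one POST per completed result, so the handler only parses and logs it; the port is a placeholder.

```typescript
import { createServer } from "node:http";

createServer((req, res) => {
  let raw = "";
  req.on("data", (chunk) => (raw += chunk));
  req.on("end", () => {
    const payload = JSON.parse(raw); // one completed result per call
    console.log("generation finished:", payload);
    res.writeHead(200).end(); // acknowledge receipt
  });
}).listen(3000);
```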
deliveryMethod
string, default: async
Determines how the API delivers task results.
Allowed values:
- `async`: Returns an immediate acknowledgment with the task UUID. Poll for results using getResponse. Required for long-running tasks like video generation.
Learn more: Task Polling
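A minimal submit-and-poll sketch is shown below. The endpoint URL, authentication header, and the getResponse request and response shapes are assumptions for illustration only; check the Task Polling page for the authoritative contract.

```typescript
const API_URL = "https://api.example.com/v1"; // placeholder endpoint
const headers = {
  "Content-Type": "application/json",
  Authorization: `Bearer ${process.env.API_KEY}`,
};

async function submitAndPoll(task: Record<string, unknown>): Promise<unknown> {
  const taskUUID = crypto.randomUUID();

  // 1. Submit. With deliveryMethod "async" the response is only an acknowledgment.
  await fetch(API_URL, {
    method: "POST",
    headers,
    body: JSON.stringify([{ ...task, taskUUID, deliveryMethod: "async" }]),
  });

  // 2. Poll with getResponse until a result for our taskUUID shows up.
  while (true) {
    const res = await fetch(API_URL, {
      method: "POST",
      headers,
      body: JSON.stringify([{ taskType: "getResponse", taskUUID }]), // request shape assumed
    });
    const body: any = await res.json();
    const done = body?.data?.find(
      (t: any) => t.taskUUID === taskUUID && t.videoURL, // response shape assumed
    );
    if (done) return done;
    await new Promise((r) => setTimeout(r, 5000)); // wait 5 s between polls
  }
}
```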
uploadEndpoint
string, URI
Specifies a URL where the generated content will be automatically uploaded using the HTTP PUT method. The raw binary data of the media file is sent directly as the request body. For secure uploads to cloud storage, use presigned URLs that include temporary authentication credentials.
Common use cases:
- Cloud storage: Upload directly to S3 buckets, Google Cloud Storage, or Azure Blob Storage using presigned URLs.
- CDN integration: Upload to content delivery networks for immediate distribution.
- S3 presigned URL for secure upload: `https://your-bucket.s3.amazonaws.com/generated/content.mp4?X-Amz-Signature=abc123&X-Amz-Expires=3600`
- Google Cloud Storage presigned URL: `https://storage.googleapis.com/your-bucket/content.jpg?X-Goog-Signature=xyz789`
- Custom storage endpoint: `https://storage.example.com/uploads/generated-image.jpg`
The content data will be sent as the request body to the specified URL when generation is complete.
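If the destination is S3, one way to produce such a presigned PUT URL is with the AWS SDK v3, as sketched below; the bucket, key, region, and expiry are placeholders.

```typescript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: "us-east-1" });

// Returns a URL that accepts one HTTP PUT for the next hour. Pass it as
// uploadEndpoint and the generated video bytes will be PUT to it directly.
export async function makeUploadEndpoint(): Promise<string> {
  const command = new PutObjectCommand({
    Bucket: "your-bucket",        // placeholder bucket
    Key: "generated/content.mp4", // placeholder object key
    ContentType: "video/mp4",
  });
  return getSignedUrl(s3, command, { expiresIn: 3600 });
}
```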
safety
object
Content safety checking configuration for video generation (an example follows the properties below).
Properties:
safety » checkContent
boolean, default: false
Enable or disable content safety checking. When enabled, defaults to `fast` mode.
safety » mode
string, default: none
Safety checking mode for video generation.
Allowed values:
- Disables checking.
- Checks key frames.
- Checks all frames.
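As an illustrative sketch, a safety block that enables key-frame checking might look like this; the string value `fast` is inferred from the checkContent description above and is an assumption to verify against the allowed values.

```typescript
const safety = {
  checkContent: true, // enable checking; defaults to fast mode when enabled
  mode: "fast",       // "fast" (key-frame checking) is inferred, not confirmed
};
```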
ttl
integer, min: 60
Time-to-live (TTL) in seconds for generated content. Only applies when outputType is URL.
includeCost
boolean, default: false
Include task cost in the response.
numberResults
integer, min: 1, max: 20, default: 1
Number of results to generate. Each result uses a different seed, producing variations of the same parameters.
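Taken together, the platform-level options above form the envelope of a task object. The sketch below is illustrative only; the model reference, inputs, and settings are omitted here and documented in the sections that follow.

```typescript
const task = {
  taskType: "videoInference",
  taskUUID: crypto.randomUUID(), // must be unique per task
  outputType: "URL",
  outputFormat: "MP4",
  outputQuality: 95,
  deliveryMethod: "async", // poll for the result with getResponse
  ttl: 3600,               // result URL stays valid for one hour
  includeCost: true,
  numberResults: 1,
  // model reference, inputs: { ... } and settings: { ... } are covered below
};
```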
Inputs
Input resources for the task (images, audio, etc.). These must be nested inside the inputs object.
Generation Parameters
Core parameters for controlling the generated content.
Settings
Technical parameters to fine-tune the inference process. These must be nested inside the settings object.
settings » activeSpeakerDetection
object
Speaker targeting for multi-person clips (an example follows the properties below).
Properties:
settings » activeSpeakerDetection » autoDetect
boolean, default: false
Automatically detect and target the active speaker.
settings » activeSpeakerDetection » boundingBoxes
array of arrays
Per-frame face bounding boxes across the clip. Each box is [x1, y1, x2, y2].
settings » activeSpeakerDetection » coordinates
array of numbers, items: 2
[x, y] point on the target speaker's face in the selected frame.
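Based only on the properties above, two illustrative ways to fill this object inside settings are sketched below; the coordinate values are placeholders.

```typescript
// Let the model find the active speaker automatically.
const settings = {
  activeSpeakerDetection: { autoDetect: true },
};

// Or target a specific face with an [x, y] point in the selected frame.
const settingsManual = {
  activeSpeakerDetection: { coordinates: [412, 215] }, // placeholder pixel values
};
```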
settings » segments
array of objects, min items: 1
Time segments with audio sources for segmented lip sync workflows (an example follows the segment properties below).
Properties:
settings » segments » startTime
float, required, min: 0, step: 0.01
Start time in seconds for the segment.
settings » segments » endTime
float, required, step: 0.01
End time in seconds for the segment. Must be greater than startTime.
settings » segments » audio
string, required
Audio source URL for this segment.
settings » segments » audioStartTime
float, min: 0, step: 0.01, default: 0
Start time in seconds within the source audio file.
settings » segments » audioEndTime
float, min: 0, step: 0.01
End time in seconds within the source audio file.
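A sketch of a two-segment configuration inside settings, using only the fields documented above; the audio URLs and timings are placeholders, and syncMode is shown with its documented default.

```typescript
const settings = {
  segments: [
    // Dub seconds 0-4 of the clip with the first audio source.
    { startTime: 0, endTime: 4.0, audio: "https://example.com/line-1.wav" },
    // Dub seconds 4-9.5, starting 1.25 s into the second audio source.
    {
      startTime: 4.0,
      endTime: 9.5,
      audio: "https://example.com/line-2.wav",
      audioStartTime: 1.25,
    },
  ],
  syncMode: "cut_off", // documented default: audio is cut when the video ends
};
```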
settings » syncMode
string, default: cut_off
Synchronization strategy when audio and video durations don't match.
Allowed values:
- Audio bounces back and forth to fill video duration.
- Audio is cut when video ends.
- Remaining video plays with silence after audio ends.
- Audio is time-stretched or compressed to match video duration.
settings » tts
object
Configuration for the text-to-speech provider used with speech input (an example follows the properties below).
Properties:
settings » tts » provider
string, default: elevenlabs
Name of the TTS provider.
settings » tts » stability
float, min: 0, max: 1
Voice stability for the TTS provider. Higher values produce more consistent output.
settings » tts » similarityBoost
float, min: 0, max: 1
Voice similarity enforcement for the TTS provider. Higher values make the voice more closely match the target.
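An illustrative tts block inside settings for speech input; the provider is the documented default and the tuning values are placeholders.

```typescript
const settings = {
  tts: {
    provider: "elevenlabs", // documented default provider
    stability: 0.6,         // higher = more consistent delivery
    similarityBoost: 0.8,   // higher = closer match to the target voice
  },
};
```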