sync-3
by sync.
Model ID: sync:3@0
Status: coming soon

sync-3 is a lip synchronization model that processes entire shots as a single generation rather than stitching independent segments. It builds a global understanding of the speaker across all frames, enabling consistent output on close-ups, extreme face angles, partially obscured faces, and obstructed mouths. The model preserves the original speaker's style, cadence, and emotional expression across 95+ languages.

API Options

Platform-level options for task execution and delivery.

taskType

string required value: videoInference

Identifier for the type of task being performed.

taskUUID

string required UUID v4

UUID v4 identifier for tracking tasks and matching async responses. Must be unique per task.

outputType

string default: URL

Video output type.

Allowed values (1): URL.

outputFormat

string default: MP4

Specifies the file format of the generated output. The available values depend on the task type and the specific model's capabilities.

Allowed values (3):

  • `MP4`: Widely supported video container (H.264), recommended for general use.
  • `WEBM`: Optimized for web delivery.
  • `MOV`: QuickTime format, common in professional workflows (Apple ecosystem).

outputQuality

integer min: 20 max: 99 default: 95

Compression quality of the output. Higher values preserve quality but increase file size.

webhookURL

string URI

Specifies a webhook URL where JSON responses will be sent via HTTP POST when generation tasks complete. For batch requests with multiple results, each completed item triggers a separate webhook call as it becomes available.

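As a sketch of the receiving side, the Flask handler below accepts the POST callbacks described above. The route path and the payload fields it reads (taskUUID, videoURL) are illustrative assumptions, not a guaranteed response schema.

```python
# Minimal webhook receiver sketch (assumed payload fields; adjust to the real response schema).
from flask import Flask, request

app = Flask(__name__)

@app.route("/webhooks/video-inference", methods=["POST"])
def handle_webhook():
    payload = request.get_json(force=True)
    # Each completed result arrives as a separate POST; log the task UUID and output URL if present.
    print("task:", payload.get("taskUUID"), "video:", payload.get("videoURL"))
    return "", 200

if __name__ == "__main__":
    app.run(port=8080)
```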

deliveryMethod

string default: async

Determines how the API delivers task results.

Allowed values (1):

  • async: Returns an immediate acknowledgment with the task UUID. Poll for results using getResponse. Required for long-running tasks like video generation.
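Because results are delivered asynchronously, a client typically submits the task and then polls. The sketch below uses the getResponse task type mentioned above; the REST endpoint URL, bearer-token authentication, request envelope (an array of task objects), and the status/data fields in the response are assumptions.

```python
import time
import requests

API_URL = "https://api.runware.ai/v1"               # assumed REST endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # assumed auth scheme

def poll_task(task_uuid: str, interval: float = 5.0, timeout: float = 600.0) -> dict:
    """Poll for an async result using the getResponse task type (sketch)."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        resp = requests.post(
            API_URL,
            headers=HEADERS,
            json=[{"taskType": "getResponse", "taskUUID": task_uuid}],
        )
        resp.raise_for_status()
        data = resp.json().get("data", [])          # assumed envelope shape
        if data and data[0].get("status") != "processing":
            return data[0]
        time.sleep(interval)
    raise TimeoutError(f"Task {task_uuid} did not finish within {timeout}s")
```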

uploadEndpoint

string URI

Specifies a URL where the generated content will be automatically uploaded using the HTTP PUT method. The raw binary data of the media file is sent directly as the request body. For secure uploads to cloud storage, use presigned URLs that include temporary authentication credentials.

Common use cases:

  • Cloud storage: Upload directly to S3 buckets, Google Cloud Storage, or Azure Blob Storage using presigned URLs.
  • CDN integration: Upload to content delivery networks for immediate distribution.
Example endpoint URLs:

// S3 presigned URL for secure upload
https://your-bucket.s3.amazonaws.com/generated/content.mp4?X-Amz-Signature=abc123&X-Amz-Expires=3600

// Google Cloud Storage presigned URL
https://storage.googleapis.com/your-bucket/content.mp4?X-Goog-Signature=xyz789

// Custom storage endpoint
https://storage.example.com/uploads/generated-video.mp4

The content data will be sent as the request body to the specified URL when generation is complete.
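For example, a presigned S3 PUT URL can be generated with boto3 and passed as uploadEndpoint. The bucket and key names below are placeholders; the surrounding task fields are limited to parameters documented on this page.

```python
import boto3

s3 = boto3.client("s3")

# Presigned PUT URL that stays valid for one hour (placeholder bucket/key).
upload_url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "your-bucket", "Key": "generated/content.mp4", "ContentType": "video/mp4"},
    ExpiresIn=3600,
)

task = {
    "taskType": "videoInference",
    "uploadEndpoint": upload_url,  # finished video is PUT here as raw binary data
    # ... model, inputs, and other parameters as documented on this page
}
```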

safety

object

Content safety checking configuration for video generation.

Properties (2):

safety » checkContent

checkContent

boolean default: false

Enable or disable content safety checking. When enabled, defaults to fast mode.

safety » mode

mode

string default: none

Safety checking mode for video generation.

Allowed values (3):

  • Disables checking.
  • Checks key frames.
  • Checks all frames.

ttl

integer min: 60

Time-to-live (TTL) in seconds for generated content. Only applies when outputType is URL.

includeCost

boolean default: false

Include task cost in the response.

numberResults

integer min: 1 max: 20 default: 1

Number of results to generate. Each result uses a different seed, producing variations of the same parameters.

Inputs

Input resources for the task (video, audio, etc.). These must be nested inside the inputs object.

inputs » video

video

string required

Video input (UUID or URL).

inputs » audio

audio

string

Audio input (UUID or URL).
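Putting the inputs together with the platform options above, a minimal audio-driven request could look like the sketch below. The field names and the model identifier come from this reference (the model and other generation parameters are documented in the next sections); the endpoint URL, auth header, and array request envelope are assumptions.

```python
import uuid
import requests

API_URL = "https://api.runware.ai/v1"               # assumed REST endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # assumed auth scheme

task = {
    "taskType": "videoInference",
    "taskUUID": str(uuid.uuid4()),
    "model": "sync:lipsync@3",
    "deliveryMethod": "async",
    "outputFormat": "MP4",
    "includeCost": True,
    "inputs": {
        "video": "https://example.com/source-clip.mp4",   # UUID or URL
        "audio": "https://example.com/new-dialogue.wav",  # UUID or URL
    },
}

# Assumption: the API accepts an array of task objects in one request.
response = requests.post(API_URL, headers=HEADERS, json=[task])
print(response.json())
```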

Generation Parameters

Core parameters for controlling the generated content.

model

string required value: sync:lipsync@3

Identifier of the model to use for generation.


speech

object required*

Settings for speech generation.

Properties (2):

speech » text

text

string required min: 1 max: 5000

Text to convert to speech.

speech » voice

voice

string required default: auto

Voice identifier to use. Set to auto for automatic selection.
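A hedged fragment showing the speech block inside a task; the text and voice values are placeholders, and the assumption that speech is used in place of a direct audio input follows from the conditional (required*) marker above.

```python
# speech-driven variant (sketch): dialogue is synthesized, then lip-synced onto the video.
task_fragment = {
    "model": "sync:lipsync@3",
    "speech": {
        "text": "Welcome back, here is what changed since your last visit.",
        "voice": "auto",  # automatic voice selection
    },
}
```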

Settings

Technical parameters to fine-tune the inference process. These must be nested inside the settings object.

settings » activeSpeakerDetection

activeSpeakerDetection

object

Speaker targeting for multi-person clips.

Properties (4):

settings » activeSpeakerDetection » autoDetect

autoDetect

boolean default: false

Automatically detect and target the active speaker.

settings » activeSpeakerDetection » boundingBoxes

boundingBoxes

array of arrays

Per-frame face bounding boxes across the clip. Each box is [x1, y1, x2, y2].

settings » activeSpeakerDetection » coordinates

coordinates

array of numbers items: 2

[x, y] point on the target speaker's face in the selected frame.

settings » activeSpeakerDetection » frameNumber

frameNumber

integer min: 0

Frame index corresponding to the provided face coordinates.
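For a multi-person clip, you can either let the model find the active speaker or point at a face explicitly. A hedged fragment of the settings block (coordinates and frame index are placeholders):

```python
# Option 1: automatic detection of the active speaker.
settings_auto = {"activeSpeakerDetection": {"autoDetect": True}}

# Option 2: explicit targeting with a point on the desired face in a chosen frame.
settings_manual = {
    "activeSpeakerDetection": {
        "coordinates": [412, 180],  # [x, y] on the target speaker's face
        "frameNumber": 0,           # frame the coordinates refer to
    }
}
```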

settings » segments

segments

array of objects min items: 1

Time segments with audio sources for segmented lip sync workflows.

Properties (5):

settings » segments » startTime

startTime

float required min: 0 step: 0.01

Start time in seconds for the segment.

settings » segments » endTime

endTime

float required step: 0.01

End time in seconds for the segment. Must be greater than startTime.

settings » segments » audio

audio

string required

Audio source URL for this segment.

settings » segments » audioStartTime

audioStartTime

float min: 0 step: 0.01 default: 0

Start time in seconds within the source audio file.

settings » segments » audioEndTime

audioEndTime

float min: 0 step: 0.01

End time in seconds within the source audio file.
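As a sketch, the segments array below re-syncs two time windows of the source video, each driven by its own audio source and offset; URLs and timestamps are placeholders.

```python
settings_fragment = {
    "segments": [
        {
            "startTime": 2.50,                              # seconds into the video
            "endTime": 6.00,
            "audio": "https://example.com/line-one.wav",
            "audioStartTime": 0.0,                          # seconds into the source audio
        },
        {
            "startTime": 12.00,
            "endTime": 15.25,
            "audio": "https://example.com/line-two.wav",
            "audioStartTime": 1.5,
            "audioEndTime": 4.75,
        },
    ]
}
```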

settings » syncMode

syncMode

string default: cut_off

Synchronization strategy when audio and video durations don't match.

Allowed values (4):

  • Audio bounces back and forth to fill the video duration.
  • Audio is cut when the video ends (cut_off, the default).
  • The remaining video plays with silence after the audio ends.
  • Audio is time-stretched or compressed to match the video duration.

settings » tts

tts

object

Configuration for the text-to-speech provider used with speech input.

Properties (3):

settings » tts » provider

provider

string default: elevenlabs

Name of the TTS provider.

settings » tts » stability

stability

float min: 0 max: 1

Voice stability for the TTS provider. Higher values produce more consistent output.

settings » tts » similarityBoost

similarityBoost

float min: 0 max: 1

Voice similarity enforcement for the TTS provider. Higher values make the voice more closely match the target.
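Combining the speech input with TTS tuning, a hedged end-to-end fragment might look like the following; the provider name is the documented default, and the numeric values and text are placeholders.

```python
task_fragment = {
    "speech": {
        "text": "Thanks for watching, see you in the next one.",
        "voice": "auto",
    },
    "settings": {
        "tts": {
            "provider": "elevenlabs",
            "stability": 0.6,        # higher = more consistent delivery
            "similarityBoost": 0.8,  # higher = closer match to the target voice
        },
        "syncMode": "cut_off",
    },
}
```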