---
title: sync-3 | Runware Docs
url: https://runware.ai/docs/models/sync-3
description: Full-scene lip synchronization with global face understanding and obstruction handling
---
# sync-3

sync-3 is a lip synchronization model that processes entire shots as a single generation rather than stitching independent segments. It builds a global understanding of the speaker across all frames, enabling consistent output on close-ups, extreme face angles, partially obscured faces, and obstructed mouths. The model preserves the original speaker's style, cadence, and emotional expression across 95+ languages.

- **ID**: `sync:3@0`
- **Status**: coming-soon
- **Creator**: sync.
- **Release Date**: April 7, 2026
- **Capabilities**: Video to Video, Audio to Video

## Compatibility & Validation

Provide exactly one of: `inputs.audio`, `speech`, `settings.segments`.

---

`audioStartTime` and `audioEndTime` must be used together (in `settings.segments`).

---

`boundingBoxes` cannot be used with `coordinates` (in `settings.activeSpeakerDetection`).

---

`coordinates` and `frameNumber` must be used together (in `settings.activeSpeakerDetection`).

---

When `settings.tts` is provided, `speech` is required.
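
The rules above can be enforced client-side before submitting a task. The following sketch is a hypothetical helper (not part of any SDK) that checks each documented rule against a request dict:

```python
# Sketch: client-side validation of the sync-3 compatibility rules above.
# Field names follow this page; the helper itself is hypothetical.

def validate_sync3_task(task: dict) -> list[str]:
    """Return a list of violations of the documented compatibility rules."""
    errors = []
    settings = task.get("settings", {})

    # Exactly one audio source: inputs.audio, speech, or settings.segments.
    sources = [
        task.get("inputs", {}).get("audio"),
        task.get("speech"),
        settings.get("segments"),
    ]
    if sum(s is not None for s in sources) != 1:
        errors.append("Provide exactly one of inputs.audio, speech, settings.segments.")

    # audioStartTime and audioEndTime must be used together.
    for seg in settings.get("segments") or []:
        if ("audioStartTime" in seg) != ("audioEndTime" in seg):
            errors.append("audioStartTime and audioEndTime must be used together.")

    asd = settings.get("activeSpeakerDetection", {})
    if "boundingBoxes" in asd and "coordinates" in asd:
        errors.append("boundingBoxes cannot be used with coordinates.")
    if ("coordinates" in asd) != ("frameNumber" in asd):
        errors.append("coordinates and frameNumber must be used together.")

    if settings.get("tts") is not None and task.get("speech") is None:
        errors.append("When settings.tts is provided, speech is required.")

    return errors
```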

## Request Parameters

**API Options**

Platform-level options for task execution and delivery.

### [taskType](https://runware.ai/docs/models/sync-3#request-tasktype)

- **Type**: `string`
- **Required**: true
- **Value**: `videoInference`

Identifier for the type of task being performed.

### [taskUUID](https://runware.ai/docs/models/sync-3#request-taskuuid)

- **Type**: `string`
- **Required**: true
- **Format**: `UUID v4`

UUID v4 identifier for tracking tasks and matching async responses. Must be unique per task.

### [outputType](https://runware.ai/docs/models/sync-3#request-outputtype)

- **Type**: `string`
- **Default**: `URL`

Video output type.

**Allowed values**: `URL`

### [outputFormat](https://runware.ai/docs/models/sync-3#request-outputformat)

- **Type**: `string`
- **Default**: `MP4`

Specifies the file format of the generated output. The available values depend on the task type and the specific model's capabilities.

- `MP4`: Widely supported video container (H.264), recommended for general use.
- `WEBM`: Optimized for web delivery.
- `MOV`: QuickTime format, common in professional workflows (Apple ecosystem).

**Allowed values**: `MP4` `WEBM` `MOV`

### [outputQuality](https://runware.ai/docs/models/sync-3#request-outputquality)

- **Type**: `integer`
- **Min**: `20`
- **Max**: `99`
- **Default**: `95`

Compression quality of the output. Higher values preserve quality but increase file size.

### [webhookURL](https://runware.ai/docs/models/sync-3#request-webhookurl)

- **Type**: `string`
- **Format**: `URI`

Specifies a webhook URL where JSON responses will be sent via HTTP POST when generation tasks complete. For batch requests with multiple results, each completed item triggers a separate webhook call as it becomes available.

**Learn more** (1 resource):

- [Webhooks](https://runware.ai/docs/platform/webhooks) (platform)

### [deliveryMethod](https://runware.ai/docs/models/sync-3#request-deliverymethod)

- **Type**: `string`
- **Default**: `async`

Determines how the API delivers task results.

**Allowed values**:

- `async` Returns an immediate acknowledgment with the task UUID. Poll for results using getResponse. Required for long-running tasks like video generation.

**Learn more** (1 resource):

- [Task Polling](https://runware.ai/docs/platform/task-polling) (platform)
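
Because delivery is async, clients poll getResponse until the task completes. A minimal sketch of a capped exponential backoff schedule for those polls (the timing values are illustrative, not prescribed by the API):

```python
# Sketch: wait times between successive getResponse polls.
# All values here are illustrative defaults, not API requirements.

def backoff_schedule(initial: float = 1.0, factor: float = 2.0,
                     cap: float = 30.0, max_attempts: int = 10):
    """Yield wait times (seconds) between polls, doubling up to a cap."""
    delay = initial
    for _ in range(max_attempts):
        yield min(delay, cap)
        delay *= factor
```

Sleeping for each yielded value between polls keeps request volume low for long-running video tasks.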

### [uploadEndpoint](https://runware.ai/docs/models/sync-3#request-uploadendpoint)

- **Type**: `string`
- **Format**: `URI`

Specifies a URL where the generated content will be automatically uploaded using the HTTP PUT method. The raw binary data of the media file is sent directly as the request body. For secure uploads to cloud storage, use presigned URLs that include temporary authentication credentials.

**Common use cases:**

- **Cloud storage**: Upload directly to S3 buckets, Google Cloud Storage, or Azure Blob Storage using presigned URLs.
- **CDN integration**: Upload to content delivery networks for immediate distribution.

```text
// S3 presigned URL for secure upload
https://your-bucket.s3.amazonaws.com/generated/content.mp4?X-Amz-Signature=abc123&X-Amz-Expires=3600

// Google Cloud Storage presigned URL
https://storage.googleapis.com/your-bucket/content.jpg?X-Goog-Signature=xyz789

// Custom storage endpoint
https://storage.example.com/uploads/generated-image.jpg
```

The content data will be sent as the request body to the specified URL when generation is complete.
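
For reference, the upload the platform performs is equivalent to the following sketch: an HTTP PUT with the raw media bytes as the body. The URL, content type, and helper name are illustrative:

```python
import urllib.request

def build_upload_request(presigned_url: str, video_bytes: bytes) -> urllib.request.Request:
    # Raw binary body sent via HTTP PUT, the same shape uploadEndpoint uses.
    return urllib.request.Request(
        presigned_url,
        data=video_bytes,
        method="PUT",
        headers={"Content-Type": "video/mp4"},
    )

# urllib.request.urlopen(build_upload_request(url, data)) would perform the upload.
```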

### [safety](https://runware.ai/docs/models/sync-3#request-safety)

- **Path**: `safety`
- **Type**: `object (2 properties)`

Content safety checking configuration for video generation.

#### [checkContent](https://runware.ai/docs/models/sync-3#request-safety-checkcontent)

- **Path**: `safety.checkContent`
- **Type**: `boolean`
- **Default**: `false`

Enable or disable content safety checking. When enabled, defaults to `fast` mode.

#### [mode](https://runware.ai/docs/models/sync-3#request-safety-mode)

- **Path**: `safety.mode`
- **Type**: `string`
- **Default**: `none`

Safety checking mode for video generation.

**Allowed values**:

- `none` Disables checking.
- `fast` Checks key frames.
- `full` Checks all frames.

### [ttl](https://runware.ai/docs/models/sync-3#request-ttl)

- **Type**: `integer`
- **Min**: `60`

Time-to-live (TTL) in seconds for generated content. Only applies when `outputType` is `URL`.

### [includeCost](https://runware.ai/docs/models/sync-3#request-includecost)

- **Type**: `boolean`
- **Default**: `false`

Include task cost in the response.

### [numberResults](https://runware.ai/docs/models/sync-3#request-numberresults)

- **Type**: `integer`
- **Min**: `1`
- **Max**: `20`
- **Default**: `1`

Number of results to generate. Each result uses a different seed, producing variations of the same parameters.

**Inputs**

Input resources for the task (video, audio, etc.). These must be nested inside the `inputs` object.

### [video](https://runware.ai/docs/models/sync-3#request-inputs-video)

- **Path**: `inputs.video`
- **Type**: `string`
- **Required**: true

Video input (UUID or URL).

### [audio](https://runware.ai/docs/models/sync-3#request-inputs-audio)

- **Path**: `inputs.audio`
- **Type**: `string`

Audio input (UUID or URL).

**Generation Parameters**

Core parameters for controlling the generated content.

### [model](https://runware.ai/docs/models/sync-3#request-model)

- **Type**: `string`
- **Required**: true
- **Value**: `sync:lipsync@3`

Identifier of the model to use for generation.

**Learn more** (3 resources):

- [Text To Image: Model Selection The Foundation Of Generation](https://runware.ai/docs/guides/text-to-image#model-selection-the-foundation-of-generation) (guide)
- [Image Inpainting: Model Specialized Inpainting Models](https://runware.ai/docs/guides/image-inpainting#model-specialized-inpainting-models) (guide)
- [Image Outpainting: Other Critical Parameters](https://runware.ai/docs/guides/image-outpainting#other-critical-parameters) (guide)

### [speech](https://runware.ai/docs/models/sync-3#request-speech)

- **Path**: `speech`
- **Type**: `object (2 properties)`
- **Required**: true

Settings for speech generation.

#### [text](https://runware.ai/docs/models/sync-3#request-speech-text)

- **Path**: `speech.text`
- **Type**: `string`
- **Required**: true
- **Min**: `1`
- **Max**: `5000`

Text to convert to speech.

#### [voice](https://runware.ai/docs/models/sync-3#request-speech-voice)

- **Path**: `speech.voice`
- **Type**: `string`
- **Required**: true
- **Default**: `auto`

Voice identifier to use. Set to `auto` for automatic selection.

**Settings**

Technical parameters to fine-tune the inference process. These must be nested inside the `settings` object.

### [activeSpeakerDetection](https://runware.ai/docs/models/sync-3#request-settings-activespeakerdetection)

- **Path**: `settings.activeSpeakerDetection`
- **Type**: `object (4 properties)`

Speaker-targeting for multi-person clips.

#### [autoDetect](https://runware.ai/docs/models/sync-3#request-settings-activespeakerdetection-autodetect)

- **Path**: `settings.activeSpeakerDetection.autoDetect`
- **Type**: `boolean`
- **Default**: `false`

Automatically detect and target the active speaker.

#### [boundingBoxes](https://runware.ai/docs/models/sync-3#request-settings-activespeakerdetection-boundingboxes)

- **Path**: `settings.activeSpeakerDetection.boundingBoxes`
- **Type**: `array of arrays`

Per-frame face bounding boxes across the clip. Each box is [x1, y1, x2, y2].

#### [coordinates](https://runware.ai/docs/models/sync-3#request-settings-activespeakerdetection-coordinates)

- **Path**: `settings.activeSpeakerDetection.coordinates`
- **Type**: `array of numbers`

[x, y] point on the target speaker's face in the selected frame.

#### [frameNumber](https://runware.ai/docs/models/sync-3#request-settings-activespeakerdetection-framenumber)

- **Path**: `settings.activeSpeakerDetection.frameNumber`
- **Type**: `integer`
- **Min**: `0`

Frame index corresponding to the provided face coordinates.
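
The two manual targeting shapes described above can be illustrated as plain payload fragments (all coordinate values are made up for illustration):

```python
# Sketch: the two mutually exclusive manual speaker-targeting shapes.

# Point-and-frame targeting: an [x, y] point on the speaker's face,
# paired with the frame that point refers to.
point_target = {
    "activeSpeakerDetection": {
        "coordinates": [412.5, 188.0],
        "frameNumber": 0,
    }
}

# Per-frame bounding boxes: one [x1, y1, x2, y2] box per frame.
box_target = {
    "activeSpeakerDetection": {
        "boundingBoxes": [
            [380, 120, 520, 300],  # frame 0
            [384, 122, 524, 302],  # frame 1
        ]
    }
}
```

Remember that `boundingBoxes` cannot be combined with `coordinates`, and `coordinates` always travels with `frameNumber`.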

### [segments](https://runware.ai/docs/models/sync-3#request-settings-segments)

- **Path**: `settings.segments`
- **Type**: `array of objects (5 properties)`

Time segments with audio sources for segmented lip sync workflows.

#### [startTime](https://runware.ai/docs/models/sync-3#request-settings-segments-starttime)

- **Path**: `settings.segments.startTime`
- **Type**: `float`
- **Required**: true
- **Min**: `0`
- **Step**: `0.01`

Start time in seconds for the segment.

#### [endTime](https://runware.ai/docs/models/sync-3#request-settings-segments-endtime)

- **Path**: `settings.segments.endTime`
- **Type**: `float`
- **Required**: true
- **Step**: `0.01`

End time in seconds for the segment. Must be greater than startTime.

#### [audio](https://runware.ai/docs/models/sync-3#request-settings-segments-audio)

- **Path**: `settings.segments.audio`
- **Type**: `string`
- **Required**: true

Audio source URL for this segment.

#### [audioStartTime](https://runware.ai/docs/models/sync-3#request-settings-segments-audiostarttime)

- **Path**: `settings.segments.audioStartTime`
- **Type**: `float`
- **Min**: `0`
- **Step**: `0.01`
- **Default**: `0`

Start time in seconds within the source audio file.

#### [audioEndTime](https://runware.ai/docs/models/sync-3#request-settings-segments-audioendtime)

- **Path**: `settings.segments.audioEndTime`
- **Type**: `float`
- **Min**: `0`
- **Step**: `0.01`

End time in seconds within the source audio file.
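
Putting the five segment fields together, a segmented workflow syncing two dialogue windows to different audio sources might look like this (URLs and timings are illustrative):

```python
# Sketch: a settings.segments array for segmented lip sync.
# The first segment trims its source audio to a window; note that
# audioStartTime and audioEndTime must appear together.

segments = [
    {
        "startTime": 0.0,       # video seconds
        "endTime": 4.5,         # must be greater than startTime
        "audio": "https://example.com/line1.wav",
        "audioStartTime": 0.0,  # seconds within the source audio
        "audioEndTime": 4.5,
    },
    {
        "startTime": 6.0,
        "endTime": 9.25,
        "audio": "https://example.com/line2.wav",
    },
]
```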

### [syncMode](https://runware.ai/docs/models/sync-3#request-settings-syncmode)

- **Path**: `settings.syncMode`
- **Type**: `string`
- **Default**: `cut_off`

Synchronization strategy when audio and video durations don't match.

**Allowed values**:

- `bounce` Audio bounces back and forth to fill video duration.
- `cut_off` Audio is cut when video ends.
- `silence` Remaining video plays with silence after audio ends.
- `remap` Audio is time-stretched or compressed to match video duration.

### [tts](https://runware.ai/docs/models/sync-3#request-settings-tts)

- **Path**: `settings.tts`
- **Type**: `object (3 properties)`

Configuration for the text-to-speech provider used with speech input.

#### [provider](https://runware.ai/docs/models/sync-3#request-settings-tts-provider)

- **Path**: `settings.tts.provider`
- **Type**: `string`
- **Default**: `elevenlabs`

Name of the TTS provider.

#### [stability](https://runware.ai/docs/models/sync-3#request-settings-tts-stability)

- **Path**: `settings.tts.stability`
- **Type**: `float`
- **Min**: `0`
- **Max**: `1`

Voice stability for the TTS provider. Higher values produce more consistent output.

#### [similarityBoost](https://runware.ai/docs/models/sync-3#request-settings-tts-similarityboost)

- **Path**: `settings.tts.similarityBoost`
- **Type**: `float`
- **Min**: `0`
- **Max**: `1`

Voice similarity enforcement for the TTS provider. Higher values make the voice more closely match the target.
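
Assembling the parameters above, a complete request might look like the following sketch. The media URLs are illustrative; delivery is async, so the immediate response is an acknowledgment to poll with the same `taskUUID`:

```python
# Sketch: a full sync-3 videoInference task using a pre-recorded
# audio track (inputs.audio) and automatic speaker detection.
import uuid

task = {
    "taskType": "videoInference",
    "taskUUID": str(uuid.uuid4()),
    "model": "sync:lipsync@3",
    "deliveryMethod": "async",
    "outputFormat": "MP4",
    "includeCost": True,
    "inputs": {
        "video": "https://example.com/interview.mp4",
        "audio": "https://example.com/dub.wav",
    },
    "settings": {
        "syncMode": "cut_off",
        "activeSpeakerDetection": {"autoDetect": True},
    },
}
```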

## Response Parameters

### [taskType](https://runware.ai/docs/models/sync-3#response-tasktype)

- **Type**: `string`
- **Required**: true
- **Value**: `videoInference`

Type of the task.

### [taskUUID](https://runware.ai/docs/models/sync-3#response-taskuuid)

- **Type**: `string`
- **Required**: true
- **Format**: `UUID v4`

UUID of the task.

### [videoUUID](https://runware.ai/docs/models/sync-3#response-videouuid)

- **Type**: `string`
- **Required**: true
- **Format**: `UUID v4`

UUID of the output video.

### [videoURL](https://runware.ai/docs/models/sync-3#response-videourl)

- **Type**: `string`
- **Format**: `URI`

URL of the output video.

### [videoBase64Data](https://runware.ai/docs/models/sync-3#response-videobase64data)

- **Type**: `string`

Base64-encoded video data.

### [videoDataURI](https://runware.ai/docs/models/sync-3#response-videodatauri)

- **Type**: `string`
- **Format**: `URI`

Data URI of the output video.

### [seed](https://runware.ai/docs/models/sync-3#response-seed)

- **Type**: `integer`

The seed used for generation. If none was provided, this is the randomly generated seed.

### [NSFWContent](https://runware.ai/docs/models/sync-3#response-nsfwcontent)

- **Type**: `boolean`

Flag indicating if NSFW content was detected.

### [cost](https://runware.ai/docs/models/sync-3#response-cost)

- **Type**: `float`

Task cost in USD. Present when `includeCost` is set to `true` in the request.