react-1
react-1 is a video performance editing model designed for post-production direction without reshoots. It modifies acting and emotional delivery within existing footage while preserving identity and visual continuity, enabling directors to reshape performances using audio or written guidance.
API Options
Platform-level options for task execution and delivery.
-
taskType
string required value: videoInference -
Identifier for the type of task being performed
-
taskUUID
string required UUID v4 -
UUID v4 identifier for tracking tasks and matching async responses. Must be unique per task.
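A task UUID can be produced with any standard UUID v4 generator; a minimal Python sketch:

```python
import uuid

# Each submitted task needs its own UUID v4; reusing an identifier
# across tasks is not allowed, since responses are matched by it.
task_uuid = str(uuid.uuid4())
print(task_uuid)  # random on every run
```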
-
outputType
string default: URL -
Video output type.
Allowed values: `URL`
-
outputFormat
string default: MP4 -
Specifies the file format of the generated output. The available values depend on the task type and the specific model's capabilities.
- `MP4`: Widely supported video container (H.264), recommended for general use.
- `WEBM`: Optimized for web delivery.
- `MOV`: QuickTime format, common in professional workflows (Apple ecosystem).
Allowed values: `MP4`, `WEBM`, `MOV`
-
outputQuality
integer min: 20 max: 99 default: 95 -
Compression quality of the output. Higher values preserve quality but increase file size.
-
webhookURL
string URI -
Specifies a webhook URL where JSON responses will be sent via HTTP POST when generation tasks complete. For batch requests with multiple results, each completed item triggers a separate webhook call as it becomes available.
Learn more 1 resource
- Webhooks PLATFORM
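Each completed item arrives as a separate HTTP POST to the webhook URL. The sketch below parses such a delivery and recovers the task UUID; the payload shape used here (a `data` array of task objects) is an assumption for illustration, not the documented schema:

```python
import json

def parse_webhook(body: bytes) -> str:
    """Extract the taskUUID from a webhook POST body.

    The {"data": [{"taskUUID": ...}]} structure is a hypothetical
    shape for this sketch; consult the platform's webhook reference
    for the exact payload.
    """
    payload = json.loads(body)
    task = payload["data"][0]
    return task["taskUUID"]

# Hypothetical incoming body:
body = b'{"data": [{"taskUUID": "3a9f-demo", "taskType": "videoInference"}]}'
print(parse_webhook(body))
```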
-
deliveryMethod
string default: async -
Determines how the API delivers task results.
Allowed values: `async`
- `async`: Returns an immediate acknowledgment with the task UUID. Poll for results using getResponse. Required for long-running tasks like video generation.
Learn more 1 resource
- Task Polling PLATFORM
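The async flow can be sketched as a polling loop. The transport for getResponse and the status field names below are assumptions, so the actual API call is injected as a caller-supplied `fetch` function:

```python
import time

def poll_for_result(fetch, task_uuid, interval=2.0, timeout=600.0):
    """Poll until a task result is ready.

    `fetch` wraps the real getResponse call (task_uuid -> dict).
    The "status" values checked here are illustrative, not the
    platform's exact field names.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = fetch(task_uuid)
        if result.get("status") == "success":
            return result
        if result.get("status") == "error":
            raise RuntimeError(result.get("message", "task failed"))
        time.sleep(interval)
    raise TimeoutError(f"task {task_uuid} did not finish in {timeout}s")

# Stubbed fetcher for demonstration: pending once, then done.
responses = iter([
    {"status": "processing"},
    {"status": "success", "videoURL": "https://example.com/out.mp4"},
])
result = poll_for_result(lambda u: next(responses), "0000-demo", interval=0.01)
print(result["videoURL"])
```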
-
uploadEndpoint
string URI -
Specifies a URL where the generated content will be automatically uploaded using the HTTP PUT method. The raw binary data of the media file is sent directly as the request body. For secure uploads to cloud storage, use presigned URLs that include temporary authentication credentials.
Common use cases:
- Cloud storage: Upload directly to S3 buckets, Google Cloud Storage, or Azure Blob Storage using presigned URLs.
- CDN integration: Upload to content delivery networks for immediate distribution.
```
// S3 presigned URL for secure upload
https://your-bucket.s3.amazonaws.com/generated/content.mp4?X-Amz-Signature=abc123&X-Amz-Expires=3600

// Google Cloud Storage presigned URL
https://storage.googleapis.com/your-bucket/content.jpg?X-Goog-Signature=xyz789

// Custom storage endpoint
https://storage.example.com/uploads/generated-image.jpg
```
The content data will be sent as the request body to the specified URL when generation is complete.
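The platform performs this upload itself; the sketch below only illustrates what the request looks like at the HTTP level, using Python's standard `urllib`. The presigned URL and `Content-Type` header are placeholder values:

```python
import urllib.request

def build_upload_request(presigned_url: str, data: bytes) -> urllib.request.Request:
    """Build an HTTP PUT matching the documented upload behavior:
    the raw binary bytes of the media file go directly in the body,
    with no multipart wrapping."""
    return urllib.request.Request(
        presigned_url,
        data=data,
        method="PUT",
        headers={"Content-Type": "video/mp4"},  # illustrative header choice
    )

# Hypothetical presigned URL (not live); payload is a stand-in for MP4 bytes.
req = build_upload_request(
    "https://your-bucket.s3.amazonaws.com/generated/content.mp4?X-Amz-Signature=abc123",
    b"\x00\x00\x00\x18ftypmp42",
)
print(req.get_method())
```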
-
safety
object -
Content safety checking configuration for video generation.
Properties 2 properties
-
safety » checkContent
boolean default: false -
Enable or disable content safety checking. When enabled, defaults to `fast` mode.
-
safety » mode
string default: none -
Safety checking mode for video generation.
Allowed values 3 values
- Disables checking.
- Checks key frames.
- Checks all frames.
-
-
ttl
integer min: 60 -
Time-to-live (TTL) in seconds for generated content. Only applies when outputType is URL.
-
includeCost
boolean default: false -
Include task cost in the response.
-
numberResults
integer min: 1 max: 4 default: 1 -
Number of results to generate. Each result uses a different seed, producing variations of the same parameters.
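Putting the platform-level options above together, a representative task object might look like the sketch below. The exact wrapping and endpoint for submitting it are not specified in this reference, and the `inputs` and `providerSettings` objects documented below are omitted here:

```json
{
  "taskType": "videoInference",
  "taskUUID": "a1b2c3d4-0000-4000-8000-000000000000",
  "model": "sync:react-1@1",
  "outputType": "URL",
  "outputFormat": "MP4",
  "outputQuality": 95,
  "deliveryMethod": "async",
  "includeCost": true,
  "numberResults": 1
}
```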
Inputs
Input resources for the task (images, audio, etc.). These must be nested inside the inputs object.
Generation Parameters
Core parameters for controlling the generated content.
-
model
string required value: sync:react-1@1 -
Identifier of the model to use for generation.
Provider Settings
Parameters specific to this model provider. These must be nested inside the providerSettings.sync object.
-
providerSettings » sync » activeSpeakerDetection
object -
Speaker-targeting for multi-person clips.
Properties 1 property
-
providerSettings » sync » activeSpeakerDetection » autoDetect
boolean default: false -
Automatically detect and target the active speaker.
-
-
providerSettings » sync » editRegion
string default: face -
Region of the subject to animate during re-animation.
Allowed values 3 values
- Lip sync and emotional expressions in the face region.
- Only lip movements for synchronization.
- Full head movements with emotions and lip sync.
-
providerSettings » sync » emotionPrompt
string -
Emotional tone for performance re-animation.
Allowed values 6 values
-
providerSettings » sync » occlusionDetectionEnabled
boolean default: false -
Enable occlusion handling for obstructed faces.
-
providerSettings » sync » syncMode
string default: bounce -
Synchronization strategy when audio and video durations don't match.
Allowed values 5 values
- Audio bounces back and forth to fill video duration.
- Audio repeats from the beginning when it ends.
- Audio is cut when video ends.
- Remaining video plays with silence after audio ends.
- Audio is time-stretched or compressed to match video duration.
-
providerSettings » sync » temperature
float min: 0 max: 1 step: 0.01 default: 0.5 -
Expressiveness of lip sync and facial movements.
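A sketch of the providerSettings.sync block assembled from the defaults and documented values above. `emotionPrompt` is omitted because its allowed values are not listed in this reference, and the boolean settings are switched on purely for illustration:

```json
{
  "providerSettings": {
    "sync": {
      "activeSpeakerDetection": { "autoDetect": true },
      "editRegion": "face",
      "occlusionDetectionEnabled": true,
      "syncMode": "bounce",
      "temperature": 0.5
    }
  }
}
```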