
sync.
Production-grade AI lip sync and video dubbing
Sync.so develops lip synchronization models that generate realistic mouth movement in video from a supplied audio track. The technology is used to align spoken dialogue with facial motion on existing footage, supporting automated dubbing and localization workflows. The models integrate alongside image, video, and audio generation systems to add lip sync capabilities without manual animation or custom facial rigs.
Models by sync.

API Only
react-1
react-1 is a video performance editing model designed for post-production direction without reshoots. It modifies acting and emotional delivery within existing footage while preserving identity and visual continuity, enabling directors to reshape performances using audio or written guidance.

API Only
lipsync-2-pro
lipsync-2-pro extends lipsync-2 with diffusion-based enhancement for studio-grade lip synchronization. It preserves fine facial details such as teeth, facial hair, and micro-expressions while supporting high-resolution output suitable for professional post-production workflows.

API Only
lipsync-2
lipsync-2 is a zero-shot lip synchronization model that aligns spoken audio to existing video while preserving the speaker’s identity and natural speaking style. It works across live-action, animation, and AI-generated footage without training or fine-tuning.
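Since these models are API-only, a typical integration pairs an existing video with a replacement audio track and submits both as a generation job. The sketch below builds such a request payload in Python; the endpoint URL, header name, and field schema are assumptions modeled on common REST generation APIs, not confirmed details of sync.so's API, so consult the official documentation before use.

```python
import json

# Hypothetical endpoint; verify against sync.so's official API reference.
API_URL = "https://api.sync.so/v2/generate"

def build_lipsync_request(video_url: str, audio_url: str,
                          model: str = "lipsync-2") -> dict:
    """Pair existing footage with a new audio track for lip syncing.

    Field names ("model", "input", "type", "url") are illustrative
    assumptions about the request schema.
    """
    return {
        "model": model,
        "input": [
            {"type": "video", "url": video_url},
            {"type": "audio", "url": audio_url},
        ],
    }

payload = build_lipsync_request(
    "https://example.com/scene.mp4",
    "https://example.com/dub_track.wav",
)
print(json.dumps(payload, indent=2))

# Submitting the job would then look roughly like this (needs an API key):
# import requests
# resp = requests.post(API_URL, json=payload,
#                      headers={"x-api-key": "YOUR_KEY"})
```

Because lipsync-2 is zero-shot, the same request shape applies to any speaker: no per-identity training step precedes the call, and switching to lipsync-2-pro for studio-grade output would only change the `model` field.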