ByteDance <> Runware

Seedance 2.0

State-of-the-art AI video generation

Access Seedance 2.0, ByteDance's latest video generation model, with support for all media input workflows. Generate cinematic video with native audio, realistic motion, and advanced camera control.

Get early access

Seedance 2.0 variants

Seedance 2.0 is available as a unified model family with two variants, both supporting the same core workflows and inputs.

Quality

Seedance 2.0

Optimized for quality. Designed for production use cases where visual fidelity, scene consistency, and more complex compositions matter.

Speed

Seedance 2.0 Fast

Optimized for speed. Built for iterative workflows where low latency matters more than peak fidelity.

All workflow types supported natively.

Text-to-video · Image-to-video · Reference-to-video

One integration, no juggling between multiple endpoints.

// capabilities

Why Seedance 2.0 stands out

Native audio

Synchronized, cinema-quality audio generated in perfect timing with your video.

Advanced camera control

Prompt-level control over camera movement, framing, and shot composition.

Realistic motion and physics

Stronger handling of movement, interaction, and physical realism across frames.

Great for anime

Handles stylized character motion, fluid transitions, and scene continuity.

Extreme realism

Photorealistic output with lifelike textures, lighting, and depth across every frame.

Perfect dialogue

Accurate lip sync and natural facial performance from spoken audio.

Cinematic quality

Produces sequences with consistent characters, scenes, and story continuity.

Reference-driven control

Guide outputs using one or more inputs to improve consistency, style, and structure.

Multi-shot outputs

Generate sequences with natural transitions and continuity, not just isolated clips.

// use cases

Built for real workflows

Seedance 2.0 is designed for real-world use, not just isolated clips. It works across a range of production and product workflows where control and quality matter.

Product videos from images

Turn static assets into dynamic video content without manual editing.

Social content at scale

Generate short-form video for campaigns, testing, and iteration.

Creative tooling

Power editors and platforms that need structured, controllable video output.

Storyboarding and previs

Quickly explore scenes, camera movement, and concepts before production.

Reference-driven generation

Use images and inputs to guide style, composition, and motion more precisely.

Character-led content

Maintain consistency across subjects, environments, and sequences.

// frequently asked questions

What is Seedance 2.0?

Seedance 2.0 is ByteDance's latest AI video generation model, designed around a multimodal audio-video architecture. It supports text, image, audio, and video inputs, with a focus on cinematic quality, motion realism, and controllability.

What workflows are supported?

Text-to-video, image-to-video, and reference-to-video generation are supported across both Seedance 2.0 and Seedance 2.0 Fast.

What is the difference between the two variants?

Seedance 2.0 is optimized for higher quality output, while Seedance 2.0 Fast is optimized for speed and iteration.

Do I need separate integrations for each workflow?

No. All workflows are supported through the same integration, so you don't need to manage multiple endpoints.
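In practice, a single-endpoint integration like this means the workflow is implied by which inputs you attach, not by which endpoint you call. A minimal sketch in Python (the field names and payload shape below are illustrative assumptions, not Runware's documented API):

```python
# Sketch of one request-building function covering all three workflows.
# All names here (model id, "inputImage", "referenceImages") are
# illustrative assumptions, not a documented API surface.

def build_task(prompt, model="seedance-2.0", image=None, references=None):
    """Build one payload; the workflow is implied by the inputs attached."""
    task = {"model": model, "prompt": prompt}
    if image is not None:
        task["inputImage"] = image                   # image-to-video
    if references:
        task["referenceImages"] = list(references)   # reference-to-video
    return task                                      # prompt only: text-to-video

# Text-to-video: prompt only.
t2v = build_task("A drone shot over a misty forest at dawn")

# Image-to-video: animate a single still.
i2v = build_task("Slow push-in, leaves rustling", image="product.png")

# Reference-to-video: guide style and consistency with multiple inputs.
r2v = build_task("Character walks through rain",
                 references=["char_front.png", "char_side.png"])
```

All three payloads share the same shape and would be sent to the same place, which is the point of the "one integration" claim above.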

// early access

Get early access to Seedance 2.0

We're rolling out access in stages. Request early access to start building with Seedance 2.0 and Seedance 2.0 Fast. We'll follow up with access details and next steps.
