Best Image-to-Video

Models selected for turning still images into short video clips with coherent motion and stable subjects. Useful for simple animation, camera movement, and bringing static visuals to life.

Best rated

P-Video-Avatar is a portrait-driven avatar video model that turns a single image into a speaking video using either an uploaded audio track or a generated voice from script. It is built for production avatar workflows with strong lip sync, selectable voices and languages, optional speaking-style control, seeded generation, and 720p or 1080p output for scalable talking-head video creation.
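The entry above describes two mutually exclusive input paths: an uploaded audio track, or a script rendered with a selected voice. As a minimal sketch of how such a request might be assembled (the function and field names here are illustrative assumptions, not a documented API):

```python
# Hypothetical request builder for a talking-avatar model that
# accepts either an audio track or a script-plus-voice, as the
# entry above describes. All field names are assumptions.

def avatar_request(image_url, audio_url=None, script=None,
                   voice=None, resolution="1080p", seed=None):
    # Exactly one of audio_url / script must be provided.
    if (audio_url is None) == (script is None):
        raise ValueError("provide exactly one of audio_url or script")
    req = {"image_url": image_url, "resolution": resolution}
    if audio_url is not None:
        req["audio_url"] = audio_url      # lip-sync to uploaded audio
    else:
        req["script"] = script            # text-to-speech path
        req["voice"] = voice or "default" # selectable voice
    if seed is not None:
        req["seed"] = seed                # seeded generation
    return req

req = avatar_request("face.png", script="Hello there",
                     voice="en_female_1", seed=7)
```

The either/or check up front mirrors the catalog wording; a real endpoint would document which combinations it accepts.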

Featured Models

Top-performing models in this category, recommended by our community and performance benchmarks.

#2

by Alibaba

HappyHorse-1.0 is a video generation model for text-to-video and image-to-video workflows. It supports output at 720p or 1080p, clip durations from 3 to 15 seconds, seeded generation, watermark control, and first-frame image conditioning for image-to-video generation.
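Entries like this one reduce to a small set of request parameters. As a minimal sketch under stated assumptions (the builder and its field names are hypothetical, chosen only to mirror the capabilities listed above):

```python
# Hypothetical payload builder for an image-to-video request.
# The allowed values mirror the capabilities listed above
# (720p/1080p output, 3-15 s clips, seed, watermark control,
# first-frame conditioning); names are illustrative only.

def build_i2v_payload(first_frame_url, prompt, resolution="1080p",
                      duration_s=5, seed=None, watermark=True):
    if resolution not in ("720p", "1080p"):
        raise ValueError("resolution must be 720p or 1080p")
    if not 3 <= duration_s <= 15:
        raise ValueError("duration must be 3-15 seconds")
    payload = {
        "image_url": first_frame_url,  # first-frame conditioning
        "prompt": prompt,
        "resolution": resolution,
        "duration": duration_s,
        "watermark": watermark,
    }
    if seed is not None:
        payload["seed"] = seed         # seeded, reproducible generation
    return payload

payload = build_i2v_payload("https://example.com/frame.png",
                            "slow dolly-in, soft light",
                            duration_s=8, seed=42, watermark=False)
```

Validating ranges client-side before sending keeps failed generations (and their cost) out of the queue.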

#3

by Skywork

SkyReels V4 is a unified multimodal video foundation model for joint video-audio generation, inpainting, and editing. It accepts text, images, video clips, masks, and audio references, and supports cinematic outputs up to 1080p, 32 FPS, and 15 seconds with synchronized audio, making it suitable for prompt-driven generation as well as guided editing workflows.

#4

by Kling AI

Kling VIDEO 3.0 4K is the 4K variant of Kling VIDEO 3.0 for text-to-video and image-to-video generation. It extends the 3.0 series from 720p Standard and 1080p Pro into 4K output while keeping the same multimodal strengths: native audio generation, multi-shot sequencing, element consistency, prompt-driven scene control, and stable temporal coherence across longer clips.

#5

by Kling AI

Kling VIDEO O3 4K is the 4K variant of Kling VIDEO O3 for text-to-video and image-to-video workflows. It raises the O3 line from 720p Standard and 1080p Pro to 4K output while preserving the series strengths: native audio generation, reference-guided video creation, prompt-based editing, multi-shot structure, and stable subject consistency for more demanding cinematic and advertising workflows.

#6

by ByteDance

Seedance 2.0 is a unified multimodal audio-video generation model from ByteDance that accepts text, image, audio, and video inputs in combination, supporting up to 9 images, 3 video clips, and 3 audio clips as references. It generates multi-shot videos up to 15 seconds with dual-channel synchronized audio including dialogue, ambient sound, and effects. It features physics-aware motion, improved controllability for video extension and editing, and strong instruction following for complex scene composition.

#7

by ByteDance

Seedance 2.0 Fast is a speed-optimized variant of ByteDance's unified multimodal audio-video generation model. It accepts text, image, audio, and video inputs in combination, like Seedance 2.0, but targets shorter wall-clock times and higher throughput for iterative workflows. It produces multi-shot videos with dual-channel synchronized audio including dialogue, ambient sound, and effects, with physics-aware motion and editing controls, while prioritizing responsiveness over the last increment of visual refinement so teams can preview and ship ideas faster.

#8

by Alibaba

Wan2.7 is Alibaba's next-generation multimodal video model supporting text-to-video, image-to-video, reference-to-video, and video editing. It features multi-shot storytelling, subject-consistent multi-character generation, first-and-last-frame interpolation, video continuation, style transfer, instruction-based editing, and audio-conditioned generation with auto-dubbing. It outputs at 720p or 1080p, 30 FPS, in multiple aspect ratios.


#9
Veo 3.1 Lite

API Only

by Google

Veo 3.1 Lite is the most cost-effective model in the Veo 3.1 family, designed for high-volume applications requiring rapid iteration. It supports text-to-video and image-to-video generation at 720p or 1080p in landscape and portrait formats, with customizable duration of 4, 6, or 8 seconds. It maintains the same generation speed as Veo 3.1 Fast at less than 50% of the cost, and includes native synchronized audio generation.
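The entry above enumerates a fixed option space: 4-, 6-, or 8-second durations, 720p or 1080p, landscape or portrait. A minimal client-side check might look like this (the option names and the validator itself are assumptions for illustration, not part of any published API):

```python
# Hypothetical option validation for the fixed choices listed
# above (durations of 4, 6, or 8 seconds; 720p or 1080p;
# landscape or portrait). Keys and values are illustrative.

VALID = {
    "duration": {4, 6, 8},
    "resolution": {"720p", "1080p"},
    "orientation": {"landscape", "portrait"},
}

def validate_options(opts):
    """Return a list of error strings; empty means the options pass."""
    errors = []
    for key, allowed in VALID.items():
        if opts.get(key) not in allowed:
            errors.append(f"{key} must be one of {sorted(map(str, allowed))}")
    return errors

ok = validate_options({"duration": 6, "resolution": "720p",
                       "orientation": "portrait"})
bad = validate_options({"duration": 5, "resolution": "720p",
                        "orientation": "portrait"})
```

Returning a list of errors rather than raising on the first one lets a UI surface every invalid field at once.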

#10

by PixVerse

PixVerse V6 is a video generation model focused on multi-shot storytelling with native synchronized audio. It provides over 20 cinematic camera controls including focal length, aperture, depth of field, lens distortion, and vignetting. It features improved character consistency across shots using multi-image references, supports 1080p output at up to 15 seconds, and includes multilingual text rendering in frames.

#11

by Lightricks

LTX-2.3 Fast is a performance-optimized variant of LTX 2.3 designed for rapid video generation with synchronized audio. It supports text-to-video, image-to-video, and audio-conditioned workflows while prioritizing speed, responsiveness, and cost efficiency for draft, preview, and high-velocity creative production use cases.

#12

by Lightricks

LTX-2.3 is a multimodal video generation model that produces synchronized video and audio from text or images. It supports text-to-video and image-to-video workflows with native dialogue and ambient sound generation, emphasizing temporal stability, strong motion coherence, and production-ready output quality for professional creative pipelines.

#13

Pruna P-Video is a real-time AI video generation model designed for fast creative iteration and production workflows. It supports text-to-video, image-to-video, and audio-to-video through a unified endpoint, delivering up to 1080p at 48 FPS with integrated dialogue generation and audio import. The model emphasizes speed, cost efficiency, sequencing consistency across clips, and stable subject identity, making it well suited for brand content, multi-format distribution, and rapid draft-to-refine pipelines.

#14

by Kling AI

Kling VIDEO 3.0 Pro is a unified multimodal video model that generates high-quality video with synchronized audio from text or images. It supports reference-guided generation, prompt-based editing, fine control over motion and pacing, and stable temporal coherence for cinematic and narrative clips. Native audio output includes dialogue, ambient sound, and effects aligned to the visuals.

#15

by Kling AI

Kling VIDEO O3 Pro is a unified multimodal video model that generates HD clips from text or images with native audio output. It prioritizes detail, motion realism, and stable subject identity, and it supports reference-driven generation plus prompt-based video editing with strong temporal consistency.

#16

by Vidu

Vidu Q3 is a multimodal video generation model that creates video with synchronized audio directly from text or images, supports intelligent multi-shot sequencing, and produces complete outputs with stable visuals and embedded subtitles without post-processing.

#17

by xAI

Grok Imagine Video is a multimodal generative video model that produces short video clips with native audio from text descriptions or static images. It supports text-to-video and image-to-video generation with synchronized sound effects and dialogue, enabling developers to animate scenes with motion, camera dynamics, and audio in a single API workflow.

#18

by PixVerse

PixVerse V5.6 is an upgraded video generation model that improves visual stability, motion clarity, and audio-visual alignment over previous versions. It supports text-to-video and image-to-video generation with optional native audio, delivering more accurate multi-character lip-sync, cleaner motion in complex scenes, and more natural speech and environmental sound for single-shot cinematic outputs.

#19

by Kling AI

KlingAI Avatar 2.0 Pro builds on the Standard version with higher visual fidelity, smoother motion, and improved expressivity. It generates up to five-minute avatar videos from a single image and audio track, with enhanced detail and production-ready results for varied character types.

#20

by Runway

Runway Gen-4.5 is an AI video generation model that creates short video clips from text prompts or static images with high visual fidelity and smooth motion. It supports both text-to-video and image-to-video generation with a range of aspect ratios and clip durations. Gen-4.5 emphasizes realistic motion, strong prompt adherence, and controllable composition, making it suitable for cinematic sequences and creative video workflows.

#21

by MiniMax

MiniMax Hailuo 2.3 Fast is the speed tier of the Hailuo 2.3 video family. It targets rapid iteration for social clips, ads, and previews. It produces 6-second 768p or 1080p outputs with smooth motion and stable composition, making it ideal for high-volume, image-driven video workflows.

#22

by Lightricks

LTX-2 Fast is the high-speed tier of the LTX-2 video foundation model. It targets rapid cinematic iteration with strong motion quality and visual consistency, generating short audio-synced video clips from text or image prompts with low latency and efficient GPU use.

#23

by MiniMax

MiniMax Hailuo 2.3 is a cinematic video model for short-form production. It accepts text prompts or image inputs and outputs 6- or 10-second clips at 768p or 1080p. It focuses on consistent motion, strong physics, and stable scenes for ads, social content, and creative shots.

#24

by Google

Veo 3.1 is a cinematic video generation model for developers. It turns text prompts or reference images into high-fidelity scenes with richer native audio, better prompt adherence, and granular shot control. Use it for story-driven clips with smoother motion and consistent style.

#25

by Google

Veo 3.1 Fast is a high-speed variant of Veo 3.1 for rapid creative iteration. It supports text prompts, image prompts, and reference images, targeting low-latency workflows while keeping cinematic quality for short-form and multi-shot video generation with native audio.

#26

by OpenAI

Sora 2 Pro is the higher-quality Sora 2 variant for precision video work. It supports text prompts and image inputs, and outputs synchronized video with sound, higher-resolution frames, and stronger temporal consistency. It is ideal for production clips and demanding pipelines.

#27

by OpenAI

Sora 2 is OpenAI’s flagship generative model for video and audio. It accepts text prompts and generates visually rich clips with synchronized dialogue and sound. It improves physical realism and scene control. It also supports editing and extension of existing video inputs.

#28

by ByteDance

OmniHuman-1.5 generates high-fidelity avatar video from a single image with audio and optional text prompts. It fuses multimodal reasoning with diffusion-based motion to keep identity stable, lip sync accurate, and gestures context-aware for long, multi-subject clips.

#29

by Runway

Runway Aleph is an in-context video model for high-fidelity cinematic work. It transforms text prompts, reference images, and source clips into new shots with consistent lighting, style, and motion. Developers can build workflows for video editing, angle generation, and scene transformation.

#30

by Runway

Runway Gen-4 Turbo is a high-speed variant of Gen-4 for rapid video ideation. It turns reference images into short cinematic clips with strong character consistency, smooth motion, and reduced credit cost, making it ideal for fast iteration in production and previsualization pipelines.

Explore other collections