MODEL ID: pixverse:1@8
Status: coming soon

PixVerse V6

PixVerse
by PixVerse

PixVerse V6 is a video generation model focused on multi-shot storytelling with native synchronized audio. It provides over 20 cinematic camera controls including focal length, aperture, depth of field, lens distortion, and vignetting. It features improved character consistency across shots using multi-image references, supports 1080p output at up to 15 seconds, and includes multilingual text rendering in frames.

API Options

Platform-level options for task execution and delivery.

taskType

string required value: videoInference

Identifier for the type of task being performed.

taskUUID

string required UUID v4

UUID v4 identifier for tracking tasks and matching async responses. Must be unique per task.
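A fresh UUID v4 must be generated per task. A minimal sketch, using only the field names documented on this page:

```python
import uuid

# Generate a unique UUID v4 per task; reusing one across tasks is not allowed.
task_uuid = str(uuid.uuid4())

# Minimal task envelope using the identifiers documented on this page.
task = {
    "taskType": "videoInference",
    "taskUUID": task_uuid,
}
```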

outputType

string default: URL

Video output type.

Allowed values: URL

outputFormat

string default: MP4

Specifies the file format of the generated output. The available values depend on the task type and the specific model's capabilities.

  • `MP4`: Widely supported video container (H.264), recommended for general use.
  • `WEBM`: Optimized for web delivery.
  • `MOV`: QuickTime format, common in professional workflows (Apple ecosystem).

outputQuality

integer min: 20 max: 99 default: 95

Compression quality of the output. Higher values preserve quality but increase file size.

webhookURL

string URI

Specifies a webhook URL where JSON responses will be sent via HTTP POST when generation tasks complete. For batch requests with multiple results, each completed item triggers a separate webhook call as it becomes available.
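On the receiving side, a webhook handler only needs to parse the POSTed JSON body. A sketch, with the caveat that the exact payload schema is not specified here; the taskUUID field below is an assumption based on the async-matching behavior described above:

```python
import json

def parse_webhook_body(body: bytes) -> str:
    """Parse a webhook POST body and return the task identifier.

    Assumption: the JSON payload echoes the taskUUID of the completed
    task (used here only to illustrate matching async responses).
    """
    payload = json.loads(body.decode("utf-8"))
    return payload["taskUUID"]

# Hypothetical payload for illustration only.
sample = b'{"taskUUID": "aac49721-1964-481a-ae78-8a4e29b91402"}'
```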

deliveryMethod

string default: async

Determines how the API delivers task results.

Allowed values: async

  • async: Returns an immediate acknowledgment with the task UUID. Poll for results using getResponse. Required for long-running tasks like video generation.
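The async flow is: submit, receive the acknowledgment, then poll with getResponse using the same taskUUID. A sketch of the two payloads; the exact getResponse payload shape is an assumption beyond the task-type name given above:

```python
import uuid

poll_uuid = str(uuid.uuid4())

# Submission envelope (async is the default deliveryMethod).
submit_task = {
    "taskType": "videoInference",
    "taskUUID": poll_uuid,
    "deliveryMethod": "async",
}

# Poll payload: getResponse is the task type named on this page for
# retrieving async results; it references the original taskUUID.
poll_task = {
    "taskType": "getResponse",
    "taskUUID": poll_uuid,
}
```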

uploadEndpoint

string URI

Specifies a URL where the generated content will be automatically uploaded using the HTTP PUT method. The raw binary data of the media file is sent directly as the request body. For secure uploads to cloud storage, use presigned URLs that include temporary authentication credentials.

Common use cases:

  • Cloud storage: Upload directly to S3 buckets, Google Cloud Storage, or Azure Blob Storage using presigned URLs.
  • CDN integration: Upload to content delivery networks for immediate distribution.
// S3 presigned URL for secure upload
https://your-bucket.s3.amazonaws.com/generated/content.mp4?X-Amz-Signature=abc123&X-Amz-Expires=3600

// Google Cloud Storage presigned URL
https://storage.googleapis.com/your-bucket/content.jpg?X-Goog-Signature=xyz789

// Custom storage endpoint
https://storage.example.com/uploads/generated-image.jpg

The content data will be sent as the request body to the specified URL when generation is complete.
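The same PUT contract can be exercised from Python's standard library. A sketch only: the presigned URL is a placeholder and no request is actually sent:

```python
import urllib.request

# Placeholder presigned URL (illustration only; not a real bucket).
upload_url = (
    "https://your-bucket.s3.amazonaws.com/generated/content.mp4"
    "?X-Amz-Signature=abc123&X-Amz-Expires=3600"
)

video_bytes = b"..."  # raw binary of the generated MP4

# The raw file bytes go directly in the request body, per the contract above.
request = urllib.request.Request(
    upload_url,
    data=video_bytes,
    method="PUT",
    headers={"Content-Type": "video/mp4"},
)
# urllib.request.urlopen(request) would perform the actual upload.
```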

safety

object

Content safety checking configuration for video generation.

Properties:

safety » checkContent

boolean default: false

Enable or disable content safety checking. When enabled, defaults to fast mode.

safety » mode

string default: none

Safety checking mode for video generation.

Allowed values (3):

  • Disables checking.
  • Checks key frames.
  • Checks all frames.

ttl

integer min: 60

Time-to-live (TTL) in seconds for generated content. Only applies when outputType is URL.

includeCost

boolean default: false

Include task cost in the response.

numberResults

integer min: 1 max: 20 default: 1

Number of results to generate. Each result uses a different seed, producing variations of the same parameters.

Inputs

Input resources for the task (images, audio, etc.). These must be nested inside the inputs object.

inputs » frameImages

array of objects min items: 1 max items: 2

An array of objects that define key frames to guide video generation. Each object specifies an input image and optionally its position within the video timeline.

The frameImages parameter allows you to constrain specific frames within the video sequence, ensuring that particular visual content appears at designated points. This is different from referenceImages, which provide overall visual guidance without constraining specific timeline positions.

When the frame parameter is omitted from objects, automatic distribution rules apply:

  • 1 image: Used as the first frame.
  • 2 images: First and last frames.
Examples:

Single frame (automatic positioning): When only one image is provided, it automatically becomes the first frame of the video.

"frameImages": [
  {
    "image": "aac49721-1964-481a-ae78-8a4e29b91402"
  }
]

First and last frames (explicit positioning): With two images, they become the first and last frames of the video sequence; here each frame value is set explicitly.
"frameImages": [
  {
    "image": "aac49721-1964-481a-ae78-8a4e29b91402",
    "frame": "first"
  },
  {
    "image": "3ad204c3-a9de-4963-8a1a-c3911e3afafe",
    "frame": "last"
  }
]

Properties:

inputs » frameImages » image

string required

Image input (UUID, URL, Data URI, or Base64).

inputs » frameImages » frame

string or integer

Target frame position for the image. Supports first and last frame positions.

Allowed values (4):

  • first: First frame of the video.
  • last: Last frame of the video.
  • 0: Frame index 0 (equivalent to first).
  • -1: Frame index -1 (equivalent to last).
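The integer forms can stand in for the keywords. A sketch of building the frameImages input with index positions, reusing the placeholder image UUIDs from the examples above:

```python
# Integer frame indices: 0 is the first frame, -1 the last, mirroring
# the "first"/"last" keywords.
frame_images = [
    {"image": "aac49721-1964-481a-ae78-8a4e29b91402", "frame": 0},
    {"image": "3ad204c3-a9de-4963-8a1a-c3911e3afafe", "frame": -1},
]

# frameImages accepts between 1 and 2 items.
assert 1 <= len(frame_images) <= 2

inputs = {"frameImages": frame_images}
```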
inputs » video

string

Video input (UUID or URL).

Generation Parameters

Core parameters for controlling the generated content.

model

string required value: pixverse:1@8

Identifier of the model to use for generation.

positivePrompt

string required min: 1 max: 2048

Text prompt describing elements to include in the generated output.

negativePrompt

string min: 1 max: 2048

Prompt to guide what to exclude from generation. Ignored when guidance is disabled (CFGScale ≤ 1).

width

integer required (paired with height)

Width of the generated media in pixels.

height

integer required (paired with width)

Height of the generated media in pixels.

resolution

string default: 720p

Resolution preset for the output. When used with input media, automatically matches the aspect ratio from the input.

Allowed values (4), including the 720p default and 1080p.

duration

float min: 1 max: 15 default: 5

Duration of the generation in seconds. Total frames = duration × fps.
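Total frame count follows directly from that formula. The fps value below is an assumption for illustration only, since this page does not document an fps parameter:

```python
duration_seconds = 5.0   # default duration from this page
fps = 24                 # assumed frame rate, for illustration only

# Total frames = duration × fps, per the formula above.
total_frames = int(duration_seconds * fps)
```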

seed

integer min: 0 max: 2147483647

Random seed for reproducible generation. When not provided, a random seed is generated in the unsigned 32-bit range.
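Putting the required generation parameters together, a minimal videoInference payload might look like this; the prompt text is illustrative:

```python
import uuid

request_payload = {
    "taskType": "videoInference",
    "taskUUID": str(uuid.uuid4()),
    "model": "pixverse:1@8",
    "positivePrompt": "A slow dolly shot through a rain-soaked neon alley",
    "duration": 5,
    "resolution": "720p",
    "seed": 42,  # optional: fixed for reproducible generation
}
```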

Settings

Technical parameters to fine-tune the inference process. These must be nested inside the settings object.

settings » audio

boolean default: false

Enable audio generation.

settings » multiClip

boolean default: false

Enable multi-shot generation with varying camera angles.

settings » style

string

Artistic style aesthetic for video generation.

Allowed values (5):

  • Japanese animation aesthetic.
  • Three-dimensional animated style with depth.
  • Stop-motion clay animation appearance.
  • Comic book or graphic novel visual style.
  • Futuristic, neon-lit dystopian aesthetic.
settings » thinking

string default: auto

Enhanced reasoning mode.

Allowed values (3):

  • Maximum understanding.
  • Faster generation.
  • auto: Automatic selection (default).
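These parameters nest under a settings object, per the note at the top of this section. A sketch; the style field is omitted because the exact allowed strings are not listed here:

```python
settings = {
    "audio": True,       # enable synchronized audio generation
    "multiClip": True,   # multi-shot generation with varying camera angles
    "thinking": "auto",  # default enhanced-reasoning mode
}

task_fragment = {"settings": settings}
```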