MODEL ID runware:200@6

Wan2.2 A14B

by Alibaba

Wan2.2 A14B is a Mixture-of-Experts video model with two 14B experts, one for layout and one for detail. It generates cinematic 480p or 720p clips from text prompts or reference images, with stable inference cost and consistent motion. Well suited to pipelines on high-end GPUs.


API Options

Platform-level options for task execution and delivery.

taskType

string required value: videoInference

Identifier for the type of task being performed

taskUUID

string required UUID v4

UUID v4 identifier for tracking tasks and matching async responses. Must be unique per task.
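A UUID v4 can be generated with the standard library of most languages; for example, in Python:

```python
import uuid

# Each task needs its own UUID v4; reusing one across tasks breaks
# response matching in async delivery.
task_uuid = str(uuid.uuid4())
print(task_uuid)
```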

outputType

string default: URL

Video output type.

Allowed values 1 value: URL

outputFormat

string default: MP4

File format for the generated video.

Allowed values 3 values

outputQuality

integer min: 20 max: 99 default: 95

Compression quality of the output. Higher values preserve quality but increase file size.

webhookURL

string URI

Specifies a webhook URL where JSON responses will be sent via HTTP POST when generation tasks complete. For batch requests with multiple results, each completed item triggers a separate webhook call as it becomes available.

Learn more 1 resource

deliveryMethod

string default: async

Determines how the API delivers task results.

Allowed values 1 value
  • async: Returns an immediate acknowledgment with the task UUID; poll for results using getResponse. Required for long-running tasks like video generation.
Learn more 1 resource
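Since video tasks run asynchronously, a client submits the task, receives an acknowledgment carrying the task UUID, then polls for the result. A minimal sketch of the two request bodies; the exact getResponse payload shape is an assumption to confirm against the API reference, not something this page specifies:

```python
import uuid

task_uuid = str(uuid.uuid4())

# Submission body (async delivery is required for video generation).
submit_task = {
    "taskType": "videoInference",
    "taskUUID": task_uuid,
    "deliveryMethod": "async",
    "model": "runware:200@6",
    "positivePrompt": "a slow aerial shot over a foggy mountain ridge",
}

# Polling body: getResponse matched back to the task by the same UUID
# (field layout here is an assumption, not confirmed by this page).
poll_task = {
    "taskType": "getResponse",
    "taskUUID": task_uuid,
}
```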

uploadEndpoint

string URI

Specifies a URL where the generated content will be automatically uploaded using the HTTP PUT method. The raw binary data of the media file is sent directly as the request body. For secure uploads to cloud storage, use presigned URLs that include temporary authentication credentials.

Common use cases:

  • Cloud storage: Upload directly to S3 buckets, Google Cloud Storage, or Azure Blob Storage using presigned URLs.
  • CDN integration: Upload to content delivery networks for immediate distribution.
// S3 presigned URL for secure upload
https://your-bucket.s3.amazonaws.com/generated/content.mp4?X-Amz-Signature=abc123&X-Amz-Expires=3600

// Google Cloud Storage presigned URL
https://storage.googleapis.com/your-bucket/content.jpg?X-Goog-Signature=xyz789

// Custom storage endpoint
https://storage.example.com/uploads/generated-image.jpg

The content data will be sent as the request body to the specified URL when generation is complete.
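On the receiving side, the delivery amounts to a plain HTTP PUT with the raw bytes as the body. A sketch of that step using only the Python standard library (the URL and content type below are placeholders, not values from this page):

```python
import urllib.request

def build_upload_request(presigned_url: str, media_bytes: bytes) -> urllib.request.Request:
    """Build the PUT request that delivers raw binary data to the endpoint."""
    return urllib.request.Request(
        presigned_url,
        data=media_bytes,  # raw file bytes, no multipart wrapping
        method="PUT",
        headers={"Content-Type": "video/mp4"},  # assumed type for MP4 output
    )

# Placeholder URL; a real call would use a presigned URL with credentials.
req = build_upload_request("https://storage.example.com/uploads/clip.mp4", b"\x00\x01")
# urllib.request.urlopen(req) would perform the actual upload.
```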

safety

object

Content safety checking configuration for video generation.

Properties 2 properties
safety » checkContent

checkContent

boolean default: false

Enable or disable content safety checking. When enabled, defaults to 'fast' mode.

safety » mode

mode

string default: none

Safety checking mode for video generation.

Allowed values 3 values
  • Disables checking.
  • Checks key frames.
  • Checks all frames.

ttl

integer min: 60

Time-to-live (TTL) in seconds for generated content. Only applies when outputType is 'URL'.

includeCost

boolean default: false

Include task cost in the response.

numberResults

integer min: 1 max: 20 default: 1

Number of results to generate. Each result uses a different seed, producing variations of the same parameters.

acceleration

string default: none

Optimization level.

Allowed values 4 values

acceleratorOptions

object

Advanced caching mechanisms to speed up generation.

Properties 12 properties
acceleratorOptions » cacheEndStep

cacheEndStep

integer min: 1

Absolute step number to end caching. Must be greater than cacheStartStep and less than or equal to steps.

acceleratorOptions » cacheEndStepPercentage

cacheEndStepPercentage

integer min: 1 max: 100

Percentage of steps to end caching. Alternative to cacheEndStep. Must be greater than cacheStartStepPercentage.

acceleratorOptions » cacheMaxConsecutiveSteps

cacheMaxConsecutiveSteps

integer min: 1 max: 5 default: 3

Limits the maximum number of consecutive steps that can use cached computations before forcing a fresh computation.

acceleratorOptions » cacheStartStep

cacheStartStep

integer min: 0

Absolute step number to start caching. Must be less than cacheEndStep.

acceleratorOptions » cacheStartStepPercentage

cacheStartStepPercentage

integer min: 0 max: 99

Percentage of steps to start caching. Alternative to cacheStartStep. Must be less than cacheEndStepPercentage.

acceleratorOptions » fbCache

fbCache

boolean default: false

First Block Cache (FBCache) acceleration. Reuses feature block computations across steps.

acceleratorOptions » fbCacheThreshold

fbCacheThreshold

float min: 0 max: 1 step: 0.01 default: 0.25

Controls the sensitivity threshold for determining when to reuse cached computations. Lower values reuse more aggressively.

acceleratorOptions » teaCache

teaCache

boolean default: false

TeaCache acceleration for transformer-based models. Estimates step differences to skip redundant computations.

acceleratorOptions » teaCacheDistance

teaCacheDistance

float min: 0 max: 1 step: 0.01 default: 0.5

Controls the aggressiveness of the TeaCache feature. Lower values prioritize quality, higher values prioritize speed.

acceleratorOptions » dbCache

dbCache

boolean default: false

DB Cache (CacheDiT) acceleration. Caches and reuses intermediate transformer block outputs to skip redundant computations.

acceleratorOptions » dbCacheThreshold

dbCacheThreshold

float min: 0 max: 1 step: 0.01 default: 0.25

Controls the sensitivity threshold for DB Cache. Lower values reuse cached blocks more aggressively, higher values prioritize quality.

acceleratorOptions » dbCacheSkipInterval

dbCacheSkipInterval

integer min: 1 default: 5

Controls how many steps to skip between cache refreshes. Higher values skip more steps for faster generation at the cost of quality.
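The cache window constraints above interact: the absolute form requires start < end ≤ steps, and the percentage form mirrors it. A small validator illustrating how the documented rules combine (the field names match this page; the function itself is an illustration, not platform code):

```python
def validate_cache_window(steps, cache_start_step=None, cache_end_step=None,
                          cache_start_pct=None, cache_end_pct=None):
    """Check the documented constraints on the caching window."""
    if cache_start_step is not None and cache_end_step is not None:
        # Absolute form: cacheStartStep < cacheEndStep <= steps
        if not (0 <= cache_start_step < cache_end_step <= steps):
            raise ValueError("need cacheStartStep < cacheEndStep <= steps")
    if cache_start_pct is not None and cache_end_pct is not None:
        # Percentage form: cacheStartStepPercentage (0-99) < cacheEndStepPercentage (1-100)
        if not (0 <= cache_start_pct < cache_end_pct <= 100):
            raise ValueError("need cacheStartStepPercentage < cacheEndStepPercentage")
    return True
```

For example, a window of steps 2..18 within 20 total steps is valid, while an end step beyond the step count is rejected.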

Inputs

Input resources for the task (images, audio, etc). These must be nested inside the inputs object.

inputs » frameImages

frameImages

array of objects min items: 1 max items: 2

For image-to-video workflows, each item can be a plain image input (UUID, URL, Data URI, or Base64), or an object with an explicit frame position.

When no frame is specified, images are distributed automatically:

  • 1 image: First frame.
  • 2 images: First and last frames.
  • 3+ images: First and last frames, intermediates evenly spaced.
Properties 2 properties
inputs » frameImages » image

image

string required

Image input (UUID, URL, Data URI, or Base64).

inputs » frameImages » frame

frame

object

Target frame position for the image. Supports first and last frame.

Allowed values 4 values
  • First frame of the video.
  • Last frame of the video.
  • Frame index 0 (first frame).
  • Frame index -1 (last frame).
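The automatic distribution rule for frameImages can be sketched as follows (an illustration of the documented placement behavior, not the platform's actual code):

```python
def distribute_frames(num_images: int, total_frames: int) -> list:
    """Map N input images to frame indices per the documented rule."""
    if num_images == 1:
        return [0]                        # single image: first frame
    if num_images == 2:
        return [0, total_frames - 1]      # first and last frames
    # 3+ images: first and last, intermediates evenly spaced between them
    step = (total_frames - 1) / (num_images - 1)
    return [round(i * step) for i in range(num_images)]
```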

Generation Parameters

Core parameters for controlling the generated content.

model

string required value: runware:200@6

Identifier of the model to use for generation.

Learn more 3 resources

positivePrompt

string required min: 2 max: 3000

Text prompt describing elements to include in the generated output.

Learn more 2 resources

negativePrompt

string min: 2 max: 3000

Prompt to guide what to exclude from generation. Ignored when guidance is disabled (CFGScale ≤ 1).

Learn more 1 resource

width

integer

Width of the generated media in pixels.

Learn more 2 resources

height

integer

Height of the generated media in pixels.

Learn more 2 resources

resolution

string

Resolution preset for the output. When used with input media, automatically matches the aspect ratio from the input.

Allowed values 3 values

duration

integer min: 1 max: 15

Duration of the generation in seconds. Total frames = duration × fps.

fps

integer min: 16 max: 60

Frames per second for video generation. Higher values create smoother motion but require more processing time.
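Duration and fps jointly determine the workload, since total frames = duration × fps:

```python
def total_frames(duration_s: int, fps: int) -> int:
    """Total frames rendered, per the documented formula."""
    assert 1 <= duration_s <= 15 and 16 <= fps <= 60  # documented bounds
    return duration_s * fps

# A 5-second clip at 24 fps renders 120 frames; the same clip at
# 60 fps renders 300, a proportionally heavier generation job.
```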

seed

integer min: 0 max: 9223372036854775807

Random seed for reproducible generation. When not provided, a random seed is generated in the unsigned 32-bit range.

Learn more 1 resource
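To keep a run reproducible without relying on the server's choice, a client can pre-generate the seed in the same unsigned 32-bit range the API uses by default:

```python
import random

# Seed drawn from the unsigned 32-bit range [0, 2**32 - 1], matching
# the API's default behavior when no seed is supplied.
seed = random.getrandbits(32)
```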

steps

integer min: 4 max: 50 default: 20

Total number of denoising steps. Higher values generally produce more detailed results but take longer.

Learn more 1 resource

scheduler

string

Scheduler to use for the diffusion process.

Allowed values 75 values
Learn more 2 resources

CFGScale

float min: 0.1 max: 30 step: 0.01

Guidance scale representing how closely the generated video will resemble the prompt.

Learn more 1 resource

lora

array of objects min items: 1

With LoRA (Low-Rank Adaptation), you can adapt a model to specific styles or features by emphasizing particular aspects of the data. This technique enhances the quality and relevance of generated content and can be especially useful when the output needs to adhere to a specific artistic style or follow particular guidelines.

Multiple LoRA models can be used simultaneously to achieve different adaptation goals.

Examples 1 example
{
  "taskType": "imageInference",
  "taskUUID": "a770f077-f413-47de-9dac-be0b26a35da6",
  "positivePrompt": "a magical forest with glowing mushrooms, pixel art style",
  "model": "civitai:101055@128078",
  "height": 1024,
  "width": 1024,
  "lora": [ 
    { 
      "model": "civitai:120096@135931",
      "weight": 0.8
    } 
  ] 
}
Learn more 1 resource
Properties 3 properties
lora » model

model

string required

LoRA model identifier.

lora » weight

weight

float min: -4 max: 4 step: 0.01 default: 1

Strength of the LoRA influence. A value of 0 means no influence. Higher values increase the influence, and negative values can be used to steer away from the LoRA's style.

lora » transformer

transformer

string default: both

Transformer stages to apply LoRA. Some video models use separate high-noise and low-noise processing stages, and LoRAs can be selectively applied to optimize their effectiveness.

Allowed values 3 values
  • Apply LoRA only to the high-noise processing stage (coarse structure and early generation steps).
  • Apply LoRA only to the low-noise processing stage (fine details and later generation steps).
  • Apply LoRA to both stages for full coverage.
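Putting stage selection together with the rest of the request, a hedged example of a video task that applies one LoRA only to the high-noise stage. The LoRA identifier and prompt are placeholders, and the stage value name "high" is an assumption inferred from the documented default "both":

```python
# Hedged example payload; identifiers below are placeholders.
video_task = {
    "taskType": "videoInference",
    "taskUUID": "a770f077-f413-47de-9dac-be0b26a35da6",  # example UUID
    "deliveryMethod": "async",
    "model": "runware:200@6",
    "positivePrompt": "a rainy neon-lit street, cinematic tracking shot",
    "duration": 5,
    "lora": [
        {
            "model": "civitai:120096@135931",  # placeholder identifier
            "weight": 0.8,                     # within the documented -4..4 range
            "transformer": "high",             # high-noise stage only (value name assumed)
        }
    ],
}
```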