Wan2.2 A14B Turbo
Wan2.2 A14B Turbo accelerates Wan2.2 with a fused Lightning LoRA for ultra-fast diffusion. It cuts inference to 8 steps while preserving cinematic structure and detail. Ideal for rapid 480p-to-720p video prototyping and iteration in production workflows.
API Options
Platform-level options for task execution and delivery.
-
taskType
string required value: videoInference -
Identifier for the type of task being performed
-
taskUUID
string required UUID v4 -
UUID v4 identifier for tracking tasks and matching async responses. Must be unique per task.
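Taken together, the required platform fields combine with the generation parameters documented below. A minimal sketch in Python (the single-element task array as the request envelope is an assumption; endpoint and authentication are omitted):

```python
import json
import uuid

# Minimal videoInference task built only from fields documented on this page.
task = {
    "taskType": "videoInference",
    "taskUUID": str(uuid.uuid4()),  # must be unique per task
    "model": "runware:200@8",
    "positivePrompt": "a slow cinematic pan across a foggy mountain lake",
}

# Assumption: the API accepts an array of task objects as the request body.
payload = json.dumps([task])
```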
-
outputType
string default: URL -
Video output type.
Allowed values: `URL`.
-
outputFormat
string default: MP4 -
Specifies the file format of the generated output. The available values depend on the task type and the specific model's capabilities.
- `MP4`: Widely supported video container (H.264), recommended for general use.
- `WEBM`: Optimized for web delivery.
- `MOV`: QuickTime format, common in professional workflows (Apple ecosystem).
Allowed values: `MP4`, `WEBM`, `MOV`.
-
outputQuality
integer min: 20 max: 99 default: 95 -
Compression quality of the output. Higher values preserve quality but increase file size.
-
webhookURL
string URI -
Specifies a webhook URL where JSON responses will be sent via HTTP POST when generation tasks complete. For batch requests with multiple results, each completed item triggers a separate webhook call as it becomes available.
Learn more: Webhooks
-
deliveryMethod
string default: async -
Determines how the API delivers task results.
Allowed values:
- `async`: Returns an immediate acknowledgment with the task UUID. Poll for results using getResponse. Required for long-running tasks like video generation.
Learn more: Task Polling
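The async delivery flow (submit, receive the taskUUID acknowledgment, then poll until the result is ready) can be sketched as a small helper. Here `get_response` is modeled as an injected callable rather than a real getResponse HTTP call, since transport and authentication details are outside this section:

```python
import time

def poll_for_result(get_response, task_uuid, interval=2.0, max_attempts=30):
    """Poll until the task identified by task_uuid completes.

    get_response is any callable that takes a taskUUID and returns the
    finished result dict, or None while the task is still processing.
    """
    for _ in range(max_attempts):
        result = get_response(task_uuid)
        if result is not None:
            return result
        time.sleep(interval)
    raise TimeoutError(f"task {task_uuid} did not complete in time")
```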
-
uploadEndpoint
string URI -
Specifies a URL where the generated content will be automatically uploaded using the HTTP PUT method. The raw binary data of the media file is sent directly as the request body. For secure uploads to cloud storage, use presigned URLs that include temporary authentication credentials.
Common use cases:
- Cloud storage: Upload directly to S3 buckets, Google Cloud Storage, or Azure Blob Storage using presigned URLs.
- CDN integration: Upload to content delivery networks for immediate distribution.
```
// S3 presigned URL for secure upload
https://your-bucket.s3.amazonaws.com/generated/content.mp4?X-Amz-Signature=abc123&X-Amz-Expires=3600

// Google Cloud Storage presigned URL
https://storage.googleapis.com/your-bucket/content.jpg?X-Goog-Signature=xyz789

// Custom storage endpoint
https://storage.example.com/uploads/generated-image.jpg
```
The content data will be sent as the request body to the specified URL when generation is complete.
-
safety
object -
Content safety checking configuration for video generation.
Properties:
-
safety » checkContent
boolean default: false -
Enable or disable content safety checking. When enabled, defaults to `fast` mode.
-
safety » mode
string default: none -
Safety checking mode for video generation.
Allowed values:
- `none`: Disables checking.
- `fast`: Checks key frames.
- Checks all frames.
-
-
ttl
integer min: 60 -
Time-to-live (TTL) in seconds for generated content. Only applies when `outputType` is `URL`.
-
includeCost
boolean default: false -
Include task cost in the response.
-
numberResults
integer min: 1 max: 20 default: 1 -
Number of results to generate. Each result uses a different seed, producing variations of the same parameters.
-
acceleration
string default: none -
Optimization level.
Allowed values 4 values
-
acceleratorOptions
object -
Advanced caching mechanisms to speed up generation.
Properties:
-
acceleratorOptions » cacheEndStep
integer min: 1 -
Absolute step number to end caching. Must be greater than `cacheStartStep` and less than or equal to `steps`.
-
acceleratorOptions » cacheEndStepPercentage
integer min: 1 max: 100 -
Percentage of steps to end caching. Alternative to `cacheEndStep`. Must be greater than `cacheStartStepPercentage`.
-
acceleratorOptions » cacheMaxConsecutiveSteps
integer min: 1 max: 5 default: 3 -
Limits the maximum number of consecutive steps that can use cached computations before forcing a fresh computation.
-
acceleratorOptions » cacheStartStep
integer min: 0 -
Absolute step number to start caching. Must be less than `cacheEndStep`.
-
acceleratorOptions » cacheStartStepPercentage
integer min: 0 max: 99 -
Percentage of steps to start caching. Alternative to `cacheStartStep`. Must be less than `cacheEndStepPercentage`.
-
acceleratorOptions » fbCache
boolean default: false -
First Block Cache (FBCache) acceleration. Reuses feature block computations across steps.
-
acceleratorOptions » fbCacheThreshold
float min: 0 max: 1 step: 0.01 default: 0.25 -
Controls the sensitivity threshold for determining when to reuse cached computations. Lower values reuse more aggressively.
-
acceleratorOptions » teaCache
boolean default: false -
TeaCache acceleration for transformer-based models. Estimates step differences to skip redundant computations.
-
acceleratorOptions » teaCacheDistance
float min: 0 max: 1 step: 0.01 default: 0.5 -
Controls the aggressiveness of the TeaCache feature. Lower values prioritize quality, higher values prioritize speed.
-
acceleratorOptions » dbCache
boolean default: false -
DB Cache (CacheDiT) acceleration. Caches and reuses intermediate transformer block outputs to skip redundant computations.
-
acceleratorOptions » dbCacheThreshold
float min: 0 max: 1 step: 0.01 default: 0.25 -
Controls the sensitivity threshold for DB Cache. Lower values reuse cached blocks more aggressively, higher values prioritize quality.
-
acceleratorOptions » dbCacheSkipInterval
integer min: 1 default: 5 -
Controls how many steps to skip between cache refreshes. Higher values skip more steps for faster generation at the cost of quality.
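The step-window constraints above (`cacheStartStep` < `cacheEndStep` ≤ `steps`, and likewise for the percentage variants) can be checked client-side before submitting a request. This validator is an illustrative sketch, not part of the API:

```python
def validate_cache_window(opts, steps):
    """Check the documented acceleratorOptions constraints:
    cacheStartStep < cacheEndStep <= steps, and
    cacheStartStepPercentage < cacheEndStepPercentage."""
    start, end = opts.get("cacheStartStep"), opts.get("cacheEndStep")
    if start is not None and end is not None and not (start < end <= steps):
        raise ValueError("need cacheStartStep < cacheEndStep <= steps")
    sp = opts.get("cacheStartStepPercentage")
    ep = opts.get("cacheEndStepPercentage")
    if sp is not None and ep is not None and not (sp < ep):
        raise ValueError("need cacheStartStepPercentage < cacheEndStepPercentage")
    return opts

# Example: enable TeaCache and cache steps 2 through 6 of an 8-step run.
opts = validate_cache_window(
    {"teaCache": True, "teaCacheDistance": 0.5,
     "cacheStartStep": 2, "cacheEndStep": 6},
    steps=8,
)
```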
-
Inputs
Input resources for the task (images, audio, etc.). These must be nested inside the `inputs` object.
inputs » frameImages
array of objects min items: 1 max items: 2 -
An array of objects that define key frames to guide video generation. Each object specifies an input image and optionally its position within the video timeline.
The `frameImages` parameter allows you to constrain specific frames within the video sequence, ensuring that particular visual content appears at designated points. This is different from `referenceImages`, which provide overall visual guidance without constraining specific timeline positions.
When the `frame` parameter is omitted from objects, automatic distribution rules apply:
- 1 image: Used as the first frame.
- 2 images: First and last frames.
Examples:
Single frame (automatic positioning): When only one image is provided, it automatically becomes the first frame of the video.

```
"frameImages": [
  {
    "image": "aac49721-1964-481a-ae78-8a4e29b91402"
  }
]
```

First and last frames: With two images, they automatically become the first and last frames of the video sequence.

```
"frameImages": [
  {
    "image": "aac49721-1964-481a-ae78-8a4e29b91402",
    "frame": "first"
  },
  {
    "image": "3ad204c3-a9de-4963-8a1a-c3911e3afafe",
    "frame": "last"
  }
]
```

Properties:
-
inputs » frameImages » image
string required -
Image input (UUID, URL, Data URI, or Base64).
-
inputs » frameImages » frame
string or integer -
Target frame position for the image. Supports first and last frame.
Allowed values:
- `first`: First frame of the video.
- `last`: Last frame of the video.
- `0`: Frame index 0 (first frame).
- `-1`: Frame index -1 (last frame).
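The automatic distribution rules can be expressed as a small helper. This is an illustrative sketch of the documented behavior, not an official client function:

```python
def assign_frames(frame_images):
    """Apply the documented automatic distribution rules for items that
    omit 'frame': one image becomes the first frame; two images become
    the first and last frames. Explicit 'frame' values are preserved."""
    defaults = {1: ["first"], 2: ["first", "last"]}
    if len(frame_images) not in defaults:
        raise ValueError("frameImages accepts 1 or 2 items")
    return [
        {**img, "frame": img.get("frame", pos)}
        for img, pos in zip(frame_images, defaults[len(frame_images)])
    ]
```

For example, `assign_frames([{"image": "a"}, {"image": "b"}])` pins the two images to the first and last frames, matching the two-image example above.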
Generation Parameters
Core parameters for controlling the generated content.
-
model
string required value: runware:200@8 -
Identifier of the model to use for generation.
-
positivePrompt
string required min: 2 max: 3000 -
Text prompt describing elements to include in the generated output.
-
negativePrompt
string min: 2 max: 3000 -
Prompt to guide what to exclude from generation. Ignored when guidance is disabled (CFGScale ≤ 1).
-
width
integer paired with height -
Width of the generated media in pixels.
-
height
integer paired with width -
Height of the generated media in pixels.
-
resolution
string -
Resolution preset for the output. When used with input media, automatically matches the aspect ratio from the input.
Allowed values 3 values
-
duration
integer min: 1 max: 15 -
Duration of the generated video in seconds.
-
fps
integer min: 16 max: 60 -
Frames per second for video generation. Higher values create smoother motion but require more processing time.
-
seed
integer min: 0 max: 9223372036854775807 -
Random seed for reproducible generation. When not provided, a random seed is generated in the unsigned 32-bit range.
-
steps
integer min: 4 max: 50 default: 20 -
Total number of denoising steps. Higher values generally produce more detailed results but take longer.
-
scheduler
string -
Scheduler to use for the diffusion process.
Allowed values 75 values
-
CFGScale
float min: 0.1 max: 30 step: 0.01 -
Guidance scale representing how closely the output will resemble the prompt. Higher values produce results more aligned with the prompt.
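Putting the core generation parameters together, a sketch with illustrative values. The prompts and the width/height pairing are placeholders; the 8-step setting follows the Turbo model description, and CFGScale ≤ 1 effectively disables guidance (so negativePrompt is omitted, per the note above):

```python
# Illustrative generation-parameter block for the Turbo model.
generation = {
    "model": "runware:200@8",
    "positivePrompt": "a slow cinematic pan across a foggy mountain lake",
    "width": 1280,    # illustrative 720p pairing
    "height": 720,
    "duration": 5,    # seconds, within the documented 1-15 range
    "fps": 24,
    "steps": 8,       # Turbo targets 8-step inference
    "CFGScale": 1.0,  # guidance disabled; negativePrompt would be ignored
    "seed": 42,       # fixed seed for reproducible output
}
```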
-
lora
array of objects min items: 1 -
With LoRA (Low-Rank Adaptation), you can adapt a model to specific styles or features by emphasizing particular aspects of the data. This technique enhances the quality and relevance of generated content and can be especially useful when the output needs to adhere to a specific artistic style or follow particular guidelines.
Multiple LoRA models can be used simultaneously to achieve different adaptation goals.
Example:

```
"lora": [
  {
    "model": "<lora-model-air>",
    "weight": 0.8
  }
]
```

Properties:
-
lora » model
string required -
LoRA model identifier.
-
lora » weight
float min: -4 max: 4 step: 0.01 default: 1 -
Strength of the LoRA influence. A value of 0 means no influence. Higher values increase the influence, and negative values can be used to steer away from the LoRA's style.
-
lora » transformer
string default: both -
Transformer stages to apply LoRA. Some video models use separate high-noise and low-noise processing stages, and LoRAs can be selectively applied to optimize their effectiveness.
Allowed values:
- `high`: Apply LoRA only to the high-noise processing stage (coarse structure and early generation steps).
- `low`: Apply LoRA only to the low-noise processing stage (fine details and later generation steps).
- `both`: Apply LoRA to both stages for full coverage.
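A sketch of stage-selective LoRA application: one adapter steering coarse structure, another steering fine detail. The stage names `high` and `low` are inferred from the descriptions above and the `both` default, and `<lora-model-air>` is the same placeholder used in the example earlier:

```python
# Hypothetical: split LoRA influence across the two transformer stages.
lora = [
    {"model": "<lora-model-air>", "weight": 0.8, "transformer": "high"},  # coarse structure
    {"model": "<lora-model-air>", "weight": 1.0, "transformer": "low"},   # fine details
]
```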
-