Stable Diffusion 3
Stable Diffusion 3 is a next-generation text-to-image model with improved prompt adherence and typography. It handles complex scenes with multiple subjects and fine detail, and targets both local and cloud deployment so developers can integrate high-quality image generation into their products.
API Options
Platform-level options for task execution and delivery.
-
taskType
string required value: imageInference -
Identifier for the type of task being performed
-
taskUUID
string required UUID v4 -
UUID v4 identifier for tracking tasks and matching async responses. Must be unique per task.
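Any standard UUID v4 generator works for this field; for example, with Python's standard library (a minimal sketch):

```python
import uuid

def new_task_uuid() -> str:
    """Generate a unique UUID v4 string for the taskUUID field.
    A fresh value must be generated for every task submitted."""
    return str(uuid.uuid4())
```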
-
outputType
string default: URL -
Image output type.
Allowed values 3 values
-
outputFormat
string default: JPG -
Specifies the file format of the generated output. The available values depend on the task type and the specific model's capabilities.
- `JPG`: Best for photorealistic images with smaller file sizes (no transparency).
- `PNG`: Lossless compression, supports high quality and transparency (alpha channel).
- `WEBP`: Modern format providing superior compression and transparency support.
**Transparency**: If you are using features like background removal or LayerDiffuse that require transparency, you must select a format that supports an alpha channel (e.g., `PNG`, `WEBP`, `TIFF`). `JPG` does not support transparency.
Allowed values 3 values
-
outputQuality
integer min: 20 max: 99 default: 95 -
Compression quality of the output. Higher values preserve quality but increase file size.
-
webhookURL
string URI -
Specifies a webhook URL where JSON responses will be sent via HTTP POST when generation tasks complete. For batch requests with multiple results, each completed item triggers a separate webhook call as it becomes available.
Learn more 1 resource
- Webhooks PLATFORM
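A webhook endpoint just accepts the HTTP POST and parses the JSON body — one POST per completed result. The sketch below assumes the body mirrors the per-task response fields shown in the examples on this page (taskUUID, imageURL, and so on); confirm the exact payload shape against the Webhooks documentation:

```python
import json

def parse_webhook_body(body: bytes) -> dict:
    """Parse one webhook delivery. For batch requests each completed
    result arrives as its own POST, so one body holds one result."""
    payload = json.loads(body)
    # Field names assumed from the response examples on this page.
    return {
        "taskUUID": payload.get("taskUUID"),
        "imageURL": payload.get("imageURL"),
    }
```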
-
deliveryMethod
string default: sync -
Determines how the API delivers task results.
Allowed values 2 values
- `sync`: Returns complete results directly in the API response.
- `async`: Returns an immediate acknowledgment with the task UUID. Poll for results using getResponse.
Learn more 1 resource
- Task Polling PLATFORM
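In async mode the client polls until the task completes. A minimal polling loop sketch — the actual getResponse HTTP call is injected as a function here, since its request details are covered by the Task Polling documentation:

```python
import time

def poll_for_results(fetch, task_uuid, interval=1.0, max_attempts=30):
    """Poll until results are available. `fetch` performs the
    getResponse call and returns parsed JSON, or None while pending."""
    for _ in range(max_attempts):
        result = fetch(task_uuid)
        if result is not None:
            return result
        time.sleep(interval)
    raise TimeoutError(f"task {task_uuid} did not complete in time")
```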
-
uploadEndpoint
string URI -
Specifies a URL where the generated content will be automatically uploaded using the HTTP PUT method. The raw binary data of the media file is sent directly as the request body. For secure uploads to cloud storage, use presigned URLs that include temporary authentication credentials.
Common use cases:
- Cloud storage: Upload directly to S3 buckets, Google Cloud Storage, or Azure Blob Storage using presigned URLs.
- CDN integration: Upload to content delivery networks for immediate distribution.
```
// S3 presigned URL for secure upload
https://your-bucket.s3.amazonaws.com/generated/content.mp4?X-Amz-Signature=abc123&X-Amz-Expires=3600

// Google Cloud Storage presigned URL
https://storage.googleapis.com/your-bucket/content.jpg?X-Goog-Signature=xyz789

// Custom storage endpoint
https://storage.example.com/uploads/generated-image.jpg
```
The content data will be sent as the request body to the specified URL when generation is complete.
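The upload itself is a plain HTTP PUT with the raw media bytes as the body. A sketch using Python's standard library (the URL and content type are placeholders; a real presigned URL carries its own authentication in the query string):

```python
import urllib.request

def build_upload_request(url: str, data: bytes,
                         content_type: str = "image/jpeg") -> urllib.request.Request:
    """Build the HTTP PUT used for uploadEndpoint delivery: the raw
    binary file is sent directly as the request body."""
    req = urllib.request.Request(url, data=data, method="PUT")
    req.add_header("Content-Type", content_type)
    return req

# To actually send it: urllib.request.urlopen(build_upload_request(url, image_bytes))
```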
-
safety
object -
Content safety checking configuration for image generation.
Properties 2 properties
-
safety » checkContent
boolean default: false -
Enable or disable content safety checking. When enabled, defaults to `fast` mode.
-
safety » mode
string default: none -
Safety checking mode for image generation.
Allowed values 2 values
- `none`: Disables checking.
- `fast`: Performs a single check.
-
-
ttl
integer min: 60 -
Time-to-live (TTL) in seconds for generated content. Only applies when `outputType` is `URL`.
-
includeCost
boolean default: false -
Include task cost in the response.
-
numberResults
integer min: 1 max: 20 default: 1 -
Number of results to generate. Each result uses a different seed, producing variations of the same parameters.
-
acceleratorOptions
object -
Advanced caching mechanisms to speed up generation.
Properties 12 properties
-
acceleratorOptions » cacheEndStep
integer min: 1 -
Absolute step number to end caching. Must be greater than `cacheStartStep` and less than or equal to `steps`.
-
acceleratorOptions » cacheEndStepPercentage
integer min: 1 max: 100 -
Percentage of steps to end caching. Alternative to `cacheEndStep`. Must be greater than `cacheStartStepPercentage`.
-
acceleratorOptions » cacheMaxConsecutiveSteps
integer min: 1 max: 5 default: 3 -
Limits the maximum number of consecutive steps that can use cached computations before forcing a fresh computation.
-
acceleratorOptions » cacheStartStep
integer min: 0 -
Absolute step number to start caching. Must be less than `cacheEndStep`.
-
acceleratorOptions » cacheStartStepPercentage
integer min: 0 max: 99 -
Percentage of steps to start caching. Alternative to `cacheStartStep`. Must be less than `cacheEndStepPercentage`.
-
acceleratorOptions » fbCache
boolean default: false -
First Block Cache (FBCache) acceleration. Reuses feature block computations across steps.
-
acceleratorOptions » fbCacheThreshold
float min: 0 max: 1 step: 0.01 default: 0.25 -
Controls the sensitivity threshold for determining when to reuse cached computations. Lower values reuse more aggressively.
-
acceleratorOptions » teaCache
boolean default: false -
TeaCache acceleration for transformer-based models. Estimates step differences to skip redundant computations.
-
acceleratorOptions » teaCacheDistance
float min: 0 max: 1 step: 0.01 default: 0.5 -
Controls the aggressiveness of the TeaCache feature. Lower values prioritize quality, higher values prioritize speed.
-
acceleratorOptions » dbCache
boolean default: false -
DB Cache (CacheDiT) acceleration. Caches and reuses intermediate transformer block outputs to skip redundant computations.
-
acceleratorOptions » dbCacheThreshold
float min: 0 max: 1 step: 0.01 default: 0.25 -
Controls the sensitivity threshold for DB Cache. Lower values reuse cached blocks more aggressively, higher values prioritize quality.
-
acceleratorOptions » dbCacheSkipInterval
integer min: 1 default: 5 -
Controls how many steps to skip between cache refreshes. Higher values skip more steps for faster generation at the cost of quality.
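The caching window constraints above can be checked client-side before submitting a task. A hypothetical validator sketch, not part of the API:

```python
def validate_cache_window(start_step: int, end_step: int, steps: int) -> None:
    """Enforce the documented constraints:
    0 <= cacheStartStep < cacheEndStep <= steps."""
    if start_step < 0:
        raise ValueError("cacheStartStep must be >= 0")
    if end_step <= start_step:
        raise ValueError("cacheEndStep must be greater than cacheStartStep")
    if end_step > steps:
        raise ValueError("cacheEndStep must be less than or equal to steps")
```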
-
Inputs
Input resources for the task (images, audio, etc). These must be nested inside the inputs object.
-
inputs » seedImage
string -
Image used as a starting point for the generation (UUID, URL, Data URI, or Base64).
Learn more 3 resources
-
inputs » maskImage
string -
Image used to specify which areas of the seed image should be edited (UUID, URL, Data URI, or Base64).
Learn more 1 resource
Generation Parameters
Core parameters for controlling the generated content.
-
model
string required value: runware:5@1 -
Identifier of the model to use for generation.
Learn more 3 resources
-
positivePrompt
string min: 2 max: 3000 -
Text prompt describing elements to include in the generated output.
Learn more 2 resources
-
negativePrompt
string min: 2 max: 3000 -
Prompt to guide what to exclude from generation. Ignored when guidance is disabled (CFGScale ≤ 1).
Learn more 1 resource
-
width
integer default: 1024 -
Width of the generated media in pixels.
Learn more 2 resources
-
height
integer default: 1024 -
Height of the generated media in pixels.
Learn more 2 resources
-
seed
integer min: 0 max: 9223372036854776000 -
Random seed for reproducible generation. When not provided, a random seed is generated in the unsigned 32-bit range.
Learn more 1 resource
-
steps
integer min: 1 default: 28 -
Total number of denoising steps. Higher values generally produce more detailed results but take longer.
Learn more 1 resource
-
scheduler
string -
Scheduler to use for the diffusion process.
Allowed values 75 values
Learn more 2 resources
-
CFGScale
float min: 0 max: 30 step: 0.01 -
Guidance scale representing how closely the output will resemble the prompt. Higher values produce results more aligned with the prompt.
Learn more 1 resource
-
strength
float min: 0 max: 1 step: 0.01 default: 0.8 -
Strength of the transformation. Lower values result in more influence from the original input.
-
maskMargin
integer min: 32 max: 128 -
Extra context pixels around the masked region during inpainting. The model zooms into the masked area with these additional pixels for better integration.
Learn more 1 resource
-
clipSkip
integer min: 0 max: 4 -
Number of layers to skip in the CLIP model.
Learn more 2 resources
-
outpaint
object -
Extends image boundaries in specified directions. Final width/height must account for original image plus extensions.
Learn more 1 resource
Properties 4 properties
-
outpaint » bottom
integer min: 0 -
Number of pixels to extend to the bottom.
-
outpaint » left
integer min: 0 -
Number of pixels to extend to the left.
-
outpaint » right
integer min: 0 -
Number of pixels to extend to the right.
-
outpaint » top
integer min: 0 -
Number of pixels to extend to the top.
-
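Since the final canvas must account for the original image plus the extensions, the target width/height can be computed up front — a small hypothetical helper:

```python
def outpainted_size(width: int, height: int, top: int = 0, bottom: int = 0,
                    left: int = 0, right: int = 0) -> tuple:
    """Final canvas size = original dimensions plus per-side extensions."""
    return (width + left + right, height + top + bottom)

# A 1024x768 seed image extended 256 px to the right needs width=1280, height=768.
```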
-
lora
array of objects min items: 1 -
With LoRA (Low-Rank Adaptation), you can adapt a model to specific styles or features by emphasizing particular aspects of the data. This technique enhances the quality and relevance of generated content and can be especially useful when the output needs to adhere to a specific artistic style or follow particular guidelines.
Multiple LoRA models can be used simultaneously to achieve different adaptation goals.
Examples 1 example

```json
"lora": [
  {
    "model": "<lora-model-air>",
    "weight": 0.8
  }
]
```

Learn more 1 resource
Properties 3 properties
-
lora » model
string required -
LoRA model identifier.
-
lora » weight
float min: -4 max: 4 step: 0.01 default: 1 -
Strength of the LoRA influence. A value of 0 means no influence. Higher values increase the influence, and negative values can be used to steer away from the LoRA's style.
-
lora » transformer
string default: both -
Transformer stages to apply LoRA. Some video models use separate high-noise and low-noise processing stages, and LoRAs can be selectively applied to optimize their effectiveness.
Allowed values 3 values
- Apply LoRA only to the high-noise processing stage (coarse structure and early generation steps).
- Apply LoRA only to the low-noise processing stage (fine details and later generation steps).
- Apply LoRA to both stages for full coverage.
-
-
controlNet
array of objects min items: 1 -
With ControlNet, you can provide a guide image to help the model generate images that align with the desired structure. This guide image can be generated with our ControlNet preprocessing tool, extracting guidance information from an input image. The guide image can be in the form of an edge map, a pose, a depth estimation or any other type of control image that guides the generation process via the ControlNet model.
Multiple ControlNet models can be used at the same time to provide different types of guidance information to the model.
Examples 1 example

```json
"controlNet": [
  {
    "model": "<controlnet-model-air>",
    "guideImage": "c64351d5-4c59-42f7-95e1-eace013eddab",
    "weight": 0.7,
    "startStep": 0,
    "endStep": 20,
    "controlMode": "controlnet"
  }
]
```

Learn more 2 resources
Properties 8 properties
-
controlNet » model
string required -
ControlNet model identifier.
-
controlNet » weight
float min: -4 max: 4 step: 0.01 default: 1 -
Strength of the ControlNet influence. A value of 0 means no influence. Higher values increase the influence, and negative values can be used to steer away from the guide image.
-
controlNet » guideImage
string required -
Reference image for ControlNet guidance (UUID, URL, Data URI, or Base64).
-
controlNet » controlMode
string default: balanced -
ControlNet guidance mode.
Allowed values 3 values
- `balanced`: Equal weight between ControlNet and prompt.
- `controlnet`: Prioritize ControlNet guidance.
- `prompt`: Prioritize prompt guidance.
-
controlNet » endStep
integer min: 1 -
Absolute step number to end ControlNet influence. Must be greater than `startStep` and less than or equal to `steps`.
-
controlNet » endStepPercentage
integer min: 1 max: 100 -
Percentage of steps to end ControlNet influence. Must be greater than `startStepPercentage`.
-
controlNet » startStep
integer min: 0 -
Absolute step number to start ControlNet influence. Must be less than `endStep`.
-
controlNet » startStepPercentage
integer min: 0 max: 99 -
Percentage of steps to start ControlNet influence. Must be less than `endStepPercentage`.
-
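The percentage variants map onto absolute step numbers relative to `steps`. One plausible conversion — the API's exact rounding behavior is not documented here, so treat this as an assumption:

```python
def percentage_to_step(percentage: int, steps: int) -> int:
    """Convert startStepPercentage/endStepPercentage to an absolute
    step number. Rounding behavior is assumed, not documented."""
    if not 0 <= percentage <= 100:
        raise ValueError("percentage must be between 0 and 100")
    return round(steps * percentage / 100)
```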
-
ipAdapters
array of objects min items: 1 -
IP-Adapters enable image-prompted generation, allowing you to use reference images to guide the style and content of your generations. Multiple IP Adapters can be used simultaneously.
Examples 1 example

```json
"ipAdapters": [
  {
    "model": "<ip-adapter-model-air>",
    "guideImages": ["c64351d5-4c59-42f7-95e1-eace013eddab"],
    "weight": 0.75
  },
  {
    "model": "<ip-adapter-model-air>",
    "guideImages": ["d7e8f9a0-2b5c-4e7f-a1d3-9c8b7a6e5d4f"],
    "weight": 0.5
  }
]
```

Learn more 1 resource
Properties 7 properties
-
ipAdapters » model
string required -
We make use of the AIR system to identify IP-Adapter models. This identifier is a unique string that represents a specific model.
Supported models list
| AIR ID | Model Name |
|---|---|
| runware:55@1 | IP Adapter SDXL |
| runware:55@2 | IP Adapter SDXL Plus |
| runware:55@3 | IP Adapter SDXL Plus Face |
| runware:55@4 | IP Adapter SDXL Vit-H |
| runware:55@5 | IP Adapter SD 1.5 |
| runware:55@6 | IP Adapter SD 1.5 Plus |
| runware:55@7 | IP Adapter SD 1.5 Light |
| runware:55@8 | IP Adapter SD 1.5 Plus Face |
| runware:55@10 | IP Adapter SD 1.5 Vit-G |
-
ipAdapters » weight
float min: -4 max: 4 step: 0.01 default: 1 -
Strength of the IP-Adapter influence. A value of 0 means no influence. Higher values increase the influence, and negative values can be used to steer away from the reference.
-
ipAdapters » guideImages
array of strings required min items: 1 -
Images to guide the IP-Adapter (UUID, URL, Data URI, or Base64).
-
ipAdapters » combineMethod
string default: concat -
Controls how multiple reference images are combined.
Allowed values 5 values
-
ipAdapters » embedScaling
string default: kv -
Determines which embedding components are used and their strength.
Allowed values 4 values
-
ipAdapters » weightType
string default: normal -
Shapes how influence evolves during generation.
Allowed values 13 values
-
ipAdapters » weightComposition
float min: 0 max: 1 step: 0.01 -
Controls composition/layout influence specifically.
-
Features
Standalone addons and post-processing features.
-
ultralytics
object -
Configuration object for Ultralytics face enhancement during generation. This feature uses face detection and inpainting to improve facial details in the same generation step, without requiring post-processing.
Face enhancement is available for Stable Diffusion 1.X, SDXL, and FLUX models. The system automatically detects faces and applies targeted refinement to improve quality while maintaining consistency with the overall generation.
Properties 8 properties
-
ultralytics » CFGScale
float min: 0 max: 50 step: 0.1 default: 8 -
Face refinement guidance scale.
-
ultralytics » confidence
float min: 0 max: 1 step: 0.01 default: 0.9 -
Confidence threshold for detection.
-
ultralytics » maskBlur
integer min: 0 max: 100 default: 5 -
Mask feathering amount. Higher values create softer transitions between the enhanced face region and surrounding areas.
-
ultralytics » maskPadding
integer min: 0 max: 20 default: 5 -
Padding around detected face in pixels. Expands the refinement area to include surrounding context like hair and neck.
-
ultralytics » negativePrompt
string -
Negative prompt for detection.
-
ultralytics » positivePrompt
string -
Positive prompt for detection.
-
ultralytics » steps
integer min: 1 max: 100 default: 20 -
Number of face refinement steps.
-
ultralytics » strength
float min: 0 max: 1 step: 0.01 default: 0.3 -
Refinement strength. Lower values preserve more of the original, higher values allow more aggressive reconstruction.
-
Glasshouse Rooftop Supper Club

Request:

```json
{
  "taskType": "imageInference",
  "taskUUID": "cc58c87f-bc7a-450e-a897-0e25a60e1cb2",
  "model": "runware:5@1",
  "positivePrompt": "An elegant rooftop glasshouse supper club above a futuristic coastal city at blue hour, three distinct subjects in the foreground: a sharply dressed saxophone player in a silver suit, a florist arranging oversized dahlias and ferns, and a pastry chef presenting a luminous citrus dessert tower. Background filled with guests in tailored clothing, suspended lanterns, curved glass architecture, distant skyline, hovering water taxis on the bay below. On a polished brass entrance sign, clearly readable typography: 'Aster Nine'. Rich reflections on glass, intricate botanical details, polished marble floor, refined cinematic composition, high detail, sophisticated color palette of amber, jade, indigo, and soft coral, realistic faces and hands, premium editorial photography aesthetic.",
  "width": 1024,
  "height": 768,
  "seed": 20672,
  "steps": 28,
  "CFGScale": 7.5
}
```

Response:

```json
{
  "taskType": "imageInference",
  "taskUUID": "cc58c87f-bc7a-450e-a897-0e25a60e1cb2",
  "imageUUID": "e597067a-f555-47f3-9e13-08591c461c09",
  "imageURL": "https://im.runware.ai/image/os/a24d12/ws/2/ii/e597067a-f555-47f3-9e13-08591c461c09.jpg",
  "seed": 20672,
  "cost": 0.0013
}
```

Sunken Metro Aquarium Concourse
Request:

```json
{
  "taskType": "imageInference",
  "taskUUID": "15d277a9-6c2e-4631-bacb-2d1279a9496d",
  "model": "runware:5@1",
  "positivePrompt": "A vast abandoned subway concourse transformed into an underwater public aquarium after the city sank, viewed from a wide angle. Cracked tiled platforms and escalators descend into clear teal water, with schools of silver fish circling through old ticket gates. Three divers in elegant vintage diving suits guide visitors along illuminated walkways, while a child points at a giant manta ray gliding beneath a suspended station sign reading \"ATLANTIC LINE\". Sea grass grows between mosaic floor patterns, soft beams of sunlight filter through ruptured ceiling panels, drifting particles, submerged vending machines, polished brass details, cinematic depth, extraordinary texture, realistic water caustics, highly detailed environment, balanced composition, atmospheric and surreal yet believable.",
  "width": 1024,
  "height": 768,
  "seed": 3982,
  "steps": 28,
  "CFGScale": 7.5
}
```

Response:

```json
{
  "taskType": "imageInference",
  "taskUUID": "15d277a9-6c2e-4631-bacb-2d1279a9496d",
  "imageUUID": "766df440-b76b-4d09-ab9d-fcc064cdb8c6",
  "imageURL": "https://im.runware.ai/image/os/a20d05/ws/2/ii/766df440-b76b-4d09-ab9d-fcc064cdb8c6.jpg",
  "seed": 3982,
  "cost": 0.0013
}
```

Subterranean Fossil Cathedral Expedition
Request:

```json
{
  "taskType": "imageInference",
  "taskUUID": "505bd5b3-5e1b-42bc-a2ab-79421933d2f8",
  "model": "runware:5@1",
  "positivePrompt": "A vast underground cathedral carved into an ancient fossil bed, colossal rib bones arching overhead like vaulted ceilings, three expedition members in weathered climbing gear crossing a narrow suspended bridge above a glowing amber chasm, one carrying a lantern, one sketching symbols on a field slate, one examining a stone plinth with clearly carved text reading \"STRATUM HALL\". Layers of sediment walls reveal embedded ammonites and fern impressions, drifting dust, braided ropes, old pulleys, camp crates, scattered excavation tools, depth fading into darkness. Dramatic cinematic composition, ultra-detailed textures, realistic anatomy, precise architecture, volumetric light shafts from fissures above, earthy copper and ochre palette, awe-filled mood, high detail, sharp focus.",
  "width": 1024,
  "height": 1536,
  "seed": 69217,
  "steps": 28,
  "CFGScale": 7.5
}
```

Response:

```json
{
  "taskType": "imageInference",
  "taskUUID": "505bd5b3-5e1b-42bc-a2ab-79421933d2f8",
  "imageUUID": "7c9b27fa-fd41-4637-9067-dad062c6ef29",
  "imageURL": "https://im.runware.ai/image/os/a10d08/ws/2/ii/7c9b27fa-fd41-4637-9067-dad062c6ef29.jpg",
  "seed": 69217,
  "cost": 0.0026
}
```

Clockwork Orchard Parade
Request:

```json
{
  "taskType": "imageInference",
  "taskUUID": "d5774636-68bb-4b60-bd05-a88fa163c8e6",
  "model": "runware:5@1",
  "positivePrompt": "A fantastical spring orchard where mechanical fruit trees bloom with copper leaves and polished brass branches, a parade of tiny clockwork animals winding along a cobblestone path, including a fox with jeweled gears, a heron on stilts, and two rabbit drummers; children in patchwork capes watching from beneath striped awnings, baskets of pears and apricots, drifting petals, warm late-afternoon sunlight, intricate fine detail, storybook grandeur, crisp focus, rich color harmony, elegant composition, and a hand-painted wooden sign that clearly reads \"ORCHARD DAY\"",
  "width": 1024,
  "height": 1024,
  "seed": 82457,
  "steps": 30,
  "CFGScale": 7.5
}
```

Response:

```json
{
  "taskType": "imageInference",
  "taskUUID": "d5774636-68bb-4b60-bd05-a88fa163c8e6",
  "imageUUID": "be154d45-7882-47f1-9cc7-f843620f83d2",
  "imageURL": "https://im.runware.ai/image/os/a05d22/ws/2/ii/be154d45-7882-47f1-9cc7-f843620f83d2.jpg",
  "seed": 82457,
  "cost": 0.0019
}
```