MODEL ID: bria:10@1 (live)

Bria 3.2

by Bria

Bria 3.2 is a compact text-to-image model built on fully licensed data. It delivers strong prompt alignment, high aesthetic quality, and reliable short-text rendering, making it well suited to enterprise workflows that need compliant image generation with predictable behavior and easy integration.

API Options

Platform-level options for task execution and delivery.

taskType

string required value: imageInference

Identifier for the type of task being performed.

taskUUID

string required UUID v4

UUID v4 identifier for tracking tasks and matching async responses. Must be unique per task.
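Generating the UUID client-side is straightforward. A minimal sketch in Python; the payload fields are taken from the parameter reference below, and the overall task shape is an assumption for illustration:

```python
import uuid

def make_task(prompt: str) -> dict:
    """Build a minimal imageInference task object (sketch)."""
    return {
        "taskType": "imageInference",
        "taskUUID": str(uuid.uuid4()),  # UUID v4, must be unique per task
        "model": "bria:10@1",
        "positivePrompt": prompt,
        "width": 1024,
        "height": 1024,
    }

task = make_task("a red bicycle leaning against a brick wall")
```

Reusing a taskUUID across tasks would make async responses ambiguous, so generate a fresh one per task.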

outputType

string default: URL

Image output type.

Allowed values: `base64Data`, `dataURI`, `URL`

outputFormat

string default: JPG

Specifies the file format of the generated output. The available values depend on the task type and the specific model's capabilities.

  • `JPG`: Best for photorealistic images with smaller file sizes (no transparency).
  • `PNG`: Lossless compression, supports high quality and transparency (alpha channel).
  • `WEBP`: Modern format providing superior compression and transparency support.

**Transparency**: If you are using features like background removal or LayerDiffuse that require transparency, you must select a format that supports an alpha channel (`PNG` or `WEBP`). `JPG` does not support transparency.

Allowed values: `JPG`, `PNG`, `WEBP`

outputQuality

integer min: 20 max: 99 default: 95

Compression quality of the output. Higher values preserve quality but increase file size.

webhookURL

string URI

Specifies a webhook URL where JSON responses will be sent via HTTP POST when generation tasks complete. For batch requests with multiple results, each completed item triggers a separate webhook call as it becomes available.

deliveryMethod

string default: sync

Determines how the API delivers task results.

Allowed values: `sync`, `async`

  • `sync`: Returns complete results directly in the API response.
  • `async`: Returns an immediate acknowledgment with the task UUID. Poll for results using getResponse.

uploadEndpoint

string URI

Specifies a URL where the generated content will be automatically uploaded using the HTTP PUT method. The raw binary data of the media file is sent directly as the request body. For secure uploads to cloud storage, use presigned URLs that include temporary authentication credentials.

Common use cases:

  • Cloud storage: Upload directly to S3 buckets, Google Cloud Storage, or Azure Blob Storage using presigned URLs.
  • CDN integration: Upload to content delivery networks for immediate distribution.
// S3 presigned URL for secure upload
https://your-bucket.s3.amazonaws.com/generated/content.mp4?X-Amz-Signature=abc123&X-Amz-Expires=3600

// Google Cloud Storage presigned URL
https://storage.googleapis.com/your-bucket/content.jpg?X-Goog-Signature=xyz789

// Custom storage endpoint
https://storage.example.com/uploads/generated-image.jpg

The content data will be sent as the request body to the specified URL when generation is complete.
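As a sketch, this kind of upload can be exercised with Python's standard library. The presigned URL below is a placeholder, and the request is only constructed here, not sent:

```python
import urllib.request

def build_upload_request(upload_url: str, image_bytes: bytes) -> urllib.request.Request:
    # The raw binary data is the request body; no multipart encoding.
    return urllib.request.Request(
        upload_url,
        data=image_bytes,
        method="PUT",
        headers={"Content-Type": "image/jpeg"},
    )

req = build_upload_request(
    "https://your-bucket.s3.amazonaws.com/generated/content.jpg?X-Amz-Signature=abc123",
    b"\xff\xd8\xff\xe0fake-jpeg-bytes",  # placeholder bytes, not a real image
)
# urllib.request.urlopen(req) would perform the actual upload.
```

This mirrors what the API does on your behalf: a single PUT with the media file as the body.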

safety

object

Content safety checking configuration for image generation.

Properties:

safety » checkContent

boolean default: false

Enable or disable content safety checking. When enabled, defaults to fast mode.

safety » mode

string default: none

Safety checking mode for image generation.

Allowed values: `none`, `fast`

  • `none`: Disables checking.
  • `fast`: Performs a single check.

ttl

integer min: 60

Time-to-live (TTL) in seconds for generated content. Only applies when outputType is URL.

includeCost

boolean default: false

Include task cost in the response.

numberResults

integer min: 1 max: 20 default: 1

Number of results to generate. Each result uses a different seed, producing variations of the same parameters.

Generation Parameters

Core parameters for controlling the generated content.

model

string required value: bria:10@1

Identifier of the model to use for generation.

positivePrompt

string required min: 2 max: 3000

Text prompt describing elements to include in the generated output.

negativePrompt

string min: 2 max: 3000

Prompt to guide what to exclude from generation. Ignored when guidance is disabled (CFGScale ≤ 1).

width

integer required paired with height

Width of the generated media in pixels.

height

integer required paired with width

Height of the generated media in pixels.

seed

integer min: 0 max: 9223372036854775807

Random seed for reproducible generation. When not provided, a random seed is generated in the unsigned 32-bit range.
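If you want reproducibility, you can pick the seed yourself. A minimal sketch that mirrors the API's default unsigned 32-bit range:

```python
import random

# Mirrors the API default: a random seed in the unsigned 32-bit range.
# Reusing this value in later requests reproduces the same generation
# (given otherwise identical parameters).
seed = random.getrandbits(32)
payload_fragment = {"seed": seed}
```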

steps

integer min: 20 max: 50 default: 20

Total number of denoising steps. Higher values generally produce more detailed results but take longer.

controlNet

array of objects min items: 1

With ControlNet, you can provide a guide image to help the model generate images that align with a desired structure. This guide image can be generated with our ControlNet preprocessing tool, which extracts guidance information from an input image. The guide image can be an edge map, a pose, a depth map, or any other type of control image that steers the generation process via the ControlNet model.

Multiple ControlNet models can be used at the same time to provide different types of guidance information to the model.

Example:
"controlNet": [
  {
    "model": "<controlnet-model-air>",
    "guideImage": "c64351d5-4c59-42f7-95e1-eace013eddab",
    "weight": 0.7,
    "startStep": 0,
    "endStep": 20,
    "controlMode": "controlnet"
  }
]
Properties:

controlNet » model

string required

ControlNet model identifier.

controlNet » weight

float min: -4 max: 4 step: 0.01 default: 1

Strength of the ControlNet influence. A value of 0 means no influence. Higher values increase the influence, and negative values can be used to steer away from the guide image.

controlNet » guideImage

string required

Reference image for ControlNet guidance (UUID, URL, Data URI, or Base64).

controlNet » controlMode

string default: balanced

ControlNet guidance mode.

Allowed values: `balanced`, `controlnet`, `prompt`

  • `balanced`: Equal weight between ControlNet and prompt.
  • `controlnet`: Prioritize ControlNet guidance.
  • `prompt`: Prioritize prompt guidance.
controlNet » endStep

integer min: 1

Absolute step number to end ControlNet influence. Must be greater than startStep and less than or equal to steps.

controlNet » endStepPercentage

integer min: 1 max: 100

Percentage of steps to end ControlNet influence. Must be greater than startStepPercentage.

controlNet » startStep

integer min: 0

Absolute step number to start ControlNet influence. Must be less than endStep.

controlNet » startStepPercentage

integer min: 0 max: 99

Percentage of steps to start ControlNet influence. Must be less than endStepPercentage.
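The percentage variants map onto absolute step numbers. A small sketch of the conversion; the exact rounding the API applies is an assumption:

```python
def percentage_to_step(percentage: int, steps: int) -> int:
    """Convert a startStepPercentage/endStepPercentage value into an
    absolute step number (rounding behavior assumed, not documented)."""
    return round(steps * percentage / 100)

steps = 20
start = percentage_to_step(25, steps)    # influence begins a quarter of the way in
end = percentage_to_step(100, steps)     # influence runs through the final step
```

Use either the absolute form (startStep/endStep) or the percentage form, not both, so the two specifications cannot conflict.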

ipAdapters

array of objects min items: 1

IP-Adapters enable image-prompted generation, allowing you to use reference images to guide the style and content of your generations. Multiple IP Adapters can be used simultaneously.

Example:
"ipAdapters": [
  {
    "model": "<ip-adapter-model-air>",
    "guideImages": ["c64351d5-4c59-42f7-95e1-eace013eddab"],
    "weight": 0.75
  },
  {
    "model": "<ip-adapter-model-air>",
    "guideImages": ["d7e8f9a0-2b5c-4e7f-a1d3-9c8b7a6e5d4f"],
    "weight": 0.5
  }
]
Properties:

ipAdapters » model

string required

We use the AIR system to identify IP-Adapter models. An AIR identifier is a unique string that references a specific model.

Supported models list
AIR ID        Model Name
runware:55@1 IP Adapter SDXL
runware:55@2 IP Adapter SDXL Plus
runware:55@3 IP Adapter SDXL Plus Face
runware:55@4 IP Adapter SDXL Vit-H
runware:55@5 IP Adapter SD 1.5
runware:55@6 IP Adapter SD 1.5 Plus
runware:55@7 IP Adapter SD 1.5 Light
runware:55@8 IP Adapter SD 1.5 Plus Face
runware:55@10 IP Adapter SD 1.5 Vit-G
ipAdapters » weight

float min: -4 max: 4 step: 0.01 default: 1

Strength of the IP-Adapter influence. A value of 0 means no influence. Higher values increase the influence, and negative values can be used to steer away from the reference.

ipAdapters » guideImages

array of strings required min items: 1

Images to guide the IP-Adapter (UUID, URL, Data URI, or Base64).

ipAdapters » combineMethod

string default: concat

Controls how multiple reference images are combined.

Allowed values 5 values
ipAdapters » embedScaling

string default: kv

Determines which embedding components are used and their strength.

Allowed values 4 values
ipAdapters » weightType

string default: normal

Shapes how influence evolves during generation.

Allowed values 13 values
ipAdapters » weightComposition

float min: 0 max: 1 step: 0.01

Controls composition/layout influence specifically.

Provider Settings

Parameters specific to this model provider. These must be nested inside the providerSettings.bria object.
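A sketch of where these settings sit in a task payload; the surrounding fields come from the parameter reference above, and the prompt is just an example:

```python
# Bria-specific settings must be nested under providerSettings.bria,
# alongside the standard generation parameters.
task = {
    "taskType": "imageInference",
    "taskUUID": "c64351d5-4c59-42f7-95e1-eace013eddab",
    "model": "bria:10@1",
    "positivePrompt": "studio photo of a ceramic vase",
    "width": 1024,
    "height": 1024,
    "providerSettings": {
        "bria": {
            "contentModeration": True,
            "enhanceImage": True,
            "promptEnhancement": False,
        }
    },
}
```

Settings placed at the top level of the task instead of under providerSettings.bria will not reach the provider.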

providerSettings » bria » contentModeration

boolean default: true

Apply content moderation to inputs and outputs.

providerSettings » bria » enhanceImage

boolean default: false

Generate images with richer details and sharper textures.

providerSettings » bria » ipSignal

boolean default: false

Flag potential IP-related content in prompt or output.

providerSettings » bria » medium

string

Artistic medium.

Allowed values 2 values
providerSettings » bria » promptEnhancement

boolean default: false

Expand prompt with descriptive variations.