Model Search API
Introduction
The Model Search API enables discovery of available models on the Runware platform, providing powerful search and filtering capabilities. Whether exploring public models from the community or managing private models within your organization, this API helps find the perfect model for any image generation task.
Models discovered through this API can be immediately used in image generation tasks by referencing their AIR identifiers. This enables dynamic model selection in applications and helps discover new models for specific artistic styles. For optimal search performance, consider using specific filters to narrow results and combining multiple criteria to find the most relevant models.
Search capabilities
The search functionality works across model names, versions, tags, and other fields, allowing users to both find specific models and discover related ones that match their search terms.
Multiple filters are available to narrow down results based on technical aspects of the models such as their category, type, architecture, and specific capabilities, making it easy to find exactly what you need.
The visibility filter helps manage which models appear in the results: choose between your organization's public models, private models, or all available models including those from the community.
Search results
Search queries return comprehensive information about matching models. A unique AIR identifier is provided for each model, which is essential for image generation requests. The metadata includes the model's name, version, and tags, while technical fields detail the model's category, type, and architecture, along with its visibility status.
Results are returned in a paginated format to ensure efficient processing of large result sets. The default limit is 20 models per page, though this can be customized using the limit parameter. Navigation through the results is handled through the offset parameter, allowing you to move through the complete set of matches if needed.
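For example, with the default page size of 20, the first page corresponds to an offset of 0, the second to 20, and the third to 40. A minimal sketch of the pagination parameters for the second page:

```json
{
  "limit": 20,
  "offset": 20
}
```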
Request
Our API always accepts an array of objects as input, where each object represents a specific task to be performed. The structure of the object varies depending on the type of the task. For this section, we will focus on the parameters related to the model search task.
The following JSON snippet shows the basic structure of a request object. All properties are explained in detail in the next section.
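A minimal sketch of such a request, assuming a single modelSearch task with a search term and default pagination (the UUID is a placeholder):

```json
[
  {
    "taskType": "modelSearch",
    "taskUUID": "50836053-a0ee-4cf5-b9d6-ae7c5d140ada",
    "search": "realistic",
    "limit": 20,
    "offset": 0
  }
]
```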
taskType
The type of task to be performed. For this task, the value should be modelSearch.

taskUUID
When a task is sent to the API you must include a random UUID v4 string using the taskUUID parameter. This string is used to match the async responses to their corresponding tasks. If you send multiple tasks at the same time, the taskUUID will help you match the responses to the correct tasks. The taskUUID must be unique for each task you send to the API.
search
Search term to filter models. The search is performed across multiple fields with different weights:
- Model name as exact phrase (boost: 10).
- Model AIR identifier (with wildcard matching, boost: 5).
- Model name (with wildcard matching, boost: 5).
- Model version (exact word matching).
- Model tags (exact word matching).
The search is case-insensitive and will return models that match any of these criteria, with results ordered by relevance.

tags
Filter models by matching any of the provided tags in this array. Models that contain at least one of these tags will be included in the results.
category
Filter models by their category.
Possible values:
- checkpoint: Base models that serve as the foundation for image generation.
- lora: LoRA (Low-Rank Adaptation) models used to add specific styles or concepts.
- lycoris: Alternative to LoRA models, offering different adaptation techniques.
- controlnet: Models designed for guided image generation with specific conditions.
- vae: Variational Autoencoders used for improving image quality and details.
- embedding: Textual embeddings used to add new concepts to the model's vocabulary.
type
Filter checkpoint models by their type.
Possible values:
- base: Standard models for general image generation.
- inpainting: Models for filling in or modifying parts of existing images.
- refiner: Models that improve the quality and details of generated images.
Note: This parameter is only applicable when category is set to checkpoint.
architecture
Filter models by their architecture.
Possible values:
- flux1d: FLUX.1 Dev
- flux1s: FLUX.1 Schnell
- pony: Pony
- sd1x: SD 1.5
- sdhyper: SD 1.5 Hyper
- sd1xlcm: SD 1.5 LCM
- sd3: SD 3
- sdxl: SDXL 1.0
- sdxllcm: SDXL 1.0 LCM
- sdxldistilled: SDXL Distilled
- sdxlhyper: SDXL Hyper
- sdxllightning: SDXL Lightning
- sdxlturbo: SDXL Turbo
conditioning
Filter ControlNet models by their conditioning type.
Possible values:
- blur: Uses blurred images to guide the generation.
- canny: Follows edge detection maps as reference.
- depth: Creates images based on depth map information.
- gray: Takes grayscale images as input reference.
- hed: Works with holistic edge detection patterns.
- inpaint: Uses masks to control generation areas.
- inpaintdepth: Combines both masks and depth information.
- lineart: Takes line art as reference input.
- lowquality: References low quality images for generation.
- normal: Works with normal map information.
- openmlsd: Guided by line segment detection.
- openpose: Creates images following human pose guides.
- outfit: Works with clothing and outfit patterns.
- pix2pix: Takes reference images as guidance.
- qrcode: Uses QR codes as structural reference.
- scribble: Follows simple sketches or scribbles.
- seg: Based on segmentation map guides.
- shuffle: Works with rearranged content as reference.
- sketch: Uses sketch drawings as guidance.
- softedge: Follows soft edge detection patterns.
- tile: Based on tiling and pattern references.
visibility
Filter models by their visibility status and ownership.
Possible values:
- public: Show only your organization's public models.
- private: Show only your organization's private models.
- all: Show both community models and all your organization's models (public and private).
limit
Maximum number of items to return in a single request. Used for pagination in combination with offset.

offset
Number of items to skip in the result set. Used for pagination in combination with limit.
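As an illustrative sketch, the filters above can be combined in a single task; the example below restricts results to SDXL base checkpoints across all visible models and returns the first 10 matches (the UUID is a placeholder):

```json
[
  {
    "taskType": "modelSearch",
    "taskUUID": "a3c9f2b1-7e4d-4b6a-912f-8d0c5e2a41b7",
    "search": "portrait",
    "category": "checkpoint",
    "type": "base",
    "architecture": "sdxl",
    "visibility": "all",
    "limit": 10,
    "offset": 0
  }
]
```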
Response
Results will be delivered in the format below.
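A sketch of what a single modelSearch response object might contain, omitting any top-level envelope; the AIR identifier, model metadata, and default values shown are illustrative placeholders rather than a real model:

```json
{
  "taskType": "modelSearch",
  "taskUUID": "a3c9f2b1-7e4d-4b6a-912f-8d0c5e2a41b7",
  "totalResults": 147,
  "results": [
    {
      "air": "civitai:12345@67890",
      "name": "Example Photoreal",
      "version": "V6.0",
      "category": "checkpoint",
      "architecture": "sd1x",
      "type": "base",
      "tags": ["photorealistic", "portrait"],
      "heroImage": "https://example.com/preview.jpg",
      "private": false,
      "comment": "",
      "defaultWidth": 512,
      "defaultHeight": 768,
      "defaultSteps": 25,
      "defaultScheduler": "DPM++ 2M Karras",
      "defaultCFG": 7
    }
  ]
}
```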
taskType
The API will return the taskType you sent in the request. In this case, it will be modelSearch. This helps match the responses to the correct task type.

taskUUID
The API will return the taskUUID you sent in the request. This way you can match the responses to the correct request tasks.
totalResults
-
The total number of models that match your search criteria, including those beyond the current page limit.
Use this value in combination with offset and limit parameters to implement pagination.
-
results
-
An array containing the matching models for your search. Each object in the array includes the model's metadata such as AIR identifier, name, tags, preview image, default parameters and others.
For detailed information about each field in the results object, check the parameters below.
air
We make use of the AIR (Artificial Intelligence Resource) system to identify models. This identifier is a unique string that represents a specific model.
You can use the AIR identifier to reference this model in other API calls, such as image generation requests (see the sketch after the field reference below).
name
The display name of the model.

version
The version label of the model.

category
The category of the model. See the category request parameter for the possible values.

architecture
The architecture of the model. See the architecture request parameter for the possible values.
type
The type of checkpoint model. See the type request parameter for the possible values.
Note: This parameter is only returned when the model's category is checkpoint.
tags
Array of tags associated with the model.

heroImage
URL of the model's preview image.

private
Indicates whether this is a private model (true) or a public one (false).

comment
Additional notes or comments about the model.
defaultWidth
The recommended width for image generation with this model.
Note: This parameter is only returned when the model's category is checkpoint.

defaultHeight
The recommended height for image generation with this model.
Note: This parameter is only returned when the model's category is checkpoint.

defaultSteps
The default number of steps to use with this model when not specified during inference.
Note: This parameter is only returned when the model's category is checkpoint.

defaultScheduler
The default scheduler to use with this model when not specified during inference.
Note: This parameter is only returned when the model's category is checkpoint.

defaultCFG
The default CFG (Classifier-Free Guidance) scale to use with this model when not specified during inference.
Note: This parameter is only returned when the model's category is checkpoint.

defaultStrength
The default strength value to use with this inpainting model when not specified during inference.
Note: This parameter is only returned when the model's category is checkpoint and its type is inpainting.
positiveTriggerWords
Words or phrases that need to be included in the prompt to properly activate this LoRA model. Not all LoRA models have trigger words.
Note: This parameter is only returned when the model's category is lora, lycoris, or embedding.
conditioning
The conditioning type of the ControlNet model. See the conditioning request parameter for the possible values.
Note: This parameter is only returned when the model's category is controlnet.
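As noted in the air field above, the identifier returned by a search can be passed as the model in a follow-up image generation task. A hypothetical sketch follows; the imageInference task type and its parameter names are assumptions here and should be verified against the image generation reference, and the AIR value is a placeholder:

```json
[
  {
    "taskType": "imageInference",
    "taskUUID": "c2f7d8e9-1a2b-4c3d-8e9f-0a1b2c3d4e5f",
    "model": "civitai:12345@67890",
    "positivePrompt": "a portrait photo of an astronaut, studio lighting",
    "width": 512,
    "height": 768
  }
]
```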