FLUX Tools
Introduction
FLUX Tools is a suite of specialized models designed to add control to the base FLUX text-to-image models, enabling sophisticated modification and re-creation of real and generated images. These tools integrate seamlessly with our standard Image Inference API while offering expanded capabilities for specific image manipulation tasks.
The FLUX Tools suite consists of four distinct features:
- FLUX Fill (runware:102@1): State-of-the-art inpainting and outpainting capabilities for editing and expanding images.
- FLUX Canny (runware:104@1): Structural guidance based on canny edges extracted from input images.
- FLUX Depth (runware:103@1): Structural guidance based on depth maps extracted from input images.
- FLUX Redux (runware:105@1): Image variation and restyling for refining or transforming existing images.
Each tool is optimized for specific use cases while maintaining the quality and performance that FLUX models are known for.
FLUX Tools models are used through the same standard Image Inference task but with specific parameter combinations and restrictions. This documentation covers the unique requirements for each tool.
FLUX Fill: Advanced inpainting
FLUX Fill introduces advanced inpainting capabilities that allow for seamless editing that integrates naturally with existing images. It also supports outpainting, enabling extension of images beyond their original borders.

Usage
FLUX Fill is used with the following specific configuration:
- Use base model AIR ID runware:102@1.
- Provide seedImage and maskImage as you would for standard inpainting.
- Unlike regular inpainting models, FLUX Fill does not support the maskMargin parameter for zoomed/detailed inpainting.
- The strength parameter is also not compatible with this model; the balance between existing and new content is controlled entirely through prompting.
{
  "taskType": "imageInference",
  "taskUUID": "a770f077-f413-47de-9dac-be0b26a35da6",
  "model": "runware:102@1",
  "positivePrompt": "a blue denim jacket",
  "seedImage": "59a2edc2-45e6-429f-be5f-7ded59b92046",
  "maskImage": "b6a06b3b-ce32-4884-ad93-c5eca7937ba0",
  "width": 1024,
  "height": 1024,
  "steps": 30
}
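The request above covers inpainting. FLUX Fill also supports outpainting; one common approach, sketched below, is to place the original image on a larger canvas and mask the padded borders so the model generates the new area. The image references here are placeholders rather than real UUIDs, and the width and height should match the padded canvas size.
{
  "taskType": "imageInference",
  "taskUUID": "c1f1a7a2-3d7e-4b2a-9c61-1f2e3d4c5b6a",
  "model": "runware:102@1",
  "positivePrompt": "a wide mountain landscape at sunset",
  "seedImage": "UUID-of-image-padded-to-target-size",
  "maskImage": "UUID-of-mask-covering-the-padded-borders",
  "width": 1536,
  "height": 1024,
  "steps": 30
}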
Use cases
FLUX Fill excels at:
- Object Replacement: Replace specific objects in images while maintaining lighting and context.
- Background Modification: Change backgrounds while preserving the main subject.
- Content Extension: Expand images beyond their boundaries through outpainting.
- Detail Enhancement: Add or refine details within specific areas of an image.
FLUX Canny/Depth: Structural conditioning
Structural conditioning models use canny edge or depth detection to maintain precise control during image inference. By preserving the original image's structure through edge or depth maps, users can apply text-guided edits while keeping the core composition intact, which is particularly effective for retexturing images and style transformations.

FLUX Canny and Depth are hybrid models that combine the base FLUX image generation capabilities with embedded ControlNet functionality. Unlike standard ControlNet usage, FLUX Canny/Depth tools don't require a controlNet object. Instead, the guide image is provided directly in the seedImage parameter.
Usage
- Use base model AIR ID runware:104@1 for FLUX Canny or runware:103@1 for FLUX Depth.
- Provide the structural guide image (edge map or depth map) directly in the seedImage parameter.
- There is no weight parameter for FLUX Canny/Depth. The strength of the conditioning can't be controlled.
{
  "taskType": "imageInference",
  "taskUUID": "a770f077-f413-47de-9dac-be0b26a35da6",
  "model": "runware:103@1",
  "positivePrompt": "a watercolor painting of a forest",
  "seedImage": "59a2edc2-45e6-429f-be5f-7ded59b92046",
  "width": 1024,
  "height": 1024,
  "steps": 30
}
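The same request shape applies to FLUX Canny: only the model ID changes, and the seedImage should reference an edge map rather than a depth map. A sketch with a placeholder image reference:
{
  "taskType": "imageInference",
  "taskUUID": "5b0d1c9e-8f3a-4e21-b7c4-2a9d6e8f0c13",
  "model": "runware:104@1",
  "positivePrompt": "a watercolor painting of a forest",
  "seedImage": "UUID-of-canny-edge-map",
  "width": 1024,
  "height": 1024,
  "steps": 30
}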
Preparing guide images
You can use our ControlNet preprocessing tools to generate appropriate edge or depth maps:
- For FLUX Canny: Use the canny preprocessor type.
- For FLUX Depth: Use the depth preprocessor type (shown after the canny example below).
{
  "taskType": "imageControlNetPreProcess",
  "taskUUID": "3303f1be-b3dc-41a2-94df-ead00498db57",
  "inputImage": "ff1d9a0b-b80f-4665-ae07-8055b99f4aea",
  "preProcessorType": "canny"
}
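The equivalent request for a depth map only changes the preprocessor type (reusing the same input image UUID from the example above):
{
  "taskType": "imageControlNetPreProcess",
  "taskUUID": "7f2c4a8d-5e61-4b9f-a3d0-9c8b7e6f5a42",
  "inputImage": "ff1d9a0b-b80f-4665-ae07-8055b99f4aea",
  "preProcessorType": "depth"
}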
Use cases
FLUX Canny/Depth excel at:
- Style Transfer: Transform image styles while maintaining structural composition and spatial relationships.
- Content Preservation: Generate new images that follow the exact structure of reference images.
- Scene Retexturing: Modify materials and textures while preserving object shapes and positions.
- Artistic Reinterpretation: Create artistic variants of photos with consistent structure but creative styling.
- Consistent Series Generation: Produce multiple variations with identical structural elements but different details.
FLUX Redux: Image variation and restyling
FLUX Redux is an IP-Adapter model that enables image variation generation. Given an input image (guide image), FLUX Redux can reproduce the image with variations, allowing you to refine existing images or create multiple alternatives from a single reference.

Usage
FLUX Redux requires a different approach than the other FLUX tools:
- Use IP-Adapter model AIR ID runware:105@1.
- Provide the input image in the guideImage parameter inside the ipAdapters object.
- Use a FLUX base model (typically the FLUX dev model, runware:101@1) as the base model. Other FLUX models can be used as well.
- There is no weight parameter for FLUX Redux. The variation level and guidance can be controlled through the positivePrompt parameter.
To generate pure variations of the input image without any prompt guidance, use __BLANK__ as your positivePrompt. This special keyword tells the model to focus exclusively on the visual information from the input image, creating variations that maintain its core characteristics without additional text influences.
[
  {
    "taskType": "imageInference",
    "taskUUID": "a770f077-f413-47de-9dac-be0b26a35da6",
    "model": "runware:101@1",
    "positivePrompt": "elegant portrait, detailed features",
    "width": 1024,
    "height": 1024,
    "steps": 30,
    "ipAdapters": [
      {
        "guideImage": "59a2edc2-45e6-429f-be5f-7ded59b92046",
        "model": "runware:105@1"
      }
    ]
  }
]
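To request pure variations with the __BLANK__ keyword described above, the same request simply swaps the prompt (guide image UUID carried over from the previous example):
[
  {
    "taskType": "imageInference",
    "taskUUID": "d2b8e4f6-1a3c-4d5e-9f70-8b6a4c2e0d19",
    "model": "runware:101@1",
    "positivePrompt": "__BLANK__",
    "width": 1024,
    "height": 1024,
    "steps": 30,
    "ipAdapters": [
      {
        "guideImage": "59a2edc2-45e6-429f-be5f-7ded59b92046",
        "model": "runware:105@1"
      }
    ]
  }
]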
Use cases
FLUX Redux excels at:
- Image Variations: Generate subtle alternatives of an input image while preserving key visual elements.
- Style Adaptation: Modify the artistic style of an image while maintaining subject recognition.
- Visual Concept Mixing: Combine visual concepts from the input image with new elements specified in the prompt.
- Subject Preservation: Ensure specific subjects or elements remain recognizable across style transformations.
Best practices
For optimal results with FLUX Tools, consider these best practices:
- FLUX Fill performs best with clear, defined masks with slight feathering at the edges.
- For FLUX Canny, adjust the edge detection thresholds in preprocessing to control the level of detail (see the sketch at the end of this section).
- Higher step counts often yield better results for complex transformations.
- Consider using a higher CFG scale when precision is required.
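As an illustration of the threshold tip above, a preprocessing request could look like the sketch below. The lowThresholdCanny and highThresholdCanny parameter names and values are assumptions for illustration only; check the ControlNet preprocessing reference for the exact parameter names and accepted ranges.
{
  "taskType": "imageControlNetPreProcess",
  "taskUUID": "9e7d5c3b-2f41-4a86-b0d9-6c8e1f3a5b70",
  "inputImage": "ff1d9a0b-b80f-4665-ae07-8055b99f4aea",
  "preProcessorType": "canny",
  "lowThresholdCanny": 100,
  "highThresholdCanny": 200
}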