FLUX.2 [max]

The latest state-of-the-art model from Black Forest Labs, generating images grounded in live web information.

Commercial use

Output images cost $0.07 for the first megapixel, then $0.03 per additional megapixel. Reference images cost an additional $0.03 per megapixel.

  • 1024x1024: $0.07
  • 1536x1024: $0.10
  • 1024x1536: $0.10
  • 1920x1080: $0.13
  • 1024x1024 · 1 reference (1920x1080): $0.16
  • 1024x1024 · 3 references (1024x1024): $0.16
Text to Image · Image to Image · Reference to Image

FLUX.2 [max] is a high-precision text-to-image and image-editing model from Black Forest Labs that generates visuals grounded in real-time information via live web search. It delivers maximum prompt adherence with multi-reference editing and state-of-the-art consistency across identities, objects, and details.

README

Overview

FLUX.2 [max] is a high-quality image generation and editing model from Black Forest Labs, designed for creative and commercial workflows that require strong prompt adherence, visual consistency, and reliable real-world context. It sits at the premium end of the FLUX.2 model family and is well suited to use cases where accuracy, grounding, and controlled output matter.

In addition to generating polished visual content, FLUX.2 [max] can produce images grounded in up-to-date, real-world information when paired with external context. This makes it useful for scenarios such as visualising current products, events, locations, or trends, where prompts benefit from factual or time-sensitive inputs rather than purely synthetic imagination.

How it Works

FLUX.2 [max] is part of the FLUX.2 architecture family, combining a latent flow-matching backbone with a vision-language system capable of incorporating both prompt instructions and external context.

Prompt Interpretation

The model processes natural language prompts to understand composition, style, subject relationships, lighting, and intent. It performs well with longer, structured prompts and can incorporate injected context such as real-world data or search results when provided by the application layer.
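
As an illustration of keeping prompts structured, the sketch below composes a prompt from named sections. The section labels and helper function are arbitrary conventions chosen for this example, not a format required by FLUX.2 [max].

  # Illustrative helper for composing a structured prompt. The section labels
  # are arbitrary conventions, not a schema required by FLUX.2 [max].
  def build_prompt(subject, composition, lighting, style, context=None):
      parts = [
          f"Subject: {subject}.",
          f"Composition: {composition}.",
          f"Lighting: {lighting}.",
          f"Style: {style}.",
      ]
      if context:
          # Injected real-world data supplied by the application layer.
          parts.append(f"Context: {context}.")
      return " ".join(parts)

  prompt = build_prompt(
      subject="a ceramic espresso cup on a walnut table",
      composition="close-up, shallow depth of field, cup slightly off-centre",
      lighting="soft morning window light with gentle shadows",
      style="minimal product photography, muted colour palette",
  )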

Image Generation

FLUX.2 [max] generates high-resolution images with an emphasis on clean composition, stable structure, and visual coherence. It is designed to handle complex constraints without drifting from the original intent of the prompt.

Grounded & Context-Aware Generation

When supplied with external context, such as web search results or structured real-world information, FLUX.2 [max] can ground its outputs in factual data. This allows images to reflect current products, known entities, or recent events, rather than relying solely on generic or outdated priors.
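
A minimal sketch of this application-layer grounding is shown below: the caller fetches current data (weather, search results, product feeds) and folds it into the prompt text before the request is sent. The helper and context fields are illustrative, and the model itself only sees the final prompt string.

  # Sketch of application-layer grounding: structured real-world data is
  # flattened into plain sentences and appended to the base prompt.
  from datetime import date

  def ground_prompt(base_prompt, context):
      facts = "; ".join(f"{key}: {value}" for key, value in context.items())
      return f"{base_prompt} Use the following current information: {facts}."

  context = {
      "date": date.today().isoformat(),   # fetched at request time
      "city": "San Francisco",
      "weather": "overcast, light fog",   # e.g. from a weather source of your choice
      "temperature": "14°C",
  }

  prompt = ground_prompt(
      "Create an isometric 3D illustration of San Francisco reflecting today's weather.",
      context,
  )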

Multi-Reference Editing

The model supports multiple reference images, enabling it to combine identity, layout, colour, and stylistic cues into a single output. This is particularly useful for workflows that require consistency across iterations or variations.
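
The sketch below shows one way reference images might be prepared on the client side. The base64 encoding step is standard, but how references are attached to the request (field name, accepted formats, limits) is provider-specific, so treat the payload fragment as illustrative and check the documentation linked at the end of this page.

  # Illustrative preparation of multiple reference images for a multi-reference edit.
  import base64
  from pathlib import Path

  def encode_reference(path):
      # Base64-encode a local image; many HTTP image APIs accept this or a public URL.
      return base64.b64encode(Path(path).read_bytes()).decode("ascii")

  reference_images = [
      encode_reference("brand_character.png"),  # identity reference (example file)
      encode_reference("colour_palette.png"),   # colour/style reference (example file)
      encode_reference("layout_sketch.png"),    # composition reference (example file)
  ]

  # The references would then travel alongside the prompt in the request payload;
  # the field name below is a placeholder, not a documented parameter.
  payload_fragment = {"referenceImages": reference_images}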

Key Features

  • High-Quality Image Output
    Produces detailed, well-composed images suitable for professional creative workflows.

  • Strong Prompt Adherence
    Accurately follows complex instructions around composition, materials, lighting, and style.

  • Grounded Image Generation
    Supports context-aware outputs when paired with real-world or search-derived information.

  • Multi-Reference Support
    Combines multiple reference images to preserve visual consistency across outputs.

  • Production-Focused Design
    Well suited to marketing imagery, product visuals, editorial assets, and concept frames.

Technical Specifications

  • Model Name: FLUX.2 [max]
  • Model Type: Image generation and multi-reference image editing
  • Input: Text prompt with optional reference images and external context
  • Prompt Handling: Optimised for detailed, structured prompts
  • Architecture: FLUX.2 latent flow-matching architecture with integrated vision-language understanding
  • Model Family: FLUX.2 [max], FLUX.2 [pro], FLUX.2 [flex], FLUX.2 [dev]

How to Use

  1. Write a prompt describing the subject, composition, lighting, and visual style.
  2. Optionally include reference images to guide consistency.
  3. If required, inject real-world context or structured data to ground the generation.
  4. Submit the request using the FLUX.2 [max] model.
  5. Review and iterate as needed.

Example prompt:
Create a clean, slightly angled isometric 3D illustration of San Francisco, highlighting recognisable landmarks such as the Golden Gate Bridge and the surrounding hills. Use soft, stylised geometry with realistic materials, subtle surface detail, and natural daylight with gentle shadows. Reflect the current weather conditions in the scene, including sky colour, lighting, and atmosphere. Keep the composition minimal with a calm, solid background. At the top of the image, place the city name “San Francisco” in bold text, with a weather icon below it, followed by today’s date in small text and the current temperature in medium-sized text. Ensure all text is centred, evenly spaced, and integrated cleanly with the scene without overwhelming the buildings.
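
For orientation, the sketch below submits a request of this kind over HTTP from Python. The endpoint URL, authentication scheme, field names, and model identifier are illustrative assumptions rather than the documented schema; see the documentation link at the end of this page for the exact parameters.

  # Minimal request sketch using the Python requests library. Endpoint, headers,
  # field names, and model ID below are placeholders; consult the provider docs
  # (https://runware.ai/docs/en/providers/bfl) for the real schema.
  import os
  import uuid
  import requests

  API_URL = "https://api.runware.ai/v1"       # assumed endpoint
  API_KEY = os.environ["RUNWARE_API_KEY"]     # assumed auth scheme

  payload = [{
      "taskType": "imageInference",           # illustrative task name
      "taskUUID": str(uuid.uuid4()),
      "model": "bfl:flux.2-max",              # placeholder model identifier
      "positivePrompt": (
          "Create a clean, slightly angled isometric 3D illustration of San Francisco, "
          "highlighting the Golden Gate Bridge and the surrounding hills, with today's "
          "date, the current temperature, and a weather icon rendered as centred text."
      ),
      "width": 1024,
      "height": 1024,
  }]

  response = requests.post(
      API_URL,
      json=payload,
      headers={"Authorization": f"Bearer {API_KEY}"},
      timeout=120,
  )
  response.raise_for_status()
  print(response.json())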

Tips for Better Results

  • Use grounded context when accuracy or recency matters.
  • Keep prompts structured and explicit, especially when combining creative and factual requirements.
  • Use reference images to stabilise composition and identity.
  • Avoid overloading the prompt; introduce constraints gradually.

Notes & Limitations

  • Grounded generation depends on the quality and clarity of the external context provided.
  • FLUX.2 [max] prioritises consistency and control over rapid experimentation.
  • For lower-cost iteration or exploratory work, other FLUX.2 variants may be more appropriate.

Documentation

You can find full usage details, parameters, and examples here: https://runware.ai/docs/en/providers/bfl