MODEL ID minimax:m2.7@highspeed

MiniMax M2.7 Highspeed

by MiniMax

MiniMax M2.7‑Highspeed is the performance‑tuned variant of M2.7, built for lower latency and higher throughput while keeping output behavior consistent with the standard model. It’s a strong fit for interactive coding agents, tool‑calling pipelines, and office automation flows where responsiveness matters.

API Options

Platform-level options for task execution and delivery.

taskType

string required value: textInference

Identifier for the type of task being performed.

taskUUID

string required UUID v4

UUID v4 identifier for tracking tasks and matching async responses. Must be unique per task.
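Since the API requires a fresh UUID v4 per task, a minimal sketch of generating one in Python:

```python
import uuid

def new_task_uuid() -> str:
    """Generate a unique UUID v4 string for tracking one task.

    The API requires a fresh value per task so async responses
    can be matched back to the request that produced them.
    """
    return str(uuid.uuid4())
```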

webhookURL

string URI

Specifies a webhook URL where JSON responses will be sent via HTTP POST when generation tasks complete. For batch requests with multiple results, each completed item triggers a separate webhook call as it becomes available.
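A minimal sketch of the consuming side, assuming only that the delivered JSON echoes the taskUUID (the full payload schema is not specified here) so each webhook call can be matched to the request that produced it:

```python
import json

# Pending requests keyed by taskUUID. In a batch request, each completed
# item arrives as a separate webhook call, so entries are popped one by one.
pending: dict = {}

def handle_webhook(raw_body: bytes):
    """Match an incoming webhook POST body to a pending request.

    Assumes the delivered JSON contains a "taskUUID" field; returns the
    stored request context, or None for unknown/already-handled tasks.
    """
    payload = json.loads(raw_body)
    task_uuid = payload.get("taskUUID")
    if task_uuid in pending:
        return pending.pop(task_uuid)
    return None
```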

deliveryMethod

string default: sync

Determines how the API delivers task results.

Allowed values 3 values
sync: Returns complete results directly in the API response.
async: Returns an immediate acknowledgment with the task UUID. Poll for results using getResponse.
stream: Streams results token-by-token as they are generated.
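A sketch of attaching these platform options to a request. The default "sync" comes from this reference; the other value names ("async", "stream") are inferred from the polling and streaming descriptions and should be treated as assumptions:

```python
# Delivery modes: "sync" is the documented default; "async" and "stream"
# are assumed names for the polling and streaming modes described above.
ALLOWED_DELIVERY = {"sync", "async", "stream"}

def task_options(task_uuid: str, delivery_method: str = "sync") -> dict:
    """Build the platform-level portion of a textInference request."""
    if delivery_method not in ALLOWED_DELIVERY:
        raise ValueError(f"unsupported deliveryMethod: {delivery_method}")
    return {
        "taskType": "textInference",
        "taskUUID": task_uuid,
        "deliveryMethod": delivery_method,
    }
```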
includeCost

boolean default: false

Include task cost in the response.

includeUsage

boolean default: false

Include token usage statistics in the response.

numberResults

integer min: 1 max: 4 default: 1

Number of results to generate. Each result uses a different seed, producing variations of the same parameters.
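The three optional flags above can be sketched as one helper that validates the documented 1..4 bound on numberResults before merging the options into a request payload:

```python
def with_result_options(payload: dict, number_results: int = 1,
                        include_cost: bool = False,
                        include_usage: bool = False) -> dict:
    """Merge cost/usage/result-count options into a request payload.

    numberResults must stay within the documented 1..4 range; each
    result is generated with a different seed.
    """
    if not 1 <= number_results <= 4:
        raise ValueError("numberResults must be between 1 and 4")
    return {**payload,
            "numberResults": number_results,
            "includeCost": include_cost,
            "includeUsage": include_usage}
```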

Generation Parameters

Core parameters for controlling the generated content.

model

string required value: minimax:m2.7@highspeed

Identifier of the model to use for generation.

seed

integer min: 0 max: 4294967295

Random seed for reproducible generation. When not provided, a random seed is generated in the unsigned 32-bit range.
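To reproduce a generation later, you can pick the seed yourself instead of letting the platform choose one. A sketch of drawing a seed from the documented unsigned 32-bit range:

```python
import random

# The documented seed range is 0 to 4294967295 (unsigned 32-bit).
SEED_MAX = 4294967295

def random_seed() -> int:
    """Pick an explicit seed so the same request can be replayed
    with identical parameters to reproduce its output."""
    return random.randint(0, SEED_MAX)
```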

messages

array of objects required min items: 1

Array of chat messages forming the conversation context.

Properties 2 properties
messages » role

string required

The role of the message author.

Allowed values 2 values
user: A message written by the end user.
assistant: A previous response from the model.
messages » content

string required min: 1

The text content of the message.
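A sketch of a valid messages array and a check of the documented constraints (at least one item, non-empty content). The role values "user" and "assistant" are an assumption based on typical chat schemas; only the two-value constraint comes from this reference:

```python
# Example conversation context. Roles ("user"/"assistant") are assumed.
messages = [
    {"role": "user", "content": "Summarize this meeting transcript."},
    {"role": "assistant", "content": "Sure - please paste the transcript."},
    {"role": "user", "content": "Here it is: ..."},
]

def validate_messages(msgs: list) -> bool:
    """Check the documented constraints: min 1 item, each with a role
    and content of at least 1 character."""
    return len(msgs) >= 1 and all(
        m.get("role") and len(m.get("content", "")) >= 1 for m in msgs
    )
```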

Settings

Technical parameters to fine-tune the inference process. These must be nested inside the settings object.

settings » systemPrompt

string min: 1 max: 200000

System-level instruction that guides the model's behavior and output style across the entire generation.

settings » temperature

float min: 0 max: 1 step: 0.01 default: 1

Controls randomness in generation. Lower values produce more deterministic outputs, higher values increase variation and creativity.

settings » topP

float min: 0 max: 1 step: 0.01 default: 0.95

Nucleus sampling parameter that controls diversity by limiting the probability mass. Lower values make outputs more focused, higher values increase diversity.

settings » maxTokens

integer min: 1 max: 196608 default: 32768

Maximum number of tokens to generate in the response.
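Putting the pieces together, a sketch of a complete request body with the settings nested as documented. Field names, defaults, and nesting follow this reference; the transport (endpoint, authentication) is platform-specific and omitted, and the system prompt text is only an illustration:

```python
import uuid

def build_request(messages: list, temperature: float = 1.0,
                  top_p: float = 0.95, max_tokens: int = 32768) -> dict:
    """Assemble a textInference request for minimax:m2.7@highspeed.

    Technical parameters (systemPrompt, temperature, topP, maxTokens)
    must be nested inside the settings object.
    """
    return {
        "taskType": "textInference",
        "taskUUID": str(uuid.uuid4()),
        "model": "minimax:m2.7@highspeed",
        "messages": messages,
        "settings": {
            "systemPrompt": "You are a concise coding assistant.",
            "temperature": temperature,
            "topP": top_p,
            "maxTokens": max_tokens,
        },
    }
```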