MODEL ID zai-glm-5-1

GLM-5.1

by Z.ai

GLM-5.1 is Z.ai’s flagship language model for agentic engineering, coding, reasoning, and tool-driven workflows. It supports a 200K token context window with up to 128K output tokens, deep thinking, function calling, structured output, and streaming tool calls, and is designed to stay effective over long multi-step sessions rather than only short-horizon tasks.

API Options

Platform-level options for task execution and delivery.

taskType

string required value: textInference

Identifier for the type of task being performed

taskUUID

string required UUID v4

UUID v4 identifier for tracking tasks and matching async responses. Must be unique per task.
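For illustration, a fresh UUID v4 per task can be generated with Python's standard library (a sketch; any UUID v4 generator works):

```python
import uuid

# Each task needs its own UUID v4 so that async responses
# and webhook deliveries can be matched back to the request.
task_uuid = str(uuid.uuid4())
print(task_uuid)  # random on each run
```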

webhookURL

string URI

Specifies a webhook URL where JSON responses will be sent via HTTP POST when generation tasks complete. For batch requests with multiple results, each completed item triggers a separate webhook call as it becomes available.

deliveryMethod

string default: sync

Determines how the API delivers task results.

Allowed values:

  • sync (default): Returns complete results directly in the API response.
  • async: Returns an immediate acknowledgment with the task UUID. Poll for results using getResponse.
  • stream: Streams results token-by-token as they are generated.
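As a sketch of the async flow, the loop below submits a task with deliveryMethod set to async and then polls getResponse until the result is ready. The post helper and the getResponse payload shape are assumptions standing in for your actual HTTP client, not part of this reference:

```python
import time

def poll_async_result(post, payload, interval=1.0, max_attempts=30):
    """Submit a task asynchronously, then poll until it completes.

    `post` is a caller-supplied callable wrapping the HTTP transport
    (hypothetical; this reference does not specify the client)."""
    ack = post({**payload, "deliveryMethod": "async"})
    task_uuid = ack["taskUUID"]  # immediate acknowledgment carries the task UUID
    for _ in range(max_attempts):
        # The exact getResponse payload shape is an assumption.
        result = post({"taskType": "getResponse", "taskUUID": task_uuid})
        if result.get("status") != "pending":
            return result
        time.sleep(interval)
    raise TimeoutError(f"task {task_uuid} did not complete in time")
```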

includeCost

boolean default: false

Include task cost in the response.

includeUsage

boolean default: false

Include token usage statistics in the response.

numberResults

integer min: 1 max: 4 default: 1

Number of results to generate. Each result uses a different seed, producing variations of the same parameters.
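Putting the platform options together, a minimal textInference request body might look like the sketch below (the message content is illustrative; only the documented field names and ranges are taken from this reference):

```python
import json
import uuid

# Minimal request body: required platform options plus the
# required generation parameters documented below.
request = {
    "taskType": "textInference",      # required, fixed value
    "taskUUID": str(uuid.uuid4()),    # required, unique per task
    "deliveryMethod": "sync",         # default, shown for clarity
    "includeCost": True,              # optionally return the task cost
    "numberResults": 1,               # 1..4, each result uses a different seed
    "model": "zai-glm-5-1",           # required model identifier
    "messages": [
        {"role": "user", "content": "Explain nucleus sampling in one paragraph."}
    ],
}
print(json.dumps(request, indent=2))
```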

Generation Parameters

Core parameters for controlling the generated content.

model

string required value: zai-glm-5-1

Identifier of the model to use for generation.

seed

integer min: 0 max: 9223372036854775807

Random seed for reproducible generation. When not provided, a random seed is generated in the unsigned 32-bit range.

messages

array of objects required min items: 1

Array of chat messages forming the conversation context.

Properties 2 properties
messages » role

role

string required

The role of the message author.

Allowed values: user, assistant. System-level instructions belong in settings » systemPrompt rather than a system role.
messages » content

content

string required min: 1

The text content of the message.
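Each message needs a role and non-empty content. A multi-turn context might be built like this (a sketch, assuming user and assistant are the two allowed roles):

```python
# Conversation context as an ordered array of role/content messages;
# prior model replies are included so the session stays coherent.
messages = [
    {"role": "user", "content": "What does the seed parameter do?"},
    {"role": "assistant", "content": "It makes sampling reproducible across runs."},
    {"role": "user", "content": "And if I omit it?"},
]
assert all(len(m["content"]) >= 1 for m in messages)  # content requires min length 1
```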

Settings

Technical parameters to fine-tune the inference process. These must be nested inside the settings object.

settings » systemPrompt

systemPrompt

string min: 1 max: 200000

System-level instruction that guides the model's behavior and output style across the entire generation.

settings » temperature

temperature

float min: 0 max: 1 step: 0.01 default: 1

Controls randomness in generation. Lower values produce more deterministic outputs, higher values increase variation and creativity.

settings » topP

topP

float min: 0.01 max: 1 step: 0.01

Nucleus sampling parameter that controls diversity by limiting the probability mass. Lower values make outputs more focused, higher values increase diversity.

settings » frequencyPenalty

frequencyPenalty

float min: -2 max: 2 step: 0.01

Penalizes tokens based on their frequency in the output so far. A value of 0.0 disables the penalty.

settings » maxTokens

maxTokens

integer min: 1 max: 131072 default: 65536

Maximum number of tokens to generate in the response.

settings » presencePenalty

presencePenalty

float min: -2 max: 2 step: 0.01

Encourages the model to introduce new topics. A value of 0.0 disables the penalty.

settings » stopSequences

stopSequences

array of strings max items: 4 (each sequence 1–50 characters)

Array of sequences that will cause the model to stop generating further tokens when encountered.

settings » thinkingLevel

thinkingLevel

string default: none

Controls the depth of internal reasoning the model performs before generating a response.

Allowed values 2 values
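A sketch of a request that nests sampling controls under settings, staying inside the documented ranges (the prompt text and chosen values are illustrative):

```python
# Technical parameters must be nested inside the `settings` object,
# alongside the top-level model and messages fields.
request = {
    "taskType": "textInference",
    "taskUUID": "00000000-0000-4000-8000-000000000000",  # placeholder UUID v4
    "model": "zai-glm-5-1",
    "messages": [{"role": "user", "content": "Write a haiku about compilers."}],
    "settings": {
        "systemPrompt": "You are a concise technical writer.",
        "temperature": 0.7,         # 0..1; lower is more deterministic
        "topP": 0.9,                # 0.01..1 nucleus sampling
        "maxTokens": 1024,          # up to 131072
        "stopSequences": ["\n\n"],  # at most 4 sequences
    },
}
```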

toolChoice

object

Controls how the model selects which tool to call. This only takes effect when tools are defined.

Examples 3 examples

Let the model decide (default):

"toolChoice": {
  "type": "auto"
}

Force a specific tool call:

"toolChoice": {
  "type": "tool",
  "name": "get_weather"
}

Require any tool call:

"toolChoice": {
  "type": "any"
}
Properties 2 properties
toolChoice » type

type

string required

Strategy the model uses to decide when and which tools to call.

Allowed values:

  • auto: The model decides whether to call a tool based on the conversation context. This is the recommended default.
  • any: The model must call at least one tool but chooses which one. Useful when you always need structured output.
  • tool: The model must call the specific tool identified by name. Use this to force a particular function call.
  • none: The model will not call any tool, even if tools are defined. Useful for forcing a text-only response.
toolChoice » name

name

string

Name of the specific tool the model must call. Required when type is tool.

tools

array of objects min items: 1

An array of tool definitions that the model may call during generation. The model can invoke one or more tools based on the conversation context, outputting structured calls with arguments instead of (or alongside) free-text.

For function tools, each definition requires:

  • type: "function"
  • name: Unique identifier (alphanumeric, hyphens, underscores; max 64 chars).
  • description: What the function does. The model uses this to decide when to call it.
  • schema: JSON Schema object describing the expected input arguments.

The search and codeInterpreter tools are executed server-side by the provider. You don't need to handle their results yourself.

Examples 4 examples
Function tool, weather lookup:
"tools": [
  {
    "type": "function",
    "name": "get_weather",
    "description": "Get current weather for a city",
    "schema": {
      "type": "object",
      "properties": {
        "city": { "type": "string", "description": "City name" }
      },
      "required": ["city"]
    }
  }
],
"toolChoice": { "type": "auto" }
Built-in web search:
"tools": [
  { "type": "search" }
]
Built-in code interpreter:
"tools": [
  { "type": "codeInterpreter" }
]
Multiple function tools:
"tools": [
  {
    "type": "function",
    "name": "search_products",
    "description": "Search the product catalog by query and filters.",
    "schema": {
      "type": "object",
      "properties": {
        "query": { "type": "string" },
        "category": { "type": "string" }
      },
      "required": ["query"]
    }
  },
  {
    "type": "function",
    "name": "add_to_cart",
    "description": "Add a product to the user's shopping cart.",
    "schema": {
      "type": "object",
      "properties": {
        "productId": { "type": "string" },
        "quantity": { "type": "integer", "minimum": 1 }
      },
      "required": ["productId"]
    }
  }
]
Properties 4 properties
tools » type

type

string required

The kind of tool to make available to the model. User-defined functions require name and schema, while built-in tools (search, codeInterpreter) are executed server-side by the provider.

Allowed values: function, search, codeInterpreter.
tools » name

name

string max: 64

Unique function name. Required for function tools.

tools » description

description

string

Explanation of what the function does, used by the model to decide when to call it.

tools » schema

schema

object

JSON Schema object describing the function's input parameters.
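To close the loop on function tools, the sketch below dispatches structured tool calls to local implementations. This reference does not specify the shape of the model's tool-call output, so the name/arguments pairs used here are assumptions to illustrate the pattern:

```python
import json

def get_weather(city):
    """Local implementation backing the get_weather tool definition above."""
    return {"city": city, "tempC": 21}  # illustrative stub, not real data

TOOL_REGISTRY = {"get_weather": get_weather}

def dispatch_tool_calls(tool_calls):
    """Run each structured call against the local registry and collect
    results to send back to the model. The call shape (name plus
    JSON-encoded arguments) is an assumption, not part of this reference."""
    results = []
    for call in tool_calls:
        fn = TOOL_REGISTRY[call["name"]]
        args = json.loads(call["arguments"])
        results.append({"name": call["name"], "result": fn(**args)})
    return results
```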