MODEL ID google-gemini-3-1-pro

Gemini 3.1 Pro

by Google

Gemini 3.1 Pro is Google’s flagship multimodal language model that processes text alongside images, audio, video, code, and documents. It offers high-performance reasoning, complex instruction following, and deep contextual understanding for a wide range of tasks across language, analysis, and problem solving.

API Options

Platform-level options for task execution and delivery.

taskType

string required value: textInference

Identifier for the type of task being performed

taskUUID

string required UUID v4

UUID v4 identifier for tracking tasks and matching async responses. Must be unique per task.

webhookURL

string URI

Specifies a webhook URL where JSON responses will be sent via HTTP POST when generation tasks complete. For batch requests with multiple results, each completed item triggers a separate webhook call as it becomes available.
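On the receiving side, a minimal stdlib-only HTTP handler can accept these POST callbacks. This is a sketch: the payload field read below (taskUUID) is taken from the parameters documented in this section, and the port is arbitrary.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    """Accepts the JSON POST callbacks sent when a generation task completes."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # taskUUID lets you match this callback to the request you sent.
        print("completed task:", payload.get("taskUUID"))
        self.send_response(200)
        self.end_headers()

# To run: HTTPServer(("", 8080), WebhookHandler).serve_forever()
```

Remember that batch requests trigger one POST per completed item, so the handler should be idempotent per taskUUID.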

deliveryMethod

string default: sync

Determines how the API delivers task results.

Allowed values:

  • sync (default): Returns complete results directly in the API response.
  • async: Returns an immediate acknowledgment with the task UUID. Poll for results using getResponse.
  • stream: Streams results token-by-token as they are generated.

includeCost

boolean default: false

Include task cost in the response.

includeUsage

boolean default: false

Include token usage statistics in the response.

numberResults

integer min: 1 max: 4 default: 1

Number of results to generate. Each result uses a different seed, producing variations of the same parameters.
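Putting the platform-level options together, a single textInference task object might look like the following sketch. Transport and authentication are outside the scope of this section and are omitted; the user message is illustrative.

```python
import json
import uuid

# Sketch of one task using the platform-level options documented above.
task = {
    "taskType": "textInference",
    "taskUUID": str(uuid.uuid4()),   # must be unique per task
    "model": "google-gemini-3-1-pro",
    "deliveryMethod": "sync",
    "includeCost": True,
    "includeUsage": True,
    "numberResults": 2,              # two variations, each with its own seed
    "messages": [
        {"role": "user", "content": "Explain gradient descent in one paragraph."}
    ],
}
print(json.dumps(task, indent=2))
```

With numberResults above 1 and deliveryMethod set to sync, all variations come back in the same response; with a webhookURL, each arrives as its own POST.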

Inputs

Input resources for the task (images, audio, etc.). These must be nested inside the inputs object.

inputs » audios

audios

array of strings min items: 1

Array of audio inputs (UUID, URL, or Base64).

inputs » documents

documents

array of strings min items: 1

Array of document inputs (UUID, URL, or Base64).

inputs » images

images

array of strings min items: 1

Array of image inputs (UUID, URL, Data URI, or Base64).

inputs » videos

videos

array of strings min items: 1

Array of video inputs (UUID, URL, or Base64).
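For a multimodal request, the arrays above sit together under a single inputs object. A minimal sketch, using placeholder URLs rather than real assets:

```python
# Each array accepts references by UUID, URL, or Base64 (images also accept
# Data URIs). The URLs below are placeholders, not real assets.
request_fragment = {
    "inputs": {
        "images": ["https://example.com/chart.png"],
        "audios": ["https://example.com/interview.mp3"],
    }
}

# Every input array that is present must contain at least one item.
for name, items in request_fragment["inputs"].items():
    assert len(items) >= 1, f"{name} must have at least one item"
```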

Generation Parameters

Core parameters for controlling the generated content.

model

string required value: google-gemini-3-1-pro

Identifier of the model to use for generation.

seed

integer min: 0 max: 4294967295

Random seed for reproducible generation. When not provided, a random seed is generated in the unsigned 32-bit range.
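The documented range is exactly that of an unsigned 32-bit integer. A client that wants reproducibility can pick its own seed in that range and reuse it across requests, mirroring the server-side default:

```python
import random

# Choose an explicit seed in the unsigned 32-bit range so the same request
# can be replayed deterministically later.
seed = random.randint(0, 4294967295)
assert 0 <= seed <= 4294967295
```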

messages

array of objects required min items: 1

Array of chat messages forming the conversation context.

Properties:
messages » role

role

string required

The role of the message author.

Allowed values: user, assistant
messages » content

content

string required min: 1

The text content of the message.
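A multi-turn conversation is expressed as an ordered array of these message objects. A sketch with a small validity check; the two role names are assumed to be user and assistant, since system-level instructions go in settings.systemPrompt rather than the messages array:

```python
messages = [
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "Paris."},
    {"role": "user", "content": "And roughly how many people live there?"},
]

# Both properties are required, and content must be non-empty.
for msg in messages:
    assert msg["role"] in {"user", "assistant"}
    assert len(msg["content"]) >= 1
```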

Settings

Technical parameters to fine-tune the inference process. These must be nested inside the settings object.

settings » systemPrompt

systemPrompt

string min: 1 max: 200000

System-level instruction that guides the model's behavior and output style across the entire generation.

settings » temperature

temperature

float min: 0 max: 2 step: 0.01 default: 1

Controls randomness in generation. Lower values produce more deterministic outputs; higher values increase variation and creativity.

settings » topP

topP

float min: 0 max: 1 step: 0.01

Nucleus sampling parameter that controls diversity by limiting the probability mass. Lower values make outputs more focused; higher values increase diversity.

settings » maxTokens

maxTokens

integer min: 1 max: 128000 default: 4096

Maximum number of tokens to generate in the response.

settings » stopSequences

stopSequences

array of strings min length: 1 max length: 50 max items: 5

Array of sequences that will cause the model to stop generating further tokens when encountered.

settings » thinkingLevel

thinkingLevel

string default: high

Controls the depth of internal reasoning the model performs before generating a response.

Allowed values 3 values
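Combining the knobs above, a settings object tuned for deterministic, bounded output might look like this sketch. All values sit inside the documented ranges; the stop sequence is illustrative.

```python
settings = {
    "systemPrompt": "You are a concise technical assistant.",
    "temperature": 0.2,            # near-deterministic (range 0-2)
    "topP": 0.9,                   # nucleus sampling (range 0-1)
    "maxTokens": 1024,             # cap on generated tokens (max 128000)
    "stopSequences": ["\n\nUser:"],
    "thinkingLevel": "high",       # the documented default
}

assert 0 <= settings["temperature"] <= 2
assert 0 <= settings["topP"] <= 1
assert 1 <= settings["maxTokens"] <= 128000
assert 1 <= len(settings["stopSequences"]) <= 5
```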

toolChoice

object

Controls how the model selects which tool to call. This only takes effect when tools are defined.

Examples:

Let the model decide (default):

"toolChoice": {
  "type": "auto"
}

Force a specific tool call:

"toolChoice": {
  "type": "tool",
  "name": "get_weather"
}

Require any tool call:

"toolChoice": {
  "type": "any"
}
Properties:
toolChoice » type

type

string required

Strategy the model uses to decide when and which tools to call.

Allowed values:

  • auto: The model decides whether to call a tool based on the conversation context. This is the recommended default.
  • any: The model must call at least one tool but chooses which one. Useful when you always need structured output.
  • tool: The model must call the specific tool identified by name. Use this to force a particular function call.
  • none: The model will not call any tool, even if tools are defined. Useful for forcing a text-only response.
toolChoice » name

name

string

Name of the specific tool the model must call. Required when type is tool.
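The dependency between type and name can be captured in a small client-side check. A sketch; the value none for the text-only strategy is an assumption inferred from its description, so verify it against the live API before relying on it:

```python
def validate_tool_choice(tc: dict) -> None:
    """Enforce the documented toolChoice invariants before sending a request."""
    # "none" is inferred from the text-only strategy's description.
    allowed = {"auto", "any", "tool", "none"}
    if tc["type"] not in allowed:
        raise ValueError(f"unknown toolChoice type: {tc['type']}")
    # name is required exactly when a specific tool is forced.
    if tc["type"] == "tool" and "name" not in tc:
        raise ValueError("toolChoice of type 'tool' requires a name")

validate_tool_choice({"type": "tool", "name": "get_weather"})  # passes
```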

tools

array of objects min items: 1

An array of tool definitions that the model may call during generation. The model can invoke one or more tools based on the conversation context, outputting structured calls with arguments instead of (or alongside) free-text.

For function tools, each definition requires:

  • type: "function"
  • name: Unique identifier (alphanumeric, hyphens, underscores; max 64 chars).
  • description: What the function does. The model uses this to decide when to call it.
  • schema: JSON Schema object describing the expected input arguments.

The built-in search and codeInterpreter tools are executed server-side by the provider; you don't need to handle their results yourself.

Examples:
Function tool, weather lookup:
"tools": [
  {
    "type": "function",
    "name": "get_weather",
    "description": "Get current weather for a city",
    "schema": {
      "type": "object",
      "properties": {
        "city": { "type": "string", "description": "City name" }
      },
      "required": ["city"]
    }
  }
],
"toolChoice": { "type": "auto" }
Built-in web search:
"tools": [
  { "type": "search" }
]
Built-in code interpreter:
"tools": [
  { "type": "codeInterpreter" }
]
Multiple function tools:
"tools": [
  {
    "type": "function",
    "name": "search_products",
    "description": "Search the product catalog by query and filters.",
    "schema": {
      "type": "object",
      "properties": {
        "query": { "type": "string" },
        "category": { "type": "string" }
      },
      "required": ["query"]
    }
  },
  {
    "type": "function",
    "name": "add_to_cart",
    "description": "Add a product to the user's shopping cart.",
    "schema": {
      "type": "object",
      "properties": {
        "productId": { "type": "string" },
        "quantity": { "type": "integer", "minimum": 1 }
      },
      "required": ["productId"]
    }
  }
]
Properties:
tools » type

type

string required

The kind of tool to make available to the model. User-defined functions require name and schema, while built-in tools (search, codeInterpreter) are executed server-side by the provider.

Allowed values:

  • function: User-defined function tool. The model outputs the tool name and arguments. You execute the function locally and send results back.
  • search: Built-in web search. The provider executes search server-side and enriches the response automatically.
  • codeInterpreter: Built-in code execution sandbox (Python). The provider runs code server-side and returns results automatically.
tools » name

name

string max: 64

Unique function name. Required for function tools.

tools » description

description

string

Explanation of what the function does, used by the model to decide when to call it.

tools » schema

schema

object

JSON Schema object describing the function's input parameters.
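When the model emits a function tool call, your client executes it locally and sends the result back in the conversation. The exact shape of the tool-call object in the API response is not specified in this section, so the name/arguments structure below is an assumption, and get_weather is a local stub:

```python
def get_weather(city: str) -> dict:
    # Stub standing in for a real weather lookup.
    return {"city": city, "tempC": 21}

# Registry mapping tool names (as declared in "tools") to local callables.
LOCAL_TOOLS = {"get_weather": get_weather}

def dispatch(tool_call: dict) -> dict:
    """Run one function tool call locally.

    The {"name": ..., "arguments": {...}} shape is assumed, not documented
    here; adapt it to the actual response format.
    """
    fn = LOCAL_TOOLS[tool_call["name"]]
    return fn(**tool_call["arguments"])

result = dispatch({"name": "get_weather", "arguments": {"city": "Lisbon"}})
```

Only function tools need this round trip; search and codeInterpreter calls are resolved server-side.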