GPT-5.5
by OpenAI

Model ID: openai-gpt-5-5
Status: coming soon

GPT-5.5 is OpenAI's newest frontier model for complex professional work, with strong performance in coding, reasoning, and tool-using workflows. It supports a 1,050,000 token context window, 128,000 max output tokens, configurable reasoning effort, image input, and a broad tool stack including web search, file search, code interpreter, hosted shell, apply patch, skills, MCP, tool search, and computer use.


API Options

Platform-level options for task execution and delivery.

taskType

string required value: textInference

Identifier for the type of task being performed

taskUUID

string required UUID v4

UUID v4 identifier for tracking tasks and matching async responses. Must be unique per task.
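As a sketch, the two required platform fields can be built like this (Python's uuid module generates version-4 UUIDs):

```python
import uuid

# Minimal task envelope: both fields are required on every request.
task = {
    "taskType": "textInference",
    "taskUUID": str(uuid.uuid4()),  # must be a fresh UUID v4 per task
}
```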

outputFormat

string default: TEXT

Specifies the file format of the generated output. The available values depend on the task type and the specific model's capabilities.

    Allowed values: TEXT, JSON.

    webhookURL

    string URI

    Specifies a webhook URL where JSON responses will be sent via HTTP POST when generation tasks complete. For batch requests with multiple results, each completed item triggers a separate webhook call as it becomes available.


    deliveryMethod

    string default: sync

    Determines how the API delivers task results.

    Allowed values:
    • sync — Returns complete results directly in the API response.
    • async — Returns an immediate acknowledgment with the task UUID. Poll for results using getResponse.
    • stream — Streams results token-by-token as they are generated.
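For the async path, the flow above can be sketched as a poll loop. Here `fetch_response` is a stand-in for an actual getResponse call, and the "None means still processing" convention is an assumption for illustration:

```python
import time

# Poll until the async task has a result, up to max_tries attempts.
def poll_until_done(task_uuid, fetch_response, interval=0.0, max_tries=10):
    for _ in range(max_tries):
        response = fetch_response(task_uuid)
        if response is not None:  # None = still processing (assumption)
            return response
        time.sleep(interval)
    raise TimeoutError(f"task {task_uuid} not finished after {max_tries} polls")

# Fake backend that completes on the third poll, for illustration:
calls = {"n": 0}
def fake_fetch(task_uuid):
    calls["n"] += 1
    return {"taskUUID": task_uuid, "text": "done"} if calls["n"] >= 3 else None

result = poll_until_done("123e4567-e89b-42d3-a456-426614174000", fake_fetch)
```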

    includeCost

    boolean default: false

    Include task cost in the response.

    includeUsage

    boolean default: false

    Include token usage statistics in the response.

    numberResults

    integer min: 1 max: 4 default: 1

    Number of results to generate. Each result uses a different seed, producing variations of the same parameters.

    Inputs

    Input resources for the task (images, audio, etc). These must be nested inside the inputs object.

    inputs » images

    array of strings min items: 1

    Array of image inputs (UUID, URL, Data URI, or Base64).
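As one way to supply an image, a local file's bytes can be encoded as a Data URI (a minimal sketch; the PNG header bytes stand in for real image data):

```python
import base64

# Build a Data URI from raw image bytes and nest it inside inputs.images.
image_bytes = b"\x89PNG\r\n\x1a\n"  # stand-in for real PNG data
data_uri = "data:image/png;base64," + base64.b64encode(image_bytes).decode("ascii")

inputs = {"images": [data_uri]}  # also accepts UUIDs, URLs, or raw Base64
```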

    Generation Parameters

    Core parameters for controlling the generated content.

    model

    string required value: openai-gpt-5-5

    Identifier of the model to use for generation.

    jsonSchema

    object | string

    JSON Schema for structured output. Only honored when outputFormat is JSON. Accepts the OpenAI envelope ({name, schema, strict}) or a bare JSON Schema; bare schemas are auto-wrapped with name='response' and strict=true.
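The auto-wrapping described above can be sketched in a few lines; the exact server-side detection logic is an assumption:

```python
# Wrap a bare JSON Schema into the OpenAI envelope, as the docs describe:
# bare schemas get name="response" and strict=True; envelopes pass through.
def wrap_schema(schema):
    if isinstance(schema, dict) and "name" in schema and "schema" in schema:
        return schema  # already an envelope ({name, schema, strict})
    return {"name": "response", "strict": True, "schema": schema}

bare = {"type": "object", "properties": {"answer": {"type": "string"}}}
envelope = wrap_schema(bare)
```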

    messages

    array of objects required min items: 1

    Array of chat messages forming the conversation context.

    messages » role

    string required

    The role of the message author.

    Allowed values: user, assistant.

    messages » content

    string required min: 1

    The text content of the message.
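Putting the required fields together, a minimal textInference request body for this model might look like the following sketch ("user" is an assumed role value):

```python
import uuid

# Minimal request body using only the required fields from the sections above.
request = {
    "taskType": "textInference",
    "taskUUID": str(uuid.uuid4()),
    "model": "openai-gpt-5-5",
    "messages": [
        {"role": "user", "content": "List three uses of a hosted code interpreter."}
    ],
}
```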

    Settings

    Technical parameters to fine-tune the inference process. These must be nested inside the settings object.

    settings » systemPrompt

    string min: 1 max: 200000

    System-level instruction that guides the model's behavior and output style across the entire generation.

    settings » maxTokens

    integer min: 1 max: 128000 default: 4096

    Maximum number of tokens to generate in the response.

    settings » thinkingLevel

    string default: medium

    Controls the depth of internal reasoning the model performs before generating a response.

    Allowed values 5 values
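As a sketch of how these tuning parameters combine, note that they must be nested inside the settings object rather than placed at the top level:

```python
# Settings fragment; values chosen for illustration within the documented ranges.
settings = {
    "systemPrompt": "You are a concise technical assistant.",  # up to 200,000 chars
    "maxTokens": 2048,           # 1..128000 (default 4096)
    "thinkingLevel": "medium",   # default; one of 5 allowed values
}

request_fragment = {"settings": settings}
```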

    toolChoice

    object

    Controls how the model selects which tool to call. This only takes effect when tools are defined.

    Examples:

    Let the model decide (default):

    "toolChoice": {
      "type": "auto"
    }

    Force a specific tool call:

    "toolChoice": {
      "type": "tool",
      "name": "get_weather"
    }

    Require any tool call:

    "toolChoice": {
      "type": "any"
    }
    toolChoice » type

    string required

    Strategy the model uses to decide when and which tools to call.

    Allowed values:
    • auto — The model decides whether to call a tool based on the conversation context. This is the recommended default.
    • any — The model must call at least one tool but chooses which one. Useful when you always need structured output.
    • tool — The model must call the specific tool identified by name. Use this to force a particular function call.
    • none — The model will not call any tool, even if tools are defined. Useful for forcing a text-only response.
    toolChoice » name

    string

    Name of the specific tool the model must call. Required when type is tool.

    tools

    array of objects min items: 1

    An array of tool definitions that the model may call during generation. The model can invoke one or more tools based on the conversation context, outputting structured calls with arguments instead of (or alongside) free-text.

    For function tools, each definition requires:

    • type: "function"
    • name: Unique identifier (alphanumeric, hyphens, underscores; max 64 chars).
    • description: What the function does. The model uses this to decide when to call it.
    • schema: JSON Schema object describing the expected input arguments.

    The search and codeInterpreter tools are executed server-side by the provider; you don't need to handle their results yourself.

    Examples:
    Function tool, weather lookup:
    "tools": [
      {
        "type": "function",
        "name": "get_weather",
        "description": "Get current weather for a city",
        "schema": {
          "type": "object",
          "properties": {
            "city": { "type": "string", "description": "City name" }
          },
          "required": ["city"]
        }
      }
    ],
    "toolChoice": { "type": "auto" }
    Built-in web search:
    "tools": [
      { "type": "search" }
    ]
    Built-in code interpreter:
    "tools": [
      { "type": "codeInterpreter" }
    ]
    Multiple function tools:
    "tools": [
      {
        "type": "function",
        "name": "search_products",
        "description": "Search the product catalog by query and filters.",
        "schema": {
          "type": "object",
          "properties": {
            "query": { "type": "string" },
            "category": { "type": "string" }
          },
          "required": ["query"]
        }
      },
      {
        "type": "function",
        "name": "add_to_cart",
        "description": "Add a product to the user's shopping cart.",
        "schema": {
          "type": "object",
          "properties": {
            "productId": { "type": "string" },
            "quantity": { "type": "integer", "minimum": 1 }
          },
          "required": ["productId"]
        }
      }
    ]
    tools » type

    string required

    The kind of tool to make available to the model. User-defined functions require name and schema, while built-in tools (search, codeInterpreter) are executed server-side by the provider.

    Allowed values:
    • function — User-defined function tool. The model outputs the tool name and arguments. You execute the function locally and send results back.
    • search — Built-in web search. The provider executes search server-side and enriches the response automatically.
    • codeInterpreter — Built-in code execution sandbox (Python). The provider runs code server-side and returns results automatically.
    tools » name

    string max: 64

    Unique function name. Required for function tools.

    tools » description

    string

    Explanation of what the function does, used by the model to decide when to call it.

    tools » schema

    object

    JSON Schema object describing the function's input parameters.
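Following the function-tool contract above, the local execution step can be sketched like this. The shape of the model's tool-call output (a name plus JSON-encoded arguments) is an assumption for illustration:

```python
import json

def get_weather(city):
    """Stand-in for a real weather lookup."""
    return {"city": city, "temp_c": 21}

# Registry mapping tool names (as declared in `tools`) to local functions.
LOCAL_TOOLS = {"get_weather": get_weather}

def handle_tool_call(call):
    # call = {"name": ..., "arguments": ...}; arguments may arrive as a JSON
    # string or an already-decoded object (assumption).
    args = call["arguments"]
    if isinstance(args, str):
        args = json.loads(args)
    return LOCAL_TOOLS[call["name"]](**args)

# Execute a structured call as the model would emit it, then send the
# result back in a follow-up message.
result = handle_tool_call({"name": "get_weather", "arguments": '{"city": "Lisbon"}'})
```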