Claude Haiku 4.5

Model ID: anthropic-claude-haiku-4-5 (API only)
by Anthropic

Claude Haiku 4.5 is Anthropic's fastest and most cost-efficient Claude model. It is built for latency-sensitive applications, high-volume agents, sub-agent orchestration, coding assistance, and budget-conscious deployments that still need strong reasoning and multimodal understanding.

API Options

Platform-level options for task execution and delivery.

taskType

string required value: textInference

Identifier for the type of task being performed

taskUUID

string required UUID v4

UUID v4 identifier for tracking tasks and matching async responses. Must be unique per task.

webhookURL

string URI

Specifies a webhook URL where JSON responses will be sent via HTTP POST when generation tasks complete. For batch requests with multiple results, each completed item triggers a separate webhook call as it becomes available.
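
A webhook consumer can be sketched as follows. This is a minimal illustration, not the provider's documented payload schema: it assumes each POST body carries the originating taskUUID alongside the generated data so results can be matched back to pending tasks.

```python
import json

# Pending tasks we have submitted, keyed by taskUUID (assumed payload shape).
pending_tasks = {}  # taskUUID -> description of the request we sent

def handle_webhook(body: str):
    """Parse one webhook POST body and match it to a pending task."""
    payload = json.loads(body)
    task_uuid = payload.get("taskUUID")
    if task_uuid in pending_tasks:
        request_info = pending_tasks.pop(task_uuid)
        print(f"Result for {request_info}: {payload.get('text', '<no text>')}")
        return task_uuid
    return None  # unknown or already-handled task
```

Because batch requests trigger one POST per completed item, the handler should tolerate results arriving in any order.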

deliveryMethod

string default: sync

Determines how the API delivers task results.

Allowed values:
  • sync: Returns complete results directly in the API response.
  • async: Returns an immediate acknowledgment with the task UUID. Poll for results using getResponse.
  • stream: Streams results token-by-token as they are generated.

includeCost

boolean default: false

Include task cost in the response.

includeUsage

boolean default: false

Include token usage statistics in the response.

numberResults

integer min: 1 max: 4 default: 1

Number of results to generate. Each result uses a different seed, producing variations of the same parameters.
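
Putting the platform-level options together, a minimal textInference request body might be built like this. Field names follow this page; the endpoint URL and transport are out of scope here, and the assumption that the API accepts an array of task objects is illustrative.

```python
import json
import uuid

# Minimal sketch of a textInference task using the options described above.
task = {
    "taskType": "textInference",
    "taskUUID": str(uuid.uuid4()),  # must be unique per task (UUID v4)
    "deliveryMethod": "sync",       # wait for the complete result
    "includeCost": True,            # report task cost in the response
    "numberResults": 1,
    "model": "anthropic-claude-haiku-4-5",
    "messages": [
        {"role": "user", "content": "Summarize prompt caching in one sentence."}
    ],
}

body = json.dumps([task])  # assumption: tasks are submitted as an array
```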

Inputs

Input resources for the task (images, documents, etc.). These must be nested inside the inputs object.

inputs » documents

documents

array of strings min items: 1

Array of document inputs (UUID, URL, or Base64).

inputs » images

images

array of strings min items: 1

Array of image inputs (UUID, URL, Data URI, or Base64).
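
A task with an image input nests the resource under inputs, for example (the image URL is a hypothetical placeholder):

```python
# Sketch of a multimodal request: resources go inside the "inputs" object.
task = {
    "taskType": "textInference",
    "model": "anthropic-claude-haiku-4-5",
    "messages": [{"role": "user", "content": "What is shown in this image?"}],
    "inputs": {
        # Each entry may be a UUID, URL, Data URI, or Base64 string.
        "images": ["https://example.com/photo.jpg"]
    },
}
```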

Generation Parameters

Core parameters for controlling the generated content.

model

string required value: anthropic-claude-haiku-4-5

Identifier of the model to use for generation.

messages

array of objects required min items: 1

Array of chat messages forming the conversation context.

Properties:
messages » role

role

string required

The role of the message author.

Allowed values:
  • user: A message from the end user.
  • assistant: A prior response from the model.

messages » content

content

string required min: 1

The text content of the message.

Settings

Technical parameters to fine-tune the inference process. These must be nested inside the settings object.

settings » systemPrompt

systemPrompt

string min: 1 max: 200000

System-level instruction that guides the model's behavior and output style across the entire generation.

settings » cache

cache

object

Prompt caching configuration. Caches designated parts of the request to reduce cost and latency on repeated calls.

Properties:
settings » cache » scope

scope

string default: system+history

Controls which parts of the request are cached.

Allowed values:
  • system: Cache the system prompt only.
  • system+history: Cache the system prompt and conversation history up to the last user message.
settings » cache » ttl

ttl

string default: 5m

Time-to-live for the cache.

Allowed values:
  • 5m: Cache entries expire after 5 minutes.
  • 1h: Cache entries expire after 1 hour.
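
A cache configuration for a multi-turn conversation might look like this sketch, reusing the system prompt and accumulated history across turns to cut cost and latency:

```python
# Sketch of cache settings: the scope and ttl values shown are the
# documented defaults for this parameter.
settings = {
    "systemPrompt": "You are a terse assistant.",
    "cache": {
        "scope": "system+history",  # cache system prompt + conversation history
        "ttl": "5m",                # default time-to-live
    },
}
```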
settings » maxTokens

maxTokens

integer min: 1 max: 64000 default: 4096

Maximum number of tokens to generate in the response.

settings » stopSequences

stopSequences

array of strings max items: 5 (each sequence 1–50 characters)

Array of sequences that will cause the model to stop generating further tokens when encountered.
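
Together with maxTokens, stop sequences bound the output: generation halts at whichever limit is hit first. A minimal sketch (the sentinel strings are illustrative):

```python
# Sketch of generation limits: cap output length and stop on sentinels.
settings = {
    "maxTokens": 1024,                              # hard cap on generated tokens
    "stopSequences": ["\n\nHuman:", "END_OF_ANSWER"]  # at most 5 sequences
}
```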

settings » thinkingLevel

thinkingLevel

string

Controls the depth of internal reasoning the model performs before generating a response.

Allowed values 4 values

toolChoice

object

Controls how the model selects which tool to call. This only takes effect when tools are defined.

Examples:

Let the model decide (default):

"toolChoice": {
  "type": "auto"
}

Force a specific tool call:

"toolChoice": {
  "type": "tool",
  "name": "get_weather"
}

Require any tool call:

"toolChoice": {
  "type": "any"
}

Properties:
toolChoice » type

type

string required

Strategy the model uses to decide when and which tools to call.

Allowed values:
  • auto: The model decides whether to call a tool based on the conversation context. This is the recommended default.
  • any: The model must call at least one tool but chooses which one. Useful when you always need structured output.
  • tool: The model must call the specific tool identified by name. Use this to force a particular function call.
  • none: The model will not call any tool, even if tools are defined. Useful for forcing a text-only response.
toolChoice » name

name

string

Name of the specific tool the model must call. Required when type is tool.

tools

array of objects min items: 1

An array of tool definitions that the model may call during generation. The model can invoke one or more tools based on the conversation context, outputting structured calls with arguments instead of (or alongside) free-text.

For function tools, each definition requires:

  • type: "function"
  • name: Unique identifier (alphanumeric, hyphens, underscores; max 64 chars).
  • description: What the function does. The model uses this to decide when to call it.
  • schema: JSON Schema object describing the expected input arguments.

The built-in search and codeInterpreter tools are executed server-side by the provider; you don't need to handle their results yourself.

Examples:

Function tool, weather lookup:

"tools": [
  {
    "type": "function",
    "name": "get_weather",
    "description": "Get current weather for a city",
    "schema": {
      "type": "object",
      "properties": {
        "city": { "type": "string", "description": "City name" }
      },
      "required": ["city"]
    }
  }
],
"toolChoice": { "type": "auto" }

Built-in web search:

"tools": [
  { "type": "search" }
]

Built-in code interpreter:

"tools": [
  { "type": "codeInterpreter" }
]

Multiple function tools:

"tools": [
  {
    "type": "function",
    "name": "search_products",
    "description": "Search the product catalog by query and filters.",
    "schema": {
      "type": "object",
      "properties": {
        "query": { "type": "string" },
        "category": { "type": "string" }
      },
      "required": ["query"]
    }
  },
  {
    "type": "function",
    "name": "add_to_cart",
    "description": "Add a product to the user's shopping cart.",
    "schema": {
      "type": "object",
      "properties": {
        "productId": { "type": "string" },
        "quantity": { "type": "integer", "minimum": 1 }
      },
      "required": ["productId"]
    }
  }
]

Properties:
tools » type

type

string required

The kind of tool to make available to the model. User-defined functions require name and schema, while built-in tools (search, codeInterpreter) are executed server-side by the provider.

Allowed values:
  • function: User-defined function tool. The model outputs the tool name and arguments. You execute the function locally and send results back.
  • search: Built-in web search. The provider executes search server-side and enriches the response automatically.
  • codeInterpreter: Built-in code execution sandbox (Python). The provider runs code server-side and returns results automatically.
tools » name

name

string max: 64

Unique function name. Required for function tools.

tools » description

description

string

Explanation of what the function does, used by the model to decide when to call it.

tools » schema

schema

object

JSON Schema object describing the function's input parameters.
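
The client-side loop for function tools can be sketched as follows: when the model emits a structured call, look up the named function, run it locally, and send the result back in a follow-up message. The tool-call shape ({"name": ..., "arguments": ...}) and the get_weather implementation are illustrative assumptions, matching the weather-lookup example above.

```python
# Hypothetical local implementation of the get_weather function tool.
def get_weather(city: str) -> dict:
    return {"city": city, "tempC": 21}  # stubbed lookup for illustration

# Registry mapping tool names (as declared in "tools") to local callables.
LOCAL_TOOLS = {"get_weather": get_weather}

def dispatch(tool_call: dict) -> dict:
    """Execute one model-emitted tool call locally and return its result."""
    fn = LOCAL_TOOLS[tool_call["name"]]
    return fn(**tool_call["arguments"])

# Example: the model asked to call get_weather with {"city": "Lisbon"}.
result = dispatch({"name": "get_weather", "arguments": {"city": "Lisbon"}})
```

Only function tools need this loop; search and codeInterpreter results come back already resolved by the provider.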