---
title: Sora 2 | Runware Docs
url: https://runware.ai/docs/models/openai-sora-2
description: Next generation AI video and audio model from OpenAI
---
# Sora 2

Sora 2 is OpenAI’s flagship generative model for video and audio. It accepts text prompts and generates visually rich clips with synchronized dialogue and sound, with improved physical realism and scene control. It also supports editing and extending existing video inputs.

- **ID**: `openai:3@1`
- **Status**: live
- **Creator**: OpenAI
- **Release Date**: September 30, 2025
- **Capabilities**: Text to Video, Image to Video, Video to Video, Audio to Video

## Pricing

Each generation costs $0.10 per second of video at 720p.

- **720p · 8s**: `$0.8`
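As a sketch, the cost of a generation can be estimated from its duration (the rate constant and helper name here are illustrative, not part of the API):

```python
# Estimate Sora 2 generation cost at 720p ($0.10 per second of video).
RATE_720P_PER_SECOND = 0.10

def estimate_cost(duration_seconds: float) -> float:
    """Return the estimated USD cost for a 720p generation."""
    return round(duration_seconds * RATE_720P_PER_SECOND, 2)

print(estimate_cost(8))  # 0.8
```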

## Compatibility & Validation

At least one of `inputs.frameImages` or `width`/`height` must be provided.

---

When `inputs.videoId` is provided, `width/height` and `inputs.frameImages` cannot be used.

---

`width` and `height` must be used together.

---

The following dimension combinations are supported:

| Configuration | Dimensions |
| --- | --- |
| `720p (16:9)` | `1280x720` |
| `720p (9:16)` | `720x1280` |
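The rules above could be enforced client-side before submitting a task. This is a sketch: the function name and error messages are illustrative, and it encodes only the constraints documented in this section.

```python
# Pre-flight check for Sora 2 request constraints (illustrative sketch).
SUPPORTED_DIMENSIONS = {(1280, 720), (720, 1280)}  # 720p landscape / portrait

def validate_request(task: dict) -> None:
    inputs = task.get("inputs", {})
    has_dims = "width" in task or "height" in task
    if inputs.get("videoId"):
        # videoId is exclusive with width/height and frameImages.
        if has_dims or inputs.get("frameImages"):
            raise ValueError("videoId cannot be combined with width/height or frameImages")
        return
    if not inputs.get("frameImages") and not has_dims:
        raise ValueError("provide inputs.frameImages or width/height")
    if has_dims:
        if "width" not in task or "height" not in task:
            raise ValueError("width and height must be used together")
        if (task["width"], task["height"]) not in SUPPORTED_DIMENSIONS:
            raise ValueError("unsupported dimensions; use 1280x720 or 720x1280")
```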

## Request Parameters

**API Options**

Platform-level options for task execution and delivery.

### [taskType](https://runware.ai/docs/models/openai-sora-2#request-tasktype)

- **Type**: `string`
- **Required**: true
- **Value**: `videoInference`

Identifier for the type of task being performed.

### [taskUUID](https://runware.ai/docs/models/openai-sora-2#request-taskuuid)

- **Type**: `string`
- **Required**: true
- **Format**: `UUID v4`

UUID v4 identifier for tracking tasks and matching async responses. Must be unique per task.
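A fresh UUID v4 per task can be generated with the standard library, for example:

```python
import uuid

# Generate a unique UUID v4 for each task; the async response echoes it
# back so results can be matched to the originating request.
task_uuid = str(uuid.uuid4())
print(task_uuid)
```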

### [outputType](https://runware.ai/docs/models/openai-sora-2#request-outputtype)

- **Type**: `string`
- **Default**: `URL`

Video output type.

**Allowed values**: `URL`

### [outputFormat](https://runware.ai/docs/models/openai-sora-2#request-outputformat)

- **Type**: `string`
- **Default**: `MP4`

Specifies the file format of the generated output. The available values depend on the task type and the specific model's capabilities.

- `MP4`: Widely supported video container (H.264), recommended for general use.
- `WEBM`: Optimized for web delivery.
- `MOV`: QuickTime format, common in professional workflows (Apple ecosystem).

**Allowed values**: `MP4` `WEBM` `MOV`

### [outputQuality](https://runware.ai/docs/models/openai-sora-2#request-outputquality)

- **Type**: `integer`
- **Min**: `20`
- **Max**: `99`
- **Default**: `95`

Compression quality of the output. Higher values preserve quality but increase file size.

### [webhookURL](https://runware.ai/docs/models/openai-sora-2#request-webhookurl)

- **Type**: `string`
- **Format**: `URI`

Specifies a webhook URL where JSON responses will be sent via HTTP POST when generation tasks complete. For batch requests with multiple results, each completed item triggers a separate webhook call as it becomes available.

**Learn more** (1 resource):

- [Webhooks](https://runware.ai/docs/platform/webhooks) (platform)
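A minimal webhook receiver parses the POSTed JSON and extracts the task UUID for matching. This sketch uses only the standard library; the payload fields follow the response parameters documented below, while the port and handler name are illustrative.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def parse_webhook(body: bytes) -> tuple:
    """Extract (taskUUID, videoURL) from a webhook payload."""
    payload = json.loads(body)
    return payload.get("taskUUID"), payload.get("videoURL")

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        task_uuid, video_url = parse_webhook(self.rfile.read(length))
        print("completed:", task_uuid, video_url)
        self.send_response(200)
        self.end_headers()

# To listen: HTTPServer(("", 8080), WebhookHandler).serve_forever()
```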

### [deliveryMethod](https://runware.ai/docs/models/openai-sora-2#request-deliverymethod)

- **Type**: `string`
- **Default**: `async`

Determines how the API delivers task results.

**Allowed values**:

- `async` Returns an immediate acknowledgment with the task UUID; poll for results using `getResponse`. Required for long-running tasks like video generation.

**Learn more** (1 resource):

- [Task Polling](https://runware.ai/docs/platform/task-polling) (platform)
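Since delivery is async, results are fetched by polling. The sketch below shows the general shape with an injected fetch callable; the actual `getResponse` request format is covered in the Task Polling guide, so the callable and its return shape here are assumptions.

```python
import time

def poll_for_result(fetch, task_uuid: str, interval: float = 2.0, timeout: float = 600.0):
    """Poll until the task produces a result or the timeout elapses.

    `fetch` is any callable that takes a task UUID and returns the task's
    result dict once ready, or None while still processing (a stand-in
    for a real getResponse call).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = fetch(task_uuid)
        if result is not None:
            return result
        time.sleep(interval)
    raise TimeoutError(f"task {task_uuid} did not finish within {timeout}s")
```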

### [uploadEndpoint](https://runware.ai/docs/models/openai-sora-2#request-uploadendpoint)

- **Type**: `string`
- **Format**: `URI`

Specifies a URL where the generated content will be automatically uploaded using the HTTP PUT method. The raw binary data of the media file is sent directly as the request body. For secure uploads to cloud storage, use presigned URLs that include temporary authentication credentials.

**Common use cases:**

- **Cloud storage**: Upload directly to S3 buckets, Google Cloud Storage, or Azure Blob Storage using presigned URLs.
- **CDN integration**: Upload to content delivery networks for immediate distribution.

```text
// S3 presigned URL for secure upload
https://your-bucket.s3.amazonaws.com/generated/content.mp4?X-Amz-Signature=abc123&X-Amz-Expires=3600

// Google Cloud Storage presigned URL
https://storage.googleapis.com/your-bucket/content.jpg?X-Goog-Signature=xyz789

// Custom storage endpoint
https://storage.example.com/uploads/generated-image.jpg
```

The content data will be sent as the request body to the specified URL when generation is complete.
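The upload is a plain HTTP PUT with the raw media bytes as the body; the equivalent client-side request can be sketched with the standard library (the URL, payload, and content type below are placeholders):

```python
import urllib.request

def build_put_request(presigned_url: str, data: bytes) -> urllib.request.Request:
    """Build a PUT request whose body is the raw media bytes, mirroring
    how the platform delivers to uploadEndpoint."""
    return urllib.request.Request(
        presigned_url,
        data=data,
        method="PUT",
        headers={"Content-Type": "video/mp4"},
    )

req = build_put_request("https://example.com/uploads/out.mp4", b"\x00\x01")
# urllib.request.urlopen(req)  # performs the upload
```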

### [safety](https://runware.ai/docs/models/openai-sora-2#request-safety)

- **Path**: `safety`
- **Type**: `object (2 properties)`

Content safety checking configuration for video generation.

#### [checkContent](https://runware.ai/docs/models/openai-sora-2#request-safety-checkcontent)

- **Path**: `safety.checkContent`
- **Type**: `boolean`
- **Default**: `false`

Enable or disable content safety checking. When enabled, defaults to `fast` mode.

#### [mode](https://runware.ai/docs/models/openai-sora-2#request-safety-mode)

- **Path**: `safety.mode`
- **Type**: `string`
- **Default**: `none`

Safety checking mode for video generation.

**Allowed values**:

- `none` Disables checking.
- `fast` Checks key frames.
- `full` Checks all frames.

### [ttl](https://runware.ai/docs/models/openai-sora-2#request-ttl)

- **Type**: `integer`
- **Min**: `60`

Time-to-live (TTL) in seconds for generated content. Only applies when `outputType` is `URL`.

### [includeCost](https://runware.ai/docs/models/openai-sora-2#request-includecost)

- **Type**: `boolean`
- **Default**: `false`

Include task cost in the response.

### [numberResults](https://runware.ai/docs/models/openai-sora-2#request-numberresults)

- **Type**: `integer`
- **Min**: `1`
- **Max**: `4`
- **Default**: `1`

Number of results to generate. Each result uses a different seed, producing variations of the same parameters.

**Inputs**

Input resources for the task (images, audio, etc.). These must be nested inside the `inputs` object.

### [frameImages](https://runware.ai/docs/models/openai-sora-2#request-inputs-frameimages)

- **Path**: `inputs.frameImages`
- **Type**: `array of strings or objects`

An array of frame-specific image inputs to guide video generation. Each item can be either a plain image input (UUID, URL, Data URI, or Base64) or an object that pairs an image with a target frame position.

The `frameImages` parameter allows you to constrain specific frames within the video sequence, ensuring that particular visual content appears at designated points. This is different from `referenceImages`, which provide overall visual guidance without constraining specific timeline positions.

When the `frame` parameter is omitted, automatic distribution rules apply:

- **1 image**: Used as the first frame.

**Examples**:

**Shorthand format:** When you don't need to specify a frame position, you can pass a plain image input directly.

```json
"frameImages": [
  "aac49721-1964-481a-ae78-8a4e29b91402"
]
```

**Object format:** When you need to specify a frame position, use an object with `image` and `frame`.

```json
"frameImages": [
  {
    "image": "aac49721-1964-481a-ae78-8a4e29b91402",
    "frame": "first"
  }
]
```

**Format 1: string[]**:

- **Type**: `string`

Image input (UUID, URL, Data URI, or Base64).

**Format 2: object[]**:

#### [image](https://runware.ai/docs/models/openai-sora-2#request-inputs-frameimages-format-2-image)

- **Path**: `inputs.frameImages.image`
- **Type**: `string`
- **Required**: true

Image input (UUID, URL, Data URI, or Base64).

#### [frame](https://runware.ai/docs/models/openai-sora-2#request-inputs-frameimages-format-2-frame)

- **Path**: `inputs.frameImages.frame`
- **Type**: `object`

Target frame position for the image. This model only supports the first frame.

**Allowed values**:

- `first` First frame of the video.
- `0` Frame index 0 (first frame).

### [videoId](https://runware.ai/docs/models/openai-sora-2#request-inputs-videoid)

- **Path**: `inputs.videoId`
- **Type**: `string`

ID of a previously generated video. Used for remixing or extending.
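A remix request references the `outputs.videoId` from an earlier response instead of dimensions or frame images. A sketch (the prompt and task UUID are placeholders; per the validation rules above, `width`/`height` and `inputs.frameImages` are omitted):

```python
import json

# Remix sketch: reuse a previously returned outputs.videoId.
remix_task = {
    "taskType": "videoInference",
    "taskUUID": "3f2b6c1e-0000-4000-8000-000000000000",  # fresh UUID v4 per task
    "model": "openai:3@1",
    "positivePrompt": "Same scene, but at golden hour with warmer light.",
    "duration": 8,
    "inputs": {"videoId": "video_69c583cdfc108198b5e6f4412b049a8e0b518c415f2be692"},
}
print(json.dumps(remix_task, indent=2))
```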

**Generation Parameters**

Core parameters for controlling the generated content.

### [model](https://runware.ai/docs/models/openai-sora-2#request-model)

- **Type**: `string`
- **Required**: true
- **Value**: `openai:3@1`

Identifier of the model to use for generation.

**Learn more** (3 resources):

- [Text To Image: Model Selection The Foundation Of Generation](https://runware.ai/docs/guides/text-to-image#model-selection-the-foundation-of-generation) (guide)
- [Image Inpainting: Model Specialized Inpainting Models](https://runware.ai/docs/guides/image-inpainting#model-specialized-inpainting-models) (guide)
- [Image Outpainting: Other Critical Parameters](https://runware.ai/docs/guides/image-outpainting#other-critical-parameters) (guide)

### [positivePrompt](https://runware.ai/docs/models/openai-sora-2#request-positiveprompt)

- **Type**: `string`
- **Required**: true
- **Min**: `2`
- **Max**: `6000`

Text prompt describing elements to include in the generated output.

**Learn more** (2 resources):

- [Text To Image: Prompts Guiding The Generation](https://runware.ai/docs/guides/text-to-image#prompts-guiding-the-generation) (guide)
- [Image Outpainting: Other Critical Parameters](https://runware.ai/docs/guides/image-outpainting#other-critical-parameters) (guide)

### [width](https://runware.ai/docs/models/openai-sora-2#request-width)

- **Type**: `integer`
- **Required**: true
- **Paired with**: height

Width of the generated media in pixels.

**Learn more** (2 resources):

- [Image To Image: Dimensions Changing Aspect Ratio](https://runware.ai/docs/guides/image-to-image#dimensions-changing-aspect-ratio) (guide)
- [Image Outpainting: Dimensions Critical For Outpainting](https://runware.ai/docs/guides/image-outpainting#dimensions-critical-for-outpainting) (guide)

### [height](https://runware.ai/docs/models/openai-sora-2#request-height)

- **Type**: `integer`
- **Required**: true
- **Paired with**: width

Height of the generated media in pixels.

**Learn more** (2 resources):

- [Image To Image: Dimensions Changing Aspect Ratio](https://runware.ai/docs/guides/image-to-image#dimensions-changing-aspect-ratio) (guide)
- [Image Outpainting: Dimensions Critical For Outpainting](https://runware.ai/docs/guides/image-outpainting#dimensions-critical-for-outpainting) (guide)

### [duration](https://runware.ai/docs/models/openai-sora-2#request-duration)

- **Type**: `float`

Length of the generated video in seconds. The total number of frames produced is determined by duration multiplied by the model's frame rate (fps).

**Allowed values**: `4` `8` `12` `16` `20`

## Response Parameters

### [taskType](https://runware.ai/docs/models/openai-sora-2#response-tasktype)

- **Type**: `string`
- **Required**: true
- **Value**: `videoInference`

Type of the task.

### [taskUUID](https://runware.ai/docs/models/openai-sora-2#response-taskuuid)

- **Type**: `string`
- **Required**: true
- **Format**: `UUID v4`

UUID of the task.

### [videoUUID](https://runware.ai/docs/models/openai-sora-2#response-videouuid)

- **Type**: `string`
- **Required**: true
- **Format**: `UUID v4`

UUID of the output video.

### [videoURL](https://runware.ai/docs/models/openai-sora-2#response-videourl)

- **Type**: `string`
- **Format**: `URI`

URL of the output video.

### [videoBase64Data](https://runware.ai/docs/models/openai-sora-2#response-videobase64data)

- **Type**: `string`

Base64-encoded video data.

### [videoDataURI](https://runware.ai/docs/models/openai-sora-2#response-videodatauri)

- **Type**: `string`
- **Format**: `URI`

Data URI of the output video.

### [seed](https://runware.ai/docs/models/openai-sora-2#response-seed)

- **Type**: `integer`

The seed used for generation. If none was provided, shows the randomly generated seed.

### [NSFWContent](https://runware.ai/docs/models/openai-sora-2#response-nsfwcontent)

- **Type**: `boolean`

Flag indicating if NSFW content was detected.

### [cost](https://runware.ai/docs/models/openai-sora-2#response-cost)

- **Type**: `float`

Task cost in USD. Present when `includeCost` is set to `true` in the request.

### [outputs](https://runware.ai/docs/models/openai-sora-2#response-outputs)

- **Path**: `outputs`
- **Type**: `object (1 property)`

#### [videoId](https://runware.ai/docs/models/openai-sora-2#response-outputs-videoid)

- **Path**: `outputs.videoId`
- **Type**: `string`

The ID of the generated video. Can be used for remixing or extension.

## Examples

### Bioluminescent Monsoon Market Night (Text to Video)

[Watch video](https://assets.runware.ai/examples/openai-sora-2/d90f4a57-48a0-4e91-a296-1b86d566afec.mp4)

**Request**:

```json
{
  "taskType": "videoInference",
  "taskUUID": "29ac7910-ce46-402c-b27b-4557b3297cdb",
  "model": "openai:3@1",
  "positivePrompt": "A cinematic nighttime market in a flooded tropical megacity, lit by hanging lanterns and glowing bioluminescent fruit. Warm rain falls in sheets, splashing into shallow water that mirrors neon signs in an invented script. A young street cartographer in a translucent jade rain cape stands beneath a patched umbrella, unrolling a luminous map that pulses softly with living ink. Around them, vendors sell glass eels in tanks, steaming dumplings, and tiny mechanical birds; woven awnings ripple in the storm wind. The camera begins with a low tracking shot through puddles and drifting flower petals, then rises into a medium orbit around the cartographer as passing cyclists send arcs of water through frame. Naturalistic motion, convincing wet surfaces, layered depth, subtle handheld feel, atmospheric steam, expressive faces, realistic cloth and hair behavior, highly detailed reflections. Sound design: steady monsoon rain, distant thunder, market chatter, bicycle bells, sizzling food stalls, water sloshing underfoot, soft electrical hum from signs, and a brief whispered line from the cartographer: \"The old canals still remember the stars.\" Moody, lush, dreamlike, premium cinematic quality.",
  "width": 1280,
  "height": 720,
  "duration": 8
}
```

**Response**:

```json
{
  "taskType": "videoInference",
  "taskUUID": "29ac7910-ce46-402c-b27b-4557b3297cdb",
  "videoUUID": "df343c44-b6a2-4218-ac27-ec0e497f2b15",
  "videoURL": "https://vm.runware.ai/video/os/a10d08/ws/5/vi/df343c44-b6a2-4218-ac27-ec0e497f2b15.mp4",
  "seed": 2034548508,
  "cost": 0.8,
  "outputs": {
    "videoId": "video_69c583cdfc108198b5e6f4412b049a8e0b518c415f2be692"
  }
}
```

---

### Moonlit Desert Observatory Caravan (Image to Video)

[Watch video](https://assets.runware.ai/examples/openai-sora-2/1fe5683e-6029-4668-ac86-bade7f990a3b.mp4)

**Request**:

```json
{
  "taskType": "videoInference",
  "taskUUID": "a1f16825-3b03-4586-9aa5-60cd668d2708",
  "model": "openai:3@1",
  "positivePrompt": "Using the provided first-frame image as the opening composition, create an elegant cinematic nighttime sequence in a moonlit desert. The observatory caravan lanterns flicker warmly against cool blue sand and sky. A light wind moves fabric canopies and loose pages of star charts. One stargazer adjusts the brass telescope while another quietly says, 'There, near Orion,' with natural synchronized dialogue. Include soft footsteps on sand, faint metal creaks from the wagon, gentle fabric rustle, and expansive desert ambience. The camera begins on the established wide frame, then slowly pushes in with subtle parallax, preserving the identity and layout of the frame image while adding lifelike motion, atmospheric depth, and realistic lighting.",
  "width": 1280,
  "height": 720,
  "duration": 8,
  "inputs": {
    "frameImages": [
      {
        "image": "https://assets.runware.ai/assets/inputs/79b3290f-376d-4375-97bb-5bc63c5ac27b.jpg",
        "frame": "first"
      }
    ]
  }
}
```

**Response**:

```json
{
  "taskType": "videoInference",
  "taskUUID": "a1f16825-3b03-4586-9aa5-60cd668d2708",
  "videoUUID": "86d42e0f-2a8e-4529-a853-3c0d5579d7c4",
  "videoURL": "https://vm.runware.ai/video/os/a02d21/ws/5/vi/86d42e0f-2a8e-4529-a853-3c0d5579d7c4.mp4",
  "seed": 1416485965,
  "cost": 0.8,
  "outputs": {
    "videoId": "video_69c583d6b9548191a95376266987a99a09d872992f4c9d27"
  }
}
```

---

### Bioluminescent Tidepool Research Station (Text to Video)

[Watch video](https://assets.runware.ai/examples/openai-sora-2/5ac03c20-b9cf-40e6-b1ac-8e3041f5b6d6.mp4)

**Request**:

```json
{
  "taskType": "videoInference",
  "taskUUID": "21c5f6fe-f7ad-45d5-8e55-09a261816398",
  "model": "openai:3@1",
  "positivePrompt": "A cinematic night scene at a remote research station built into black volcanic cliffs above a bioluminescent tidepool. A young marine biologist in a translucent rain cape kneels beside the glowing water, scanning drifting organisms with a handheld instrument that emits soft amber light. Waves roll in below with realistic splashes and foam, illuminating the rocks in electric blue pulses. In the background, slender wind turbines turn slowly through sea mist, red safety beacons blinking faintly. The camera begins with a wide establishing shot, then glides closer in a smooth low tracking move, revealing glass sample vials, wet metal railings, and ripples spreading across the pool. The scientist quietly says, 'It blooms earlier every year,' while distant thunder rumbles, gulls cry overhead, light rain taps on the cape, and the ocean breathes with layered surf ambience. Moody, photoreal, atmospheric, precise reflections, natural motion, believable physics, fine environmental detail, restrained color palette with vivid bioluminescent highlights.",
  "width": 1280,
  "height": 720,
  "duration": 8
}
```

**Response**:

```json
{
  "taskType": "videoInference",
  "taskUUID": "21c5f6fe-f7ad-45d5-8e55-09a261816398",
  "videoUUID": "207e4f92-913f-4fd5-ad63-82bb78a1190a",
  "videoURL": "https://vm.runware.ai/video/os/a04d20/ws/5/vi/207e4f92-913f-4fd5-ad63-82bb78a1190a.mp4",
  "seed": 1146755284,
  "cost": 0.8,
  "outputs": {
    "videoId": "video_69c581b6fefc8190a3e97983013919f30610bb14c4e5f04b"
  }
}
```

---

### Bioluminescent Mangrove Monsoon Chase (Text to Video)

[Watch video](https://assets.runware.ai/examples/openai-sora-2/9701c6d0-979a-4a10-9184-5fa4c05e3480.mp4)

**Request**:

```json
{
  "taskType": "videoInference",
  "taskUUID": "7ad611c9-522d-4c20-a6bf-445ee57ae6d3",
  "model": "openai:3@1",
  "positivePrompt": "A cinematic nighttime monsoon in a bioluminescent mangrove forest on a distant tropical world. A narrow rescue skiff glides fast through shallow floodwater between twisted roots glowing electric teal and amber. Rain lashes the surface, creating dense ripples and spray illuminated by flashes of lightning. The camera begins low over the water, tracking beside the boat, then arcs forward to reveal a teenage navigator in a translucent hooded rain cloak steering with intense focus while a hovering lantern drone flickers nearby. In the background, stilt houses sway gently and paper wind charms clatter in the storm. Realistic water physics, believable boat motion, wet fabric behavior, detailed reflections, cinematic depth, moody contrast, subtle handheld energy. Natural synchronized soundscape: heavy rain, distant thunder, motor hum, water slaps against the hull, rattling wood, breath and gear movement. The navigator shouts over the storm: 'Hold the beacon steady—we're almost there!' End on a wide shot as the skiff emerges into an open lagoon filled with glowing pollen mist and towering mangrove silhouettes.",
  "width": 1280,
  "height": 720,
  "duration": 8
}
```

**Response**:

```json
{
  "taskType": "videoInference",
  "taskUUID": "7ad611c9-522d-4c20-a6bf-445ee57ae6d3",
  "videoUUID": "27ed1df2-bd31-4251-87d4-6f5aae5b64c9",
  "videoURL": "https://vm.runware.ai/video/os/a23d05/ws/5/vi/27ed1df2-bd31-4251-87d4-6f5aae5b64c9.mp4",
  "seed": 1169552849,
  "cost": 0.8,
  "outputs": {
    "videoId": "video_69c5823ad9a88190a42162eca23ab5b2064fad7b92467aaa"
  }
}
```