Python Library
Introduction
The Runware Python SDK provides a WebSocket-based interface built for Python applications that need AI-powered media processing. Using Python's async/await patterns, the SDK handles connection management, authentication, and error recovery while exposing Runware's capabilities through a clean, Pythonic API.
The SDK works particularly well for server-side applications where you need reliable, high-performance AI integration. Whether you're building a web API with FastAPI, processing images in batches, or adding AI features to an existing application, the SDK provides production-ready reliability without complex setup.
Key SDK benefits
Built for Python
The SDK uses async/await patterns that integrate naturally with modern Python frameworks like FastAPI and asyncio-based applications. Your applications stay responsive even during intensive AI operations thanks to the asynchronous architecture.
Comprehensive type hints provide better IDE support, enable static analysis with tools like mypy, and help catch errors before they reach production.
The API follows Python conventions for method naming and error handling, so it feels natural if you're already comfortable with Python development.
Performance advantages
Persistent WebSocket connections eliminate the connection overhead that occurs with traditional HTTP requests. This provides measurable performance improvements when you're doing multiple operations or batch processing.
Concurrent operations work seamlessly with Python's asyncio, letting you handle multiple requests simultaneously without blocking your application.
When to use the Python SDK
Choose the Python SDK for server-side applications that need reliable AI integration. It's especially effective for web APIs, batch processing systems, and applications where you need robust error handling and automatic retry logic.
Installation
Install the SDK using pip:
pip install runware
Basic setup
The Python SDK supports both environment variable and direct API key configuration. As a security best practice, use an environment variable:
export RUNWARE_API_KEY="your-api-key-here"
Then initialize and use the SDK:
import asyncio
from runware import Runware, IImageInference

async def main():
    # SDK reads RUNWARE_API_KEY automatically
    runware = Runware()
    await runware.connect()

    request = IImageInference(
        positivePrompt="A serene mountain landscape at sunset",
        model="runware:101@1",
        width=1024,
        height=1024
    )

    images = await runware.imageInference(requestImage=request)
    print(f"Generated image: {images[0].imageURL}")

if __name__ == "__main__":
    asyncio.run(main())
The SDK automatically handles connection establishment, authentication, and response parsing, allowing you to focus on your application logic.
Connection management
Automatic connection handling
The Python SDK requires explicit connection establishment but handles reconnection automatically:
from runware import Runware, IImageInference

async def main():
    runware = Runware(api_key="your-api-key-here")

    # Establish connection before making requests
    await runware.connect()

    # Perform operations - connection maintained automatically
    request = IImageInference(
        positivePrompt="A bustling city street at night",
        model="runware:101@1",
        width=1024,
        height=1024
    )
    images = await runware.imageInference(requestImage=request)

    # Connection cleanup (optional - handled automatically)
    await runware.disconnect()
Connection configuration
For applications with specific requirements, customize connection behavior:
runware = Runware(
    api_key="your-api-key-here",
    timeout=120,       # Custom timeout for operations
    max_retries=5,     # Retry attempts for failed requests
    retry_delay=2.0    # Delay between retries in seconds
)
Connection lifecycle is managed explicitly in Python, giving you control over when connections are established and terminated. This is particularly useful in server applications where you want to maintain connections across multiple requests.
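For example, a long-running service can keep a single connected instance and reuse it for every request. The helper below is a minimal sketch of that pattern, assuming the constructor and connect() behavior shown above; the get_runware() name is illustrative, not part of the SDK.

from typing import Optional
from runware import Runware

# Module-level shared instance (illustrative pattern, not an SDK feature)
_runware: Optional[Runware] = None

async def get_runware() -> Runware:
    """Return a shared, already-connected Runware instance, connecting lazily."""
    global _runware
    if _runware is None:
        _runware = Runware()   # reads RUNWARE_API_KEY from the environment
        await _runware.connect()
    return _runware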
Concurrent operations
The SDK's async design excels at handling multiple simultaneous operations:
import asyncio
from runware import Runware, IImageInference, IImageUpscale, IImageBackgroundRemoval

async def main():
    runware = Runware()
    await runware.connect()

    # Execute multiple operations concurrently
    results = await asyncio.gather(
        runware.imageInference(requestImage=IImageInference(
            positivePrompt="Abstract digital art",
            model="runware:101@1",
            width=1024,
            height=1024
        )),
        runware.imageUpscale(upscaleGanPayload=IImageUpscale(
            inputImage="existing-image-uuid",
            upscaleFactor=4
        )),
        runware.imageBackgroundRemoval(removeImageBackgroundPayload=IImageBackgroundRemoval(
            image_initiator="portrait-path.jpg"
        ))
    )

    generated_images, upscaled_image, background_removed = results
This concurrent execution pattern is particularly powerful for batch processing, workflow automation, and applications that need to perform multiple operations on the same or different inputs simultaneously.
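When you have many independent requests, you may also want to cap how many run at once so you don't overwhelm your own infrastructure. The sketch below uses asyncio.Semaphore for that; the concurrency limit of 5 and the helper name generate_many are arbitrary illustrations.

import asyncio
from runware import Runware, IImageInference

async def generate_many(prompts, max_concurrency: int = 5):
    """Generate images for many prompts while capping in-flight requests."""
    runware = Runware()
    await runware.connect()
    semaphore = asyncio.Semaphore(max_concurrency)

    async def generate(prompt: str):
        async with semaphore:  # limit simultaneous requests
            request = IImageInference(
                positivePrompt=prompt,
                model="runware:101@1",
                width=1024,
                height=1024
            )
            return await runware.imageInference(requestImage=request)

    return await asyncio.gather(*(generate(p) for p in prompts))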
Error handling
The SDK provides comprehensive error handling with detailed information for debugging and user feedback:
from runware import Runware, IImageInference

async def main():
    runware = Runware()
    await runware.connect()

    try:
        request = IImageInference(
            positivePrompt="A detailed architectural rendering",
            model="runware:101@1",
            width=1024,
            height=1024
        )
        images = await runware.imageInference(requestImage=request)
        print(f"Success: {len(images)} images generated")
    except Exception as e:
        # Error information available for debugging and user feedback
        print(f"Generation failed: {e}")
        # Handle the error appropriately for your application
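For transient failures such as temporary network issues, you can layer an application-level retry on top of the SDK's own retry settings. The wrapper below is a sketch; the attempt count and backoff values are illustrative.

import asyncio
from typing import Optional
from runware import Runware, IImageInference

async def generate_with_retry(runware: Runware, request: IImageInference,
                              attempts: int = 3, delay: float = 2.0):
    """Retry a generation request a few times with a simple linear backoff."""
    last_error: Optional[Exception] = None
    for attempt in range(1, attempts + 1):
        try:
            return await runware.imageInference(requestImage=request)
        except Exception as e:
            last_error = e
            print(f"Attempt {attempt} failed: {e}")
            if attempt < attempts:
                await asyncio.sleep(delay * attempt)  # wait longer after each failure
    raise last_error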
Batch operation error handling
When processing multiple operations, handle partial failures gracefully:
from runware import Runware

async def process_batch(image_requests):
    runware = Runware()
    await runware.connect()

    results = []
    for i, request in enumerate(image_requests):
        try:
            images = await runware.imageInference(requestImage=request)
            results.append({"index": i, "success": True, "images": images})
        except Exception as e:
            results.append({"index": i, "success": False, "error": str(e)})

    return results
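A possible way to call this helper, using the process_batch function defined above and a few arbitrary example prompts:

import asyncio
from runware import IImageInference

requests = [
    IImageInference(positivePrompt=p, model="runware:101@1", width=1024, height=1024)
    for p in ["A red bicycle", "A lighthouse in fog", "A bowl of ramen"]
]

results = asyncio.run(process_batch(requests))
failed = [r for r in results if not r["success"]]
print(f"{len(results) - len(failed)} succeeded, {len(failed)} failed")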
Integration patterns
FastAPI integration
The SDK integrates seamlessly with FastAPI for building AI-powered web APIs:
from fastapi import FastAPI, HTTPException
from runware import Runware, IImageInference

app = FastAPI()
runware = Runware()

@app.on_event("startup")
async def startup():
    await runware.connect()

@app.on_event("shutdown")
async def shutdown():
    await runware.disconnect()

@app.post("/generate-image")
async def generate_image(prompt: str):
    try:
        request = IImageInference(
            positivePrompt=prompt,
            model="runware:101@1",
            width=1024,
            height=1024
        )
        images = await runware.imageInference(requestImage=request)
        return {"image_url": images[0].imageURL}
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))
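Recent FastAPI versions recommend lifespan handlers over the deprecated on_event hooks. The equivalent connection setup would look roughly like this sketch:

from contextlib import asynccontextmanager
from fastapi import FastAPI
from runware import Runware

runware = Runware()

@asynccontextmanager
async def lifespan(app: FastAPI):
    await runware.connect()      # connect once at startup
    yield
    await runware.disconnect()   # clean up at shutdown

app = FastAPI(lifespan=lifespan)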
Batch processing workflows
For processing large datasets or multiple files:
import asyncio
from pathlib import Path
from runware import Runware, IImageBackgroundRemoval

async def process_images_batch(image_folder: Path, batch_size: int = 10):
    runware = Runware()
    await runware.connect()

    image_files = list(image_folder.glob("*.jpg"))

    for i in range(0, len(image_files), batch_size):
        batch = image_files[i:i + batch_size]

        # Process batch concurrently
        tasks = [
            runware.imageBackgroundRemoval(
                removeImageBackgroundPayload=IImageBackgroundRemoval(
                    image_initiator=str(img)
                )
            ) for img in batch
        ]

        results = await asyncio.gather(*tasks, return_exceptions=True)

        # Handle results and errors
        for j, result in enumerate(results):
            if isinstance(result, Exception):
                print(f"Failed to process {batch[j]}: {result}")
            else:
                print(f"Processed {batch[j]}: {result[0].imageURL}")
Configuration options
Environment-based configuration
The recommended approach for production applications:
import os
from runware import Runware
# Reads from RUNWARE_API_KEY environment variable
runware = Runware()
# Or read the key from a custom environment variable
runware = Runware(api_key=os.getenv("CUSTOM_API_KEY_NAME"))
Programmatic configuration
For applications requiring dynamic configuration:
runware = Runware(
    api_key="your-api-key-here",
    timeout=180,        # 3-minute timeout for complex operations
    max_retries=3,      # Retry attempts for failed operations
    retry_delay=1.5,    # Delay between retries
    base_url="wss://custom-endpoint.com/v1"  # Custom endpoint
)
Timeout configuration is particularly important for applications processing high-resolution images or using complex models that may require extended processing time.
Retry configuration allows you to balance between reliability and responsiveness, with higher retry counts improving success rates in unstable network conditions.
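One common pattern is to drive these options from environment variables so each deployment can be tuned without code changes. A sketch, assuming the constructor options shown above (the variable names are illustrative):

import os
from runware import Runware

runware = Runware(
    timeout=int(os.getenv("RUNWARE_TIMEOUT", "120")),
    max_retries=int(os.getenv("RUNWARE_MAX_RETRIES", "3")),
    retry_delay=float(os.getenv("RUNWARE_RETRY_DELAY", "1.5"))
)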
Type safety and IDE support
The SDK provides comprehensive type hints for better development experience:
from runware import (
    Runware,
    IImageInference,
    IImageUpscale
)
from typing import List

async def generate_and_upscale(prompt: str) -> List[str]:
    runware = Runware()
    await runware.connect()

    # Type-safe request construction
    request: IImageInference = IImageInference(
        positivePrompt=prompt,
        model="runware:101@1",
        width=512,
        height=512
    )
    images = await runware.imageInference(requestImage=request)

    # Upscale the first image
    upscale_request: IImageUpscale = IImageUpscale(
        inputImage=images[0].imageURL,
        upscaleFactor=2
    )
    upscaled = await runware.imageUpscale(upscaleGanPayload=upscale_request)

    return [upscaled[0].imageURL]
Type hints enable static analysis with mypy, better IDE autocompletion, and reduced runtime errors in production applications.
Best practices
Connection lifecycle management
Establish connections early in your application lifecycle, particularly in web applications where you'll handle multiple requests. Reuse connections across requests rather than establishing new ones for each operation.
Handle connection cleanup properly in long-running applications to prevent resource leaks, especially important in server environments.
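One way to guarantee cleanup is to wrap your work in try/finally so disconnect() runs even when an operation raises. A minimal sketch:

import asyncio
from runware import Runware, IImageInference

async def run_job():
    """Guarantee disconnect() even when an operation raises."""
    runware = Runware()
    await runware.connect()
    try:
        request = IImageInference(
            positivePrompt="A quiet harbor at dawn",
            model="runware:101@1",
            width=1024,
            height=1024
        )
        return await runware.imageInference(requestImage=request)
    finally:
        await runware.disconnect()  # always release the connection

asyncio.run(run_job())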
Error resilience
Implement comprehensive error handling that distinguishes between recoverable and non-recoverable errors. Use the SDK's built-in retry mechanisms for transient failures.
Log errors with context including task UUIDs for debugging and monitoring in production environments.
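A sketch of logging failures with request context; the job_id parameter here is an application-side identifier you would supply, not an SDK field.

import logging
from runware import Runware, IImageInference

logger = logging.getLogger("runware-client")

async def generate_logged(runware: Runware, request: IImageInference, job_id: str):
    """Log failures with enough context to trace them later."""
    try:
        return await runware.imageInference(requestImage=request)
    except Exception:
        # job_id is your own correlation ID; the prompt comes from the request object
        logger.exception("Image inference failed (job_id=%s, prompt=%r)",
                         job_id, request.positivePrompt)
        raise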
Performance optimization
Use concurrent operations with asyncio.gather() when processing multiple independent requests to maximize throughput.
Configure appropriate timeouts based on your operation types and infrastructure requirements.
Batch operations when possible to reduce connection overhead and improve overall system efficiency.