What is Runware and what can I do with it?
Runware is a high-performance AI inference platform for generating images and videos at scale. You can use our API to create media from text, images, or video inputs using hundreds of thousands of models — instantly and affordably.
How fast is Runware compared to other platforms?
Thanks to our Sonic Inference Engine® and custom hardware, we offer industry-leading speeds: LoRA cold starts in 0.1s, checkpoint loading in 0.5s, and image/video generation in under a second. We outperform traditional cloud GPUs by 20x or more.
Is there a free trial or credit to start with?
Yes — you can generate around 1000 images on us to test the platform with no commitment.
How is pricing calculated?
Pricing depends on the model, resolution, and duration of your media. Use the Playground to test different configurations and see exact costs in real time.
Do I need to commit to a monthly plan?
No. Runware operates on a pay-as-you-go model. There are no long-term contracts or minimum commitments.
Can I get volume discounts or credits?
Yes. If you're generating media at scale, we offer up to $250K in bonus credits based on your monthly spend. Contact us to discuss volume pricing.
Why is Runware cheaper than other providers?
We’ve optimized the entire stack, from custom hardware to green energy and software orchestration, to cut costs. Our energy-efficient infrastructure means your usage can cost less than the electricity bill alone would on other platforms.
What kind of image generation can I do with the API?
You can generate images from text prompts, apply ControlNet guidance, and use LoRAs, masks, and prompt weighting. Everything is configurable: width, height, steps, seed, model version, and more.
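For illustration, a text-to-image request might look like the sketch below. The endpoint, task field names, and the AIR model ID are assumptions based on a typical array-of-tasks JSON shape, so verify them against the API reference.

```python
# Sketch of a Runware text-to-image request over REST (Python + requests).
# Endpoint, field names, and the model AIR ID are illustrative assumptions;
# consult the official API reference for the authoritative schema.
import uuid
import requests

API_KEY = "YOUR_API_KEY"  # placeholder

task = {
    "taskType": "imageInference",
    "taskUUID": str(uuid.uuid4()),  # client-generated ID to match the response
    "positivePrompt": "a lighthouse at dawn, volumetric fog",
    "model": "runware:100@1",       # example AIR ID; any supported model works
    "width": 1024,
    "height": 1024,
    "steps": 25,
    "seed": 42,                     # fix the seed for reproducible output
    "numberResults": 1,
}

resp = requests.post(
    "https://api.runware.ai/v1",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=[task],                    # the body is an array of task objects
    timeout=60,
)
resp.raise_for_status()
print(resp.json())                  # each result should include an image URL
```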
Can I use models from the Stable Diffusion ecosystem?
Yes. We support 300K+ models including custom checkpoints, LoRAs, ControlNets, VAEs, and embeddings. You can use any of them instantly via our API.
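As a sketch of how ecosystem resources slot in, a community checkpoint and a LoRA could be referenced inside the same task shape; the AIR IDs and the `lora` field below are hypothetical placeholders, shown only to illustrate the pattern.

```python
# Sketch: pointing the same imageInference task at a community checkpoint
# and stacking a LoRA on top. IDs and the "lora" field are hypothetical
# placeholders shown only to illustrate the pattern.
task = {
    "taskType": "imageInference",
    "positivePrompt": "watercolor city skyline",
    "model": "civitai:4201@130072",  # hypothetical checkpoint AIR ID
    "lora": [
        {"model": "civitai:82098@87153", "weight": 0.8},  # hypothetical LoRA
    ],
    "width": 768,
    "height": 768,
}
```

The same pattern would apply to ControlNets, VAEs, and embeddings, each addressed by its own identifier.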
How do I know which model to use?
Our Playground lets you test any model before deploying. You can filter, preview results, and adjust parameters to find the best fit for your project.
What video models are available?
We offer top-tier models like Kling 2.1, Pixverse, Seedance, Veo 3, and more — all accessible via one unified API. You can generate high-quality short videos from text, images, or video inputs.
What are the generation modes for video?
You can generate video from text (T2V), image (I2V), or video (V2V). Advanced modes include lip-sync, multi-shot, Motion Brush, and reference-based consistency.
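A rough sketch of an image-to-video (I2V) task is below. The `videoInference` task type and every field shown are assumptions for illustration, not the confirmed schema.

```python
# Sketch of an image-to-video (I2V) task following the same array-of-tasks
# pattern. The task type and all field names are assumptions for illustration.
import uuid
import requests

API_KEY = "YOUR_API_KEY"  # placeholder

task = {
    "taskType": "videoInference",  # assumed task type
    "taskUUID": str(uuid.uuid4()),
    "positivePrompt": "the boat drifts slowly toward the horizon",
    "model": "klingai:5@3",        # hypothetical AIR ID for a Kling model
    "frameImages": [
        {"inputImage": "https://example.com/boat.png"},  # assumed I2V input field
    ],
    "duration": 5,                 # seconds; limits vary by model
}

resp = requests.post(
    "https://api.runware.ai/v1",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=[task],
    timeout=300,                   # video jobs take longer than images
)
resp.raise_for_status()
print(resp.json())
```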
How long can the videos be?
It depends on the model. Kling 2.1 supports clips up to 5 seconds, while other models reach 10 seconds or more. Check each model’s capabilities in the Playground.
Is lip-sync supported in video generation?
Yes. Models like Kling 2.1 support lip-sync, motion guidance, and first-frame anchoring for better narrative control.
Can I upload my own models?
Yes. You can upload custom checkpoints, LoRAs, ControlNets, and VAEs. They’re automatically optimized for our engine and ready to use instantly.
Are my uploaded models private?
You can choose. Models can be private to your organization or made public for community access.
How do I use custom models in the API?
Each uploaded model gets a unique AIR ID. You can reference this in your API request just like any built-in model.
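For instance, swapping in a custom model is just a matter of changing the model field; the AIR ID below is made up for illustration.

```python
# Sketch: a custom model is addressed by its AIR ID exactly like a built-in
# one. The ID below is a made-up example for illustration only.
task = {
    "taskType": "imageInference",
    "positivePrompt": "product shot, studio lighting",
    "model": "myorg:12345@1",  # hypothetical AIR ID assigned to your upload
    "width": 1024,
    "height": 1024,
}
```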
Can I earn money from my models?
Yes. You can license your models through our platform and earn revenue from every API call. You control pricing and terms.
Will Runware scale with my traffic?
Absolutely. Our architecture auto-scales instantly to match demand, and our parallel processing ensures sub-second generation even at peak usage.
How can I test before integrating?
Use our interactive Playground to experiment with models and configurations. It’s live, real-time, and shows exact output and cost.
How hard is it to integrate the API?
It’s designed for developers. The API is well-documented and supports straightforward JSON-based requests. No complex setup or infrastructure is required.
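As a minimal end-to-end sketch, the whole integration can be one authenticated POST plus reading a result URL from the response. The endpoint, request schema, and response envelope are assumed here; check the docs for the authoritative versions.

```python
# Minimal end-to-end sketch: one POST, then read the result URL.
# Endpoint, request schema, and response envelope are assumptions;
# verify against the official docs before relying on them.
import requests

resp = requests.post(
    "https://api.runware.ai/v1",
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # placeholder key
    json=[{
        "taskType": "imageInference",
        "taskUUID": "00000000-0000-4000-8000-000000000000",  # any client UUID
        "positivePrompt": "isometric pixel-art castle",
        "model": "runware:100@1",  # example AIR ID
        "width": 512,
        "height": 512,
    }],
    timeout=60,
)
resp.raise_for_status()
results = resp.json().get("data", [])          # assumed response envelope
print(results[0]["imageURL"] if results else resp.json())
```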