These docs are in early access. The Development section (runtime, local testing, model building) is production-ready. The Deployment section describes features that are rolling out to partners. Contact us for access.
Reactor is infrastructure for real-time video and world models. You write a Python model, test it locally, and deploy to Reactor’s global GPU network. We handle WebRTC streaming, session management, and delivery.

How it works

1. Build your model

Wrap your inference pipeline with the Reactor Runtime. Yield frames from a Python generator and the runtime streams them to clients over WebRTC.
# ReactorPipeline and MyOutput are provided by the Reactor Runtime (installed in step 2).
class MyModel(ReactorPipeline):
    def inference(self):
        # Runs for the lifetime of a session; yield one output per generated frame.
        while True:
            frame = self.pipe.forward(prompt=self.state.prompt)
            yield MyOutput(main_video=frame)
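
The runtime drives this generator for each session, pulling one frame per iteration and streaming it out. A minimal stand-in sketch of that pattern in pure Python, with no Reactor dependencies (`DummyPipe` and `Frame` are hypothetical placeholders for your own pipeline and output types):

```python
from dataclasses import dataclass
from itertools import islice

@dataclass
class Frame:
    index: int
    prompt: str

class DummyPipe:
    """Stand-in inference pipeline: returns one frame per forward() call."""
    def __init__(self):
        self.count = 0

    def forward(self, prompt):
        frame = Frame(self.count, prompt)
        self.count += 1
        return frame

def inference(pipe, prompt):
    # Like MyModel.inference above: an infinite generator, one frame per iteration.
    while True:
        yield pipe.forward(prompt=prompt)

# The runtime pulls frames lazily, as fast as clients can consume them:
frames = list(islice(inference(DummyPipe(), "a red cube"), 3))
print([f.index for f in frames])  # → [0, 1, 2]
```

Because the generator is infinite and lazy, no frames are computed ahead of demand; the consumer's pace sets the generation pace.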

2. Test locally

Run your model on your own machine and connect a frontend.
pip install reactor-runtime
reactor init my-model
reactor run

3. Deploy

Authenticate, register your model, upload your weights, and publish your Docker image. Your model is live on production GPUs in under 3 minutes.
reactor auth login
reactor model register --name my-model
reactor model weights --model my-model --version v1 --source ./weights/
reactor model publish --model my-model --version v1

4. Connect clients

Use the JavaScript or Python SDK to stream your model’s output to any application.
<ReactorProvider modelName="my-model" autoConnect={true}>
  <ReactorView className="w-full aspect-video" />
</ReactorProvider>
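
Whichever SDK you use, your app ultimately receives a stream of frames in the order the model yields them. A conceptual, dependency-free sketch of that receive loop (`FakeSession` is a made-up stand-in for illustration, not the real SDK API):

```python
# Conceptual stand-in for a client receive loop — not the real Reactor SDK.
from collections import deque

class FakeSession:
    """Simulates a connected session delivering frames in order."""
    def __init__(self, frames):
        self._incoming = deque(frames)

    def recv(self):
        # The real transport would block on a WebRTC track; here we just pop.
        return self._incoming.popleft() if self._incoming else None

session = FakeSession(frames=["frame-0", "frame-1", "frame-2"])
received = []
while (frame := session.recv()) is not None:
    received.append(frame)  # render or process each frame as it arrives
print(received)  # → ['frame-0', 'frame-1', 'frame-2']
```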

Why Reactor

Sub-50ms streaming

Frames delivered over WebRTC as they are generated. Client inputs received live.

Stateful sessions

Each client gets isolated state, managed from connection to cleanup.
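
Conceptually, per-session isolation works like the following sketch (hypothetical names, not Reactor's actual implementation): each connecting client gets its own state object, and cleanup discards it when the client disconnects.

```python
# Hypothetical sketch of per-session state isolation — not the Reactor API.
from dataclasses import dataclass

@dataclass
class SessionState:
    prompt: str = ""  # per-client fields your model reads in inference()

class SessionManager:
    """One state object per connected client, removed on cleanup."""
    def __init__(self):
        self._sessions: dict[str, SessionState] = {}

    def connect(self, session_id: str) -> SessionState:
        return self._sessions.setdefault(session_id, SessionState())

    def cleanup(self, session_id: str) -> None:
        self._sessions.pop(session_id, None)

mgr = SessionManager()
a = mgr.connect("client-a")
b = mgr.connect("client-b")
a.prompt = "a red cube"
print(b.prompt)  # → "" — one client's state never leaks into another's
```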

Global GPU network

Nodes in every major region. A client in Tokyo connects to a GPU in Tokyo.

No transport code

You never touch WebRTC, WebSockets, or video encoding. Reactor handles it.

Live in minutes

Publish and your model is running on production GPUs in under 3 minutes.

You own your model

Your weights, your inference logic. Reactor never accesses or trains on your data.

Get started

Development

Install the runtime, build your model, and test locally.

Deployment

Authenticate, upload weights, and go live on Reactor’s GPUs.