This guide walks through the full deployment flow: installing the CLI, authenticating, registering a model, integrating the runtime, uploading weights, pushing a Docker image, and publishing.

Prerequisites

  • Python 3.10+
  • Docker installed and running
  • A Reactor partner account and API key (request access at reactor.inc)

1. Install the CLI

pip install reactor-runtime
Verify the installation:
reactor --version

2. Authenticate

reactor auth login
You’ll be prompted for your API key. This stores credentials locally and configures access to your private container registry and weight storage. Check your auth status at any time:
reactor auth status

3. Register your model

reactor model register --name my-model
This provisions the infrastructure for your model: a storage prefix for weights and a container registry for Docker images.

4. Set up your project

reactor init --model my-model
This downloads a project template with a Dockerfile, example model code, and configuration files. See Project Setup for details on the generated structure.

5. Integrate the runtime

Build your model using the Reactor Runtime. At minimum, you need an inference() loop that yields output frames:
from dataclasses import dataclass

from reactor_runtime.interface import Output, Video, ReactorPipeline, InputState, InputField

@dataclass
class MyOutput(Output):
    # One video stream in the output.
    main_video: Video

@dataclass
class MyState(InputState):
    # An input field with a default value.
    prompt: str = InputField(default="a sunny meadow")

class MyModel(ReactorPipeline):
    state: MyState

    def load(self, config):
        # Runs once at startup to prepare the model.
        self.pipe = load_my_model(config["checkpoint"])

    def inference(self):
        # Generator loop: each yield produces one output frame.
        while True:
            frame = self.pipe.forward(prompt=self.state.prompt)
            yield MyOutput(main_video=frame)
See the Runtime tab for the full tutorial on building models.
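The generator pattern above can be exercised in isolation before wiring in the real runtime. The sketch below replaces the Reactor classes and the model with trivial stand-ins (`FakePipe`, `FakeModel`, and `FakeFrame` are illustrative stubs, not part of `reactor_runtime`):

```python
from dataclasses import dataclass

# Stand-ins for the runtime types so the generator pattern runs anywhere.
@dataclass
class FakeFrame:
    prompt: str

class FakePipe:
    def forward(self, prompt: str) -> FakeFrame:
        return FakeFrame(prompt=prompt)

class FakeModel:
    """Mimics the load()/inference() shape of a ReactorPipeline."""
    def __init__(self, prompt: str):
        self.prompt = prompt
        self.pipe = FakePipe()

    def inference(self):
        # Infinite generator: the caller pulls frames one at a time.
        while True:
            yield self.pipe.forward(prompt=self.prompt)

model = FakeModel(prompt="a sunny meadow")
gen = model.inference()
frames = [next(gen) for _ in range(3)]  # the runtime pulls frames like this
```

Because `inference()` is an infinite generator, the caller controls pacing by how fast it calls `next()`; your real loop should yield one `MyOutput` per frame.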

6. Upload weights

reactor model weights --model my-model --version v1 --source s3://your-bucket/weights/
This initiates a server-side S3 copy from your bucket to Reactor’s storage. For local files, use direct upload:
reactor model weights --model my-model --version v1 --source ./weights/
See Weights Upload for details on both methods.
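Before uploading, it can be worth recording checksums of your local weight files so a version can be verified later. This is a generic `hashlib` sketch, not a Reactor feature:

```python
import hashlib
from pathlib import Path

def sha256_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large weight shards never sit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def checksum_dir(weights_dir: str) -> dict[str, str]:
    """Map each file under weights_dir (relative path) to its SHA-256 digest."""
    root = Path(weights_dir)
    return {
        str(p.relative_to(root)): sha256_file(p)
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }
```

Saving `checksum_dir("./weights")` alongside each version gives you a manifest to compare against after a copy or re-download.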

7. Test locally (optional)

If you have a local GPU, you can verify your model before deploying:
docker build -t my-model:dev .
docker run --gpus all -v "$(pwd)/weights:/weights" -p 8080:8080 my-model:dev
Then connect a frontend using the SDK’s local mode:
import { ReactorProvider, ReactorView } from "@reactor-team/js-sdk";

<ReactorProvider modelName="my-model" local={true} autoConnect={true}>
  <ReactorView className="w-full aspect-video" />
</ReactorProvider>
This connects directly to your local container with no authentication needed. See Local Testing for more details. No local GPU? Skip this and deploy directly.
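When scripting local tests, it helps to wait for the container's port (8080 in the docker run command above) to accept connections before pointing a client at it, rather than racing the startup. This is a generic helper, not part of the SDK:

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 60.0) -> bool:
    """Poll until a TCP port accepts connections, or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.5)
    return False
```

For example, `wait_for_port("localhost", 8080)` right after `docker run` blocks until the container is listening.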

8. Build and push your Docker image

# Authenticate Docker with your registry
reactor auth docker-login

# Build your image
docker build -t my-model:v1 .

# Tag for your registry
docker tag my-model:v1 <your-registry>/my-model:v1

# Push
docker push <your-registry>/my-model:v1
The reactor init template includes a Dockerfile optimized for Reactor deployments. See Docker Deployment for best practices.
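For orientation, a Dockerfile for this kind of GPU inference container often looks like the sketch below; the base image, dependency layout, and entrypoint here are assumptions for illustration, so treat the template generated by reactor init as authoritative:

```dockerfile
# Illustrative only -- the reactor init template is the real reference.
FROM nvidia/cuda:12.1.1-runtime-ubuntu22.04

RUN apt-get update && apt-get install -y python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8080
CMD ["python3", "main.py"]
```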

9. Publish

reactor model publish --model my-model --version v1
Reactor provisions GPU nodes, pulls your image and weights, starts your container, and routes traffic. Your model is live and accepting sessions in under 3 minutes. Watch the deployment in real time:
reactor model status --model my-model --watch
See Publishing for details on updates and zero-downtime rollouts.
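If you gate a CI pipeline on the deployment going live, a small generic poll helper around the status command works well. The helper below is a sketch; the `is_live` check you pass in would shell out to `reactor model status` and parse its output, whose exact format we don't assume here:

```python
import time
from typing import Callable

def poll_until(check: Callable[[], bool], interval: float = 5.0,
               timeout: float = 300.0) -> bool:
    """Call check() every `interval` seconds until it returns True or timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False
```

Usage might look like `poll_until(lambda: model_is_live("my-model"), interval=10)`, where `model_is_live` is your own (hypothetical) wrapper around the CLI.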

10. Connect your frontend

Once your model is live, connect to it with the Reactor SDK:
import { ReactorProvider, ReactorView } from "@reactor-team/js-sdk";

<ReactorProvider modelName="my-model" autoConnect={true}>
  <ReactorView className="w-full aspect-video" />
</ReactorProvider>
Your model is now streaming real-time video to clients. See the SDK docs for the full API reference.