This page walks through the model that reactor init generates, line by line. By the end you’ll understand every building block.

The full model

Here’s what reactor init gives you, a complete runnable model in a single file:
pipeline.py
from dataclasses import dataclass
from typing import Any

import numpy as np

from reactor_runtime.interface import InputField, InputState, Output, ReactorPipeline, Video

@dataclass
class MyOutput(Output):
    main_video: Video


@dataclass
class MyState(InputState):
    brightness: float = InputField(default=1.0, ge=0.0, le=2.0, description="Brightness multiplier")


class MyModel(ReactorPipeline):
    state: MyState
    buffer_size = 8

    def load(self, config: dict[str, Any]) -> None:
        self.width = config.get("width", 640)
        self.height = config.get("height", 480)
        self.chunk_size = config.get("chunk_size", 4)

    def inference(self):
        frame_idx = 0
        while True:
            batch, frame_idx = render_batch(
                self.width, self.height, self.chunk_size,
                frame_idx, self.state.brightness,
            )
            yield MyOutput(main_video=batch)

def render_batch(width, height, chunk_size, start_idx, brightness):
    # Animated gradient frames with brightness applied; returns ((N, H, W, 3) uint8, next_frame_index).
    xs = np.linspace(0.0, 1.0, width, dtype=np.float32)
    phases = [((xs + (start_idx + i) * 0.01) % 1.0)[None, :, None] for i in range(chunk_size)]
    frames = np.stack([np.broadcast_to(p, (height, width, 3)) for p in phases])
    return np.clip(frames * 255.0 * brightness, 0.0, 255.0).astype(np.uint8), start_idx + chunk_size
Let’s break it down.

Output tracks

@dataclass
class MyOutput(Output):
    main_video: Video
This declares what media the model sends to clients. Each field is a track, a named media channel. Video means the field carries video frames. You can have multiple tracks:
@dataclass
class MyOutput(Output):
    main_video: Video
    main_audio: Audio
The field names (main_video, main_audio) become the track identifiers that clients use.

Client state

@dataclass
class MyState(InputState):
    brightness: float = InputField(default=1.0, ge=0.0, le=2.0, description="Brightness multiplier")
This declares parameters that clients can change while the model is running. Your model reads them with self.state.brightness inside inference(). The runtime takes care of the rest. We’ll cover how in the Interactive State page.
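To make the bounds concrete, here is a plain-Python sketch of what the ge/le constraints express. This helper (validate_brightness) is hypothetical and not part of reactor_runtime; the runtime presumably enforces these bounds for you when a client sends an update.

```python
# Illustrative only: what InputField(ge=0.0, le=2.0) means for incoming values.
# validate_brightness is a hypothetical stand-in, not reactor_runtime API.
def validate_brightness(value: float, ge: float = 0.0, le: float = 2.0) -> float:
    if not (ge <= value <= le):
        raise ValueError(f"brightness must be in [{ge}, {le}], got {value}")
    return value
```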

The model class

class MyModel(ReactorPipeline):
    state: MyState
state: MyState tells the pipeline the shape of the input state. You get self.state with full IDE autocomplete.

Loading

def load(self, config: dict[str, Any]) -> None:
    self.width = config.get("width", 640)
    self.height = config.get("height", 480)
    self.chunk_size = config.get("chunk_size", 4)
Called once at startup, before any client connects. The config dict comes from config.yml. This is where you load checkpoints, allocate GPU memory, and do any other one-time setup your model needs.
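For reference, a config.yml matching the defaults read above might look like this (the values here are illustrative):

```yaml
width: 640
height: 480
chunk_size: 4
```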

The inference generator

def inference(self):
    frame_idx = 0
    while True:
        batch, frame_idx = render_batch(
            self.width, self.height, self.chunk_size,
            frame_idx, self.state.brightness,
        )
        yield MyOutput(main_video=batch)
This is the heart of the model. It’s a Python generator that runs continuously. Each iteration produces one batch of frames, and yield sends it to the client. The runtime manages the generator’s lifecycle, drives it between client connections, and delivers state updates between iterations. The next page covers inference() in detail: sync vs async, batch yields, skipping frames, and how the lifecycle works.
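The interplay between yield and state updates can be sketched in plain Python. Nothing below is reactor_runtime API; the dict-based state and tuple "frames" are stand-ins that only mirror the shape of inference():

```python
# Plain-Python sketch of the pattern the runtime drives: pull a batch from the
# generator, deliver it, and let state updates land between iterations.
def inference(state):
    frame_idx = 0
    while True:
        # Each tuple stands in for a frame rendered at the current brightness.
        batch = [(frame_idx + i, state["brightness"]) for i in range(4)]
        frame_idx += len(batch)
        yield batch

state = {"brightness": 1.0}
gen = inference(state)
first = next(gen)           # rendered with brightness 1.0
state["brightness"] = 2.0   # a client update lands between iterations
second = next(gen)          # the next batch sees the new value
```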

The manifest

reactor.yaml
model: pipeline:MyModel
name: my-model
config: config.yml
  • model: import path in module:ClassName format.
  • name: human-readable identifier, used in logging and routing.
  • config: path to the model config YAML.

Beyond the generator

ReactorPipeline handles the connection loop and state lifecycle for you. If your model doesn’t fit the generator pattern, or you want full control over what happens between client connections, see ReactorModel.

Next

Now that you understand the pieces, let’s dive deeper into the inference generator.

The Inference Generator

Sync vs async, batch yields, lifecycle, and state consistency.

Interactive State

Add richer parameters with validation and custom logic.