Many models need input from the client: a webcam feed for style transfer, a camera stream for a world model, audio for speech processing. In Reactor, you declare Input tracks the same way you declare Output tracks. Video frames are np.ndarray with shape (height, width, 3), dtype uint8, in RGB channel order. Input buffers are cleaned up automatically between client sessions, so you don’t need to handle disconnection or reset anything.
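Since frames are plain NumPy arrays, a quick sketch of what a single input frame looks like (the 720p resolution here is just an example, not anything Reactor requires):

```python
import numpy as np

# A hypothetical 720p webcam frame in the format Reactor delivers:
# shape (height, width, 3), dtype uint8, RGB channel order.
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
frame[..., 0] = 255  # a solid red frame: R channel maxed, G and B zero

assert frame.shape == (720, 1280, 3)
assert frame.dtype == np.uint8
```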

Declaring input tracks

from dataclasses import dataclass
from reactor_runtime.interface import Input, Video

@dataclass
class MyInput(Input):
    webcam: Video
Each field is a named media channel. Annotate it on your model class to wire it up:

class MyModel(ReactorPipeline):
    input: MyInput
    state: MyState

When a client connects and sends their webcam stream, frames arrive in the webcam buffer automatically. You access them via self.input.webcam.

Reading frames

Use try_read() to read input frames inside inference(). It always returns the most recent frames and discards any older backlog, so your model always processes the freshest input.

try_read()

Returns a list of the n most recent frames, or None if fewer than n frames are available. When None is returned the buffer is left untouched, so nothing is lost.

def inference(self):
    while True:
        frames = self.input.webcam.try_read()
        while frames is None:
            yield Idle
            frames = self.input.webcam.try_read()
        result = self.pipe.forward(frames[0])
        yield MyOutput(main_video=result)

When no frames are available, the inner while loop yields Idle until a frame arrives. Each yield Idle hands control back to the runtime so events can be delivered and the output stream stays smooth. Without it, the generator would spin without yielding, starving the event loop.

By default try_read() reads one frame (n=1). Pass n to read multiple frames at once, e.g. self.input.webcam.try_read(n=4). The return value is always a list when frames are available, even for n=1.
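The read-latest, drop-backlog behavior can be illustrated with a toy in-memory track. This is an illustrative model only, not Reactor's actual buffer implementation:

```python
from collections import deque

# Toy model of try_read() semantics: return the n most recent frames
# and drop the older backlog, or return None (leaving the buffer
# untouched) when fewer than n frames are queued.
class ToyTrack:
    def __init__(self):
        self._buf = deque()

    def push(self, frame):
        self._buf.append(frame)

    def try_read(self, n=1):
        if len(self._buf) < n:
            return None              # buffer untouched, nothing lost
        latest = list(self._buf)[-n:]  # the n freshest frames
        self._buf.clear()            # older backlog is discarded
        return latest

track = ToyTrack()
assert track.try_read() is None       # empty buffer: None, non-destructive
for f in ["f1", "f2", "f3", "f4", "f5"]:
    track.push(f)
assert track.try_read(n=1) == ["f5"]  # freshest frame; f1..f4 dropped
assert track.try_read(n=1) is None    # backlog was cleared by the read
```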

Full example: video-to-video

from dataclasses import dataclass
from reactor_runtime.interface import (
    Input, Output, Video, ReactorPipeline, InputState, InputField, Idle,
)

@dataclass
class V2VInput(Input):
    camera: Video

@dataclass
class V2VOutput(Output):
    main_video: Video

@dataclass
class V2VState(InputState):
    style: str = InputField(default="none", choices=["none", "oil_paint", "sketch"])

class V2VModel(ReactorPipeline):
    input: V2VInput
    state: V2VState
    fps = 30

    def load(self, config):
        self.pipe = load_style_model(config["checkpoint"])

    def inference(self):
        while True:
            frames = self.input.camera.try_read()
            while frames is None:
                yield Idle
                frames = self.input.camera.try_read()
            result = self.pipe.apply(frames[0], style=self.state.style)
            yield V2VOutput(main_video=result)
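load_style_model is a placeholder for whatever model-loading code you bring. For local experimentation, a minimal NumPy stand-in (entirely hypothetical, not part of Reactor) might look like:

```python
import numpy as np

# Stand-in for the hypothetical load_style_model() above, so the
# pipeline can be exercised without a real checkpoint. The style names
# match the choices declared on V2VState.
class StubStylePipe:
    def apply(self, frame: np.ndarray, style: str) -> np.ndarray:
        if style == "sketch":
            # crude "sketch": luminance replicated across all 3 channels
            gray = frame.mean(axis=2).astype(np.uint8)
            return np.stack([gray] * 3, axis=2)
        return frame  # "none" (and "oil_paint", unimplemented here) pass through

def load_style_model(checkpoint: str) -> StubStylePipe:
    return StubStylePipe()

pipe = load_style_model("style.ckpt")
out = pipe.apply(np.full((4, 4, 3), 200, dtype=np.uint8), style="sketch")
assert out.shape == (4, 4, 3) and out.dtype == np.uint8
```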
