Reactor handles the entire media pipeline between your model and connected clients. You yield frames and audio samples from Python. Reactor encodes, packetizes, and delivers them over WebRTC with adaptive bitrate and low latency.

Transport

All media is delivered over WebRTC. Clients connect via standard SDP offer/answer negotiation, and frames stream over RTP with sub-50ms delivery latency. The runtime manages the full WebRTC lifecycle: ICE, DTLS, SRTP, and congestion control. Commands and messages between the client and model travel over a WebRTC data channel alongside the media streams.
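Data-channel commands are typically small JSON messages. The envelope below (a `type`/`payload` shape and the helper names) is an illustrative assumption, not Reactor's actual wire protocol:

```python
import json

def encode_command(command_type: str, payload: dict) -> str:
    # Hypothetical envelope; Reactor's real wire format may differ.
    return json.dumps({"type": command_type, "payload": payload})

def decode_command(message: str) -> tuple[str, dict]:
    msg = json.loads(message)
    return msg["type"], msg["payload"]

# A client might send a command like this over the data channel:
wire = encode_command("set_prompt", {"text": "a red cube"})
kind, payload = decode_command(wire)
```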

Video codecs

Reactor supports multiple video codecs, negotiated automatically with each client via SDP:
| Codec | Hardware acceleration | Notes |
| --- | --- | --- |
| VP9 | Some GPUs | Default. Good compression at real-time rates |
| VP8 | No | Broad browser compatibility |
| H.264 | NVENC | Constrained Baseline profile. Widest device support |
| AV1 | No | Best compression. Growing browser support |
The runtime selects the best codec the client supports. VP9 is preferred when available for its balance of quality and compression. H.264 is used as a fallback for maximum device compatibility. On NVIDIA GPUs, H.264 and H.265 encoding is hardware-accelerated via NVENC, with automatic software fallback.

Audio codec

Audio is encoded with Opus at 48 kHz. Opus is the standard WebRTC audio codec and is supported by all modern browsers.
| Property | Value |
| --- | --- |
| Codec | Opus |
| Sample rate | 48 kHz |
| Frame size | 20 ms (960 samples) |
| FEC | Enabled by default |
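The 960-sample frame size follows directly from the sample rate and frame duration:

```python
SAMPLE_RATE_HZ = 48_000  # Opus operates at 48 kHz in WebRTC
FRAME_MS = 20            # one Opus frame spans 20 ms

samples_per_frame = SAMPLE_RATE_HZ * FRAME_MS // 1000  # 960 samples per frame
```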

Resolution

There are no hardcoded resolution limits. The runtime encodes and streams whatever resolution your model produces. If your model yields 720p frames, the stream is 720p. If it yields 4K, the stream is 4K.

Bitrate

The runtime uses adaptive bitrate control based on network conditions. It monitors transport-level feedback and adjusts encoding bitrate dynamically, scaling between 500 kbps and 10 Mbps depending on available bandwidth.
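A caricature of the control loop: ramp toward the current bandwidth estimate while clamping to the supported range. The estimator and the doubling step are assumptions for illustration; the runtime's actual controller is driven by transport-level congestion feedback:

```python
MIN_BPS = 500_000      # 500 kbps floor
MAX_BPS = 10_000_000   # 10 Mbps ceiling

def next_bitrate(current_bps: int, estimated_bandwidth_bps: int) -> int:
    # Move toward the bandwidth estimate, but never more than double per step,
    # and always stay within the supported range.
    target = min(estimated_bandwidth_bps, current_bps * 2)
    return max(MIN_BPS, min(MAX_BPS, target))
```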

Multiple tracks

A model can output any combination of video and audio tracks. Each field on your Output dataclass becomes a separate media track:
```python
from dataclasses import dataclass

# Output, Video, and Audio are provided by the Reactor SDK.
@dataclass
class MyOutput(Output):
    main_video: Video
    secondary_video: Video
    main_audio: Audio
```
Track names (main_video, secondary_video, main_audio) are the identifiers clients use to subscribe to specific streams.
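Because track names map one-to-one to dataclass field names, they can be enumerated with `dataclasses.fields`. This sketch uses stand-in classes for `Output`, `Video`, and `Audio`, which in practice come from the Reactor SDK:

```python
from dataclasses import dataclass, fields

# Stand-ins for the SDK's Output, Video, and Audio types.
class Output: ...
class Video: ...
class Audio: ...

@dataclass
class MyOutput(Output):
    main_video: Video
    secondary_video: Video
    main_audio: Audio

# The field names are exactly the track identifiers clients subscribe to.
track_names = [f.name for f in fields(MyOutput)]
```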

Packet loss recovery

The runtime enables RTP retransmission (RTX) by default. Lost packets are retransmitted from a short buffer (200 ms), keeping streams clean without adding latency for packets that arrive normally.