The `init` command generates a project template with everything you need to build and deploy a model on Reactor.
## Usage
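A typical invocation might look like the following. Note that the binary name `reactor` and the positional project argument are assumptions for illustration; check your install's `--help` output for the exact syntax.

```shell
# Scaffold a new project (binary name and argument are assumptions)
reactor init my-model
cd my-model
```

After this, the generated files described below appear in the project directory.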
## What it generates
### Dockerfile
A multi-stage Dockerfile configured for GPU-based inference. It includes CUDA base images, Python dependency installation, and the correct entrypoint for Reactor's runtime.

### model.py

A minimal `ReactorPipeline` implementation you can use as a starting point. Replace the inference logic with your own model.
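The sketch below illustrates the shape of such a starting point. The real `ReactorPipeline` base class ships with the Reactor runtime; the stub defined here, along with the `setup`/`predict` hook names, is an assumption used only to keep the example self-contained. Consult the generated `model.py` for the actual interface.

```python
# Stand-in for the base class shipped with the Reactor runtime.
# The setup/predict hook names are assumptions for illustration.
class ReactorPipeline:
    def setup(self):
        """Called once at startup; load model weights here."""

    def predict(self, inputs):
        """Called per request; return the model output."""
        raise NotImplementedError


class EchoPipeline(ReactorPipeline):
    """Minimal starting point: replace the logic with your own model."""

    def setup(self):
        # e.g. self.model = load_weights("model.bin")
        self.ready = True

    def predict(self, inputs):
        # Replace this echo with real inference.
        return {"output": inputs}


if __name__ == "__main__":
    pipeline = EchoPipeline()
    pipeline.setup()
    print(pipeline.predict({"text": "hello"}))
```

The split between `setup` (one-time initialization) and `predict` (per-request work) keeps expensive weight loading out of the request path.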
### reactor.yaml
Configuration file specifying your model name, version, and runtime settings. The CLI reads this file for commands like `model publish`.
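A sketch of what this file might contain; every key name below is an assumption, so treat the generated `reactor.yaml` as the authoritative schema.

```yaml
# Illustrative only: field names are assumptions, not the official schema.
name: my-model        # model name read by commands like `model publish`
version: 0.1.0        # model version
runtime:
  gpu: true           # request GPU-based inference
  python: "3.11"      # Python version for the container
```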
## Next steps
- **Runtime Overview**: Learn how to build models with Reactor Runtime.
- **Docker Deployment**: Build and push your Docker image.