Upload methods
Option 1: S3 import (recommended)
Best for large weight files. The CLI generates a presigned URL from your S3 bucket, and Reactor copies the data server-side. Nothing transits through your local machine.

How it works:

- The CLI uses your local AWS credentials to generate a presigned GET URL (1-hour expiry)
- The CLI sends the presigned URL to the Reactor backend
- Reactor copies directly from your S3 bucket to its own storage
- The CLI returns a job ID for tracking progress
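The first step is equivalent to generating a presigned URL yourself with the AWS CLI. A sketch of the flow, where the `aws s3 presign` call is real but the `reactor model weights import` command name, flags, and the bucket/model names are illustrative, not confirmed by this page:

```shell
# Step 1 equivalent: a presigned GET URL with a 1-hour expiry
aws s3 presign s3://my-bucket/weights/model.safetensors --expires-in 3600

# Steps 2-4: hand the source off to Reactor and capture the job ID
# (command name and flags below are hypothetical)
reactor model weights import s3://my-bucket/weights/model.safetensors \
  --as my-model:v1.0.0
```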
Setting up S3 access
The CLI generates presigned URLs using your own AWS credentials, so in most cases no bucket policy change is needed. If your bucket has restrictive policies (VPC-only access, IP allowlists, or `aws:SourceVpce` conditions), you need to allow Reactor's backend to fetch via the presigned URL. Add this statement to your bucket policy:
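The original statement is not reproduced here. A sketch of what such a statement typically looks like: since presigned URLs already carry the signer's credentials, restrictive setups usually rely on an explicit `Deny`, and the fix is to exempt Reactor's egress range from it. The `203.0.113.0/24` range below is a documentation placeholder; substitute Reactor's actual published range, and adjust the bucket ARN and your own allowed networks:

```json
{
  "Sid": "DenyOutsideAllowedNetworks",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::my-bucket/*",
  "Condition": {
    "NotIpAddress": {
      "aws:SourceIp": ["10.0.0.0/8", "203.0.113.0/24"]
    }
  }
}
```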
If you can’t modify your bucket policy, use direct upload instead. You can also copy weights to a less restrictive staging bucket first.
Troubleshooting S3 imports
| Error | Fix |
|---|---|
| Access denied during import | Add the bucket policy above, or use direct upload instead. |
| Key not found | Verify the S3 path with `aws s3 ls`. |
| Import stuck in pending | Large transfers take time. Check progress with `reactor model weights --status <job-id>`. |
| Expired token | Retry the command. The CLI generates a fresh presigned URL each time. |
Option 2: Direct upload
Best for smaller files or when weights are only available locally.

Versioning
Weights are versioned using the format `model-name:version`. Versions are arbitrary strings, but we recommend semantic tags such as `v1.0.0` or `v2.1.0-beta`.
Tracking upload progress
Both methods are asynchronous. Track progress with `reactor model weights --status <job-id>`:

| Status | Meaning |
|---|---|
| pending | Job is queued |
| in_progress | Transfer is running |
| completed | Weights are stored and ready |
| failed | Transfer failed. Check the error message in the output. |
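The status command can be wrapped in a simple polling loop. The command itself appears in the troubleshooting table above; the loop, its exit conditions, and the assumption that the output contains one of the status words verbatim are an illustrative sketch:

```shell
# Poll until the job reaches a terminal state (sketch; adjust the interval to taste)
JOB_ID="<job-id>"
while true; do
  status=$(reactor model weights --status "$JOB_ID")
  echo "$status"
  case "$status" in
    *completed*|*failed*) break ;;
  esac
  sleep 30
done
```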
Deleting weights
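This page does not show the delete command. A hypothetical invocation, assuming it mirrors the `model weights` subcommand and the `model-name:version` reference used elsewhere on this page (the `delete` verb and argument form are assumptions, not documented syntax):

```shell
# Hypothetical: remove a specific stored version
reactor model weights delete my-model:v1.0.0
```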
Tips
- Use S3 import when your weights are already in an AWS S3 bucket. It avoids bandwidth costs and is significantly faster for large files.
- Large uploads (50GB+) benefit from the S3 import path since the transfer runs entirely server-side.
- Direct upload works from anywhere. No AWS account required.