Tetra is a Python library for distributed inference and serving of machine learning models. It provides a flexible and efficient way to run inference across multiple machines.
- Python 3.9 or higher
- Poetry (for dependency management)
- Docker (optional, for containerized deployment)
- RunPod account and API key
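Before installing anything, you can confirm that the interpreter you plan to use meets the Python 3.9+ requirement. This is a quick standalone check, nothing Tetra-specific:

```python
# Verify the active interpreter satisfies the Python 3.9+ requirement.
import sys

if sys.version_info < (3, 9):
    raise SystemExit(f"Tetra requires Python 3.9+, found {sys.version.split()[0]}")
print("Python version OK:", sys.version.split()[0])
```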
- Clone the repository:
gh repo clone runpod/Tetra && cd Tetra
- Install dependencies:
For core Tetra functionality only:
poetry install
For running examples (includes additional ML dependencies):
poetry install --with examples
- Activate the virtual environment:
poetry shell
- Run an example:
python examples/example.py
tetra/
├── tetra/ # Main package directory
├── protos/ # Protocol buffer definitions
├── examples/ # Example usage and demos
│ ├── example.py
│ └── image_gen.py
├── pyproject.toml # Project dependencies and metadata
└── Dockerfile # Container definition for deployment
The examples/ directory contains sample code demonstrating how to use Tetra:
- example.py: Basic usage example
- image_gen.py: Example of image generation using Tetra
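To give a rough idea of what code built on Tetra looks like, here is a minimal sketch of the remote-execution pattern. The import path, the remote decorator, its parameters, and the config keys are assumptions for illustration only, not Tetra's confirmed interface; refer to examples/example.py for the actual API.

```python
# Hypothetical sketch only: the import, decorator signature, and config keys
# below are assumptions, not Tetra's documented API. See examples/example.py
# for real usage.
import os

from tetra import remote  # assumed import path

gpu_config = {
    "api_key": os.environ.get("RUNPOD_API_KEY"),  # read from your .env (see below)
    "gpu": "NVIDIA A100",                         # assumed config key
}

@remote(resource_config=gpu_config, dependencies=["torch"])  # assumed signature
def gpu_check():
    # Intended to run on a provisioned RunPod worker, not the local machine.
    import torch
    return torch.cuda.get_device_name(0) if torch.cuda.is_available() else "cpu"

if __name__ == "__main__":
    print(gpu_check())
```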
To contribute to Tetra:
- Create a virtual environment and install dependencies:
poetry install
- Run tests:
poetry run pytest
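If you are adding the first tests, a smoke test that simply imports the package is a reasonable starting point. The file below is a suggestion, not part of the repository layout shown above:

```python
# tests/test_import.py -- suggested smoke test, not an existing file.
def test_package_imports():
    # The main package lives in tetra/, so a bare import should succeed
    # once dependencies are installed with `poetry install`.
    import tetra  # noqa: F401
```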
Build the Docker image:
docker build -t tetra .
Run the container with your RunPod API key:
docker run -it --env-file .env tetra
If you encounter platform compatibility issues, you can try:
- Specifying the platform explicitly when building:
docker build --platform linux/amd64 -t tetra .
- Running the container with the platform specified:
docker run --platform linux/amd64 -it --env-file .env tetra
- For M1/M2 Mac users:
  - The PyTorch image is built for AMD64, but will run on ARM64 with emulation
  - You may see a warning about platform mismatch, but it should still work
This project is licensed under the MIT License.
Contributions are welcome! Please feel free to submit a Pull Request.
For support, please open an issue on GitHub.
The examples and many features of Tetra require a RunPod API key to function. To set this up:
- Create a RunPod account at runpod.io
- Go to your account settings and generate an API key
- Create a .env file in the project root:
echo "RUNPOD_API_KEY=your_api_key_here" > .env
- Replace your_api_key_here with your actual RunPod API key
If you see the error:
Failed to provision resource: RunPod API key not provided in config or environment
This usually means one of the following (see the check below):
- The .env file is missing
- The API key is not set correctly in the .env file
- The .env file is not being loaded properly
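A small standalone script can narrow down which of these is the problem. It assumes python-dotenv is available in your environment and that you run it from the project root:

```python
# Diagnose why RUNPOD_API_KEY might not be reaching Tetra.
# Assumes python-dotenv is installed and the script runs from the project root.
import os
from pathlib import Path

from dotenv import load_dotenv

env_path = Path(".env")
if not env_path.exists():
    raise SystemExit(".env file is missing from the project root")

load_dotenv(env_path)
key = os.environ.get("RUNPOD_API_KEY", "")
if not key or key == "your_api_key_here":
    raise SystemExit("RUNPOD_API_KEY is missing or still set to the placeholder")

print("RUNPOD_API_KEY loaded successfully")
```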