A unified library of state-of-the-art model optimization techniques such as quantization, pruning, distillation, and speculative decoding. It compresses deep learning models for downstream deployment frameworks like TensorRT-LLM or TensorRT to optimize inference speed.
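To make the first technique concrete, here is a minimal, self-contained sketch of symmetric per-tensor int8 post-training quantization, the basic idea behind weight quantization. This is purely illustrative pure-Python code; the function names are hypothetical and this is not ModelOpt's actual API.

```python
def quantize_int8(weights):
    """Map float weights to int8 codes using one symmetric per-tensor scale.

    Hypothetical helper for illustration only; not part of ModelOpt.
    """
    scale = max(abs(w) for w in weights) / 127.0  # map the largest magnitude to 127
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 codes and the scale."""
    return [x * scale for x in q]

weights = [0.05, -1.27, 0.635, 0.9]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
print(q, scale, approx)
```

Real toolkits refine this idea with calibration data, per-channel scales, and quantization-aware fine-tuning, but the storage saving is the same: each weight becomes a single int8 code plus a shared scale.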

NVIDIA/TensorRT-Model-Optimizer