Triton Inference Server with Ultralytics YOLO11
The Triton Inference Server (formerly known as TensorRT Inference Server) is an open-source software solution developed by NVIDIA. It provides a cloud inference solution optimized for NVIDIA GPUs. Triton simplifies the deployment of AI models at scale in production. Integrating Ultralytics YOLO11 with Triton Inference Server allows you to deploy scalable, high-performance deep learning inference workloads. This guide provides steps to set up and test the integration.
Watch: Getting Started with NVIDIA Triton Inference Server.
What is Triton Inference Server?
Triton Inference Server is designed to deploy a variety of AI models in production. It supports a wide range of deep learning and machine learning frameworks, including TensorFlow, PyTorch, ONNX Runtime, and many others. Its primary use cases are:
- Serving multiple models from a single server instance.
- Dynamic model loading and unloading without restarting the server.
- Ensemble inference, which allows multiple models to be combined to produce a result.
- Model versioning for A/B testing and rolling updates (see the config sketch after this list).
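As a concrete illustration of model versioning, Triton reads a version_policy field from the model's config.pbtxt. The snippet below is a hedged sketch, not part of this guide's later steps; the two-version value and the file path are only illustrative.
# Sketch: keep the two most recent model versions available, e.g. for A/B testing.
# "version_policy" is a standard Triton model-config field; the path is illustrative.
policy = "version_policy: { latest { num_versions: 2 } }\n"
with open("tmp/triton_repo/yolo/config.pbtxt", "a") as f:
    f.write(policy)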
Prerequisites
Ensure you have the following prerequisites before proceeding:
- Docker installed on your machine.
- The tritonclient Python package installed, as shown below:
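The client library is published on PyPI as tritonclient; a typical installation that pulls in both the HTTP and gRPC clients looks like this:
pip install tritonclient[all]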
Exporting YOLO11 to ONNX Format
Before deploying a model on Triton, it must be exported to ONNX format. ONNX (Open Neural Network Exchange) is a format that allows models to be transferred between different deep learning frameworks. Use the export function of the YOLO class:
from ultralytics import YOLO
# Load a model
model = YOLO("yolo11n.pt") # load an official model
# Export the model
onnx_file = model.export(format="onnx", dynamic=True)
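Optionally, you can sanity-check the exported file before moving it into the model repository. This is a minimal sketch that assumes the onnx package is installed; it is not required by the rest of the guide:
import onnx

# Load the exported model and verify that the graph is structurally valid
onnx_model = onnx.load(onnx_file)
onnx.checker.check_model(onnx_model)
print(f"ONNX export looks valid: {onnx_file}")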
Setting Up the Triton Model Repository
The Triton Model Repository is a storage location where Triton can access and load models.
Create the necessary directory structure:
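Triton expects a layout of the form <repository>/<model_name>/<version>/model.onnx. A minimal setup, mirroring the snippet in the FAQ at the end of this page, defines the paths used in the following steps:
from pathlib import Path

# Define paths for the model repository and this model
model_name = "yolo"
triton_repo_path = Path("tmp") / "triton_repo"
triton_model_path = triton_repo_path / model_name

# Create the versioned directory Triton expects (version "1")
(triton_model_path / "1").mkdir(parents=True, exist_ok=True)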
Move the exported ONNX model to the Triton Model Repository:
from pathlib import Path

# Move ONNX model to Triton Model path
Path(onnx_file).rename(triton_model_path / "1" / "model.onnx")

# Create config file
(triton_model_path / "config.pbtxt").touch()

# (Optional) Enable TensorRT for GPU inference
# First run will be slow due to TensorRT engine conversion
data = """
optimization {
  execution_accelerators {
    gpu_execution_accelerator {
      name: "tensorrt"
      parameters {
        key: "precision_mode"
        value: "FP16"
      }
      parameters {
        key: "max_workspace_size_bytes"
        value: "3221225472"
      }
      parameters {
        key: "trt_engine_cache_enable"
        value: "1"
      }
      parameters {
        key: "trt_engine_cache_path"
        value: "/models/yolo/1"
      }
    }
  }
}
"""

with open(triton_model_path / "config.pbtxt", "w") as f:
    f.write(data)
Running the Triton Inference Server
Run the Triton Inference Server using Docker:
import contextlib
import subprocess
import time
from tritonclient.http import InferenceServerClient
# Define image https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tritonserver
tag = "nvcr.io/nvidia/tritonserver:24.09-py3" # 8.57 GB
# Pull the image
subprocess.call(f"docker pull {tag}", shell=True)
# Run the Triton server and capture the container ID
container_id = (
    subprocess.check_output(
        f"docker run -d --rm --gpus 0 -v {triton_repo_path}:/models -p 8000:8000 {tag} tritonserver --model-repository=/models",
        shell=True,
    )
    .decode("utf-8")
    .strip()
)
# Wait for the Triton server to start
triton_client = InferenceServerClient(url="localhost:8000", verbose=False, ssl=False)
# Wait until model is ready
for _ in range(10):
    with contextlib.suppress(Exception):
        assert triton_client.is_model_ready(model_name)
        break
    time.sleep(1)
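If the loop exits without the model becoming ready, the inference call below will fail. An explicit final check with the same client methods makes that failure obvious early; this is an optional addition, not part of the original snippet:
# Fail fast if the server or model never became ready within the retry window
assert triton_client.is_server_live(), "Triton server is not live"
assert triton_client.is_model_ready(model_name), f"Model '{model_name}' is not ready"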
Then run inference using the Triton Server model:
from ultralytics import YOLO
# Load the Triton Server model
model = YOLO("http://localhost:8000/yolo", task="detect")
# Run inference on the server
results = model("path/to/image.jpg")
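The returned results follow the standard Ultralytics Results API, so detections can be inspected as usual, for example:
# Print bounding boxes, confidences, and class ids for the first image
for box in results[0].boxes:
    print(box.xyxy, box.conf, box.cls)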
Clean up the container:
# Kill and remove the container at the end of the test
subprocess.call(f"docker kill {container_id}", shell=True)
By following the above steps, you can deploy and run Ultralytics YOLO11 models efficiently on Triton Inference Server, providing a scalable and high-performance solution for deep learning inference tasks. If you face any issues or have further queries, refer to the official Triton documentation or reach out to the Ultralytics community for support.
FAQ
How do I set up Ultralytics YOLO11 with NVIDIA Triton Inference Server?
Setting up Ultralytics YOLO11 with NVIDIA Triton Inference Server involves a few key steps:
Export YOLO11 to ONNX format:
Set up the Triton Model Repository:
from pathlib import Path

# Define paths
model_name = "yolo"
triton_repo_path = Path("tmp") / "triton_repo"
triton_model_path = triton_repo_path / model_name

# Create directories
(triton_model_path / "1").mkdir(parents=True, exist_ok=True)
Path(onnx_file).rename(triton_model_path / "1" / "model.onnx")
(triton_model_path / "config.pbtxt").touch()
Run the Triton Server:
import contextlib
import subprocess
import time

from tritonclient.http import InferenceServerClient

# Define image https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tritonserver
tag = "nvcr.io/nvidia/tritonserver:24.09-py3"

subprocess.call(f"docker pull {tag}", shell=True)

container_id = (
    subprocess.check_output(
        f"docker run -d --rm --gpus 0 -v {triton_repo_path}:/models -p 8000:8000 {tag} tritonserver --model-repository=/models",
        shell=True,
    )
    .decode("utf-8")
    .strip()
)

triton_client = InferenceServerClient(url="localhost:8000", verbose=False, ssl=False)

for _ in range(10):
    with contextlib.suppress(Exception):
        assert triton_client.is_model_ready(model_name)
        break
    time.sleep(1)
This setup can help you efficiently deploy YOLO11 models at scale on Triton Inference Server for high-performance AI model inference.
What benefits does using Ultralytics YOLO11 with NVIDIA Triton Inference Server offer?
Integrating Ultralytics YOLO11 with NVIDIA Triton Inference Server provides several advantages:
- Scalable AI inference: Triton allows serving multiple models from a single server instance, with dynamic model loading and unloading, making it highly scalable for diverse AI workloads.
- High Performance: Optimized for NVIDIA GPUs, Triton Inference Server ensures high-speed inference operations, perfect for real-time applications such as object detection.
- Ensembles and model versioning: Triton's ensemble mode enables combining multiple models to improve results, and its model versioning supports A/B testing and rolling updates.
For detailed instructions on setting up and running YOLO11 with Triton, you can refer to the setup guide.
Why should I export my YOLO11 model to ONNX format before using Triton Inference Server?
Using ONNX (Open Neural Network Exchange) format for your Ultralytics YOLO11 model before deploying it on NVIDIA Triton Inference Server offers several key benefits:
- Interoperability: The ONNX format supports transfer between different deep learning frameworks (such as PyTorch and TensorFlow), ensuring broader compatibility.
- Optimization: Many deployment environments, including Triton, offer optimizations for ONNX, enabling faster inference and better performance.
- Ease of deployment: ONNX is widely supported across frameworks and platforms, simplifying deployment on a variety of operating systems and hardware configurations.
To export the model, use:
from ultralytics import YOLO
model = YOLO("yolo11n.pt")
onnx_file = model.export(format="onnx", dynamic=True)
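As a quick, optional interoperability check (a sketch that assumes the onnxruntime package is installed; it is not used elsewhere in this guide), the exported file can also be loaded directly with ONNX Runtime:
import onnxruntime as ort

# Open the exported model outside of Triton to confirm it loads cleanly
session = ort.InferenceSession(onnx_file)
print([inp.name for inp in session.get_inputs()])  # input tensor names
print([out.name for out in session.get_outputs()])  # output tensor names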
You can follow the steps in the export guide to complete this process.
Can I run inference using the Ultralytics YOLO11 model on Triton Inference Server?
Yes, you can run inference using the Ultralytics YOLO11 model on NVIDIA Triton Inference Server. Once your model is set up in the Triton Model Repository and the server is running, you can load and run inference on your model as follows:
from ultralytics import YOLO
# Load the Triton Server model
model = YOLO("http://localhost:8000/yolo", task="detect")
# Run inference on the server
results = model("path/to/image.jpg")
For an in-depth guide on setting up and running Triton Server with YOLO11, refer to the running triton inference server section.
How does Ultralytics YOLO11 compare to TensorFlow and PyTorch models for deployment?
Ultralytics YOLO11 offers several unique advantages compared to TensorFlow and PyTorch models for deployment:
- Real-time Performance: Optimized for real-time object detection tasks, YOLO11 provides state-of-the-art accuracy and speed, making it ideal for applications requiring live video analytics.
- Ease of Use: YOLO11 integrates seamlessly with Triton Inference Server and supports diverse export formats (ONNX, TensorRT, CoreML), making it flexible for various deployment scenarios.
- Advanced Features: YOLO11 includes features like dynamic model loading, model versioning, and ensemble inference, which are crucial for scalable and reliable AI deployments.
For more details, compare the deployment options in the model deployment guide.