YOLO11 Model Export to TorchScript for Quick Deployment
Deploying computer vision models across different environments, including embedded systems, web browsers, or platforms with limited Python support, requires a flexible and portable solution. TorchScript focuses on portability and the ability to run models in environments where the entire Python framework is unavailable. This makes it ideal for scenarios where you need to deploy your computer vision capabilities across various devices or platforms.
Export to TorchScript to serialize your Ultralytics YOLO11 models for cross-platform compatibility and streamlined deployment. In this guide, we'll show you how to export your YOLO11 models to the TorchScript format, making it easier for you to use them across a wider range of applications.
Why Export to TorchScript?
Developed by the creators of PyTorch, TorchScript is a powerful tool for optimizing and deploying PyTorch models across a variety of platforms. Exporting YOLO11 models to TorchScript is crucial for moving from research to real-world applications. TorchScript, part of the PyTorch framework, helps make this transition smoother by allowing PyTorch models to be used in environments that don't support Python.
The process involves two techniques: tracing and scripting. Tracing records operations during model execution, while scripting allows for the definition of models using a subset of Python. These techniques ensure that models like YOLO11 can still work their magic even outside their usual Python environment.
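The difference between the two techniques shows up clearly with data-dependent control flow: tracing records only the branch taken for the example input, while scripting compiles the Python source and keeps both branches. A minimal sketch using a toy module (not YOLO11 itself):

```python
import torch
import torch.nn as nn


class TinyNet(nn.Module):
    def forward(self, x):
        # Data-dependent branch: tracing freezes whichever path the
        # example input takes; scripting preserves the full `if`.
        if x.sum() > 0:
            return x * 2
        return x * 3


model = TinyNet().eval()
example = torch.ones(3)  # positive sum, so tracing records the `x * 2` path

traced = torch.jit.trace(model, example)  # records ops for this input only
scripted = torch.jit.script(model)        # compiles the source, keeps the branch

neg = -torch.ones(3)                      # negative sum: should hit `x * 3`
print(traced(neg))    # tensor([-2., -2., -2.])  (wrong branch, baked in at trace time)
print(scripted(neg))  # tensor([-3., -3., -3.])  (correct branch)
```

For models with control flow like this, scripting (or a mix of the two) is the safer choice; pure feed-forward graphs trace cleanly.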
TorchScript also supports model optimization through techniques such as operator fusion and improved memory usage, ensuring efficient execution. Another advantage of exporting to TorchScript is that it can speed up model execution across a range of hardware platforms. It creates a standalone, production-ready representation of your PyTorch model that can be integrated into C++ environments, embedded systems, or deployed to web or mobile applications.
Key Features of TorchScript Models
TorchScript, a key part of the PyTorch ecosystem, provides powerful features for optimizing and deploying deep learning models.
Here are the key features that make TorchScript a valuable tool for developers:
Static Graph Execution: TorchScript uses a static graph representation of the model's computation, unlike PyTorch's dynamic graph execution. In static graph execution, the computational graph is defined and compiled once before actual execution, resulting in improved performance during inference.
Model Serialization: TorchScript allows you to serialize PyTorch models into a platform-independent format. Serialized models can be loaded without requiring the original Python code, enabling deployment in different runtime environments.
JIT Compilation: TorchScript uses just-in-time (JIT) compilation to convert PyTorch models into an optimized intermediate representation. JIT compiles the model's computational graph, enabling efficient execution on target devices.
Cross-Language Integration: With TorchScript, you can export PyTorch models to other languages such as C++, Java, and JavaScript. This makes it easier to integrate PyTorch models into existing software systems written in different languages.
Gradual Conversion: TorchScript offers a gradual conversion approach, allowing you to incrementally convert parts of your PyTorch model into TorchScript. This flexibility is especially useful when dealing with complex models or when you want to optimize specific portions of the code.
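The serialization feature above is the one that makes Python-free deployment possible. A minimal sketch of the save/load round trip, using a small stand-in module in place of a full detector:

```python
import torch
import torch.nn as nn

# A small stand-in module; any scriptable nn.Module works the same way.
model = torch.jit.script(nn.Sequential(nn.Linear(4, 2), nn.ReLU()))

# Serialize to a platform-independent archive.
torch.jit.save(model, "model.torchscript")

# Load it back without the original Python class definitions; the same
# archive can be opened from C++ via torch::jit::load.
restored = torch.jit.load("model.torchscript")
out = restored(torch.randn(1, 4))
print(out.shape)  # torch.Size([1, 2])
```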
Deployment Options in TorchScript
Before we look at the code for exporting YOLO11 models to the TorchScript format, let's understand where TorchScript models are normally used.
TorchScript offers various deployment options for machine learning models, such as:
C++ API: The most common use case for TorchScript is its C++ API, which allows you to load and execute optimized TorchScript models directly within C++ applications. This is ideal for production environments where Python may be unsuitable or unavailable. The C++ API offers low-overhead, efficient execution of TorchScript models, maximizing performance potential.
Mobile Deployment: TorchScript offers tools for converting models into formats readily deployable on mobile devices. PyTorch Mobile provides a runtime for executing these models within iOS and Android apps. This enables low-latency, offline inference capabilities, enhancing user experience and data privacy.
Cloud Deployment: TorchScript models can be deployed to cloud-based servers using solutions like TorchServe. It provides features like model versioning, batching, and metrics monitoring for scalable deployment in production environments. Cloud deployment with TorchScript can make your models accessible via APIs or other web services.
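For the mobile path, PyTorch provides an optimization pass tailored to the mobile runtime. A minimal sketch using `torch.utils.mobile_optimizer.optimize_for_mobile` with a small stand-in module (in practice the input would be the exported detector):

```python
import torch
import torch.nn as nn
from torch.utils.mobile_optimizer import optimize_for_mobile

# Stand-in model; a real workflow would start from the exported detector.
model = torch.jit.script(nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU()).eval())

# Fuses operators and strips training-only code paths for mobile execution.
mobile_model = optimize_for_mobile(model)

# Saves in the lite-interpreter format that PyTorch Mobile expects.
mobile_model._save_for_lite_interpreter("model.ptl")
```

The resulting `.ptl` file is what the iOS and Android runtimes load.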
Export to TorchScript: Converting Your YOLO11 Model
Exporting YOLO11 models to TorchScript makes it easier to use them in different places and helps them run faster and more efficiently. This is great for anyone looking to use deep learning models more effectively in real-world applications.
Installation
To install the required package, run:
pip install ultralytics
For detailed instructions and best practices related to the installation process, check our Ultralytics Installation guide. While installing the required packages for YOLO11, if you encounter any difficulties, consult our Common Issues guide for solutions and tips.
Usage
Before diving into the usage instructions, it's important to note that while all Ultralytics YOLO11 models are available for exporting, you can ensure that the model you select supports export functionality here.
from ultralytics import YOLO
# Load the YOLO11 model
model = YOLO("yolo11n.pt")
# Export the model to TorchScript format
model.export(format="torchscript") # creates 'yolo11n.torchscript'
# Load the exported TorchScript model
torchscript_model = YOLO("yolo11n.torchscript")
# Run inference
results = torchscript_model("https://ultralytics.com/images/bus.jpg")
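The exported `.torchscript` file is a standard TorchScript archive, so it can also be loaded directly with `torch.jit.load`, with no Ultralytics code involved, exactly as a C++ or server host would consume it. A sketch using a small stand-in module so it runs without downloading model weights:

```python
import torch
import torch.nn as nn

# Stand-in for an exported detector; the real file would be 'yolo11n.torchscript'.
stand_in = torch.jit.trace(nn.Conv2d(3, 16, 3, padding=1), torch.randn(1, 3, 64, 64))
stand_in.save("stand_in.torchscript")

# Load and run with plain PyTorch only.
module = torch.jit.load("stand_in.torchscript")
module.eval()
with torch.no_grad():
    out = module(torch.randn(1, 3, 64, 64))
print(out.shape)  # torch.Size([1, 16, 64, 64])
```

Note that loading the raw archive this way skips the pre- and post-processing the `YOLO` wrapper performs, so you would need to handle image resizing, normalization, and box decoding yourself.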
For more details about the export process, visit the Ultralytics documentation page on exporting.
Deploying Exported YOLO11 TorchScript Models
After successfully exporting your Ultralytics YOLO11 models to TorchScript format, you can now deploy them. The primary and recommended first step for running a TorchScript model is to utilize the YOLO("model.torchscript") method, as outlined in the previous usage code snippet. However, for in-depth instructions on deploying your TorchScript models in various other settings, take a look at the following resources:
Explore Mobile Deployment: The PyTorch Mobile Documentation provides comprehensive guidelines for deploying models on mobile devices, ensuring your applications are efficient and responsive.
Master Server-Side Deployment: Learn how to deploy models server-side with TorchServe, which offers step-by-step tutorials for scalable, efficient model serving.
Implement C++ Deployment: Dive into the tutorial on loading a TorchScript model in C++, facilitating the integration of TorchScript models into C++ applications for improved performance and versatility.
Summary
In this guide, we explored the process of exporting Ultralytics YOLO11 models to the TorchScript format. By following the provided instructions, you can optimize YOLO11 models for performance and gain the flexibility to deploy them across various platforms and environments.
For detailed information on usage, visit TorchScript's official documentation.
Also, if you'd like to know more about other Ultralytics YOLO11 integrations, visit our integration guide page. You'll find plenty of useful resources and insights there.
FAQ
What is Ultralytics YOLO11 model export to TorchScript?
Exporting an Ultralytics YOLO11 model to TorchScript allows for flexible, cross-platform deployment. TorchScript, a part of the PyTorch ecosystem, facilitates the serialization of models, which can then be executed in environments that lack Python support. This makes it ideal for deploying models on embedded systems, C++ environments, mobile applications, and even web browsers. Exporting to TorchScript enables efficient performance and wider applicability of your YOLO11 models across diverse platforms.
How can I export my YOLO11 model to TorchScript using Ultralytics?
To export a YOLO11 model to TorchScript, you can use the following example code:
from ultralytics import YOLO
# Load the YOLO11 model
model = YOLO("yolo11n.pt")
# Export the model to TorchScript format
model.export(format="torchscript") # creates 'yolo11n.torchscript'
# Load the exported TorchScript model
torchscript_model = YOLO("yolo11n.torchscript")
# Run inference
results = torchscript_model("https://ultralytics.com/images/bus.jpg")
For detailed information about the export process, refer to the Ultralytics documentation on exporting.
Why should I use TorchScript for deploying YOLO11 models?
Using TorchScript for deploying YOLO11 models offers several advantages:
- Portability: Exported models can run in environments without Python, such as C++ applications, embedded systems, or mobile devices.
- Optimization: TorchScript supports static graph execution and just-in-time (JIT) compilation, which can optimize model performance.
- Cross-Language Integration: TorchScript models can be integrated into other programming languages, enhancing flexibility and expandability.
- Serialization: Models can be serialized, allowing for platform-independent loading and inference.
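To make the optimization point concrete, TorchScript's inference-time passes include constant folding and conv-batchnorm fusion. A minimal sketch with a stand-in module, using `torch.jit.freeze` and `torch.jit.optimize_for_inference`:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU()).eval()
scripted = torch.jit.script(model)

# freeze() inlines parameters and attributes as constants;
# optimize_for_inference() then applies passes such as conv-batchnorm folding.
frozen = torch.jit.freeze(scripted)
optimized = torch.jit.optimize_for_inference(frozen)

out = optimized(torch.randn(1, 3, 32, 32))
print(out.shape)  # torch.Size([1, 8, 30, 30])
```

The optimized module computes the same function as the original but with a simpler graph, which is where much of the inference speedup comes from.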
For more information on deployment, visit the PyTorch Mobile Documentation, TorchServe Documentation, and C++ Deployment Guide.
What are the installation steps for exporting YOLO11 models to TorchScript?
To install the required package for exporting YOLO11 models, use the following command:
pip install ultralytics
For detailed instructions, visit the Ultralytics Installation guide. If any issues arise during installation, consult our Common Issues guide.
How do I deploy my exported TorchScript YOLO11 models?
After exporting YOLO11 models to the TorchScript format, you can deploy them across a variety of platforms:
- C++ API: Ideal for low-overhead, highly efficient production environments.
- Mobile Deployment: Use PyTorch Mobile for iOS and Android applications.
- Cloud Deployment: Leverage services like TorchServe for scalable server-side deployment.
Explore comprehensive guides on deploying models in these settings to take full advantage of TorchScript's capabilities.