
ONNX Export for YOLO11 Models

Often, when deploying computer vision models, you'll need a model format that's both flexible and compatible with multiple platforms.

Exporting Ultralytics YOLO11 models to ONNX format streamlines deployment and ensures optimal performance across various environments. This guide will show you how to easily convert your YOLO11 models to ONNX and enhance their scalability and effectiveness in real-world applications.

ONNX and ONNX Runtime

ONNX, which stands for Open Neural Network Exchange, is a community project that Facebook and Microsoft initially developed. The ongoing development of ONNX is a collaborative effort supported by various organizations like IBM, Amazon (through AWS), and Google. The project aims to create an open file format designed to represent machine learning models in a way that allows them to be used across different AI frameworks and hardware.

ONNX models can be used to transition between different frameworks seamlessly. For instance, a deep learning model trained in PyTorch can be exported to ONNX format and then easily imported into TensorFlow.


Alternatively, ONNX models can be used with ONNX Runtime. ONNX Runtime is a versatile cross-platform accelerator for machine learning models that is compatible with frameworks like PyTorch, TensorFlow, TFLite, scikit-learn, etc.

ONNX Runtime optimizes the execution of ONNX models by leveraging hardware-specific capabilities. This optimization allows the models to run efficiently and with high performance on various hardware platforms, including CPUs, GPUs, and specialized accelerators.


Whether used independently or in tandem with ONNX Runtime, ONNX provides a flexible solution for machine learning model deployment and compatibility.

Key Features of ONNX Models

ONNX's ability to handle various formats can be attributed to the following key features:

  • Common Model Representation: ONNX defines a common set of operators (like convolutions, layers, etc.) and a standard data format. When a model is converted to ONNX format, its architecture and weights are translated into this common representation. This uniformity ensures that the model can be understood by any framework that supports ONNX.

  • Versioning and Backward Compatibility: ONNX maintains a versioning system for its operators. This ensures that models created with older versions of the standard remain usable as it evolves. Backward compatibility is a crucial feature that prevents models from becoming quickly obsolete.

  • Graph-based Model Representation: ONNX represents models as computational graphs. This graph-based structure is a universal way of representing machine learning models, where nodes represent operations or computations, and edges represent the tensors flowing between them. This format is easily adaptable to the many frameworks that also represent models as graphs.

  • Tools and Ecosystem: There is a rich ecosystem of tools around ONNX that assist with model conversion, visualization, and optimization. These tools make it easier for developers to work with ONNX models and to convert models between frameworks seamlessly.

Common Usage of ONNX

Before we jump into how to export YOLO11 models to the ONNX format, let's take a look at where ONNX models are usually used.

CPU Deployment

ONNX models are often deployed on CPUs due to their compatibility with ONNX Runtime, which is optimized for CPU execution. It significantly improves inference speed and makes real-time CPU deployments feasible.

Supported Deployment Options

While ONNX models are commonly used on CPUs, they can also be deployed on the following platforms:

  • GPU Acceleration: ONNX fully supports GPU acceleration, particularly NVIDIA CUDA. This enables efficient execution on NVIDIA GPUs for tasks that demand high computational power.

  • Edge and Mobile Devices: ONNX extends to edge and mobile devices, making it well suited for on-device, real-time inference. It is lightweight and compatible with edge hardware.

  • Web Browsers: ONNX can run directly in web browsers, powering interactive and dynamic web-based AI applications.

Exporting YOLO11 Models to ONNX

You can expand model compatibility and deployment flexibility by converting YOLO11 models to ONNX format.

Installation

To install the required packages, run:

# Install the required package for YOLO11
pip install ultralytics

For detailed instructions and best practices related to the installation process, check our YOLO11 Installation guide. While installing the required packages for YOLO11, if you encounter any difficulties, consult our Common Issues guide for solutions and tips.

Usage

Before diving into the usage instructions, be sure to check out the range of YOLO11 models offered by Ultralytics. This will help you choose the most appropriate model for your project requirements.

Python

from ultralytics import YOLO

# Load the YOLO11 model
model = YOLO("yolo11n.pt")

# Export the model to ONNX format
model.export(format="onnx")  # creates 'yolo11n.onnx'

# Load the exported ONNX model
onnx_model = YOLO("yolo11n.onnx")

# Run inference
results = onnx_model("https://ultralytics.com/images/bus.jpg")

CLI

# Export a YOLO11n PyTorch model to ONNX format
yolo export model=yolo11n.pt format=onnx  # creates 'yolo11n.onnx'

# Run inference with the exported model
yolo predict model=yolo11n.onnx source='https://ultralytics.com/images/bus.jpg'

For more details about the export process, visit the Ultralytics documentation page on exporting.

Deploying Exported YOLO11 ONNX Models

Once you've successfully exported your Ultralytics YOLO11 models to ONNX format, the next step is deploying these models in various environments. For detailed instructions on deploying your ONNX models, take a look at the following resources:

Summary

In this guide, you've learned how to export Ultralytics YOLO11 models to ONNX format to increase their interoperability and performance across various platforms. You were also introduced to the ONNX Runtime and ONNX deployment options.

For more details about usage, visit the official ONNX documentation.

Also, if you'd like to know more about other Ultralytics YOLO11 integrations, visit our integration guide page. You'll find plenty of useful resources and insights there.

FAQ

How do I export YOLO11 models to ONNX format using Ultralytics?

To export your YOLO11 models to ONNX format using Ultralytics, follow these steps:

Python

from ultralytics import YOLO

# Load the YOLO11 model
model = YOLO("yolo11n.pt")

# Export the model to ONNX format
model.export(format="onnx")  # creates 'yolo11n.onnx'

# Load the exported ONNX model
onnx_model = YOLO("yolo11n.onnx")

# Run inference
results = onnx_model("https://ultralytics.com/images/bus.jpg")

CLI

# Export a YOLO11n PyTorch model to ONNX format
yolo export model=yolo11n.pt format=onnx  # creates 'yolo11n.onnx'

# Run inference with the exported model
yolo predict model=yolo11n.onnx source='https://ultralytics.com/images/bus.jpg'

For more details, visit the Export documentation.

What are the advantages of using ONNX Runtime for deploying YOLO11 models?

Using ONNX Runtime for deploying YOLO11 models offers several advantages:

  • Cross-platform compatibility: ONNX Runtime supports various platforms, such as Windows, macOS, and Linux, ensuring your models run smoothly across different environments.
  • Hardware acceleration: ONNX Runtime can leverage hardware-specific optimizations for CPUs, GPUs, and dedicated accelerators, providing high-performance inference.
  • Framework interoperability: Models trained in popular frameworks like PyTorch or TensorFlow can be easily converted to ONNX format and run using ONNX Runtime.

To learn more, check out the ONNX Runtime documentation.

What deployment options are available for YOLO11 models exported to ONNX?

YOLO11 models exported to ONNX can be deployed on various platforms including:

  • CPU: Utilizing ONNX Runtime for optimized CPU inference.
  • GPU: Leveraging NVIDIA CUDA for high-performance GPU acceleration.
  • Edge devices: Running lightweight models on edge and mobile devices for real-time on-device inference.
  • Web browsers: Executing models directly in web browsers for interactive web applications.

For more information, explore our guide on model deployment options.

Why should I use ONNX format for Ultralytics YOLO11 models?

Using ONNX format for Ultralytics YOLO11 models provides numerous benefits:

  • Interoperability: ONNX allows seamless model transfer between different machine learning frameworks.
  • Performance optimization: ONNX Runtime can leverage hardware-specific optimizations to enhance model performance.
  • Flexibility: ONNX supports various deployment environments, allowing you to use the same model on different platforms without modification.

Refer to the comprehensive guide on exporting YOLO11 models to ONNX.

How can I troubleshoot issues when exporting YOLO11 models to ONNX?

When exporting YOLO11 models to ONNX, you might encounter common issues such as mismatched dependencies or unsupported operations. To troubleshoot these problems:

  1. Verify that you have the correct versions of the required dependencies installed.
  2. Check the official ONNX documentation for supported operators and features.
  3. Review the error messages for clues and consult the Ultralytics Common Issues guide.

If issues persist, contact Ultralytics support for further assistance.
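A quick way to carry out step 1 is to query the installed versions with the standard library. The package names below are the real PyPI names the export path relies on; the helper function is just a convenience for this sketch:

```python
import importlib.metadata as md


def installed_version(pkg: str):
    """Return the installed version of pkg, or None if it is missing."""
    try:
        return md.version(pkg)
    except md.PackageNotFoundError:
        return None


# Packages involved in the YOLO11 -> ONNX export path
for pkg in ("ultralytics", "torch", "onnx", "onnxruntime"):
    print(pkg, installed_version(pkg) or "not installed")
```

Comparing this output against the versions pinned in the Ultralytics requirements often explains mismatched-dependency export failures.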


