
Understand How to Export to TF SavedModel Format From YOLO11

Deploying machine learning models can be challenging. However, using an efficient and flexible model format can make your job easier. TF SavedModel is an open-source machine-learning framework used by TensorFlow to load machine-learning models in a consistent way. It is like a suitcase for TensorFlow models, making them easy to carry and use on different devices and systems.

Learning how to export to TF SavedModel from Ultralytics YOLO11 models can help you deploy models easily across different platforms and environments. In this guide, we'll walk through how to convert your models to the TF SavedModel format, simplifying the process of running inferences with your models on different devices.

Why Export to TF SavedModel?

The TensorFlow SavedModel format is part of the TensorFlow ecosystem developed by Google, as shown in the diagram below. It is designed to save and serialize TensorFlow models seamlessly. It encapsulates the complete details of a model, such as its architecture, weights, and even compilation information. This makes it straightforward to share, deploy, and continue training models across different environments.

TF SavedModel

TF SavedModel has one key advantage: its compatibility. It works with TensorFlow Serving, TensorFlow Lite, and TensorFlow.js. This compatibility makes it easier to share and deploy models across a variety of platforms, including web and mobile applications. The TF SavedModel format is useful for both research and production. It provides a unified way to manage your models, ensuring they are ready for any application.

Key Features of TF SavedModels

Here are the key features that make TF SavedModel a great option for AI developers:

  • Portability: TF SavedModel provides a language-neutral, recoverable, hermetic serialization format. It enables higher-level systems and tools to produce, consume, and transform TensorFlow models. SavedModels can be easily shared and deployed across different platforms and environments.

  • Ease of Deployment: TF SavedModel bundles the computational graph, trained parameters, and necessary metadata into a single package. It can be easily loaded and used for inference without requiring the original code that built the model. This makes deploying TensorFlow models in a variety of production environments straightforward and efficient.

  • Asset Management: TF SavedModel supports the inclusion of external assets such as vocabularies, embeddings, or lookup tables. These assets are stored alongside the graph definition and variables, ensuring they are available when the model is loaded. This feature simplifies the management and distribution of models that rely on external resources.
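As a quick sanity check after an export, the on-disk layout implied by these features (a saved_model.pb graph definition plus a variables/ directory, optionally with an assets/ folder) can be verified with a few lines of standard-library Python. This is an illustrative helper, not part of the TensorFlow or Ultralytics API:

```python
from pathlib import Path


def looks_like_saved_model(directory: str) -> bool:
    """Heuristic check for the standard TF SavedModel on-disk layout:
    a serialized graph (saved_model.pb) plus a variables/ subdirectory."""
    root = Path(directory)
    return (root / "saved_model.pb").is_file() and (root / "variables").is_dir()
```

After a successful export, `looks_like_saved_model("./yolo11n_saved_model")` should return True.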

Deployment Options with TF SavedModel

Before we dive into the process of exporting YOLO11 models to the TF SavedModel format, let's explore some typical deployment scenarios where this format is used.

TF SavedModel provides a range of options for deploying your machine learning models:

  • TensorFlow Serving: TensorFlow Serving is a flexible, high-performance serving system designed for production environments. It natively supports TF SavedModels, making it easy to deploy and serve models on cloud platforms, on-premises servers, or edge devices.

  • Cloud Platforms: Major cloud providers such as Google Cloud Platform (GCP), Amazon Web Services (AWS), and Microsoft Azure offer services for deploying and running TensorFlow models, including TF SavedModels. These services provide scalable, managed infrastructure, making it easy to deploy and scale your models.

  • Mobile and Embedded Devices: TensorFlow Lite, a lightweight solution for running machine learning models on mobile, embedded, and IoT devices, supports converting TF SavedModels to the TensorFlow Lite format. This allows you to deploy your models on a wide range of devices, from smartphones and tablets to microcontrollers and edge devices.

  • TensorFlow Runtime: TensorFlow Runtime (tfrt) is a high-performance runtime for executing TensorFlow graphs. It provides lower-level APIs for loading and running TF SavedModels in C++ environments. TensorFlow Runtime offers better performance compared to the standard TensorFlow runtime. It is suitable for deployment scenarios that require low-latency inference and tight integration with existing C++ codebases.
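To make the TensorFlow Lite path above concrete, here is a minimal conversion sketch. `tf.lite.TFLiteConverter.from_saved_model` is the standard TensorFlow API for this step; the output naming is an illustrative choice, and the TensorFlow import is deferred so the path helper stays usable without it:

```python
from pathlib import Path


def tflite_path_for(saved_model_dir: str) -> Path:
    """Illustrative naming convention: place the .tflite file next to the
    SavedModel directory, e.g. 'yolo11n_saved_model' -> 'yolo11n_saved_model.tflite'."""
    root = Path(saved_model_dir)
    return root.parent / (root.name + ".tflite")


def convert_to_tflite(saved_model_dir: str) -> Path:
    """Convert a TF SavedModel directory into a TensorFlow Lite flatbuffer."""
    import tensorflow as tf  # deferred: only needed when actually converting

    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    out = tflite_path_for(saved_model_dir)
    out.write_bytes(converter.convert())
    return out
```

Converting a YOLO11 export would then look like `convert_to_tflite("./yolo11n_saved_model")`; production conversions often add converter optimizations, which are out of scope for this sketch.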

Exporting YOLO11 Models to TF SavedModel

By exporting YOLO11 models to the TF SavedModel format, you enhance their adaptability and ease of deployment across various platforms.

Installation

To install the required package, run:

Installation

# Install the required package for YOLO11
pip install ultralytics

For detailed instructions and best practices related to the installation process, check our Ultralytics Installation guide. While installing the required packages for YOLO11, if you encounter any difficulties, consult our Common Issues guide for solutions and tips.

Usage

Before diving into the usage instructions, it's important to note that while all Ultralytics YOLO11 models are available for exporting, you can ensure that the model you select supports export functionality here.

Usage

from ultralytics import YOLO

# Load the YOLO11 model
model = YOLO("yolo11n.pt")

# Export the model to TF SavedModel format
model.export(format="saved_model")  # creates '/yolo11n_saved_model'

# Load the exported TF SavedModel model
tf_savedmodel_model = YOLO("./yolo11n_saved_model")

# Run inference
results = tf_savedmodel_model("https://ultralytics.com/images/bus.jpg")

# CLI: export a YOLO11n PyTorch model to TF SavedModel format
yolo export model=yolo11n.pt format=saved_model  # creates '/yolo11n_saved_model'

# Run inference with the exported model
yolo predict model='./yolo11n_saved_model' source='https://ultralytics.com/images/bus.jpg'

For more details about the supported export options, visit the Ultralytics documentation page on deployment options.

Deploying Exported YOLO11 TF SavedModel Models

Now that you have exported your YOLO11 model to the TF SavedModel format, the next step is to deploy it. The primary and recommended first step for running a TF SavedModel is to use the YOLO("./yolo11n_saved_model") method, as previously shown in the usage code snippet.
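Because the exported directory is a regular SavedModel, it can also be loaded with plain TensorFlow rather than the YOLO wrapper. The sketch below assumes the export exposes a `serving_default` signature and that `image_batch` is already preprocessed (resized, normalized, batched) the way the model expects; both assumptions should be checked against your actual export:

```python
def run_saved_model(saved_model_dir: str, image_batch):
    """Run a SavedModel's default serving signature on a preprocessed batch.

    `image_batch` is assumed to be a float32 array already resized and
    normalized the way the exported model expects (an assumption to verify).
    """
    import tensorflow as tf  # deferred: only needed at inference time

    model = tf.saved_model.load(saved_model_dir)
    infer = model.signatures["serving_default"]  # assumed signature name
    return infer(tf.constant(image_batch))
```

Note that the raw signature returns unpostprocessed tensors; the YOLO wrapper remains the simplest route to ready-to-use detection results.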

However, for in-depth instructions on deploying your TF SavedModel models, take a look at the following resources:

Summary

In this guide, we explored how to export Ultralytics YOLO11 models to the TF SavedModel format. By exporting to TF SavedModel, you gain the flexibility to optimize, deploy, and scale your YOLO11 models on a wide range of platforms.

For further details on usage, visit the official TF SavedModel documentation.

For more information on integrating Ultralytics YOLO11 with other platforms and frameworks, don't forget to check out our integration guide page. It's packed with great resources to help you make the most of YOLO11 in your projects.

FAQ

How do I export an Ultralytics YOLO model to TensorFlow SavedModel format?

Exporting an Ultralytics YOLO model to the TensorFlow SavedModel format is straightforward. You can use either Python or the CLI to achieve this:

Exporting YOLO11 to TF SavedModel

from ultralytics import YOLO

# Load the YOLO11 model
model = YOLO("yolo11n.pt")

# Export the model to TF SavedModel format
model.export(format="saved_model")  # creates '/yolo11n_saved_model'

# Load the exported TF SavedModel for inference
tf_savedmodel_model = YOLO("./yolo11n_saved_model")
results = tf_savedmodel_model("https://ultralytics.com/images/bus.jpg")

# CLI: export the YOLO11 model to TF SavedModel format
yolo export model=yolo11n.pt format=saved_model  # creates '/yolo11n_saved_model'

# Run inference with the exported model
yolo predict model='./yolo11n_saved_model' source='https://ultralytics.com/images/bus.jpg'

Refer to the Ultralytics Export documentation for more details.

Why should I use the TensorFlow SavedModel format?

The TensorFlow SavedModel format offers several advantages for model deployment:

  • Portability: It provides a language-neutral format, making it easy to share and deploy models across different environments.
  • Compatibility: It integrates seamlessly with tools like TensorFlow Serving, TensorFlow Lite, and TensorFlow.js, which are essential for deploying models on various platforms, including web and mobile applications.
  • Complete encapsulation: It encodes the model architecture, weights, and compilation information, allowing for straightforward sharing and continued training.

For more benefits and deployment options, check out the Ultralytics YOLO model deployment options.

What are the typical deployment scenarios for TF SavedModel?

TF SavedModel can be deployed in a variety of environments, including:

  • TensorFlow Serving: Ideal for production environments requiring scalable, high-performance model serving.
  • Cloud Platforms: Supports major cloud services like Google Cloud Platform (GCP), Amazon Web Services (AWS), and Microsoft Azure for scalable model deployment.
  • Mobile and Embedded Devices: Using TensorFlow Lite to convert TF SavedModels enables deployment on mobile devices, IoT devices, and microcontrollers.
  • TensorFlow Runtime: For C++ environments needing low-latency inference and better performance.

For detailed deployment options, visit the official guides on deploying TensorFlow models.

How can I install the necessary packages to export YOLO11 models?

To export YOLO11 models, you need to install the ultralytics package. Run the following command in your terminal:

pip install ultralytics

For more detailed installation instructions and best practices, refer to our Ultralytics Installation guide. If you encounter any issues, consult our Common Issues guide.

What are the key features of the TensorFlow SavedModel format?

The TF SavedModel format is beneficial for AI developers due to the following features:

  • Portability: It allows easy sharing and deployment across various environments.
  • Ease of Deployment: It encapsulates the computational graph, trained parameters, and metadata into a single package, which simplifies loading and inference.
  • Asset Management: It supports external assets like vocabularies, ensuring they are available when the model loads.
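Since the assets folder described above is just a subdirectory inside the SavedModel, a small standard-library helper (illustrative, not a TensorFlow API) can enumerate whatever external files a model bundles:

```python
from pathlib import Path


def list_saved_model_assets(saved_model_dir: str) -> list:
    """List external asset files (vocabularies, lookup tables, ...) bundled
    in a SavedModel's assets/ subdirectory; empty if the model has none."""
    assets = Path(saved_model_dir) / "assets"
    return sorted(p.name for p in assets.iterdir()) if assets.is_dir() else []
```

A YOLO11 export typically has no external assets, so an empty list here is expected.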

For further details, explore the official TensorFlow documentation.


📅 Created 6 months ago ✏️ Updated 12 days ago
