
Coral Edge TPU on a Raspberry Pi with Ultralytics YOLO11 🚀

Raspberry Pi single-board computer with a USB Edge TPU accelerator

What is a Coral Edge TPU?

The Coral Edge TPU is a compact device that adds an Edge TPU coprocessor to your system. It enables low-power, high-performance ML inference for TensorFlow Lite models. Read more at the Coral Edge TPU home page.



Watch: How to Run Inference on Raspberry Pi using Google Coral Edge TPU

Boost Raspberry Pi Model Performance with Coral Edge TPU

Many people want to run their models on an embedded or mobile device such as a Raspberry Pi, since they are very power efficient and can be used in many different applications. However, the inference performance on these devices is usually poor even when using formats like ONNX or OpenVINO. The Coral Edge TPU is a great solution to this problem, since it can be used with a Raspberry Pi and accelerate inference performance greatly.

Edge TPU on Raspberry Pi with TensorFlow Lite (New) ⭐

Coral's existing guide on how to use the Edge TPU with a Raspberry Pi is outdated, and the current Coral Edge TPU runtime builds no longer work with current TensorFlow Lite runtime versions. In addition, Google seems to have completely abandoned the Coral project, with no updates between 2021 and 2024. This guide will show you how to get the Edge TPU working with the latest TensorFlow Lite runtime and an updated Coral Edge TPU runtime on a Raspberry Pi single-board computer (SBC).

Prerequisites

Installation Walkthrough

This guide assumes that you already have a working Raspberry Pi OS installation and have installed ultralytics and all of its dependencies. To get ultralytics installed, visit the quickstart guide to get set up before continuing here.

Installing the Edge TPU runtime

First, we need to install the Edge TPU runtime. There are many different versions available, so you need to choose the right version for your operating system.

| Raspberry Pi OS | High frequency mode | Version to download |
|-----------------|---------------------|---------------------|
| Bullseye 32bit  | No                  | libedgetpu1-std_ ... .bullseye_armhf.deb |
| Bullseye 64bit  | No                  | libedgetpu1-std_ ... .bullseye_arm64.deb |
| Bullseye 32bit  | Yes                 | libedgetpu1-max_ ... .bullseye_armhf.deb |
| Bullseye 64bit  | Yes                 | libedgetpu1-max_ ... .bullseye_arm64.deb |
| Bookworm 32bit  | No                  | libedgetpu1-std_ ... .bookworm_armhf.deb |
| Bookworm 64bit  | No                  | libedgetpu1-std_ ... .bookworm_arm64.deb |
| Bookworm 32bit  | Yes                 | libedgetpu1-max_ ... .bookworm_armhf.deb |
| Bookworm 64bit  | Yes                 | libedgetpu1-max_ ... .bookworm_arm64.deb |
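To pick the right row of the table, you need your OS codename (bullseye or bookworm, listed under VERSION_CODENAME in /etc/os-release) and whether your install is 32-bit or 64-bit. A minimal standard-library sketch of the architecture half of that check (on non-Raspberry-Pi machines `platform.machine()` will report something else entirely, e.g. x86_64, so treat this as illustrative only):

```python
import platform

# On Raspberry Pi OS, platform.machine() reports "aarch64" for 64-bit
# installs and "armv7l" for 32-bit installs.
arch = platform.machine()
deb_arch = "arm64" if arch == "aarch64" else "armhf"

print(f"architecture suffix for the .deb package: {deb_arch}")
```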

Download the latest version from here.

After downloading the file, you can install it with the following command:

sudo dpkg -i path/to/package.deb

After installing the runtime, you need to plug your Coral Edge TPU into a USB 3.0 port on the Raspberry Pi. This is because, according to the official guide, a new udev rule needs to take effect after installation.

Important

If you already have the Coral Edge TPU runtime installed, uninstall it using the following command.

# If you installed the standard version
sudo apt remove libedgetpu1-std

# If you installed the high frequency version
sudo apt remove libedgetpu1-max

Export your model to an Edge TPU compatible model

To use the Edge TPU, you need to convert your model into a compatible format. It is recommended to run the export on Google Colab, an x86_64 Linux machine, using the official Ultralytics Docker container, or using Ultralytics HUB, since the Edge TPU compiler is not available on ARM. See Export Mode for the available arguments.

Exporting the model

Python:

from ultralytics import YOLO

# Load a model
model = YOLO("path/to/model.pt")  # Load an official model or custom model

# Export the model
model.export(format="edgetpu")

CLI:

yolo export model=path/to/model.pt format=edgetpu  # Export an official model or custom model

The exported model will be saved in the <model_name>_saved_model/ folder with the name <model_name>_full_integer_quant_edgetpu.tflite. It is important that your model ends with the suffix _edgetpu.tflite, otherwise ultralytics doesn't know that you're using an Edge TPU model.
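The suffix requirement above can be expressed as a simple filename check. This is an illustrative standard-library sketch of the rule, not ultralytics' actual detection code:

```python
from pathlib import Path

def is_edgetpu_model(path: str) -> bool:
    # ultralytics recognizes Edge TPU models by the "_edgetpu.tflite"
    # filename suffix, so renaming the exported file breaks detection.
    return Path(path).name.endswith("_edgetpu.tflite")

print(is_edgetpu_model("yolo11n_full_integer_quant_edgetpu.tflite"))  # True
print(is_edgetpu_model("yolo11n_int8.tflite"))  # False
```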

Running the model

Before you can actually run the model, you will need to install the correct libraries.

If you have tensorflow installed, uninstall tensorflow with the following command:

pip uninstall tensorflow tensorflow-aarch64

Then install/update tflite-runtime:

pip install -U tflite-runtime
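After swapping the packages, you can verify that the right interpreter will be picked up. A minimal check using only the standard library (the module names are the ones installed and removed above):

```python
import importlib.util

def module_available(name: str) -> bool:
    # find_spec returns None when the module cannot be imported.
    return importlib.util.find_spec(name) is not None

# After the steps above, tflite_runtime should be available
# and tensorflow should not be.
print("tflite_runtime installed:", module_available("tflite_runtime"))
print("tensorflow installed:", module_available("tensorflow"))
```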

Now you can run inference using the following code:

Running the model

Python:

from ultralytics import YOLO

# Load a model
model = YOLO("path/to/<model_name>_full_integer_quant_edgetpu.tflite")  # Load an official model or custom model

# Run Prediction
model.predict("path/to/source.png")

CLI:

yolo predict model=path/to/<model_name>_full_integer_quant_edgetpu.tflite source=path/to/source.png  # Predict with an official model or custom model

For comprehensive information on Predict mode, visit the Predict page.

Inference with multiple Edge TPUs

If you have multiple Edge TPUs you can use the following code to select a specific TPU.

from ultralytics import YOLO

# Load a model
model = YOLO("path/to/<model_name>_full_integer_quant_edgetpu.tflite")  # Load an official model or custom model

# Run Prediction
model.predict("path/to/source.png")  # Inference defaults to the first TPU

model.predict("path/to/source.png", device="tpu:0")  # Select the first TPU

model.predict("path/to/source.png", device="tpu:1")  # Select the second TPU
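The device strings follow a simple "tpu:<index>" convention. The hypothetical helper below sketches how such a string maps to a TPU index; it is not part of the ultralytics API, just an illustration of the convention:

```python
def parse_tpu_index(device: str) -> int:
    # "tpu" alone defaults to the first accelerator; "tpu:N" picks the
    # N-th Edge TPU attached to the system (zero-based).
    if device == "tpu":
        return 0
    prefix, sep, index = device.partition(":")
    if prefix != "tpu" or not sep or not index.isdigit():
        raise ValueError(f"not an Edge TPU device string: {device!r}")
    return int(index)

print(parse_tpu_index("tpu:1"))  # 1 (the second TPU)
```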

FAQ

What is a Coral Edge TPU and how does it enhance Raspberry Pi's performance with Ultralytics YOLO11?

The Coral Edge TPU is a compact device designed to add an Edge TPU coprocessor to your system. This coprocessor enables low-power, high-performance machine learning inference, particularly optimized for TensorFlow Lite models. When using a Raspberry Pi, the Edge TPU accelerates ML model inference, significantly boosting performance, especially for Ultralytics YOLO11 models. You can read more about the Coral Edge TPU on their home page.

How do I install the Coral Edge TPU runtime on a Raspberry Pi?

To install the Coral Edge TPU runtime on your Raspberry Pi, download the appropriate .deb package for your Raspberry Pi OS version from this link. Once downloaded, install it using the following command:

sudo dpkg -i path/to/package.deb

Ensure you uninstall any previous Coral Edge TPU runtime versions by following the steps outlined in the Installation Walkthrough section.

Can I export my Ultralytics YOLO11 model to be compatible with Coral Edge TPU?

Yes, you can export your Ultralytics YOLO11 model to be compatible with the Coral Edge TPU. It is recommended to perform the export on Google Colab, an x86_64 Linux machine, or using the Ultralytics Docker container. You can also use Ultralytics HUB for exporting. Here is how you can export your model using Python and CLI:

Exporting the model

Python:

from ultralytics import YOLO

# Load a model
model = YOLO("path/to/model.pt")  # Load an official model or custom model

# Export the model
model.export(format="edgetpu")

CLI:

yolo export model=path/to/model.pt format=edgetpu  # Export an official model or custom model

For more details, refer to the Export Mode documentation.

What should I do if TensorFlow is already installed on my Raspberry Pi but I want to use tflite-runtime instead?

If you have TensorFlow installed on your Raspberry Pi and need to switch to tflite-runtime, you'll need to uninstall TensorFlow first:

pip uninstall tensorflow tensorflow-aarch64

Then, install or update tflite-runtime with the following command:

pip install -U tflite-runtime

For specific wheels, such as the TensorFlow 2.15.0 tflite-runtime wheel, you can download them from this link and install them using pip. Detailed instructions are available in the Run the Model section.

How do I run inference with an exported YOLO11 model on a Raspberry Pi using the Coral Edge TPU?

After exporting your YOLO11 model to an Edge TPU-compatible format, you can run inference using the following code snippets:

Running the model

Python:

from ultralytics import YOLO

# Load a model
model = YOLO("path/to/edgetpu_model.tflite")  # Load an official model or custom model

# Run Prediction
model.predict("path/to/source.png")

CLI:

yolo predict model=path/to/edgetpu_model.tflite source=path/to/source.png  # Predict with an official model or custom model

For comprehensive details on Predict mode features, visit the Predict page.

📅 Created 9 months ago ✏️ Updated 29 days ago
