
Optimizing YOLO11 Inferences with Neural Magic's DeepSparse Engine

When deploying object detection models like Ultralytics YOLO11 on various hardware, you can run into unique challenges, such as optimizing for performance. This is where YOLO11's integration with Neural Magic's DeepSparse Engine steps in. It transforms the way YOLO11 models are executed and enables GPU-level performance directly on CPUs.

This guide shows you how to deploy YOLO11 using Neural Magic's DeepSparse, how to run inferences, and also how to benchmark performance to ensure it is optimized.

Neural Magic's DeepSparse

Overview of Neural Magic's DeepSparse

Neural Magic's DeepSparse is an inference runtime designed to optimize the execution of neural networks on CPUs. It applies advanced techniques like sparsity, pruning, and quantization to dramatically reduce computational demands while maintaining accuracy. DeepSparse offers an agile solution for efficient and scalable neural network execution across various devices.
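Sparsity and quantization are the two ideas doing most of the work here. The sketch below is only a toy illustration of those concepts on a plain list of weights, not DeepSparse's actual implementation, which operates on full tensors inside the engine:

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights (toy pruning)."""
    n_prune = int(len(weights) * sparsity)
    threshold = sorted(abs(w) for w in weights)[n_prune - 1] if n_prune else 0.0
    return [0.0 if abs(w) <= threshold else w for w in weights]


def quantize_int8(weights):
    """Map float weights to int8-range values with one linear scale (toy quantization)."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    return [round(w / scale) for w in weights], scale


weights = [0.9, -0.05, 0.4, 0.01, -0.7, 0.02]
sparse = magnitude_prune(weights, sparsity=0.5)   # half the weights become 0.0
quantized, scale = quantize_int8(weights)         # ints in [-127, 127] plus a scale
```

A sparse, quantized network needs fewer multiply-accumulates and less memory bandwidth, which is exactly what makes CPU inference competitive.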

Benefits of Integrating Neural Magic's DeepSparse with YOLO11

Before diving into how to deploy YOLO11 using DeepSparse, let's understand the benefits of using DeepSparse. Some of the main advantages include:

  • Enhanced Inference Speed: Achieves up to 525 FPS (on YOLO11n), significantly speeding up YOLO11's inference capabilities compared to traditional methods.

Enhanced Inference Speed

  • Optimized Model Efficiency: Uses pruning and quantization to enhance YOLO11's efficiency, reducing model size and computational requirements while maintaining accuracy.

Optimized Model Efficiency

  • High Performance on Standard CPUs: Delivers GPU-like performance on CPUs, providing a more accessible and cost-effective option for a variety of applications.

  • Streamlined Integration and Deployment: Offers user-friendly tools for easy integration of YOLO11 into applications, including image and video annotation features.

  • Support for Various Model Types: Compatible with both standard and sparsity-optimized YOLO11 models, adding deployment flexibility.

  • Cost-Effective and Scalable Solution: Reduces operational expenses and offers scalable deployment of advanced object detection models.

How Does Neural Magic's DeepSparse Technology Work?

Neural Magic's DeepSparse technology is inspired by the human brain's efficiency in neural network computation. It adopts two key principles from the brain, as follows:

  • Sparsity: The process of sparsification involves pruning redundant information from deep learning networks, leading to smaller and faster models without compromising accuracy. This technique reduces the network's size and computational needs significantly.

  • Locality of Reference: DeepSparse uses a unique execution approach, breaking the network into Tensor Columns. These columns are executed depth-wise and fit entirely within the CPU's cache. This approach mimics the brain's efficiency by minimizing data movement and maximizing the use of the CPU's cache.

How Neural Magic's DeepSparse Technology Works

For more details about how Neural Magic's DeepSparse technology works, check out their blog post.

Creating A Sparse Version of YOLO11 Trained on a Custom Dataset

SparseZoo, an open-source model repository by Neural Magic, offers a collection of pre-sparsified YOLO11 model checkpoints. With SparseML, seamlessly integrated with Ultralytics, users can effortlessly fine-tune these sparse checkpoints on their specific datasets using a straightforward command-line interface.

Check out Neural Magic's SparseML YOLO11 documentation for more details.

Usage: Deploying YOLO11 With DeepSparse

Deploying YOLO11 with Neural Magic's DeepSparse involves a few straightforward steps. Before diving into the usage instructions, be sure to check out the range of YOLO11 models offered by Ultralytics. This will help you choose the most appropriate model for your project requirements. Here's how you can get started.

Step 1: Installation

To install the required packages, run:

Installation

# Install the required packages
pip install deepsparse[yolov8]

Step 2: Exporting YOLO11 to ONNX Format

DeepSparse Engine requires YOLO11 models in ONNX format. Exporting your model to this format is essential for compatibility with DeepSparse. Use the following command to export YOLO11 models:

Export the Model

# Export YOLO11 model to ONNX format
yolo task=detect mode=export model=yolo11n.pt format=onnx opset=13

This command will save the yolo11n.onnx model to your disk.

Step 3: Deploying and Running Inferences

With your YOLO11 model in ONNX format, you can deploy and run inferences using DeepSparse. This can be done easily with their intuitive Python API:

Deploy and Run Inferences

from deepsparse import Pipeline

# Specify the path to your YOLO11 ONNX model
model_path = "path/to/yolo11n.onnx"

# Set up the DeepSparse Pipeline
yolo_pipeline = Pipeline.create(task="yolov8", model_path=model_path)

# Run the model on your images
images = ["path/to/image.jpg"]
pipeline_outputs = yolo_pipeline(images=images)
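In practice you will usually filter the raw detections by confidence before using them. The helper below is a minimal sketch: the parallel `boxes`/`scores`/`labels` list layout is an assumption based on common detection-pipeline schemas, so check the DeepSparse documentation for the exact fields your pipeline version returns:

```python
def filter_detections(boxes, scores, labels, conf_threshold=0.5):
    """Keep only the detections whose confidence clears the threshold.

    Assumes three parallel lists: one box, one score, and one label per
    detection (a common detection-output layout; verify against your
    pipeline's actual schema).
    """
    kept = [
        (box, score, label)
        for box, score, label in zip(boxes, scores, labels)
        if score >= conf_threshold
    ]
    # Unzip back into three parallel lists; return empty lists if nothing passed.
    return tuple(map(list, zip(*kept))) if kept else ([], [], [])


boxes = [[10, 20, 110, 220], [5, 5, 50, 60]]
scores = [0.91, 0.32]
labels = ["person", "dog"]
kept_boxes, kept_scores, kept_labels = filter_detections(boxes, scores, labels)
# With the default threshold of 0.5, only the "person" detection survives.
```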

Step 4: Benchmarking Performance

It's important to check that your YOLO11 model is performing optimally on DeepSparse. You can benchmark your model's performance to analyze throughput and latency:

Benchmarking

# Benchmark performance
deepsparse.benchmark model_path="path/to/yolo11n.onnx" --scenario=sync --input_shapes="[1,3,640,640]"

Step 5: Additional Features

DeepSparse provides additional features for practical integration of YOLO11 in applications, such as image annotation and dataset evaluation.

Additional Features

# For image annotation
deepsparse.yolov8.annotate --source "path/to/image.jpg" --model_filepath "path/to/yolo11n.onnx"

# For evaluating model performance on a dataset
deepsparse.yolov8.eval --model_path "path/to/yolo11n.onnx"

Running the annotate command processes your specified image, detecting objects and saving the annotated image with bounding boxes and classifications. The annotated images are stored in an annotation-results folder. This helps provide a visual representation of the model's detection capabilities.

Image Annotation Feature

After running the eval command, you will receive detailed output metrics such as precision, recall, and mAP (mean Average Precision). This provides a comprehensive view of your model's performance on the dataset. This functionality is particularly useful for fine-tuning and optimizing your YOLO11 models for specific use cases, ensuring high accuracy and efficiency.
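As a refresher on what those numbers mean: precision and recall come straight from true-positive, false-positive, and false-negative counts, while mAP additionally averages precision across recall levels and classes. A minimal sketch of the first two (illustrative only, not the eval command's implementation):

```python
def precision_recall(tp, fp, fn):
    """Compute precision and recall from detection counts.

    tp: correct detections, fp: spurious detections, fn: missed objects.
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall


# e.g. 80 correct detections, 20 spurious ones, 10 missed objects
p, r = precision_recall(tp=80, fp=20, fn=10)  # p = 0.8, r ≈ 0.889
```

High precision with low recall suggests the model is conservative (missing objects); the reverse suggests it is over-detecting. mAP summarizes the trade-off in a single number.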

Summary

This guide explored integrating Ultralytics' YOLO11 with Neural Magic's DeepSparse Engine. It highlighted how this integration enhances YOLO11's performance on CPU platforms, offering GPU-level efficiency and advanced neural network sparsity techniques.

For more detailed information and advanced usage, visit Neural Magic's DeepSparse documentation. Also, check out Neural Magic's documentation on the integration with YOLO11 here and watch a great session on it here.

Additionally, for a broader understanding of various YOLO11 integrations, visit the Ultralytics integration guide page, where you can discover a range of other exciting integration possibilities.

FAQ

What is Neural Magic's DeepSparse Engine and how does it optimize YOLO11 performance?

Neural Magic's DeepSparse Engine is an inference runtime designed to optimize the execution of neural networks on CPUs through advanced techniques such as sparsity, pruning, and quantization. By integrating DeepSparse with YOLO11, you can achieve GPU-like performance on standard CPUs, significantly enhancing inference speed, model efficiency, and overall performance while maintaining accuracy. For more details, check out the Neural Magic's DeepSparse section.

How can I install the needed packages to deploy YOLO11 using Neural Magic's DeepSparse?

Installing the required packages for deploying YOLO11 with Neural Magic's DeepSparse is straightforward. You can easily install them using the CLI. Here's the command you need to run:

pip install deepsparse[yolov8]

Once installed, follow the steps provided in the Installation section to set up your environment and start using DeepSparse with YOLO11.

How do I convert YOLO11 models to ONNX format for use with DeepSparse?

To convert YOLO11 models to the ONNX format, which is required for compatibility with DeepSparse, you can use the following CLI command:

yolo task=detect mode=export model=yolo11n.pt format=onnx opset=13

This command will export your YOLO11 model (yolo11n.pt) to ONNX format (yolo11n.onnx), which can be used by the DeepSparse Engine. For more information on model export, refer to the Exporting YOLO11 to ONNX Format section.

How do I benchmark YOLO11 performance on the DeepSparse Engine?

Benchmarking YOLO11 performance on DeepSparse helps you analyze throughput and latency to ensure your model is optimized. You can use the following CLI command to run a benchmark:

deepsparse.benchmark model_path="path/to/yolo11n.onnx" --scenario=sync --input_shapes="[1,3,640,640]"

This command will provide you with key performance metrics. For more details, see the Benchmarking Performance section.

Why should I use Neural Magic's DeepSparse with YOLO11 for object detection tasks?

Integrating Neural Magic's DeepSparse with YOLO11 offers several benefits:

  • Enhanced Inference Speed: Achieves up to 525 FPS, significantly speeding up YOLO11's capabilities.
  • Optimized Model Efficiency: Uses sparsity, pruning, and quantization to reduce model size and computational requirements while maintaining accuracy.
  • High Performance on Standard CPUs: Delivers GPU-like performance on cost-effective CPU hardware.
  • Streamlined Integration: Offers user-friendly tools for easy deployment and integration.
  • Flexibility: Supports both standard and sparsity-optimized YOLO11 models.
  • Cost-Effectiveness: Makes efficient use of resources, reducing operational costs.

For a deeper dive into these advantages, visit the Benefits of Integrating Neural Magic's DeepSparse with YOLO11 section.

