Optimizing YOLO11 Inferences with Neural Magic's DeepSparse Engine

When deploying object detection models like Ultralytics YOLO11 on different hardware, you can run into unique challenges, such as optimizing inference performance. This is where YOLO11's integration with Neural Magic's DeepSparse Engine comes in. It transforms the way YOLO11 models are executed and enables GPU-level performance directly on CPUs.

This guide shows you how to deploy YOLO11 using Neural Magic's DeepSparse, how to run inferences, and also how to benchmark performance to ensure it is optimized.


Neural Magic's DeepSparse

Neural Magic's DeepSparse is an inference run-time designed to optimize the execution of neural networks on CPUs. It applies advanced techniques like sparsity, pruning, and quantization to dramatically reduce computational demands while maintaining accuracy. DeepSparse offers an agile solution for efficient and scalable neural network execution across various devices.

Benefits of Integrating Neural Magic's DeepSparse with YOLO11

Before diving into deploying YOLO11 using DeepSparse, let's understand the benefits of using DeepSparse. Here are some key advantages:

  • Enhanced Inference Speed: Achieves up to 525 FPS (on YOLO11n), significantly speeding up YOLO11's inference capabilities compared to traditional methods.


  • Optimized Model Efficiency: Uses pruning and quantization to enhance YOLO11's efficiency, reducing model size and computational requirements while maintaining accuracy.


  • High Performance on Standard CPUs: Delivers GPU-comparable performance on CPUs, providing a more accessible and cost-effective option for a variety of applications.

  • Streamlined Integration and Deployment: Offers user-friendly tools for easy integration of YOLO11 into applications, including image and video annotation features.

  • Support for Various Model Types: Compatible with both standard and sparsity-optimized YOLO11 models, adding deployment flexibility.

  • Cost-Effective and Scalable Solution: Reduces operational expenses and offers scalable deployment of advanced object detection models.

How Does Neural Magic's DeepSparse Technology Work?

Neural Magic's DeepSparse technology is inspired by the human brain's efficiency in neural network computation. It adopts two key principles from the brain:

  • Sparsity: The process of sparsification involves pruning redundant information from deep learning networks, leading to smaller and faster models without compromising accuracy. This technique significantly reduces the network's size and computational needs (see the short sketch after this list for the core idea).

  • Locality of Reference: DeepSparse uses a unique execution method that breaks the network down into Tensor Columns. These columns are executed depth-wise and fit entirely within the CPU cache. This approach mimics the brain's efficiency by minimizing data movement and maximizing CPU cache utilization.
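
To make the sparsification idea above concrete, here is a minimal, framework-agnostic sketch of magnitude-based pruning in Python. It only illustrates the principle of zeroing out low-magnitude weights; DeepSparse and SparseML apply far more sophisticated, accuracy-aware sparsification during training.

# Illustrative sketch only: zero out the smallest-magnitude weights of a layer
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(64, 64)).astype(np.float32)

sparsity = 0.8  # target fraction of weights to prune
threshold = np.quantile(np.abs(weights), sparsity)
pruned_weights = np.where(np.abs(weights) < threshold, 0.0, weights)

print(f"Achieved sparsity: {(pruned_weights == 0).mean():.2%}")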


For more details on how Neural Magic's DeepSparse technology works, check out their blog post.

Creating A Sparse Version of YOLO11 Trained on a Custom Dataset

SparseZoo, an open-source model repository by Neural Magic, offers a collection of pre-sparsified YOLO11 model checkpoints. With SparseML, seamlessly integrated with Ultralytics, users can effortlessly fine-tune these sparse checkpoints on their specific datasets using a straightforward command-line interface.

Check out Neural Magic's SparseML YOLO11 documentation for more details.
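
If you want to try a pre-sparsified checkpoint without any fine-tuning, DeepSparse Pipelines can also be pointed at a SparseZoo model stub instead of a local ONNX file. The sketch below assumes stub support in your installed DeepSparse version, and the stub string itself is a placeholder; browse SparseZoo for the exact stub of the model you need.

from deepsparse import Pipeline

# Placeholder stub - look up the real SparseZoo stub for the sparse model you want
sparse_stub = "zoo:cv/detection/..."

# DeepSparse resolves SparseZoo stubs by downloading the model files (assumed behavior)
sparse_pipeline = Pipeline.create(task="yolov8", model_path=sparse_stub)
results = sparse_pipeline(images=["path/to/image.jpg"])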

Usage: Deploying YOLO11 Using DeepSparse

Deploying YOLO11 with Neural Magic's DeepSparse involves a few straightforward steps. Before diving into the usage instructions, be sure to check out the range of YOLO11 models offered by Ultralytics. This will help you choose the most appropriate model for your project requirements. Here's how you can get started.

Step 1: Installation

To install the required packages, run:

Installation

# Install the required packages
pip install deepsparse[yolov8]
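
A quick way to confirm the installation succeeded is to import the package from Python. This is just a minimal sanity check, assuming you are in the same environment you installed into:

# Verify that DeepSparse is importable in the current environment
from deepsparse import Pipeline  # noqa: F401

print("DeepSparse is installed and importable")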

Step 2: Exporting YOLO11 to ONNX Format

DeepSparse Engine requires YOLO11 models in ONNX format. Exporting your model to this format is essential for compatibility with DeepSparse. Use the following command to export YOLO11 models:

Model Export

# Export YOLO11 model to ONNX format
yolo task=detect mode=export model=yolo11n.pt format=onnx opset=13

This command will save the yolo11n.onnx model to your disk.
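
If you prefer to stay in Python, the same export can be done with the Ultralytics Python API. A short equivalent sketch, assuming the ultralytics package is installed:

from ultralytics import YOLO

# Load the PyTorch checkpoint and export it to ONNX with opset 13
model = YOLO("yolo11n.pt")
onnx_path = model.export(format="onnx", opset=13)
print(f"ONNX model saved to: {onnx_path}")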

Step 3: Deploying and Running Inferences

With your YOLO11 model in ONNX format, you can deploy and run inferences using DeepSparse. This can be done easily with their intuitive Python API:

Deploy and Run Inferences

from deepsparse import Pipeline

# Specify the path to your YOLO11 ONNX model
model_path = "path/to/yolo11n.onnx"

# Set up the DeepSparse Pipeline
yolo_pipeline = Pipeline.create(task="yolov8", model_path=model_path)

# Run the model on your images
images = ["path/to/image.jpg"]
pipeline_outputs = yolo_pipeline(images=images)
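
The pipeline returns one set of detections per input image. Continuing from the snippet above, here is a minimal sketch of inspecting the results; the boxes, scores, and labels field names are an assumption that may vary between DeepSparse versions, so check the output schema of your installed release.

# Inspect the detections for the first image (field names assumed; may differ by version)
for box, score, label in zip(
    pipeline_outputs.boxes[0], pipeline_outputs.scores[0], pipeline_outputs.labels[0]
):
    print(f"{label}: {score:.2f} at {box}")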

Step 4: Benchmarking Performance

It's important to check that your YOLO11 model is performing optimally on DeepSparse. You can benchmark your model's performance to analyze throughput and latency:

Benchmarking

# Benchmark performance
deepsparse.benchmark model_path="path/to/yolo11n.onnx" --scenario=sync --input_shapes="[1,3,640,640]"
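
The deepsparse.benchmark CLI above is the recommended tool. If you just want a rough sanity check from Python, you can also time the pipeline directly. This is a minimal sketch reusing the placeholder paths from Step 3; it measures wall-clock latency only and is not a substitute for the official benchmark:

import time

from deepsparse import Pipeline

pipeline = Pipeline.create(task="yolov8", model_path="path/to/yolo11n.onnx")
images = ["path/to/image.jpg"]

# Warm up once, then time repeated inferences
pipeline(images=images)
runs = 50
start = time.perf_counter()
for _ in range(runs):
    pipeline(images=images)
elapsed = time.perf_counter() - start

print(f"Average latency: {1000 * elapsed / runs:.1f} ms ({runs / elapsed:.1f} FPS)")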

Step 5: Additional Features

DeepSparse provides additional features for practical integration of YOLO11 in applications, such as image annotation and dataset evaluation.

Additional Features

# For image annotation
deepsparse.yolov8.annotate --source "path/to/image.jpg" --model_filepath "path/to/yolo11n.onnx"

# For evaluating model performance on a dataset
deepsparse.yolov8.eval --model_path "path/to/yolo11n.onnx"

Running the annotate command processes the image you specified, detects objects, and saves the annotated image with bounding boxes and classifications. The annotated image is stored in an annotation-results folder. This provides a visual representation of the model's detection capabilities.

Image Annotation Feature
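
To pick up the most recent annotated output programmatically, a small sketch like the one below can help. It assumes the default annotation-results folder mentioned above and JPEG output; adjust the pattern if your results are saved differently.

from pathlib import Path

# Default output folder created by deepsparse.yolov8.annotate (see above)
results_dir = Path("annotation-results")

if results_dir.exists():
    annotated = sorted(results_dir.rglob("*.jpg"), key=lambda p: p.stat().st_mtime)
    print(f"Latest annotated image: {annotated[-1]}" if annotated else "No annotated images found yet")
else:
    print("No annotation-results folder yet - run deepsparse.yolov8.annotate first")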

After running the eval command, you will receive detailed output metrics such as precision, recall, and mAP (mean Average Precision). This provides a comprehensive view of your model's performance on the dataset. This functionality is particularly useful for fine-tuning and optimizing your YOLO11 models for specific use cases, ensuring high accuracy and efficiency.

Summary

This guide explored integrating Ultralytics' YOLO11 with Neural Magic's DeepSparse Engine. It highlighted how this integration enhances YOLO11's performance on CPU platforms, offering GPU-level efficiency and advanced neural network sparsity techniques.

For more detailed information and advanced usage, visit Neural Magic's DeepSparse documentation. Also, check out Neural Magic's documentation on the integration with YOLO11 here and watch a great session on it here.

Additionally, for a broader understanding of various YOLO11 integrations, visit the Ultralytics integration guide page, where you can discover a range of other exciting integration possibilities.

FAQ

What is Neural Magic's DeepSparse Engine and how does it optimize YOLO11 performance?

Neural Magic's DeepSparse Engine is an inference runtime designed to optimize the execution of neural networks on CPUs through advanced techniques such as sparsity, pruning, and quantization. By integrating DeepSparse with YOLO11, you can achieve GPU-like performance on standard CPUs, significantly enhancing inference speed, model efficiency, and overall performance while maintaining accuracy. For more details, check out the Neural Magic's DeepSparse section.

How can I install the needed packages to deploy YOLO11 using Neural Magic's DeepSparse?

Installing the required packages for deploying YOLO11 with Neural Magic's DeepSparse is straightforward. You can easily install them using the CLI. Here's the command you need to run:

pip install deepsparse[yolov8]

Once installed, follow the steps provided in the Installation section to set up your environment and start using DeepSparse with YOLO11.

How do I convert YOLO11 models to ONNX format for use with DeepSparse?

To convert YOLO11 models to the ONNX format, which is required for compatibility with DeepSparse, you can use the following CLI command:

yolo task=detect mode=export model=yolo11n.pt format=onnx opset=13

This command will export your YOLO11 model (yolo11n.pt) to a format (yolo11n.onnx) that can be used by the DeepSparse Engine. You can find more information about model export in the Model Export section.

How do I benchmark YOLO11 performance on the DeepSparse Engine?

Benchmarking YOLO11 performance on DeepSparse helps you analyze throughput and latency to ensure your model is optimized. You can use the following CLI command to run a benchmark:

deepsparse.benchmark model_path="path/to/yolo11n.onnx" --scenario=sync --input_shapes="[1,3,640,640]"

This command will provide you with essential performance metrics. For more details, see the Benchmarking Performance section.

Why should I use Neural Magic's DeepSparse with YOLO11 for object detection tasks?

Integrating Neural Magic's DeepSparse with YOLO11 offers several benefits:

  • Enhanced Inference Speed: Achieves up to 525 FPS, significantly speeding up YOLO11's capabilities.
  • Optimized Model Efficiency: Uses sparsity, pruning, and quantization techniques to reduce model size and computational requirements while maintaining accuracy.
  • High Performance on Standard CPUs: Delivers GPU-comparable performance on cost-effective CPU hardware.
  • Streamlined Integration: User-friendly tools for easy deployment and integration.
  • Flexibility: Supports both standard and sparsity-optimized YOLO11 models.
  • Cost-Effectiveness: Reduces operational expenses through efficient resource use.

For a deeper dive into these advantages, visit the Benefits of Integrating Neural Magic's DeepSparse with YOLO11 section.

