Optimizing YOLO11 Inferences with Neural Magic's DeepSparse Engine
When deploying object detection models like Ultralytics YOLO11 on various hardware, you can run into unique challenges, such as getting the most performance out of your available compute. This is where YOLO11's integration with Neural Magic's DeepSparse Engine steps in. It transforms the way YOLO11 models are executed and enables GPU-level performance directly on CPUs.
This guide shows you how to deploy YOLO11 using Neural Magic's DeepSparse, how to run inferences, and also how to benchmark performance to ensure it is optimized.
Neural Magic's DeepSparse
Neural Magic's DeepSparse is an inference run-time designed to optimize the execution of neural networks on CPUs. It applies advanced techniques like sparsity, pruning, and quantization to dramatically reduce computational demands while maintaining accuracy. DeepSparse offers an agile solution for efficient and scalable neural network execution across various devices.
Benefits of Integrating Neural Magic's DeepSparse with YOLO11
Before diving into deploying YOLO11 using DeepSparse, let's understand the benefits of using DeepSparse. Here are some key advantages:
- Enhanced Inference Speed: Achieves up to 525 FPS (on YOLO11n), significantly speeding up YOLO11's inference capabilities compared to traditional methods.
- Optimized Model Efficiency: Uses pruning and quantization to enhance YOLO11's efficiency, reducing model size and computational requirements while maintaining accuracy.
- High Performance on Standard CPUs: Delivers GPU-comparable performance on standard CPUs, providing a more accessible and cost-effective option for various applications.
- Streamlined Integration and Deployment: Offers user-friendly tools for easy integration of YOLO11 into applications, including image and video annotation features.
- Support for Various Model Types: Compatible with both standard and sparsity-optimized YOLO11 models, adding deployment flexibility.
- Cost-Effective and Scalable Solution: Reduces operational expenses and offers scalable deployment of advanced object detection models.
How Does Neural Magic's DeepSparse Technology Work?
Neural Magic's DeepSparse technology is inspired by the human brain's efficiency in neural network computation. It adopts two key principles from the brain as follows:
- Sparsity: The process of sparsification involves pruning redundant information from deep learning networks, leading to smaller and faster models without compromising accuracy. This technique reduces the network's size and computational needs significantly.
- Locality of Reference: DeepSparse uses a unique execution method, breaking the network into Tensor Columns. These columns are executed depthwise and fit entirely within the CPU cache. This approach mimics the brain's efficiency by minimizing data movement and maximizing CPU cache use.
For more details on how Neural Magic's DeepSparse technology works, check out their blog post.
Creating A Sparse Version of YOLO11 Trained on a Custom Dataset
SparseZoo, an open-source model repository by Neural Magic, offers a collection of pre-sparsified YOLO11 model checkpoints. With SparseML, seamlessly integrated with Ultralytics, users can effortlessly fine-tune these sparse checkpoints on their specific datasets using a straightforward command-line interface.
Check out Neural Magic's SparseML YOLO11 documentation for more details.
Usage: Deploying YOLO11 Using DeepSparse
Deploying YOLO11 with Neural Magic's DeepSparse involves a few straightforward steps. Before diving into the usage instructions, be sure to check out the range of YOLO11 models offered by Ultralytics. This will help you choose the most appropriate model for your project requirements. Here's how you can get started.
Step 1: Installation
To install the required packages, run:
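A minimal install sketch follows; the `[yolov8]` extra is based on Neural Magic's DeepSparse packaging and is worth verifying against your DeepSparse version:

```shell
# Install DeepSparse with YOLO support, plus Ultralytics for model export
pip install "deepsparse[yolov8]" ultralytics
```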
Step 2: Exporting YOLO11 to ONNX Format
DeepSparse Engine requires YOLO11 models in ONNX format. Exporting your model to this format is essential for compatibility with DeepSparse. Use the following command to export YOLO11 models:
Model Export
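A minimal export sketch using the Ultralytics CLI (yolo11n.pt is used here as an example checkpoint; DeepSparse may require a specific ONNX opset, so adjust `opset` if the engine rejects the default):

```shell
# Export the YOLO11n checkpoint to ONNX for use with DeepSparse
yolo export model=yolo11n.pt format=onnx opset=13
```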
This command will save the yolo11n.onnx model to your disk.
Step 3: Deploying and Running Inferences
With your YOLO11 model in ONNX format, you can deploy and run inferences using DeepSparse. This can be done easily with their intuitive Python API:
Deploy and Run Inferences
from deepsparse import Pipeline
# Specify the path to your YOLO11 ONNX model
model_path = "path/to/yolo11n.onnx"
# Set up the DeepSparse Pipeline
yolo_pipeline = Pipeline.create(task="yolov8", model_path=model_path)
# Run the model on your images
images = ["path/to/image.jpg"]
pipeline_outputs = yolo_pipeline(images=images)
Step 4: Benchmarking Performance
It's important to check that your YOLO11 model is performing optimally on DeepSparse. You can benchmark your model's performance to analyze throughput and latency:
Benchmarking
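The benchmark can be run from the CLI; the `sync` scenario measures single-stream latency, and `input_shapes` should match your export resolution:

```shell
# Benchmark the exported ONNX model on DeepSparse
deepsparse.benchmark model_path="path/to/yolo11n.onnx" --scenario=sync --input_shapes="[1,3,640,640]"
```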
Step 5: Additional Features
DeepSparse provides additional features for practical integration of YOLO11 in applications, such as image annotation and dataset evaluation.
Additional Features
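A sketch of the annotation command, with flag names following DeepSparse's YOLO CLI conventions; verify them with `--help` in your installed version:

```shell
# Annotate an image with detections; output is saved under annotation-results/
deepsparse.yolov8.annotate --source "path/to/image.jpg" --model_filepath "path/to/yolo11n.onnx"
```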
Running the annotate command processes your specified image, detecting objects and saving the annotated image with bounding boxes and classifications. The annotated image will be stored in an annotation-results folder. This helps provide a visual representation of the model's detection capabilities.
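A hypothetical sketch of the evaluation command; the exact flag names vary between DeepSparse releases, so check `deepsparse.yolov8.eval --help` before relying on this invocation:

```shell
# Evaluate the ONNX model on a detection dataset and report precision, recall, and mAP
deepsparse.yolov8.eval --model_path "path/to/yolo11n.onnx"
```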
After running the eval command, you will receive detailed output metrics such as precision, recall, and mAP (mean Average Precision). This provides a comprehensive view of your model's performance on the dataset. This functionality is particularly useful for fine-tuning and optimizing your YOLO11 models for specific use cases, ensuring high accuracy and efficiency.
Summary
This guide explored integrating Ultralytics' YOLO11 with Neural Magic's DeepSparse Engine. It highlighted how this integration enhances YOLO11's performance on CPU platforms, offering GPU-level efficiency and advanced neural network sparsity techniques.
For more detailed information and advanced usage, visit Neural Magic's DeepSparse documentation. Also, check out Neural Magic's documentation on the integration with YOLO11 here and watch a great session on it here.
Additionally, for a broader understanding of various YOLO11 integrations, visit the Ultralytics integration guide page, where you can discover a range of other exciting integration possibilities.
FAQ
What is Neural Magic's DeepSparse Engine and how does it optimize YOLO11 performance?
Neural Magic's DeepSparse Engine is an inference runtime designed to optimize the execution of neural networks on CPUs through advanced techniques such as sparsity, pruning, and quantization. By integrating DeepSparse with YOLO11, you can achieve GPU-like performance on standard CPUs, significantly enhancing inference speed, model efficiency, and overall performance while maintaining accuracy. For more details, check out the Neural Magic's DeepSparse section.
How can I install the needed packages to deploy YOLO11 using Neural Magic's DeepSparse?
Installing the required packages for deploying YOLO11 with Neural Magic's DeepSparse is straightforward. You can easily install them using the CLI. Here's the command you need to run:
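A sketch of the install command, assuming the `[yolov8]` extra from Neural Magic's DeepSparse packaging; confirm the extra's name for your DeepSparse release:

```shell
# Install DeepSparse with YOLO support and Ultralytics
pip install "deepsparse[yolov8]" ultralytics
```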
Once installed, follow the steps provided in the Installation section to set up your environment and start using DeepSparse with YOLO11.
How do I convert YOLO11 models to ONNX format for use with DeepSparse?
To convert YOLO11 models to the ONNX format, which is required for compatibility with DeepSparse, you can use the following CLI command:
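A minimal export sketch with the Ultralytics CLI (the `opset` value is an assumption; adjust it if your DeepSparse version requires a different one):

```shell
# Export a YOLO11 checkpoint to ONNX
yolo export model=yolo11n.pt format=onnx opset=13
```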
This command will export your YOLO11 model (yolo11n.pt) to a format (yolo11n.onnx) that can be used by the DeepSparse Engine. You will find more information on model exporting in the Model Export section.
How do I benchmark YOLO11 performance on the DeepSparse Engine?
Benchmarking YOLO11 performance on DeepSparse helps you analyze throughput and latency to ensure your model is optimized. You can use the following CLI command to run a benchmark:
deepsparse.benchmark model_path="path/to/yolo11n.onnx" --scenario=sync --input_shapes="[1,3,640,640]"
This command will provide you with essential performance metrics. For more details, see the Benchmarking Performance section.
Why should I use Neural Magic's DeepSparse with YOLO11 for object detection tasks?
Integrating Neural Magic's DeepSparse with YOLO11 offers several benefits:
- Enhanced Inference Speed: Achieves up to 525 FPS, significantly speeding up YOLO11's capabilities.
- Optimized Model Efficiency: Uses sparsification, pruning, and quantization techniques to reduce model size and computational requirements while maintaining accuracy.
- High Performance on Standard CPUs: Delivers GPU-comparable performance on cost-effective CPU hardware.
- Streamlined Integration: User-friendly tools for easy deployment and integration.
- Flexibility: Supports both standard and sparsity-optimized YOLO11 models.
- Cost-Effective: Reduces operational expenses through efficient resource use.
For a deeper dive into these advantages, visit the Benefits of Integrating Neural Magic's DeepSparse with YOLO11 section.