Comprehensive Guide to Ultralytics YOLOv5
Welcome to the Ultralytics YOLOv5 🚀 documentation! YOLOv5, the fifth iteration of the revolutionary "You Only Look Once" object detection model, is designed to deliver high-speed, high-accuracy results in real time.
YOLOv5 is built on PyTorch, a powerful deep learning framework that has garnered immense popularity for its versatility, ease of use, and high performance. Our documentation guides you through the installation process, explains the architectural nuances of the model, showcases various use cases, and provides a series of detailed tutorials. These resources will help you harness the full potential of YOLOv5 for your computer vision projects. Let's get started!
Explore and Learn
Here's a compilation of comprehensive tutorials that will guide you through different aspects of YOLOv5.
- Train Custom Data 🚀 RECOMMENDED: Learn how to train the YOLOv5 model on your custom dataset.
- Tips for Best Training Results ☘️: Uncover practical tips to optimize your model training process.
- Multi-GPU Training: Understand how to leverage multiple GPUs to expedite your training.
- PyTorch Hub 🌟 NEW: Learn to load pre-trained models via PyTorch Hub.
- TFLite, ONNX, CoreML, TensorRT Export 🚀: Understand how to export your model to different formats.
- Test-Time Augmentation (TTA): Explore how to use TTA to improve your model's prediction accuracy.
- Model Ensembling: Learn the strategy of combining multiple models for improved performance.
- Model Pruning/Sparsity: Understand pruning and sparsity concepts, and how to create a more efficient model.
- Hyperparameter Evolution: Discover the process of automated hyperparameter tuning for better model performance.
- Transfer Learning with Frozen Layers: Learn how to implement transfer learning by freezing layers in YOLOv5.
- Architecture Summary 🌟: Delve into the structural details of the YOLOv5 model.
- Roboflow for Datasets: Understand how to utilize Roboflow for dataset management, labeling, and active learning.
- ClearML Logging 🌟: Learn how to integrate ClearML for efficient logging during your model training.
- YOLOv5 with Neural Magic: Discover how to use Neural Magic's DeepSparse to prune and quantize your YOLOv5 model.
- Comet Logging 🌟 NEW: Explore how to utilize Comet for improved model training logging.
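Several of the techniques above, notably Test-Time Augmentation and Model Ensembling, share one core idea: average predictions from multiple forward passes. The sketch below illustrates that idea with a stand-in `model` function (a hypothetical placeholder, not YOLOv5's actual API); in practice YOLOv5 applies TTA internally when you pass the `--augment` flag at inference time.

```python
def model(image):
    """Stand-in detector (hypothetical): score each pixel by brightness."""
    return [[pixel / 255.0 for pixel in row] for row in image]

def hflip(image):
    """Flip each row of a 2D image left-to-right."""
    return [row[::-1] for row in image]

def tta_predict(image):
    """Average predictions over the original and a horizontally flipped view."""
    pred = model(image)
    # Run on the flipped image, then un-flip so predictions align spatially.
    flipped_pred = hflip(model(hflip(image)))
    return [
        [(a + b) / 2.0 for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(pred, flipped_pred)
    ]

image = [[0, 255], [128, 64]]
print(tta_predict(image))
```

Model ensembling follows the same averaging pattern, except the extra predictions come from different models rather than augmented views of the input.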
Supported Environments
Ultralytics provides a range of ready-to-use environments, each pre-installed with essential dependencies such as CUDA, CUDNN, Python, and PyTorch, to kickstart your projects.
- Free GPU Notebooks: Gradient, Google Colab, Kaggle
- Google Cloud: GCP Quickstart Guide
- Amazon: AWS Quickstart Guide
- Azure: AzureML Quickstart Guide
- Docker: Docker Quickstart Guide
Project Status
The YOLOv5 GitHub Actions Continuous Integration (CI) badge indicates that all CI tests are successfully passing. These CI tests rigorously check the functionality and performance of YOLOv5 across various key aspects: training, validation, inference, export, and benchmarks. They ensure consistent and reliable operation on macOS, Windows, and Ubuntu, with tests conducted every 24 hours and upon each new commit.
Connect and Contribute
Your journey with YOLOv5 doesn't have to be a solitary one. Join our vibrant community on GitHub, connect with professionals on LinkedIn, share your results on Twitter, and find educational resources on YouTube. Follow us on TikTok and BiliBili for more engaging content.
Interested in contributing? We welcome contributions of all forms, from code improvements and bug reports to documentation updates. Check out our contributing guidelines for more information.
We're excited to see the innovative ways you'll use YOLOv5. Dive in, experiment, and revolutionize your computer vision projects! 🚀
FAQ
What are the key features of Ultralytics YOLOv5?
Ultralytics YOLOv5 is renowned for its high-speed and high-accuracy object detection capabilities. Built on PyTorch, it is versatile and user-friendly, making it suitable for various computer vision projects. Key features include real-time inference, support for multiple training tricks like Test-Time Augmentation (TTA) and Model Ensembling, and compatibility with export formats such as TFLite, ONNX, CoreML, and TensorRT. To delve deeper into how Ultralytics YOLOv5 can elevate your project, explore our TFLite, ONNX, CoreML, TensorRT Export guide.
How can I train a custom YOLOv5 model on my dataset?
Training a custom YOLOv5 model on your dataset involves a few key steps. First, prepare your dataset in the required format, annotated with labels. Then, configure the YOLOv5 training parameters and start the training process using the `train.py` script. For an in-depth tutorial on this process, consult our Train Custom Data guide, which provides step-by-step instructions to ensure optimal results for your specific use case.
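YOLO-format annotation expects one `.txt` label file per image, with one line per object in the form `class x_center y_center width height`, all coordinates normalized to [0, 1]. A minimal sketch of converting a pixel-space corner box into such a line (the helper name `to_yolo_label` is ours, for illustration):

```python
def to_yolo_label(cls, x_min, y_min, x_max, y_max, img_w, img_h):
    """Convert a pixel-space corner box into a YOLO label line:
    'class x_center y_center width height', normalized to [0, 1]."""
    x_center = (x_min + x_max) / 2.0 / img_w
    y_center = (y_min + y_max) / 2.0 / img_h
    width = (x_max - x_min) / img_w
    height = (y_max - y_min) / img_h
    return f"{cls} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# A 100x200-pixel box with top-left corner (50, 100) in a 640x480 image:
print(to_yolo_label(0, 50, 100, 150, 300, 640, 480))
# → 0 0.156250 0.416667 0.156250 0.416667
```

Tools like Roboflow can export datasets directly in this format, but the conversion above is useful when writing your own annotation pipeline.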
Why should I use Ultralytics YOLOv5 over other object detection models like RCNN?
Ultralytics YOLOv5 is preferred over models like RCNN due to its superior speed and accuracy in real-time object detection. YOLOv5 processes the entire image in one go, making it significantly faster compared to the region-based approach of RCNN, which involves multiple passes. Additionally, YOLOv5's seamless integration with various export formats and extensive documentation make it an excellent choice for both beginners and professionals. Learn more about the architectural advantages in our Architecture Summary.
How can I optimize YOLOv5 model performance during training?
Optimizing YOLOv5 model performance involves tuning various hyperparameters and incorporating techniques like data augmentation and transfer learning. Ultralytics provides comprehensive resources on hyperparameter evolution and pruning/sparsity to improve model efficiency. You can discover practical tips in our Tips for Best Training Results guide, which offers actionable insights for achieving optimal performance during training.
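At its core, hyperparameter evolution repeatedly mutates a parent set of hyperparameters, evaluates the result, and keeps the best candidate. The loop below is a toy illustration of that mutate-and-select idea only; the names, bounds, and `fitness` stand-in are our assumptions, while YOLOv5's real implementation is invoked via `train.py --evolve` and scores candidates by actual training metrics.

```python
import random

# Illustrative hyperparameters with (lower, upper) bounds -- not YOLOv5's real search space.
BOUNDS = {"lr0": (1e-5, 1e-1), "momentum": (0.6, 0.98), "weight_decay": (0.0, 0.001)}

def mutate(parent, rng, sigma=0.2):
    """Randomly perturb each hyperparameter, clipping to its bounds."""
    child = {}
    for key, value in parent.items():
        low, high = BOUNDS[key]
        child[key] = min(high, max(low, value * (1 + rng.gauss(0, sigma))))
    return child

def fitness(hyp):
    """Toy stand-in for 'train a model and report mAP' (hypothetical)."""
    return -abs(hyp["lr0"] - 0.01) - abs(hyp["momentum"] - 0.937)

rng = random.Random(0)
best = {"lr0": 0.05, "momentum": 0.8, "weight_decay": 0.0005}
for _ in range(50):  # each generation: mutate the best candidate, keep the winner
    child = mutate(best, rng)
    if fitness(child) > fitness(best):
        best = child
print(best)
```

In the real workflow each fitness evaluation is a full (shortened) training run, which is why evolution is usually run on reduced epochs or a dataset subset before the final full training.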
What environments are supported for running YOLOv5 applications?
Ultralytics YOLOv5 supports a variety of environments, including free GPU notebooks on Gradient, Google Colab, and Kaggle, as well as major cloud platforms like Google Cloud, Amazon AWS, and Azure. Docker images are also available for convenient setup. For a detailed guide on setting up these environments, check our Supported Environments section, which includes step-by-step instructions for each platform.