# Train Custom Data 📌
📚 This guide explains how to train your own custom dataset with YOLOv5 🚀. UPDATED 25 September 2022.
Use this guide with the YOLOv5 Custom Training Notebook.
## Before You Start
## Train On Custom Data
Creating a custom model to detect your objects is an iterative process of collecting and organizing images, labeling your objects of interest, training a model, deploying it into the wild to make predictions, and then using that deployed model to collect examples of edge cases to repeat and improve.
## 1. Create Dataset
YOLOv5 models must be trained on labelled data in order to learn classes of objects in that data. There are two options for creating your dataset before you start training:
**Use Roboflow to label, prepare, and host your custom data automatically in YOLO format 🚀 NEW**

### 1.1 Collect Images

Your model will learn by example. Training on images similar to the ones it will see in the wild is of the utmost importance. Ideally, you will collect a wide variety of images from the same configuration (camera, angle, lighting, etc.) as you will ultimately deploy your project. If this is not possible, you can start from [a public dataset](https://universe.roboflow.com/?ref=ultralytics) to train your initial model and then [sample images from the wild during inference](https://blog.roboflow.com/computer-vision-active-learning-tips/?ref=ultralytics) to improve your dataset and model iteratively.

### 1.2 Create Labels

Once you have collected images, you will need to annotate the objects of interest to create a ground truth for your model to learn from.
When generating a dataset version in Roboflow, the recommended preprocessing steps are:

* **Auto-Orient** - to strip EXIF orientation from your images.
* **Resize (Stretch)** - to the square input size of your model (640x640 is the YOLOv5 default).

Generating a version will give you a point-in-time snapshot of your dataset, so you can always go back and compare your future model training runs against it, even if you add more images or change its configuration later.
Export in `YOLOv5 Pytorch` format, then copy the snippet into your training script or notebook to download your dataset.
Now continue with `2. Select a Model`.
**Or manually prepare your dataset**

### 1.1 Create dataset.yaml

[COCO128](https://www.kaggle.com/ultralytics/coco128) is a small tutorial dataset composed of the first 128 images in [COCO](http://cocodataset.org/#home) train2017. These same 128 images are used for both training and validation to verify that our training pipeline is capable of overfitting. [data/coco128.yaml](https://github.com/ultralytics/yolov5/blob/master/data/coco128.yaml), shown below, is the dataset config file that defines 1) the dataset root directory `path` and relative paths to `train` / `val` / `test` image directories (or `*.txt` files with image paths) and 2) a class `names` dictionary:
```yaml
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/coco128  # dataset root dir
train: images/train2017  # train images (relative to 'path') 128 images
val: images/train2017  # val images (relative to 'path') 128 images
test:  # test images (optional)

# Classes (80 COCO classes)
names:
  0: person
  1: bicycle
  2: car
  ...
  77: teddy bear
  78: hair drier
  79: toothbrush
```
### 1.2 Create Labels

After labeling your images, export your labels to **YOLO format**, with one `*.txt` file per image (if an image contains no objects, no `*.txt` file is required). The `*.txt` file specifications are:

* One row per object.
* Each row is in `class x_center y_center width height` format.
* Box coordinates must be normalized 0-1. If your boxes are in pixels, divide `x_center` and `width` by image width, and `y_center` and `height` by image height.
* Class numbers are zero-indexed (start from 0).

For example, the label file for an image containing two persons (class `0`) and a tie (class `27`) has three rows, one per object.
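As a concrete check of the format, here is a minimal sketch (a hypothetical helper, not part of YOLOv5) that converts a pixel-space box to a normalized label row:

```python
def to_yolo_row(cls, x_min, y_min, x_max, y_max, img_w, img_h):
    # Normalize a pixel-space box (x_min, y_min, x_max, y_max) into the
    # 'class x_center y_center width height' row YOLOv5 expects, all 0-1.
    x_center = (x_min + x_max) / 2 / img_w
    y_center = (y_min + y_max) / 2 / img_h
    width = (x_max - x_min) / img_w
    height = (y_max - y_min) / img_h
    return f"{cls} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# e.g. a person (class 0) filling the left half of a 1280x720 image
print(to_yolo_row(0, 0, 0, 640, 720, 1280, 720))
# -> 0 0.250000 0.500000 0.500000 1.000000
```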
### 1.3 Organize Directories

Organize your train and val images and labels according to the example below. YOLOv5 assumes `/coco128` is inside a `/datasets` directory **next to** the `/yolov5` directory. **YOLOv5 locates labels automatically for each image** by replacing the last instance of `/images/` in each image path with `/labels/` (e.g. `../datasets/coco128/images/im0.jpg` → `../datasets/coco128/labels/im0.txt`).
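That label-location rule can be sketched as a small function (a hypothetical helper that assumes the path contains `/images/`; YOLOv5's internal implementation differs slightly):

```python
import os

def img2label_path(img_path):
    # Replace the last '/images/' with '/labels/' and swap the image
    # extension for '.txt', mirroring how YOLOv5 locates label files.
    head, _sep, tail = img_path.rpartition("/images/")
    root, _ext = os.path.splitext(tail)
    return head + "/labels/" + root + ".txt"

print(img2label_path("../datasets/coco128/images/train2017/000000000009.jpg"))
# -> ../datasets/coco128/labels/train2017/000000000009.txt
```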
## 2. Select a Model

Select a pretrained model to start training from. Here we select YOLOv5s, the second-smallest and fastest model available.
## 3. Train

Train a YOLOv5s model on COCO128 by specifying the dataset, batch size, image size, and either pretrained `--weights yolov5s.pt` (recommended) or randomly initialized `--weights '' --cfg yolov5s.yaml` (not recommended). Pretrained weights are auto-downloaded from the latest YOLOv5 release.
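From inside the cloned `yolov5` directory, a short smoke-test run looks like this (3 epochs shown for a quick check; increase `--epochs` for real training):

```shell
python train.py --img 640 --epochs 3 --data coco128.yaml --weights yolov5s.pt
```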
💡 ProTip: Add `--cache ram` or `--cache disk` to speed up training (requires significant RAM/disk resources).
💡 ProTip: Always train from a local dataset. Mounted or network drives like Google Drive will be very slow.
## ClearML Logging and Automation 🌟 NEW
ClearML is completely integrated into YOLOv5 to track your experimentation, manage dataset versions and even remotely execute training runs. To enable ClearML:
1. `pip install clearml`
2. Run `clearml-init` to connect to a ClearML server (deploy your own open-source server, or use our free hosted server)
You'll get all the expected features from an experiment manager: live updates, model upload, experiment comparison, etc. ClearML also tracks uncommitted changes and installed packages, for example, which makes ClearML Tasks (our name for experiments) reproducible on different machines. With only one extra line, we can schedule a YOLOv5 training task on a queue to be executed by any number of ClearML Agents (workers).
You can use ClearML Data to version your dataset and then pass it to YOLOv5 simply using its unique ID. This will help you keep track of your data without adding extra hassle. Explore the ClearML Tutorial for details!
## Weights & Biases Logging
Weights & Biases (W&B) is integrated with YOLOv5 for real-time visualization and cloud logging of training runs. This allows for better run comparison and introspection, as well as improved visibility and collaboration for teams. To enable W&B, `pip install wandb`, then train normally (you will be guided through setup on first use).
## Local Logging

All training results are saved to `runs/train/` with incrementing run directories, i.e. `runs/train/exp`, `runs/train/exp2`, etc. Each run directory contains train and val statistics, mosaics, labels, predictions and augmented mosaics, as well as metrics and charts including precision-recall (PR) curves and confusion matrices.
`results.csv` is updated after each epoch and plotted as `results.png` after training completes. You can also plot any `results.csv` file manually.
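Inside the repo, `from utils.plots import plot_results; plot_results('path/to/results.csv')` does this. Outside the repo, a minimal pandas/matplotlib sketch (a hypothetical helper; column names in your `results.csv` may vary) looks like:

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend for scripts/CI
import matplotlib.pyplot as plt

def plot_results_csv(csv_path, out_path="results.png"):
    # Plot every metric column in a YOLOv5-style results.csv against epoch.
    df = pd.read_csv(csv_path)
    df.columns = [c.strip() for c in df.columns]  # YOLOv5 pads headers with spaces
    cols = [c for c in df.columns if c != "epoch"]
    fig, axes = plt.subplots(1, len(cols), figsize=(4 * len(cols), 3), squeeze=False)
    for ax, col in zip(axes[0], cols):
        ax.plot(df["epoch"], df[col], marker=".")
        ax.set_title(col, fontsize=9)
        ax.set_xlabel("epoch")
    fig.tight_layout()
    fig.savefig(out_path)
    plt.close(fig)
    return out_path
```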
Once your model is trained, you can use your best checkpoint `runs/train/exp/weights/best.pt` to:
* Run CLI or Python inference on new images and videos
* Validate accuracy on train, val and test splits
* Export to TensorFlow, Keras, ONNX, TFLite, TF.js, CoreML and TensorRT formats
* Evolve hyperparameters to improve performance
* Improve your model by sampling real-world images and adding them to your dataset
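The first few of those steps look like this as CLI calls from the repository root (the checkpoint path and `--source` below are placeholders for your own files):

```shell
python detect.py --weights runs/train/exp/weights/best.pt --source path/to/images  # inference
python val.py --weights runs/train/exp/weights/best.pt --data coco128.yaml         # validate
python export.py --weights runs/train/exp/weights/best.pt --include onnx           # export to ONNX
```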
## Environments

YOLOv5 may be run in any of the following up-to-date, verified environments (with all dependencies preinstalled):

- Notebooks with free GPU:
- Google Cloud Deep Learning VM. See GCP Quickstart Guide
- Amazon Deep Learning AMI. See AWS Quickstart Guide
- Docker Image. See Docker Quickstart Guide
## Status

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.