You will need Python >= 3.8 and pip to follow this guide.
The rest of the requirements are listed in './requirements.txt'.
*If you have multiple versions of Python installed, ensure you are using the correct one.
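If you are unsure which interpreter a bare `python` command resolves to, a quick sketch like this confirms the running version meets the minimum (the 3.8 floor comes from the requirement above):

```python
import sys

# YOLOv5 requires Python 3.8+; check the interpreter you're actually running.
MIN_VERSION = (3, 8)
ok = sys.version_info >= MIN_VERSION
print(f"Python {sys.version.split()[0]}: {'OK' if ok else 'too old, need 3.8+'}")
```

Run it with the same `python` you intend to use for the steps below.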
Clone the repository
$ git clone https://github.com/ultralytics/yolov5.git
Enter the repository root directory
$ cd yolov5
Install the required packages from your cloned repository root directory
$ pip install -r requirements.txt
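To confirm the install succeeded, you can check that key packages resolve from the interpreter you'll be using; this is a minimal sketch, and the package names checked here (torch, cv2, numpy) are representative examples from requirements.txt, not an exhaustive list:

```python
import importlib.util

# Check that key packages from requirements.txt are importable.
for pkg in ("torch", "cv2", "numpy"):
    found = importlib.util.find_spec(pkg) is not None
    print(f"{pkg}: {'found' if found else 'MISSING'}")
```

Any MISSING entry means pip installed into a different environment than the one you are running.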
For a quick and hassle-free setup, YOLOv5 is packaged with all dependencies* for the following environments:
- Google Colab and Kaggle notebooks with free GPU
- Google Cloud Deep Learning VM. See GCP Quickstart Guide
- Amazon Deep Learning AMI. See AWS Quickstart Guide
- Docker Image. See Docker Quickstart Guide
Inference - Detect Objects
From Your Cloned Repository
To get started with object detection using the latest YOLO models, run this command from your repository root directory. Results are saved to './runs/detect'.
$ python detect.py --source OPTION
Replace OPTION with one of the following, depending on the source you want to detect from:
- Webcam : (OPTION = 0) For live object detection from your connected webcam
- Image : (OPTION = filename.jpg) Create a copy of the image with an object detection overlay
- Video : (OPTION = filename.mp4) Create a copy of the video with an object detection overlay
- Directory : (OPTION = directory_name/) Create copies of all files with an object detection overlay
- Glob pattern : (OPTION = directory_name/*.jpg) Create copies of all matching files with an object detection overlay
- RTSP stream : (OPTION = rtsp://188.8.131.52/rtplive/470011e600ef003a004ee33696235daa) For live object detection from a stream
- RTMP stream : (OPTION = rtmp://192.168.1.105/live/test) For live object detection from a stream
- HTTP stream : (OPTION = http://184.108.40.206/PLTV/88888888/224/3221225900/1.m3u8) For live object detection from a stream
The following file formats are currently supported:
- Images: bmp, jpg, jpeg, png, tif, tiff, dng, webp, mpo
- Videos: mov, avi, mp4, mpg, mpeg, m4v, wmv, mkv
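Files with other extensions are skipped. If you want to pre-check a directory before running detection, a filter like this works; the extension sets mirror the lists above, and the directory name is a placeholder:

```python
from pathlib import Path

# Extension lists as documented for YOLOv5 inference.
IMG_FORMATS = {"bmp", "jpg", "jpeg", "png", "tif", "tiff", "dng", "webp", "mpo"}
VID_FORMATS = {"mov", "avi", "mp4", "mpg", "mpeg", "m4v", "wmv", "mkv"}

def supported_files(directory):
    """Return files in `directory` whose extension YOLOv5 can read."""
    exts = IMG_FORMATS | VID_FORMATS
    return [p for p in Path(directory).iterdir()
            if p.is_file() and p.suffix.lower().lstrip(".") in exts]
```

Note the comparison is case-insensitive, so IMG.JPG is accepted as well.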
From PyTorch Hub
Inference can be run directly from PyTorch Hub without cloning the repository. The necessary files will be downloaded into your temporary directory.
Here is an example script that uses the latest YOLOv5s model and the repository's example images.
import torch

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Images
dir = 'https://github.com/ultralytics/yolov5/raw/master/data/images/'
imgs = [dir + f for f in ('zidane.jpg', 'bus.jpg')]  # batch of images

# Inference
results = model(imgs)
results.print()  # or .show(), .save()
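Beyond printing, the results object exposes the raw detections: each row of results.xyxy[i] holds [xmin, ymin, xmax, ymax, confidence, class]. A sketch of post-filtering by confidence, using plain lists as stand-in data so it runs without downloading a model (with real output these would be tensor rows):

```python
# Mock rows in the results.xyxy[0] layout:
# [xmin, ymin, xmax, ymax, confidence, class]
detections = [
    [100.0, 50.0, 300.0, 400.0, 0.91, 0],   # high confidence -> kept
    [400.0, 80.0, 480.0, 200.0, 0.35, 27],  # low confidence  -> dropped
]

CONF_THRESHOLD = 0.5  # threshold chosen for illustration
kept = [d for d in detections if d[4] >= CONF_THRESHOLD]
print(f"{len(kept)} of {len(detections)} detections above {CONF_THRESHOLD}")
```

The same comprehension applies unchanged to the tensor rows returned by a real model.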