


YOLOv5u represents an advancement in object detection methodologies. Originating from the foundational architecture of the YOLOv5 model developed by Ultralytics, YOLOv5u integrates the anchor-free, objectness-free split head, a feature previously introduced in the YOLOv8 models. This adaptation refines the model's architecture, leading to an improved accuracy-speed tradeoff in object detection tasks. Given its empirical results and derived features, YOLOv5u provides an efficient alternative for those seeking robust solutions in both research and practical applications.

Ultralytics YOLOv5

Key Features

  • Anchor-free Split Ultralytics Head: Traditional object detection models rely on predefined anchor boxes to predict object locations. YOLOv5u modernizes this approach by adopting an anchor-free split Ultralytics head, which provides a more flexible and adaptive detection mechanism and improves performance across diverse scenarios.

  • Optimized Accuracy-Speed Tradeoff: Speed and accuracy often pull in opposite directions, but YOLOv5u challenges this tradeoff. It offers a calibrated balance, ensuring real-time detections without compromising accuracy. This feature is particularly valuable for applications that demand swift responses, such as autonomous vehicles, robotics, and real-time video analytics.

  • Variety of Pre-trained Models: Understanding that different tasks require different toolsets, YOLOv5u provides a plethora of pre-trained models. Whether you're focusing on Inference, Validation, or Training, there's a tailor-made model awaiting you. This variety ensures you're not just using a one-size-fits-all solution, but a model specifically fine-tuned for your unique challenge.
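To make the anchor-free idea concrete, the sketch below decodes a box from per-cell distances to the four box edges, the general scheme used by anchor-free heads, instead of from anchor-box offsets. The function name and layout here are illustrative, not the library's internals:

```python
def decode_anchor_free(cx, cy, dist, stride):
    """Decode a box at grid cell (cx, cy) from predicted distances to the
    four edges (left, top, right, bottom), scaled by the feature-map stride.

    Illustrative sketch of anchor-free decoding; not the ultralytics internals.
    """
    left, top, right, bottom = dist
    x1 = (cx - left) * stride
    y1 = (cy - top) * stride
    x2 = (cx + right) * stride
    y2 = (cy + bottom) * stride
    return x1, y1, x2, y2

# A cell at (10, 10) on a stride-8 feature map, predicting 2 cells to each edge:
print(decode_anchor_free(10, 10, (2.0, 2.0, 2.0, 2.0), 8))  # (64.0, 64.0, 96.0, 96.0)
```

Because no anchor shapes are baked in, the head needs no anchor-box hyperparameter tuning per dataset.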

Supported Tasks

| Model Type | Pre-trained Weights | Task |
|------------|---------------------|------|
| YOLOv5u | yolov5nu, yolov5su, yolov5mu, yolov5lu, yolov5xu, yolov5n6u, yolov5s6u, yolov5m6u, yolov5l6u, yolov5x6u | Detection |

Supported Modes

| Mode | Supported |
|------------|-----------|
| Inference | ✔ |
| Validation | ✔ |
| Training | ✔ |


| Model | YAML | size (pixels) | mAP<sup>val</sup> 50-95 | Speed CPU ONNX (ms) | Speed A100 TensorRT (ms) | params (M) | FLOPs (B) |
|----------|---------------|------|------|--------|------|-------|-------|
| yolov5nu | yolov5n.yaml | 640 | 34.3 | 73.6 | 1.06 | 2.6 | 7.7 |
| yolov5su | yolov5s.yaml | 640 | 43.0 | 120.7 | 1.27 | 9.1 | 24.0 |
| yolov5mu | yolov5m.yaml | 640 | 49.0 | 233.9 | 1.86 | 25.1 | 64.2 |
| yolov5lu | yolov5l.yaml | 640 | 52.2 | 408.4 | 2.50 | 53.2 | 135.0 |
| yolov5xu | yolov5x.yaml | 640 | 53.2 | 763.2 | 3.81 | 97.2 | 246.4 |
| yolov5n6u | yolov5n6.yaml | 1280 | 42.1 | 211.0 | 1.83 | 4.3 | 7.8 |
| yolov5s6u | yolov5s6.yaml | 1280 | 48.6 | 422.6 | 2.34 | 15.3 | 24.6 |
| yolov5m6u | yolov5m6.yaml | 1280 | 53.6 | 810.9 | 4.36 | 41.2 | 65.7 |
| yolov5l6u | yolov5l6.yaml | 1280 | 55.7 | 1470.9 | 5.47 | 86.1 | 137.4 |
| yolov5x6u | yolov5x6.yaml | 1280 | 56.8 | 2436.5 | 8.98 | 155.4 | 250.7 |
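To read the latency columns as throughput, batch-1 latency in milliseconds converts to frames per second via FPS = 1000 / latency. A quick sketch using the A100 TensorRT figures from the table above:

```python
# A100 TensorRT batch-1 latency (ms) per model, taken from the table above.
latency_ms = {
    "yolov5nu": 1.06,
    "yolov5su": 1.27,
    "yolov5mu": 1.86,
    "yolov5lu": 2.50,
    "yolov5xu": 3.81,
}

# Convert latency to frames per second: FPS = 1000 ms / latency.
fps = {name: round(1000 / ms, 1) for name, ms in latency_ms.items()}
print(fps["yolov5nu"])  # 943.4
```

This illustrates the tradeoff: moving from yolov5nu to yolov5xu gains roughly 19 mAP points at about a quarter of the throughput.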


You can use YOLOv5u for object detection tasks through the Ultralytics repository. The following sample code snippet shows how to use a YOLOv5u model for inference:

This example provides simple inference code for YOLOv5. For more options, including handling inference results, see Predict mode. For using YOLOv5 with additional modes, see Train, Val, and Export.

PyTorch pretrained *.pt models, as well as configuration *.yaml files, can be passed to the YOLO() class to create a model instance in Python:

```python
from ultralytics import YOLO

# Load a COCO-pretrained YOLOv5n model
model = YOLO('yolov5n.pt')

# Display model information (optional)
model.info()

# Train the model on the COCO8 example dataset for 100 epochs
results = model.train(data='coco8.yaml', epochs=100, imgsz=640)

# Run inference with the YOLOv5n model on the 'bus.jpg' image
results = model('path/to/bus.jpg')
```
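Inference returns a list of Results objects whose `boxes` attribute exposes per-detection class ids (`boxes.cls`) and confidences (`boxes.conf`), as described in the Predict mode documentation. The helper below is a sketch of how those results might be tallied per class; `count_detections` is our own name, not a library API:

```python
def count_detections(results, names, conf_threshold=0.25):
    """Tally detections per class label across Ultralytics Results objects.

    `results` is an iterable of objects with a `.boxes` attribute exposing
    `.cls` and `.conf` sequences (as ultralytics Results objects do), and
    `names` maps class id -> label. Sketch only; not a library function.
    """
    counts = {}
    for result in results:
        for cls_id, conf in zip(result.boxes.cls, result.boxes.conf):
            if float(conf) >= conf_threshold:  # drop low-confidence boxes
                label = names[int(cls_id)]
                counts[label] = counts.get(label, 0) + 1
    return counts
```

With the model above, this could be called as `count_detections(model('path/to/bus.jpg'), model.names)`.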

CLI commands are available to directly run the models:

```bash
# Load a COCO-pretrained YOLOv5n model and train it on the COCO8 example dataset for 100 epochs
yolo train model=yolov5n.pt data=coco8.yaml epochs=100 imgsz=640

# Load a COCO-pretrained YOLOv5n model and run inference on the 'bus.jpg' image
yolo predict model=yolov5n.pt source=path/to/bus.jpg
```

Citations and Acknowledgements

If you use YOLOv5 or YOLOv5u in your research, please cite the Ultralytics YOLOv5 repository as follows:

```bibtex
@software{yolov5,
  title = {Ultralytics YOLOv5},
  author = {Glenn Jocher},
  year = {2020},
  version = {7.0},
  license = {AGPL-3.0},
  url = {},
  doi = {10.5281/zenodo.3908559},
  orcid = {0000-0001-5950-6979}
}
```

Special thanks to Glenn Jocher and the Ultralytics team for their work on developing and maintaining the YOLOv5 and YOLOv5u models.

Created 2023-05-01, Updated 2023-08-14
Authors: glenn-jocher (11), sergiuwaxmann (1)