
YOLOv8

Overview

YOLOv8 is the latest iteration in the YOLO series of real-time object detectors, offering cutting-edge accuracy and speed. Building on the advancements of previous YOLO versions, it introduces new features and optimizations that make it an ideal choice for object detection tasks across a wide range of applications.

Key Features

  • Advanced Backbone and Neck Architectures: YOLOv8 employs state-of-the-art backbone and neck architectures, resulting in improved feature extraction and object detection performance.
  • Anchor-free Split Ultralytics Head: YOLOv8 adopts an anchor-free split Ultralytics head, which contributes to better accuracy and a more efficient detection process compared to anchor-based approaches.
  • Optimized Accuracy-Speed Tradeoff: With a focus on maintaining an optimal balance between accuracy and speed, YOLOv8 is suitable for real-time object detection tasks in diverse application areas.
  • Variety of Pre-trained Models: YOLOv8 offers a range of pre-trained models to cater to various tasks and performance requirements, making it easier to find the right model for your specific use case.
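
To see these components in practice, you can print a summary of any YOLOv8 model with the Ultralytics package. A minimal sketch, assuming the model.info() helper, which reports layer, parameter, and FLOP counts:

from ultralytics import YOLO

# Load the smallest detection model and print an architecture summary;
# the summary includes layer and parameter counts and estimated FLOPs
model = YOLO('yolov8n.pt')
model.info()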

Supported Tasks

| Model Type  | Pre-trained Weights                                                                                | Task                  |
| ----------- | -------------------------------------------------------------------------------------------------- | --------------------- |
| YOLOv8      | yolov8n.pt, yolov8s.pt, yolov8m.pt, yolov8l.pt, yolov8x.pt                                          | Detection             |
| YOLOv8-seg  | yolov8n-seg.pt, yolov8s-seg.pt, yolov8m-seg.pt, yolov8l-seg.pt, yolov8x-seg.pt                      | Instance Segmentation |
| YOLOv8-pose | yolov8n-pose.pt, yolov8s-pose.pt, yolov8m-pose.pt, yolov8l-pose.pt, yolov8x-pose.pt, yolov8x-pose-p6.pt | Pose/Keypoints        |
| YOLOv8-cls  | yolov8n-cls.pt, yolov8s-cls.pt, yolov8m-cls.pt, yolov8l-cls.pt, yolov8x-cls.pt                      | Classification        |
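
All of these checkpoints load through the same YOLO class, which infers the task from the weights file. A minimal sketch using the nano-sized model of each family (file names taken from the table above):

from ultralytics import YOLO

# The task (detect, segment, classify, pose) is inferred from the checkpoint
det_model = YOLO('yolov8n.pt')        # Detection
seg_model = YOLO('yolov8n-seg.pt')    # Instance Segmentation
cls_model = YOLO('yolov8n-cls.pt')    # Classification
pose_model = YOLO('yolov8n-pose.pt')  # Pose/Keypoints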

Supported Modes

| Mode       | Supported |
| ---------- | --------- |
| Inference  | ✔         |
| Validation | ✔         |
| Training   | ✔         |
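
Each mode corresponds to a method on the model object. A brief sketch, assuming a detection checkpoint and the small coco128.yaml example dataset that Ultralytics downloads automatically:

from ultralytics import YOLO

model = YOLO('yolov8n.pt')

# Training: fine-tune the model on a dataset described by a YAML file
model.train(data='coco128.yaml', epochs=3)

# Validation: compute mAP and other metrics on the validation split
metrics = model.val()

# Inference: run the trained model on a new image
results = model('image.jpg')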

Performance

Detection (COCO)

| Model   | size (pixels) | mAPval 50-95 | Speed CPU ONNX (ms) | Speed A100 TensorRT (ms) | params (M) | FLOPs (B) |
| ------- | ------------- | ------------ | ------------------- | ------------------------ | ---------- | --------- |
| YOLOv8n | 640           | 37.3         | 80.4                | 0.99                     | 3.2        | 8.7       |
| YOLOv8s | 640           | 44.9         | 128.4               | 1.20                     | 11.2       | 28.6      |
| YOLOv8m | 640           | 50.2         | 234.7               | 1.83                     | 25.9       | 78.9      |
| YOLOv8l | 640           | 52.9         | 375.2               | 2.39                     | 43.7       | 165.2     |
| YOLOv8x | 640           | 53.9         | 479.1               | 3.53                     | 68.2       | 257.8     |

Instance Segmentation (COCO)

| Model       | size (pixels) | mAPbox 50-95 | mAPmask 50-95 | Speed CPU ONNX (ms) | Speed A100 TensorRT (ms) | params (M) | FLOPs (B) |
| ----------- | ------------- | ------------ | ------------- | ------------------- | ------------------------ | ---------- | --------- |
| YOLOv8n-seg | 640           | 36.7         | 30.5          | 96.1                | 1.21                     | 3.4        | 12.6      |
| YOLOv8s-seg | 640           | 44.6         | 36.8          | 155.7               | 1.47                     | 11.8       | 42.6      |
| YOLOv8m-seg | 640           | 49.9         | 40.8          | 317.0               | 2.18                     | 27.3       | 110.2     |
| YOLOv8l-seg | 640           | 52.3         | 42.6          | 572.4               | 2.79                     | 46.0       | 220.5     |
| YOLOv8x-seg | 640           | 53.4         | 43.4          | 712.1               | 4.02                     | 71.8       | 344.1     |

Classification (ImageNet)

| Model       | size (pixels) | acc top1 | acc top5 | Speed CPU ONNX (ms) | Speed A100 TensorRT (ms) | params (M) | FLOPs (B) at 640 |
| ----------- | ------------- | -------- | -------- | ------------------- | ------------------------ | ---------- | ---------------- |
| YOLOv8n-cls | 224           | 66.6     | 87.0     | 12.9                | 0.31                     | 2.7        | 4.3              |
| YOLOv8s-cls | 224           | 72.3     | 91.1     | 23.4                | 0.35                     | 6.4        | 13.5             |
| YOLOv8m-cls | 224           | 76.4     | 93.2     | 85.4                | 0.62                     | 17.0       | 42.7             |
| YOLOv8l-cls | 224           | 78.0     | 94.1     | 163.0               | 0.87                     | 37.5       | 99.7             |
| YOLOv8x-cls | 224           | 78.4     | 94.3     | 232.0               | 1.01                     | 57.4       | 154.8            |

Pose (COCO)

| Model           | size (pixels) | mAPpose 50-95 | mAPpose 50 | Speed CPU ONNX (ms) | Speed A100 TensorRT (ms) | params (M) | FLOPs (B) |
| --------------- | ------------- | ------------- | ---------- | ------------------- | ------------------------ | ---------- | --------- |
| YOLOv8n-pose    | 640           | 50.4          | 80.1       | 131.8               | 1.18                     | 3.3        | 9.2       |
| YOLOv8s-pose    | 640           | 60.0          | 86.2       | 233.2               | 1.42                     | 11.6       | 30.2      |
| YOLOv8m-pose    | 640           | 65.0          | 88.8       | 456.3               | 2.00                     | 26.4       | 81.0      |
| YOLOv8l-pose    | 640           | 67.6          | 90.0       | 784.5               | 2.59                     | 44.4       | 168.6     |
| YOLOv8x-pose    | 640           | 69.2          | 90.2       | 1607.1              | 3.73                     | 69.4       | 263.2     |
| YOLOv8x-pose-p6 | 1280          | 71.6          | 91.2       | 4088.7              | 10.04                    | 99.1       | 1066.4    |
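
Note that the CPU and A100 speed columns refer to ONNX and TensorRT exports of the models rather than the native PyTorch weights. To benchmark comparable numbers yourself, you can export a model with the Ultralytics export API; a minimal sketch:

from ultralytics import YOLO

model = YOLO('yolov8n.pt')

# Export to ONNX for CPU inference
model.export(format='onnx')

# Export to a TensorRT engine (requires an NVIDIA GPU with TensorRT installed)
model.export(format='engine')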

Usage

You can use YOLOv8 for object detection through the Ultralytics pip package (pip install ultralytics). The following snippet shows how to run inference with a pretrained YOLOv8 model:

from ultralytics import YOLO

# Load a pretrained YOLOv8 nano detection model
model = YOLO('yolov8n.pt')

# Run inference; this returns a list of Results objects, one per image
results = model('image.jpg')

# Print the detections for each image
for result in results:
    print(result.boxes)
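
Each Results object exposes its predictions as attributes. A short sketch of reading out box coordinates, confidence scores, and class names, using attribute names from the Ultralytics Results API:

from ultralytics import YOLO

model = YOLO('yolov8n.pt')
results = model('image.jpg')

for result in results:
    for box in result.boxes:
        cls_id = int(box.cls)        # predicted class index
        conf = float(box.conf)       # confidence score
        xyxy = box.xyxy[0].tolist()  # [x1, y1, x2, y2] in pixels
        print(model.names[cls_id], conf, xyxy)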

Citation

If you use the YOLOv8 model or any other software from this repository in your work, please cite it using the following format:

@software{yolov8_ultralytics,
  author       = {Glenn Jocher and Ayush Chaurasia and Jing Qiu},
  title        = {YOLO by Ultralytics},
  version      = {8.0.0},
  year         = {2023},
  url          = {https://github.com/ultralytics/ultralytics},
  orcid        = {0000-0001-5950-6979, 0000-0002-7603-6750, 0000-0003-3783-7069},
  license      = {AGPL-3.0}
}

Please note that the DOI is pending and will be added to the citation once it is available. Use of the software is governed by the AGPL-3.0 license.

