
Ultralytics YOLOv8

Overview

YOLOv8 is the latest iteration in the YOLO series of real-time object detectors, offering cutting-edge performance in terms of accuracy and speed. Building upon the advancements of previous YOLO versions, YOLOv8 introduces new features and optimizations that make it an ideal choice for various object detection tasks in a wide range of applications.

Watch: Ultralytics YOLOv8 Models Overview

Key Features

  • Advanced Backbone and Neck Architectures: YOLOv8 employs state-of-the-art backbone and neck architectures, resulting in improved feature extraction and object detection performance.
  • Anchor-Free Split Ultralytics Head: YOLOv8 adopts an anchor-free split Ultralytics head, which contributes to better accuracy and a more efficient detection process compared to anchor-based approaches.
  • Optimized Accuracy-Speed Tradeoff: With a focus on maintaining an optimal balance between accuracy and speed, YOLOv8 is suitable for real-time object detection tasks in a wide range of application areas.
  • Variety of Pre-trained Models: YOLOv8 offers a range of pre-trained models to cater to various tasks and performance requirements, making it easier to find the right model for your specific use case.

Supported Tasks and Modes

The YOLOv8 series offers a diverse range of models, each specialized for specific tasks in computer vision. These models are designed to cater to various requirements, from object detection to more complex tasks like instance segmentation, pose/keypoints detection, oriented object detection, and classification.

Each variant of the YOLOv8 series is optimized for its respective task, ensuring high performance and accuracy. Additionally, these models are compatible with a variety of operational modes, including Inference, Validation, Training, and Export, facilitating their use in different stages of deployment and development.

| Model       | Filenames                                                                                          | Task                  | Inference | Validation | Training | Export |
| ----------- | -------------------------------------------------------------------------------------------------- | --------------------- | --------- | ---------- | -------- | ------ |
| YOLOv8      | yolov8n.pt yolov8s.pt yolov8m.pt yolov8l.pt yolov8x.pt                                              | Detection             | ✅        | ✅         | ✅       | ✅     |
| YOLOv8-seg  | yolov8n-seg.pt yolov8s-seg.pt yolov8m-seg.pt yolov8l-seg.pt yolov8x-seg.pt                          | Instance Segmentation | ✅        | ✅         | ✅       | ✅     |
| YOLOv8-pose | yolov8n-pose.pt yolov8s-pose.pt yolov8m-pose.pt yolov8l-pose.pt yolov8x-pose.pt yolov8x-pose-p6.pt  | Pose/Keypoints        | ✅        | ✅         | ✅       | ✅     |
| YOLOv8-obb  | yolov8n-obb.pt yolov8s-obb.pt yolov8m-obb.pt yolov8l-obb.pt yolov8x-obb.pt                          | Oriented Detection    | ✅        | ✅         | ✅       | ✅     |
| YOLOv8-cls  | yolov8n-cls.pt yolov8s-cls.pt yolov8m-cls.pt yolov8l-cls.pt yolov8x-cls.pt                          | Classification        | ✅        | ✅         | ✅       | ✅     |

This table provides an overview of the YOLOv8 model variants, highlighting their applicability in specific tasks and their compatibility with various operational modes such as Inference, Validation, Training, and Export. It showcases the versatility and robustness of the YOLOv8 series, making them suitable for a variety of applications in computer vision.
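
As a quick illustration of the table above, the sketch below loads one pretrained checkpoint per task; the variant is selected simply by the weight filename listed in the Filenames column.

from ultralytics import YOLO

# Each task variant is selected purely by its checkpoint filename
detect_model = YOLO("yolov8n.pt")       # object detection
segment_model = YOLO("yolov8n-seg.pt")  # instance segmentation
pose_model = YOLO("yolov8n-pose.pt")    # pose/keypoints
obb_model = YOLO("yolov8n-obb.pt")      # oriented object detection
cls_model = YOLO("yolov8n-cls.pt")      # classification

# All variants expose the same train/val/predict/export interface
print(detect_model.task, segment_model.task, cls_model.task)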

Performance Metrics

Performance

See the Detection Docs for usage examples with these models trained on COCO, which include 80 pre-trained classes.

| Model   | size (pixels) | mAPval 50-95 | Speed CPU ONNX (ms) | Speed A100 TensorRT (ms) | params (M) | FLOPs (B) |
| ------- | ------------- | ------------ | ------------------- | ------------------------ | ---------- | --------- |
| YOLOv8n | 640           | 37.3         | 80.4                | 0.99                     | 3.2        | 8.7       |
| YOLOv8s | 640           | 44.9         | 128.4               | 1.20                     | 11.2       | 28.6      |
| YOLOv8m | 640           | 50.2         | 234.7               | 1.83                     | 25.9       | 78.9      |
| YOLOv8l | 640           | 52.9         | 375.2               | 2.39                     | 43.7       | 165.2     |
| YOLOv8x | 640           | 53.9         | 479.1               | 3.53                     | 68.2       | 257.8     |
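
As a rough sketch of how the COCO numbers above can be reproduced, the val mode returns the box mAP directly. This assumes the full coco.yaml dataset is available locally; the small coco8.yaml example dataset runs quickly but its scores are not comparable to the table.

from ultralytics import YOLO

# Validate a COCO-pretrained detection model and read back its accuracy
model = YOLO("yolov8n.pt")
metrics = model.val(data="coco.yaml", imgsz=640)
print(metrics.box.map)    # mAP 50-95
print(metrics.box.map50)  # mAP 50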

See the Detection Docs for usage examples with these models trained on Open Images V7, which include 600 pre-trained classes.

| Model   | size (pixels) | mAPval 50-95 | Speed CPU ONNX (ms) | Speed A100 TensorRT (ms) | params (M) | FLOPs (B) |
| ------- | ------------- | ------------ | ------------------- | ------------------------ | ---------- | --------- |
| YOLOv8n | 640           | 18.4         | 142.4               | 1.21                     | 3.5        | 10.5      |
| YOLOv8s | 640           | 27.7         | 183.1               | 1.40                     | 11.4       | 29.7      |
| YOLOv8m | 640           | 33.6         | 408.5               | 2.26                     | 26.2       | 80.6      |
| YOLOv8l | 640           | 34.9         | 596.9               | 2.43                     | 44.1       | 167.4     |
| YOLOv8x | 640           | 36.3         | 860.6               | 3.56                     | 68.7       | 260.6     |
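
The Open Images V7 checkpoints are used exactly like the COCO ones. The sketch below assumes the weights follow the -oiv7 filename suffix used in the Ultralytics release assets (e.g. yolov8n-oiv7.pt); adjust the name if your copy differs.

from ultralytics import YOLO

# Load an Open Images V7-pretrained detector (600 classes) and run inference
model = YOLO("yolov8n-oiv7.pt")  # assumed filename, see note above
results = model("path/to/bus.jpg")
print(len(model.names))  # number of pre-trained classes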

See the Segmentation Docs for usage examples with these models trained on COCO, which include 80 pre-trained classes.

| Model       | size (pixels) | mAPbox 50-95 | mAPmask 50-95 | Speed CPU ONNX (ms) | Speed A100 TensorRT (ms) | params (M) | FLOPs (B) |
| ----------- | ------------- | ------------ | ------------- | ------------------- | ------------------------ | ---------- | --------- |
| YOLOv8n-seg | 640           | 36.7         | 30.5          | 96.1                | 1.21                     | 3.4        | 12.6      |
| YOLOv8s-seg | 640           | 44.6         | 36.8          | 155.7               | 1.47                     | 11.8       | 42.6      |
| YOLOv8m-seg | 640           | 49.9         | 40.8          | 317.0               | 2.18                     | 27.3       | 110.2     |
| YOLOv8l-seg | 640           | 52.3         | 42.6          | 572.4               | 2.79                     | 46.0       | 220.5     |
| YOLOv8x-seg | 640           | 53.4         | 43.4          | 712.1               | 4.02                     | 71.8       | 344.1     |
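
A minimal inference sketch for the segmentation variants: alongside the boxes, the returned Results object carries one mask per detected instance.

from ultralytics import YOLO

# Run instance segmentation with a COCO-pretrained model
model = YOLO("yolov8n-seg.pt")
results = model("path/to/bus.jpg")

# Boxes and masks are returned together for each detected instance
boxes = results[0].boxes.xyxy  # bounding boxes, shape (N, 4)
masks = results[0].masks.data  # binary masks, shape (N, H, W)
print(boxes.shape, masks.shape)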

See the Classification Docs for usage examples with these models trained on ImageNet, which include 1000 pre-trained classes.

| Model       | size (pixels) | acc top1 | acc top5 | Speed CPU ONNX (ms) | Speed A100 TensorRT (ms) | params (M) | FLOPs (B) at 640 |
| ----------- | ------------- | -------- | -------- | ------------------- | ------------------------ | ---------- | ---------------- |
| YOLOv8n-cls | 224           | 69.0     | 88.3     | 12.9                | 0.31                     | 2.7        | 4.3              |
| YOLOv8s-cls | 224           | 73.8     | 91.7     | 23.4                | 0.35                     | 6.4        | 13.5             |
| YOLOv8m-cls | 224           | 76.8     | 93.5     | 85.4                | 0.62                     | 17.0       | 42.7             |
| YOLOv8l-cls | 224           | 76.8     | 93.5     | 163.0               | 0.87                     | 37.5       | 99.7             |
| YOLOv8x-cls | 224           | 79.0     | 94.6     | 232.0               | 1.01                     | 57.4       | 154.8            |
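
Note that the classification models run at 224x224 rather than 640 (the FLOPs column above is reported at 640). A minimal inference sketch:

from ultralytics import YOLO

# Run image classification with an ImageNet-pretrained model
model = YOLO("yolov8n-cls.pt")
results = model("path/to/bus.jpg", imgsz=224)

# Class probabilities over the 1000 ImageNet categories
probs = results[0].probs
print(probs.top1, probs.top1conf)  # top-1 class index and its confidence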

See the Pose Estimation Docs for usage examples with these models trained on COCO, which include 1 pre-trained class, 'person'.

| Model           | size (pixels) | mAPpose 50-95 | mAPpose 50 | Speed CPU ONNX (ms) | Speed A100 TensorRT (ms) | params (M) | FLOPs (B) |
| --------------- | ------------- | ------------- | ---------- | ------------------- | ------------------------ | ---------- | --------- |
| YOLOv8n-pose    | 640           | 50.4          | 80.1       | 131.8               | 1.18                     | 3.3        | 9.2       |
| YOLOv8s-pose    | 640           | 60.0          | 86.2       | 233.2               | 1.42                     | 11.6       | 30.2      |
| YOLOv8m-pose    | 640           | 65.0          | 88.8       | 456.3               | 2.00                     | 26.4       | 81.0      |
| YOLOv8l-pose    | 640           | 67.6          | 90.0       | 784.5               | 2.59                     | 44.4       | 168.6     |
| YOLOv8x-pose    | 640           | 69.2          | 90.2       | 1607.1              | 3.73                     | 69.4       | 263.2     |
| YOLOv8x-pose-p6 | 1280          | 71.6          | 91.2       | 4088.7              | 10.04                    | 99.1       | 1066.4    |
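
A minimal inference sketch for the pose variants; the single pre-trained class is 'person', and the 17 COCO keypoints are returned alongside each person box.

from ultralytics import YOLO

# Run pose/keypoint estimation with a COCO-pretrained model
model = YOLO("yolov8n-pose.pt")
results = model("path/to/bus.jpg")

# One set of 17 COCO keypoints per detected person
keypoints = results[0].keypoints.xy  # pixel coordinates, shape (num_persons, 17, 2)
print(keypoints.shape)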

See the Oriented Detection Docs for usage examples with these models trained on DOTAv1, which include 15 pre-trained classes.

| Model       | size (pixels) | mAPtest 50 | Speed CPU ONNX (ms) | Speed A100 TensorRT (ms) | params (M) | FLOPs (B) |
| ----------- | ------------- | ---------- | ------------------- | ------------------------ | ---------- | --------- |
| YOLOv8n-obb | 1024          | 78.0       | 204.77              | 3.57                     | 3.1        | 23.3      |
| YOLOv8s-obb | 1024          | 79.5       | 424.88              | 4.07                     | 11.4       | 76.3      |
| YOLOv8m-obb | 1024          | 80.5       | 763.48              | 7.61                     | 26.4       | 208.6     |
| YOLOv8l-obb | 1024          | 80.7       | 1278.42             | 11.83                    | 44.5       | 433.8     |
| YOLOv8x-obb | 1024          | 81.36      | 1759.10             | 13.23                    | 69.5       | 676.7     |
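
A minimal inference sketch for the OBB variants trained on DOTAv1; rotated results are exposed through the obb attribute rather than boxes (the aerial-image path below is just a placeholder).

from ultralytics import YOLO

# Run oriented object detection with a DOTAv1-pretrained model
model = YOLO("yolov8n-obb.pt")
results = model("path/to/aerial.jpg")  # placeholder image path

# Rotated boxes as 4 corner points, plus the predicted class ids
corners = results[0].obb.xyxyxyxy  # shape (N, 4, 2)
classes = results[0].obb.cls       # shape (N,)
print(corners.shape, classes.shape)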

Usage Examples

This example provides simple YOLOv8 training and inference examples. For full documentation on these and other modes, see the Predict, Train, Val, and Export docs pages.

Note that the example below is for YOLOv8 Detect models for object detection. For additional supported tasks, see the Segment, Classify, OBB, and Pose docs.

Example

PyTorch pretrained *.pt models as well as configuration *.yaml files can be passed to the YOLO() class to create a model instance in Python:

from ultralytics import YOLO

# Load a COCO-pretrained YOLOv8n model
model = YOLO("yolov8n.pt")

# Display model information (optional)
model.info()

# Train the model on the COCO8 example dataset for 100 epochs
results = model.train(data="coco8.yaml", epochs=100, imgsz=640)

# Run inference with the YOLOv8n model on the 'bus.jpg' image
results = model("path/to/bus.jpg")

CLI commands are available to directly run the models:

# Load a COCO-pretrained YOLOv8n model and train it on the COCO8 example dataset for 100 epochs
yolo train model=yolov8n.pt data=coco8.yaml epochs=100 imgsz=640

# Load a COCO-pretrained YOLOv8n model and run inference on the 'bus.jpg' image
yolo predict model=yolov8n.pt source=path/to/bus.jpg
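
The same model object can also be exported, as covered in the Export docs referenced above; a minimal sketch exporting to ONNX:

from ultralytics import YOLO

# Export a COCO-pretrained YOLOv8n model to ONNX format
model = YOLO("yolov8n.pt")
path = model.export(format="onnx")  # returns the path of the exported file
print(path)

The CLI equivalent is yolo export model=yolov8n.pt format=onnx.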

Citation and Acknowledgements

Ultralytics YOLOv8 Publication

Ultralytics has not published a formal research paper for YOLOv8 due to the rapidly evolving nature of the models. We focus on advancing the technology and making it easier to use, rather than producing static documentation. For the most up-to-date information on YOLO architecture, features, and usage, please refer to our GitHub repository and documentation.

If you use the YOLOv8 model or any other software from this repository in your work, please cite it using the following format:

@software{yolov8_ultralytics,
  author = {Glenn Jocher and Ayush Chaurasia and Jing Qiu},
  title = {Ultralytics YOLOv8},
  version = {8.0.0},
  year = {2023},
  url = {https://github.com/ultralytics/ultralytics},
  orcid = {0000-0001-5950-6979, 0000-0002-7603-6750, 0000-0003-3783-7069},
  license = {AGPL-3.0}
}

Please note that the DOI is pending and will be added to the citation once it is available. YOLOv8 models are provided under the AGPL-3.0 and Enterprise licenses.

FAQ

What is YOLOv8 and how does it differ from previous YOLO versions?

YOLOv8 is the latest iteration in the Ultralytics YOLO series, designed to improve real-time object detection performance with advanced features. Unlike earlier versions, YOLOv8 incorporates an anchor-free split Ultralytics head, state-of-the-art backbone and neck architectures, and an optimized accuracy-speed tradeoff, making it ideal for diverse applications. For more details, check the Overview and Key Features sections.

How can I use YOLOv8 for different computer vision tasks?

YOLOv8 supports a wide range of computer vision tasks, including object detection, instance segmentation, pose/keypoints detection, oriented object detection, and classification. Each model variant is optimized for its specific task and is compatible with various operational modes such as Inference, Validation, Training, and Export. Refer to the Supported Tasks and Modes section for more information.

What are the performance metrics for YOLOv8 models?

YOLOv8 models achieve state-of-the-art performance across various benchmark datasets. For example, the YOLOv8n model achieves a mAP (mean Average Precision) of 37.3 on the COCO dataset and a speed of 0.99 ms on A100 TensorRT. Detailed performance metrics for each model variant across different tasks and datasets can be found in the Performance Metrics section.

How do I train a YOLOv8 model?

Training a YOLOv8 model can be done using either Python or the CLI. Below are examples of training a COCO-pretrained YOLOv8n model on the COCO8 dataset for 100 epochs:

Example

from ultralytics import YOLO

# Load a COCO-pretrained YOLOv8n model
model = YOLO("yolov8n.pt")

# Train the model on the COCO8 example dataset for 100 epochs
results = model.train(data="coco8.yaml", epochs=100, imgsz=640)

The equivalent CLI command:

yolo train model=yolov8n.pt data=coco8.yaml epochs=100 imgsz=640

For more details, visit the Train documentation.

Can I benchmark YOLOv8 models for performance?

Yes, YOLOv8 models can be benchmarked for performance in terms of speed and accuracy across various export formats. You can use PyTorch, ONNX, TensorRT, and more for benchmarking. Below are example commands for benchmarking using Python and the CLI:

Example

from ultralytics.utils.benchmarks import benchmark

# Benchmark on GPU
benchmark(model="yolov8n.pt", data="coco8.yaml", imgsz=640, half=False, device=0)

The equivalent CLI command:

yolo benchmark model=yolov8n.pt data='coco8.yaml' imgsz=640 half=False device=0

For additional information, check the Performance Metrics section.
