
Introducing Ultralytics YOLO11, the latest version of the acclaimed real-time object detection and image segmentation model. YOLO11 is built on cutting-edge advancements in deep learning and computer vision, offering unparalleled performance in terms of speed and accuracy. Its streamlined design makes it suitable for various applications and easily adaptable to different hardware platforms, from edge devices to cloud APIs.

Explore the Ultralytics Docs, a comprehensive resource designed to help you understand and utilize its features and capabilities. Whether you are a seasoned machine learning practitioner or new to the field, this hub aims to maximize YOLO's potential in your projects.



Where to Start

  •   Getting Started
    Install ultralytics with pip and get up and running in minutes to train a YOLO model.
    Quickstart

  •   Predict
    Predict on new images, videos, and streams with YOLO (see the example after this list).
    Learn more

  •   Train a Model
    Train a new YOLO model on your own custom dataset from scratch, or load and train a pretrained model.
    Learn more

  •   Explore Tasks
    Discover YOLO tasks like detect, segment, classify, pose, OBB, and track.
    Explore Tasks

  • 🚀   Explore YOLO11 NEW
    Discover Ultralytics' latest state-of-the-art YOLO11 models and their capabilities.
    YOLO11 Models 🚀 NEW

  •   Open Source, AGPL-3.0
    Ultralytics offers two licensing options for YOLO: the AGPL-3.0 License and an Enterprise License. Ultralytics is available on GitHub.
    License
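
As a quick illustration of the Predict card above, here is a minimal inference sketch using the Python API. The yolo11n.pt checkpoint name matches the pretrained models referenced later on this page; the image path is only a placeholder.

Predict Example

from ultralytics import YOLO

# Load a pretrained YOLO11 nano detection model
model = YOLO("yolo11n.pt")

# Run inference; a video file, directory, URL, webcam index, or stream can also be passed as source
results = model.predict(source="path/to/image.jpg")

# Inspect the detections for the first (and only) image
for box in results[0].boxes:
    print(box.cls, box.conf, box.xyxy)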



Watch: How to Train a YOLO model on Your Custom Dataset in Google Colab.

YOLO: A Brief History

YOLO (You Only Look Once), a popular object detection and image segmentation model, was developed by Joseph Redmon and Ali Farhadi at the University of Washington. Launched in 2015, YOLO quickly gained popularity for its high speed and accuracy.

  • YOLOv2, released in 2016, improved the original model by incorporating batch normalization, anchor boxes, and dimension clusters.
  • YOLOv3, launched in 2018, further enhanced the model's performance using a more efficient backbone network, multiple anchors, and spatial pyramid pooling.
  • YOLOv4 was released in 2020, introducing innovations like Mosaic data augmentation, a new anchor-free detection head, and a new loss function.
  • YOLOv5 further improved the model's performance and added new features such as hyperparameter optimization, integrated experiment tracking, and automatic export to popular export formats.
  • YOLOv6 was open-sourced by Meituan in 2022 and is in use in many of the company's autonomous delivery robots.
  • YOLOv7 added additional tasks such as pose estimation on the COCO keypoints dataset.
  • YOLOv8, released in 2023 by Ultralytics, introduced new features and improvements for enhanced performance, flexibility, and efficiency, supporting a full range of vision AI tasks.
  • YOLOv9 introduced innovative methods like Programmable Gradient Information (PGI) and the Generalized Efficient Layer Aggregation Network (GELAN).
  • YOLOv10, created by researchers at Tsinghua University using the Ultralytics Python package, advanced real-time object detection by introducing an End-to-End head that eliminates the Non-Maximum Suppression (NMS) requirement.
  • YOLO11 🚀 NEW: Ultralytics' latest YOLO models deliver state-of-the-art (SOTA) performance across multiple tasks, including detection, segmentation, pose estimation, tracking, and classification, leveraging capabilities across diverse AI applications and domains.

YOLO Licenses: How is Ultralytics YOLO licensed?

Ultralytics offers two licensing options to accommodate different use cases:

  • AGPL-3.0 License: This OSI-approved open-source license is ideal for students and enthusiasts, promoting open collaboration and knowledge sharing. See the LICENSE file for details.
  • Enterprise License: Designed for commercial use, this license permits seamless integration of Ultralytics software and AI models into commercial products and services, bypassing the open-source requirements of AGPL-3.0. If your scenario involves embedding our solutions into a commercial offering, reach out through Ultralytics Licensing.

Our licensing strategy is designed to ensure that any improvements to our open-source projects are returned to the community. We hold the principles of open source close to our hearts ❤️, and our mission is to guarantee that our contributions can be used and expanded upon in ways that are beneficial to all.

FAQ

What is Ultralytics YOLO and how does it improve object detection?

Ultralytics YOLO is the latest advancement in the acclaimed YOLO (You Only Look Once) series for real-time object detection and image segmentation. It builds on previous versions by introducing new features and improvements for enhanced performance, flexibility, and efficiency. YOLO supports various vision AI tasks such as detection, segmentation, pose estimation, tracking, and classification. Its state-of-the-art architecture ensures superior speed and accuracy, making it suitable for diverse applications, including edge devices and cloud APIs.
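
As a rough sketch of that task coverage, the same Python API loads a different pretrained checkpoint per task. The weight names below follow the naming scheme of the published YOLO11 models and are illustrative rather than exhaustive.

Task Checkpoints Example

from ultralytics import YOLO

# Each vision AI task has its own family of pretrained YOLO11 checkpoints
detect_model = YOLO("yolo11n.pt")         # object detection
segment_model = YOLO("yolo11n-seg.pt")    # instance segmentation
classify_model = YOLO("yolo11n-cls.pt")   # image classification
pose_model = YOLO("yolo11n-pose.pt")      # pose/keypoint estimation
obb_model = YOLO("yolo11n-obb.pt")        # oriented bounding boxes (OBB)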

How do I get started with YOLO installation and setup?

Getting started with YOLO is quick and straightforward. You can install the Ultralytics package using pip and get up and running in minutes. Here's a basic installation command:

Installation using pip

pip install ultralytics

For a comprehensive step-by-step guide, visit our Quickstart guide. This resource will help you with installation instructions, initial setup, and running your first model.
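
To confirm the installation before moving on, one option is the package's built-in environment check; a minimal sketch, assuming the checks() helper exported by the ultralytics package:

Verify the installation

import ultralytics

# Print the installed ultralytics version plus a short summary of Python, torch, and hardware
ultralytics.checks()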

How do I train a custom YOLO model on my dataset?

Training a custom YOLO model on your dataset involves a few detailed steps:

  1. Prepare your annotated dataset.
  2. Configure the training parameters in a YAML file (see the sketch after this list).
  3. Use the yolo TASK train command to start training. (Each TASK has its own arguments.)
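
For step 2, the dataset YAML points to the image folders and lists the class names. A minimal sketch that writes such a file from Python is shown below; the dataset root, split folders, and class names are placeholders for your own data.

Dataset YAML Example

import yaml  # PyYAML, installed as an ultralytics dependency

dataset_config = {
    "path": "datasets/my_dataset",  # dataset root directory (placeholder)
    "train": "images/train",        # training images, relative to path
    "val": "images/val",            # validation images, relative to path
    "names": {0: "cat", 1: "dog"},  # class index -> class name (placeholders)
}

with open("path/to/dataset.yaml", "w") as f:
    yaml.safe_dump(dataset_config, f, sort_keys=False)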

Here's example code for the Object Detection Task:

Train Example for Object Detection Task

from ultralytics import YOLO

# Load a pre-trained YOLO model (you can choose n, s, m, l, or x versions)
model = YOLO("yolo11n.pt")

# Start training on your custom dataset
model.train(data="path/to/dataset.yaml", epochs=100, imgsz=640)

# Or train a YOLO model from the command line (CLI equivalent of the Python call above)
yolo detect train data=path/to/dataset.yaml epochs=100 imgsz=640

For a detailed walkthrough, check out our Train a Model guide, which includes examples and tips for optimizing your training process.

What license options are available for Ultralytics YOLO?

Ultralytics YOLO offers two licensing options:

  • AGPL-3.0 License: This open-source license is ideal for educational and non-commercial use, promoting open collaboration.
  • Enterprise License: Designed for commercial applications, this license allows seamless integration of Ultralytics software into commercial products without the restrictions of the AGPL-3.0 license.

For more details, visit our Licensing page.

How can Ultralytics YOLO be used for real-time object tracking?

Ultralytics YOLO supports efficient and customizable multi-object tracking. To make use of the tracking capabilities, you can use the yolo track command as shown below:

Example for Object Tracking on a Video

from ultralytics import YOLO

# Load a pre-trained YOLO model
model = YOLO("yolo11n.pt")

# Start tracking objects in a video
# You can also use live video streams or webcam input
model.track(source="path/to/video.mp4")

# Or perform object tracking on a video from the command line (CLI equivalent)
# You can specify different sources like webcam (0) or RTSP streams
yolo track source=path/to/video.mp4
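
Tracking behaviour can also be tuned directly from the Python call; a rough sketch with illustrative threshold values, assuming the ByteTrack tracker configuration bundled with the package:

Tracker Configuration Example

from ultralytics import YOLO

model = YOLO("yolo11n.pt")

# Track with an explicit tracker configuration and custom detection thresholds
model.track(
    source="path/to/video.mp4",
    tracker="bytetrack.yaml",  # alternative: "botsort.yaml" (the default)
    conf=0.3,                  # detection confidence threshold
    iou=0.5,                   # NMS IoU threshold
    show=True,                 # display annotated frames while tracking
)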

For a detailed guide on setting up and running object tracking, check our Track Mode documentation, which explains configuration and practical applications in real-time scenarios.
