Instance Segmentation and Tracking using Ultralytics YOLO11 🚀
What is Instance Segmentation?
Instance segmentation with Ultralytics YOLO11 involves identifying and outlining individual objects in an image, providing a detailed understanding of their spatial distribution. Unlike semantic segmentation, it uniquely labels and precisely delineates each object, which is crucial for tasks such as object detection and medical imaging.
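Before the full video examples below, here is a minimal sketch of the prediction API on a single image (the image path is a placeholder; any local image works):

```python
# Minimal sketch: run a pretrained YOLO11 segmentation model on one image
# and inspect what instance segmentation returns per object.
from ultralytics import YOLO

model = YOLO("yolo11n-seg.pt")  # pretrained segmentation checkpoint
results = model.predict("path/to/image.jpg")  # placeholder path

for result in results:
    if result.masks is not None:
        print(f"{len(result.masks.xy)} instances segmented")  # one polygon per instance
        print(result.boxes.cls.tolist())  # class index for each instance
```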
There are two types of instance segmentation tracking available in the Ultralytics package:

- Instance segmentation with class objects: each class object is assigned a unique color for clear visual separation.
- Instance segmentation with object tracks: every track is drawn in a distinct color, facilitating easy identification and tracking (the short sketch after this list shows the one-line difference between the two modes).
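The only practical difference between the two modes is which integer the palette color is keyed on. A sketch of that difference, using the same `colors` helper that appears in the examples below (the index values here are arbitrary examples):

```python
from ultralytics.utils.plotting import colors

cls, track_id = 0, 7  # arbitrary example class index and track ID

# Keyed on the class: every object of the same class shares one color.
class_color = colors(cls, True)
# Keyed on the track ID: each tracked object keeps its own color across frames.
track_color = colors(track_id, True)
print(class_color, track_color)
```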
Watch: Instance Segmentation with Object Tracking using Ultralytics YOLO11
Samples

| Instance Segmentation | Instance Segmentation + Object Tracking |
| --- | --- |
| Ultralytics Instance Segmentation 😍 | Ultralytics Instance Segmentation with Object Tracking 🔥 |
Instance Segmentation and Tracking

Instance segmentation:
```python
import cv2

from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors

model = YOLO("yolo11n-seg.pt")  # segmentation model
names = model.model.names
cap = cv2.VideoCapture("path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

out = cv2.VideoWriter("instance-segmentation.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))

while True:
    ret, im0 = cap.read()
    if not ret:
        print("Video frame is empty or video processing has been successfully completed.")
        break

    results = model.predict(im0)
    annotator = Annotator(im0, line_width=2)

    if results[0].masks is not None:
        clss = results[0].boxes.cls.cpu().tolist()
        masks = results[0].masks.xy
        for mask, cls in zip(masks, clss):
            color = colors(int(cls), True)
            txt_color = annotator.get_txt_color(color)
            annotator.seg_bbox(mask=mask, mask_color=color, label=names[int(cls)], txt_color=txt_color)

    out.write(im0)
    cv2.imshow("instance-segmentation", im0)

    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

out.release()
cap.release()
cv2.destroyAllWindows()
```
Instance segmentation with object tracking:

```python
import cv2

from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors

model = YOLO("yolo11n-seg.pt")  # segmentation model
cap = cv2.VideoCapture("path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

out = cv2.VideoWriter("instance-segmentation-object-tracking.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))

while True:
    ret, im0 = cap.read()
    if not ret:
        print("Video frame is empty or video processing has been successfully completed.")
        break

    annotator = Annotator(im0, line_width=2)
    results = model.track(im0, persist=True)

    if results[0].boxes.id is not None and results[0].masks is not None:
        masks = results[0].masks.xy
        track_ids = results[0].boxes.id.int().cpu().tolist()
        for mask, track_id in zip(masks, track_ids):
            color = colors(int(track_id), True)
            txt_color = annotator.get_txt_color(color)
            annotator.seg_bbox(mask=mask, mask_color=color, label=str(track_id), txt_color=txt_color)

    out.write(im0)
    cv2.imshow("instance-segmentation-object-tracking", im0)

    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

out.release()
cap.release()
cv2.destroyAllWindows()
```
`seg_bbox` Arguments

| Name | Type | Default | Description |
| --- | --- | --- | --- |
| `mask` | `array` | `None` | Segmentation mask coordinates |
| `mask_color` | `RGB` | `(255, 0, 255)` | Mask color for every segmented box |
| `label` | `str` | `None` | Label for the segmented object |
| `txt_color` | `RGB` | `None` | Label color for the segmented and tracked object |
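As a rough standalone illustration of these arguments, the sketch below calls `seg_bbox` on a blank image with a dummy triangular polygon; in real use the mask comes from `results[0].masks.xy` as in the examples above:

```python
import cv2
import numpy as np

from ultralytics.utils.plotting import Annotator, colors

im = np.zeros((480, 640, 3), dtype=np.uint8)  # blank test image
annotator = Annotator(im, line_width=2)

mask = np.array([[100, 100], [300, 120], [200, 300]], dtype=np.float32)  # dummy polygon
color = colors(0, True)
annotator.seg_bbox(mask=mask, mask_color=color, label="person", txt_color=annotator.get_txt_color(color))

cv2.imwrite("seg_bbox_demo.jpg", im)  # write the annotated image to disk
```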
Note

If you have any questions, feel free to post them in the Ultralytics Issues section or the discussion section mentioned below.
FAQ

How do I perform instance segmentation using Ultralytics YOLO11?

To perform instance segmentation using Ultralytics YOLO11, initialize the YOLO model with a segmentation variant of YOLO11 and process video frames through it. Here is a simplified code example:

Example
```python
import cv2

from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors

model = YOLO("yolo11n-seg.pt")  # segmentation model
cap = cv2.VideoCapture("path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

out = cv2.VideoWriter("instance-segmentation.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))

while True:
    ret, im0 = cap.read()
    if not ret:
        break

    results = model.predict(im0)
    annotator = Annotator(im0, line_width=2)

    if results[0].masks is not None:
        clss = results[0].boxes.cls.cpu().tolist()
        masks = results[0].masks.xy
        for mask, cls in zip(masks, clss):
            annotator.seg_bbox(mask=mask, mask_color=colors(int(cls), True), label=model.model.names[int(cls)])

    out.write(im0)
    cv2.imshow("instance-segmentation", im0)

    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

out.release()
cap.release()
cv2.destroyAllWindows()
```
For more information on instance segmentation, refer to the Ultralytics YOLO11 guide.
What is the difference between instance segmentation and object tracking in Ultralytics YOLO11?

Instance segmentation identifies and outlines individual objects within an image, giving each object a unique label and mask. Object tracking extends this by assigning consistent IDs to objects across video frames, enabling continuous tracking of the same objects over time. Learn more about the distinction in the Ultralytics YOLO11 documentation.
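At the API level the distinction is just `predict` versus `track`, as this compact sketch shows (reusing the placeholder video path from the examples above):

```python
from ultralytics import YOLO

model = YOLO("yolo11n-seg.pt")

# Instance segmentation alone: every frame is processed independently,
# so nothing links an object in one frame to the same object in the next.
results = model.predict("path/to/video/file.mp4")

# Segmentation with tracking: the tracker assigns a persistent ID per object,
# exposed as results[i].boxes.id for each frame i.
results = model.track("path/to/video/file.mp4")
```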
Why should I use Ultralytics YOLO11 for instance segmentation and tracking over other models like Mask R-CNN or Faster R-CNN?

Compared to models such as Mask R-CNN or Faster R-CNN, Ultralytics YOLO11 offers real-time performance and integrates seamlessly with Ultralytics HUB, enabling users to manage models, datasets, and training pipelines efficiently. For more information about the advantages of YOLO11, visit the Ultralytics blog.
How can I implement object tracking using Ultralytics YOLO11?

To implement object tracking, use the `model.track` method and ensure that each object's ID is consistently assigned across frames. Below is a simple example:

Example
```python
import cv2

from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors

model = YOLO("yolo11n-seg.pt")  # segmentation model
cap = cv2.VideoCapture("path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

out = cv2.VideoWriter("instance-segmentation-object-tracking.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))

while True:
    ret, im0 = cap.read()
    if not ret:
        break

    annotator = Annotator(im0, line_width=2)
    results = model.track(im0, persist=True)

    if results[0].boxes.id is not None and results[0].masks is not None:
        masks = results[0].masks.xy
        track_ids = results[0].boxes.id.int().cpu().tolist()
        for mask, track_id in zip(masks, track_ids):
            annotator.seg_bbox(mask=mask, mask_color=colors(int(track_id), True), label=str(track_id))

    out.write(im0)
    cv2.imshow("instance-segmentation-object-tracking", im0)

    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

out.release()
cap.release()
cv2.destroyAllWindows()
```
For more information, refer to the Instance Segmentation and Tracking section above.
Does Ultralytics provide datasets suitable for training YOLO11 models for segmentation and tracking?

Yes, Ultralytics offers several datasets suitable for training YOLO11 models, including segmentation and tracking datasets. Dataset examples, structures, and usage instructions can be found in the Ultralytics Datasets documentation.
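For instance, here is a minimal training sketch using the small COCO8-seg sample dataset that ships with Ultralytics (the epoch count is just for illustration):

```python
from ultralytics import YOLO

# Fine-tune a pretrained YOLO11 segmentation model on COCO8-seg,
# a tiny 8-image sample dataset bundled with Ultralytics.
model = YOLO("yolo11n-seg.pt")
model.train(data="coco8-seg.yaml", epochs=3, imgsz=640)  # short run for illustration
```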