VisionEye View Object Mapping using Ultralytics YOLO11 🚀
What is VisionEye Object Mapping?
Ultralytics YOLO11 VisionEye offers the capability for computers to identify and pinpoint objects, simulating the observational precision of the human eye. This functionality enables computers to discern and focus on specific objects, much like the way the human eye observes details from a particular viewpoint.
Samples
| VisionEye View | VisionEye View with Object Tracking | VisionEye View with Distance Calculation |
| --- | --- | --- |
| VisionEye View Object Mapping using Ultralytics YOLO11 | VisionEye View Object Mapping with Object Tracking using Ultralytics YOLO11 | VisionEye View with Distance Calculation using Ultralytics YOLO11 |
VisionEye Object Mapping using YOLO11
import cv2

from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors

model = YOLO("yolo11n.pt")
names = model.model.names
cap = cv2.VideoCapture("path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
out = cv2.VideoWriter("visioneye-pinpoint.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))

center_point = (-10, h)  # Anchor point for the VisionEye lines, just outside the bottom-left corner

while True:
    ret, im0 = cap.read()
    if not ret:
        print("Video frame is empty or video processing has been successfully completed.")
        break

    results = model.predict(im0)
    boxes = results[0].boxes.xyxy.cpu()
    clss = results[0].boxes.cls.cpu().tolist()

    annotator = Annotator(im0, line_width=2)

    for box, cls in zip(boxes, clss):
        annotator.box_label(box, label=names[int(cls)], color=colors(int(cls)))
        annotator.visioneye(box, center_point)

    out.write(im0)
    cv2.imshow("visioneye-pinpoint", im0)

    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

out.release()
cap.release()
cv2.destroyAllWindows()
VisionEye Object Mapping with Object Tracking using YOLO11

import cv2

from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors

model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
out = cv2.VideoWriter("visioneye-pinpoint.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))

center_point = (-10, h)  # Anchor point for the VisionEye lines

while True:
    ret, im0 = cap.read()
    if not ret:
        print("Video frame is empty or video processing has been successfully completed.")
        break

    annotator = Annotator(im0, line_width=2)

    results = model.track(im0, persist=True)  # persist=True keeps track IDs stable across frames
    boxes = results[0].boxes.xyxy.cpu()

    if results[0].boxes.id is not None:
        track_ids = results[0].boxes.id.int().cpu().tolist()

        for box, track_id in zip(boxes, track_ids):
            annotator.box_label(box, label=str(track_id), color=colors(int(track_id)))
            annotator.visioneye(box, center_point)

    out.write(im0)
    cv2.imshow("visioneye-pinpoint", im0)

    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

out.release()
cap.release()
cv2.destroyAllWindows()
VisionEye with Distance Calculation using YOLO11

import math

import cv2

from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator

model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
out = cv2.VideoWriter("visioneye-distance-calculation.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))

center_point = (0, h)  # Reference point for distance measurement, at the bottom-left corner
pixel_per_meter = 10  # Scale factor converting pixel distances to meters

txt_color, txt_background, bbox_clr = ((0, 0, 0), (255, 255, 255), (255, 0, 255))

while True:
    ret, im0 = cap.read()
    if not ret:
        print("Video frame is empty or video processing has been successfully completed.")
        break

    annotator = Annotator(im0, line_width=2)

    results = model.track(im0, persist=True)
    boxes = results[0].boxes.xyxy.cpu()

    if results[0].boxes.id is not None:
        track_ids = results[0].boxes.id.int().cpu().tolist()

        for box, track_id in zip(boxes, track_ids):
            annotator.box_label(box, label=str(track_id), color=bbox_clr)
            annotator.visioneye(box, center_point)

            x1, y1 = int((box[0] + box[2]) // 2), int((box[1] + box[3]) // 2)  # Bounding box centroid

            distance = (math.sqrt((x1 - center_point[0]) ** 2 + (y1 - center_point[1]) ** 2)) / pixel_per_meter

            text_size, _ = cv2.getTextSize(f"Distance: {distance:.2f} m", cv2.FONT_HERSHEY_SIMPLEX, 1.2, 3)
            cv2.rectangle(im0, (x1, y1 - text_size[1] - 10), (x1 + text_size[0] + 10, y1), txt_background, -1)
            cv2.putText(im0, f"Distance: {distance:.2f} m", (x1, y1 - 5), cv2.FONT_HERSHEY_SIMPLEX, 1.2, txt_color, 3)

    out.write(im0)
    cv2.imshow("visioneye-distance-calculation", im0)

    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

out.release()
cap.release()
cv2.destroyAllWindows()
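In the example above, `pixel_per_meter` is the scale factor that converts pixel distances into meters. A minimal sketch of one way to estimate it, assuming a reference object of known real-world width is visible in the scene (the helper below is illustrative, not part of the Ultralytics API):

def estimate_pixel_per_meter(reference_box, real_width_m):
    """Estimate the pixel-to-meter scale from a detected reference object of known width."""
    pixel_width = float(reference_box[2] - reference_box[0])  # Box format is (x1, y1, x2, y2)
    return pixel_width / real_width_m

# Hypothetical usage: a detected car with a 250 px wide box and a real width of about 1.8 m
# pixel_per_meter = estimate_pixel_per_meter(car_box, 1.8)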
visioneye Arguments

| Name | Type | Default | Description |
| --- | --- | --- | --- |
| `color` | `tuple` | `(235, 219, 11)` | Color of the line and object centroid |
| `pin_color` | `tuple` | `(255, 0, 255)` | Color of the VisionEye pinpoint |
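Both arguments can be passed directly to `annotator.visioneye` to customize the rendering. A minimal usage sketch, using the default values from the table above:

# Draw the mapping line and centroid in the default yellow, and the pinpoint in magenta
annotator.visioneye(box, center_point, color=(235, 219, 11), pin_color=(255, 0, 255))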
Note

For any inquiries, feel free to post your questions in the Ultralytics Issues section or the discussion section mentioned below.
FAQ
How do I start using VisionEye Object Mapping with Ultralytics YOLO11?
To start using VisionEye Object Mapping with Ultralytics YOLO11, first, you'll need to install the Ultralytics YOLO package via pip. Then, you can use the sample code provided in the documentation to set up object detection with VisionEye. Here's a simple example to get you started:
import cv2

from ultralytics import YOLO

model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")

while True:
    ret, frame = cap.read()
    if not ret:
        break

    results = model.predict(frame)

    for result in results:
        # Perform custom logic with result
        pass

    cv2.imshow("visioneye", frame)

    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
What are the key features of VisionEye's object tracking capability using Ultralytics YOLO11?
VisionEye's object tracking with Ultralytics YOLO11 allows users to follow the movement of objects within a video frame. Key features include:
- Real-Time Object Tracking: Follows objects as they move across frames.
- Object Identification: Utilizes YOLO11's powerful detection algorithms.
- Distance Calculation: Computes distances between objects and specified points.
- Annotation and Visualization: Provides visual markers for tracked objects.

Here is a brief code snippet demonstrating tracking with VisionEye:
import cv2

from ultralytics import YOLO

model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")

while True:
    ret, frame = cap.read()
    if not ret:
        break

    results = model.track(frame, persist=True)

    for result in results:
        # Annotate and visualize tracking
        pass

    cv2.imshow("visioneye-tracking", frame)

    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
For a comprehensive guide, visit VisionEye Object Mapping with Object Tracking.
How can I calculate distances with VisionEye's YOLO11 model?
Distance calculation with VisionEye and Ultralytics YOLO11 involves determining the distance of detected objects from a specified point in the frame. It enhances spatial analysis capabilities, useful in applications such as autonomous driving and surveillance.
Here is a simplified example:
import math

import cv2

from ultralytics import YOLO

model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")

center_point = (0, 480)  # Example reference point
pixel_per_meter = 10

while True:
    ret, frame = cap.read()
    if not ret:
        break

    results = model.track(frame, persist=True)
    boxes = results[0].boxes.xyxy.cpu()

    # Distance from each bounding-box centroid to the reference point, in meters
    distances = [
        math.sqrt(((box[0] + box[2]) / 2 - center_point[0]) ** 2 + ((box[1] + box[3]) / 2 - center_point[1]) ** 2)
        / pixel_per_meter
        for box in boxes
    ]

    cv2.imshow("visioneye-distance", frame)

    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
For detailed instructions, refer to VisionEye with Distance Calculation.
Why should I use Ultralytics YOLO11 for object mapping and tracking?
Ultralytics YOLO11 is renowned for its speed, accuracy, and ease of integration, making it a top choice for object mapping and tracking. Key advantages include:
- State-of-the-Art Performance: Delivers high accuracy in real-time object detection.
- Flexibility: Supports various tasks such as detection, tracking, and distance calculation.
- Community and Support: Extensive documentation and an active GitHub community for troubleshooting and enhancements.
- Ease of Use: An intuitive API simplifies complex tasks, allowing for rapid deployment and iteration.
For more information on applications and benefits, check out the Ultralytics YOLO11 documentation.
How can I integrate VisionEye with other machine learning tools like Comet or ClearML?
Ultralytics YOLO11 can integrate seamlessly with various machine learning tools like Comet and ClearML, enhancing experiment tracking, collaboration, and reproducibility. Follow the detailed guides on how to use YOLOv5 with Comet and integrate YOLO11 with ClearML to get started.
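As a minimal sketch of the ClearML side, assuming the `clearml` package is installed and configured (`pip install clearml`, then `clearml-init`; the project and task names here are illustrative), initializing a task before training lets ClearML capture the run automatically:

from clearml import Task

from ultralytics import YOLO

# Create a ClearML task so the training run is logged and reproducible
task = Task.init(project_name="VisionEye", task_name="yolo11n-training")

model = YOLO("yolo11n.pt")
model.train(data="coco8.yaml", epochs=3)  # coco8.yaml is a small sample dataset shipped with Ultralytics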
For further insights and integration examples, check out our Ultralytics Integrations Guide.