
VisionEye View Object Mapping using Ultralytics YOLO11 🚀

What is VisionEye Object Mapping?

Ultralytics YOLO11 VisionEye offers the capability for computers to identify and pinpoint objects, simulating the observational precision of the human eye. This functionality enables computers to discern and focus on specific objects, much like the way the human eye observes details from a particular viewpoint.

Samples

| VisionEye View | VisionEye View with Object Tracking | VisionEye View with Distance Calculation |
| -------------- | ----------------------------------- | ---------------------------------------- |
| VisionEye View Object Mapping using Ultralytics YOLO11 | VisionEye View Object Mapping with Object Tracking using Ultralytics YOLO11 | VisionEye View with Distance Calculation using Ultralytics YOLO11 |

VisionEye Object Mapping using YOLO11

import cv2

from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors

model = YOLO("yolo11n.pt")  # Load a pretrained YOLO11 detection model
names = model.model.names  # Map class indices to class names
cap = cv2.VideoCapture("path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

out = cv2.VideoWriter("visioneye-pinpoint.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))

center_point = (-10, h)  # Anchor point (just outside the bottom-left corner) for the mapping lines

while True:
    ret, im0 = cap.read()
    if not ret:
        print("Video frame is empty or video processing has been successfully completed.")
        break

    results = model.predict(im0)
    boxes = results[0].boxes.xyxy.cpu()
    clss = results[0].boxes.cls.cpu().tolist()

    annotator = Annotator(im0, line_width=2)

    for box, cls in zip(boxes, clss):
        annotator.box_label(box, label=names[int(cls)], color=colors(int(cls)))
        annotator.visioneye(box, center_point)  # Draw a line from the anchor point to each detected object

    out.write(im0)
    cv2.imshow("visioneye-pinpoint", im0)

    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

out.release()
cap.release()
cv2.destroyAllWindows()
VisionEye Object Mapping with Object Tracking using YOLO11

import cv2

from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors

model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

out = cv2.VideoWriter("visioneye-pinpoint.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))

center_point = (-10, h)  # Anchor point for the VisionEye mapping lines

while True:
    ret, im0 = cap.read()
    if not ret:
        print("Video frame is empty or video processing has been successfully completed.")
        break

    annotator = Annotator(im0, line_width=2)

    results = model.track(im0, persist=True)  # persist=True keeps track IDs stable across frames
    boxes = results[0].boxes.xyxy.cpu()

    if results[0].boxes.id is not None:
        track_ids = results[0].boxes.id.int().cpu().tolist()

        for box, track_id in zip(boxes, track_ids):
            annotator.box_label(box, label=str(track_id), color=colors(int(track_id)))
            annotator.visioneye(box, center_point)

    out.write(im0)
    cv2.imshow("visioneye-pinpoint", im0)

    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

out.release()
cap.release()
cv2.destroyAllWindows()
VisionEye with Distance Calculation using YOLO11

import math

import cv2

from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator

model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("Path/to/video/file.mp4")

w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

out = cv2.VideoWriter("visioneye-distance-calculation.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))

center_point = (0, h)  # Anchor point (bottom-left corner) for distance measurement
pixel_per_meter = 10  # Approximate scene scale; calibrate for your camera and scene

txt_color, txt_background, bbox_clr = ((0, 0, 0), (255, 255, 255), (255, 0, 255))

while True:
    ret, im0 = cap.read()
    if not ret:
        print("Video frame is empty or video processing has been successfully completed.")
        break

    annotator = Annotator(im0, line_width=2)

    results = model.track(im0, persist=True)
    boxes = results[0].boxes.xyxy.cpu()

    if results[0].boxes.id is not None:
        track_ids = results[0].boxes.id.int().cpu().tolist()

        for box, track_id in zip(boxes, track_ids):
            annotator.box_label(box, label=str(track_id), color=bbox_clr)
            annotator.visioneye(box, center_point)

            x1, y1 = int((box[0] + box[2]) // 2), int((box[1] + box[3]) // 2)  # Bounding box centroid

            distance = (math.sqrt((x1 - center_point[0]) ** 2 + (y1 - center_point[1]) ** 2)) / pixel_per_meter

            text_size, _ = cv2.getTextSize(f"Distance: {distance:.2f} m", cv2.FONT_HERSHEY_SIMPLEX, 1.2, 3)
            cv2.rectangle(im0, (x1, y1 - text_size[1] - 10), (x1 + text_size[0] + 10, y1), txt_background, -1)
            cv2.putText(im0, f"Distance: {distance:.2f} m", (x1, y1 - 5), cv2.FONT_HERSHEY_SIMPLEX, 1.2, txt_color, 3)

    out.write(im0)
    cv2.imshow("visioneye-distance-calculation", im0)

    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

out.release()
cap.release()
cv2.destroyAllWindows()

visioneye Arguments

| Name | Type | Default | Description |
| ---- | ---- | ------- | ----------- |
| color | tuple | (235, 219, 11) | Line and object centroid color |
| pin_color | tuple | (255, 0, 255) | VisionEye pinpoint color |
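
Both parameters are optional keyword arguments of Annotator.visioneye. As a minimal sketch (passing the table's default values explicitly), you could override them in any of the loops shown above:

annotator.visioneye(box, center_point, color=(235, 219, 11), pin_color=(255, 0, 255))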

Note

For any inquiries, feel free to post your questions in the Ultralytics Issues section or in the discussion section mentioned below.

FAQ

How do I start using VisionEye Object Mapping with Ultralytics YOLO11?

To start using VisionEye Object Mapping with Ultralytics YOLO11, first install the Ultralytics YOLO package via pip:
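
pip install ultralytics

Then use the sample code provided in the documentation to set up object detection with VisionEye. Here's a simple example to get you started: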

import cv2

from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors

model = YOLO("yolo11n.pt")
names = model.model.names
cap = cv2.VideoCapture("path/to/video/file.mp4")
center_point = (-10, int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))  # Anchor point for the mapping lines

while True:
    ret, frame = cap.read()
    if not ret:
        break

    results = model.predict(frame)
    annotator = Annotator(frame, line_width=2)
    for box, cls in zip(results[0].boxes.xyxy.cpu(), results[0].boxes.cls.cpu().tolist()):
        annotator.box_label(box, label=names[int(cls)], color=colors(int(cls)))
        annotator.visioneye(box, center_point)

    cv2.imshow("visioneye", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()

What are the key features of VisionEye's object tracking capability using Ultralytics YOLO11?

VisionEye's object tracking with Ultralytics YOLO11 allows users to follow the movement of objects within a video frame. Key features include:

  1. Real-Time Object Tracking: Keeps up with objects as they move across frames.
  2. Object Identification: Utilizes YOLO11's powerful detection algorithms.
  3. Distance Calculation: Computes distances between objects and specified points.
  4. Annotation and Visualization: Provides visual markers for tracked objects.

Here is a brief code snippet demonstrating object tracking with VisionEye:

import cv2

from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors

model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
center_point = (-10, int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))

while True:
    ret, frame = cap.read()
    if not ret:
        break

    results = model.track(frame, persist=True)  # persist=True keeps track IDs stable across frames
    annotator = Annotator(frame, line_width=2)
    if results[0].boxes.id is not None:
        for box, track_id in zip(results[0].boxes.xyxy.cpu(), results[0].boxes.id.int().cpu().tolist()):
            annotator.box_label(box, label=str(track_id), color=colors(int(track_id)))
            annotator.visioneye(box, center_point)

    cv2.imshow("visioneye-tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()

For a comprehensive guide, see VisionEye Object Mapping with Object Tracking.

How can I calculate distances with VisionEye's YOLO11 model?

Distance calculation with VisionEye and Ultralytics YOLO11 involves determining the distance of detected objects from a specified point in the frame. It enhances spatial analysis capabilities, useful in applications such as autonomous driving and surveillance.

Here's a simplified example:

import math

import cv2

from ultralytics import YOLO

model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
center_point = (0, 480)  # Example center point
pixel_per_meter = 10

while True:
    ret, frame = cap.read()
    if not ret:
        break

    results = model.track(frame, persist=True)
    boxes = results[0].boxes.xyxy.cpu()

    # Distance from each bounding box centroid to the anchor point, in meters
    distances = [
        math.sqrt((float((box[0] + box[2]) / 2) - center_point[0]) ** 2 + (float((box[1] + box[3]) / 2) - center_point[1]) ** 2)
        / pixel_per_meter
        for box in boxes
    ]

    cv2.imshow("visioneye-distance", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()

For detailed instructions, refer to VisionEye with Distance Calculation.

Why should I use Ultralytics YOLO11 for object mapping and tracking?

Ultralytics YOLO11 is renowned for its speed, accuracy, and ease of integration, making it a top choice for object mapping and tracking. Key advantages include:

  1. State-of-the-art performance: Delivers high accuracy in real-time object detection.
  2. Flexibility: Supports various tasks such as detection, tracking, and distance calculation.
  3. Community and support: Extensive documentation and an active GitHub community for troubleshooting and enhancements.
  4. Ease of use: An intuitive API simplifies complex tasks, allowing for rapid deployment and iteration.

For more information on applications and benefits, check out the Ultralytics YOLO11 documentation.

How can I integrate VisionEye with other machine learning tools like Comet or ClearML?

Ultralytics YOLO11 can integrate seamlessly with various machine learning tools like Comet and ClearML, enhancing experiment tracking, collaboration, and reproducibility. Follow the detailed guides on how to use YOLOv5 with Comet and integrate YOLO11 with ClearML to get started.
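
Below is a minimal sketch, assuming the clearml package has been installed and configured via clearml-init (or, for Comet, that COMET_API_KEY is set); Ultralytics detects the installed logger and records the training run automatically:

from ultralytics import YOLO

# Quick-start sketch: with clearml (or comet_ml) installed and configured,
# this training run is logged to the corresponding experiment tracker automatically.
model = YOLO("yolo11n.pt")
model.train(data="coco8.yaml", epochs=3, imgsz=640)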

For deeper exploration and integration examples, check out our Ultralytics Integrations Guide.

