์ฝ˜ํ…์ธ ๋กœ ๊ฑด๋„ˆ๋›ฐ๊ธฐ

VisionEye View Object Mapping using Ultralytics YOLOv8 ๐Ÿš€

๋น„์ „์•„์ด ์˜ค๋ธŒ์ ํŠธ ๋งคํ•‘์ด๋ž€ ๋ฌด์—‡์ธ๊ฐ€์š”?

Ultralytics YOLOv8 VisionEye gives computers the ability to identify and pinpoint objects, simulating the observational precision of the human eye. This functionality lets a computer discern and focus on specific objects, much as the human eye observes details from a particular viewpoint.

Samples

๋น„์ „์•„์ด ๋ทฐ ๊ฐ์ฒด ์ถ”์  ๊ธฐ๋Šฅ์ด ์žˆ๋Š” VisionEye ๋ทฐ ๊ฑฐ๋ฆฌ ๊ณ„์‚ฐ์ด ํฌํ•จ๋œ VisionEye ๋ณด๊ธฐ
VisionEye ๋ทฐ ๊ฐ์ฒด ๋งคํ•‘์„ ์‚ฌ์šฉํ•˜์—ฌ Ultralytics YOLOv8 ๋‹ค์Œ์„ ์‚ฌ์šฉํ•˜์—ฌ ๊ฐ์ฒด ์ถ”์ ์„ ํ†ตํ•œ VisionEye ๋ณด๊ธฐ ๊ฐ์ฒด ๋งคํ•‘ Ultralytics YOLOv8 ๋‹ค์Œ์„ ์‚ฌ์šฉํ•˜์—ฌ ๊ฑฐ๋ฆฌ ๊ณ„์‚ฐ์ด ํฌํ•จ๋œ VisionEye ๋ณด๊ธฐ Ultralytics YOLOv8
VisionEye ๋ทฐ ๊ฐ์ฒด ๋งคํ•‘์„ ์‚ฌ์šฉํ•˜์—ฌ Ultralytics YOLOv8 ๋‹ค์Œ์„ ์‚ฌ์šฉํ•˜์—ฌ ๊ฐ์ฒด ์ถ”์ ์„ ํ†ตํ•œ VisionEye ๋ณด๊ธฐ ๊ฐ์ฒด ๋งคํ•‘ Ultralytics YOLOv8 ๋‹ค์Œ์„ ์‚ฌ์šฉํ•˜์—ฌ ๊ฑฐ๋ฆฌ ๊ณ„์‚ฐ์ด ํฌํ•จ๋œ VisionEye ๋ณด๊ธฐ Ultralytics YOLOv8

VisionEye ์˜ค๋ธŒ์ ํŠธ ๋งคํ•‘์„ ์‚ฌ์šฉํ•˜์—ฌ YOLOv8

import cv2
from ultralytics import YOLO
from ultralytics.utils.plotting import colors, Annotator

model = YOLO("yolov8n.pt")
names = model.model.names
cap = cv2.VideoCapture("path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

out = cv2.VideoWriter('visioneye-pinpoint.avi', cv2.VideoWriter_fourcc(*'MJPG'), fps, (w, h))

center_point = (-10, h)  # visioneye viewpoint, just outside the bottom-left corner of the frame

while True:
    ret, im0 = cap.read()
    if not ret:
        print("Video frame is empty or video processing has been successfully completed.")
        break

    results = model.predict(im0)
    boxes = results[0].boxes.xyxy.cpu()
    clss = results[0].boxes.cls.cpu().tolist()

    annotator = Annotator(im0, line_width=2)

    for box, cls in zip(boxes, clss):
        annotator.box_label(box, label=names[int(cls)], color=colors(int(cls)))
        annotator.visioneye(box, center_point)

    out.write(im0)
    cv2.imshow("visioneye-pinpoint", im0)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

out.release()
cap.release()
cv2.destroyAllWindows()
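The mapping example above draws a VisionEye line from the fixed viewpoint to every detection in the frame. If only certain objects should be mapped, the standard Ultralytics predict arguments can be used to filter detections before annotation. A minimal sketch follows, assuming the COCO label set (class index 0 is person) and a 0.5 confidence threshold; these lines replace the corresponding prediction and annotation lines inside the loop above, and the rest of the loop is unchanged.

# Hedged sketch: keep VisionEye lines only for persons detected with confidence >= 0.5.
# classes and conf are standard Ultralytics predict arguments; the class index assumes the COCO label set.
results = model.predict(im0, classes=[0], conf=0.5)
boxes = results[0].boxes.xyxy.cpu()
clss = results[0].boxes.cls.cpu().tolist()

annotator = Annotator(im0, line_width=2)

for box, cls in zip(boxes, clss):
    annotator.box_label(box, label=names[int(cls)], color=colors(int(cls)))
    annotator.visioneye(box, center_point)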
VisionEye Object Mapping with Object Tracking using YOLOv8

import cv2
from ultralytics import YOLO
from ultralytics.utils.plotting import colors, Annotator

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

out = cv2.VideoWriter('visioneye-pinpoint.avi', cv2.VideoWriter_fourcc(*'MJPG'), fps, (w, h))

center_point = (-10, h)  # visioneye viewpoint, just outside the bottom-left corner of the frame

while True:
    ret, im0 = cap.read()
    if not ret:
        print("Video frame is empty or video processing has been successfully completed.")
        break

    annotator = Annotator(im0, line_width=2)

    results = model.track(im0, persist=True)
    boxes = results[0].boxes.xyxy.cpu()

    if results[0].boxes.id is not None:
        track_ids = results[0].boxes.id.int().cpu().tolist()

        for box, track_id in zip(boxes, track_ids):
            annotator.box_label(box, label=str(track_id), color=colors(int(track_id)))
            annotator.visioneye(box, center_point)

    out.write(im0)
    cv2.imshow("visioneye-pinpoint", im0)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

out.release()
cap.release()
cv2.destroyAllWindows()
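The tracking example relies on model.track with persist=True so that IDs carry over between frames, and it uses the default tracker configuration. If a different tracker is preferred, the standard tracker argument of model.track can be passed instead; a minimal sketch is shown below, with the rest of the loop above unchanged.

# Hedged sketch: select the ByteTrack configuration shipped with Ultralytics instead of the default tracker.
results = model.track(im0, persist=True, tracker="bytetrack.yaml")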
VisionEye with Distance Calculation using YOLOv8

import cv2
import math
from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors

model = YOLO("yolov8s.pt")
cap = cv2.VideoCapture("Path/to/video/file.mp4")

w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

out = cv2.VideoWriter('visioneye-distance-calculation.avi', cv2.VideoWriter_fourcc(*'MJPG'), fps, (w, h))

center_point = (0, h)  # visioneye viewpoint, bottom-left corner of the frame
pixel_per_meter = 10  # scene-specific calibration: image pixels per real-world meter

txt_color, txt_background, bbox_clr = ((0, 0, 0), (255, 255, 255), (255, 0, 255))

while True:
    ret, im0 = cap.read()
    if not ret:
        print("Video frame is empty or video processing has been successfully completed.")
        break

    annotator = Annotator(im0, line_width=2)

    results = model.track(im0, persist=True)
    boxes = results[0].boxes.xyxy.cpu()

    if results[0].boxes.id is not None:
        track_ids = results[0].boxes.id.int().cpu().tolist()

        for box, track_id in zip(boxes, track_ids):
            annotator.box_label(box, label=str(track_id), color=bbox_clr)
            annotator.visioneye(box, center_point)

            x1, y1 = int((box[0] + box[2]) // 2), int((box[1] + box[3]) // 2)  # bounding box centroid

            distance = math.sqrt((x1 - center_point[0]) ** 2 + (y1 - center_point[1]) ** 2) / pixel_per_meter

            text_size, _ = cv2.getTextSize(f"Distance: {distance:.2f} m", cv2.FONT_HERSHEY_SIMPLEX, 1.2, 3)
            cv2.rectangle(im0, (x1, y1 - text_size[1] - 10), (x1 + text_size[0] + 10, y1), txt_background, -1)
            cv2.putText(im0, f"Distance: {distance:.2f} m", (x1, y1 - 5), cv2.FONT_HERSHEY_SIMPLEX, 1.2, txt_color, 3)

    out.write(im0)
    cv2.imshow("visioneye-distance-calculation", im0)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

out.release()
cap.release()
cv2.destroyAllWindows()
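In the distance example, the value displayed is the straight-line pixel distance from each bounding-box centroid to center_point, divided by pixel_per_meter. pixel_per_meter is a scene-specific calibration constant, not something the model estimates, so the hard-coded value of 10 is only a placeholder. A rough sketch of deriving it from a reference object of known real-world size visible in the frame follows; the pixel and meter values below are assumed example numbers.

# Hedged sketch: calibrate pixel_per_meter from an object of known size visible in the frame.
# Both values below are assumptions for illustration; measure them for your own camera and scene.
reference_length_px = 45   # length of the reference object as it appears in the image, in pixels
reference_length_m = 4.5   # real-world length of the same object, in meters

pixel_per_meter = reference_length_px / reference_length_m  # 10.0 pixels per meter in this example

# The per-object distance then follows the same formula used in the loop above:
# distance_m = math.sqrt((x1 - center_point[0]) ** 2 + (y1 - center_point[1]) ** 2) / pixel_per_meter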

visioneye ์ธ์ˆ˜

Name | Type | Default | Description
color | tuple | (235, 219, 11) | Line and object centroid color
pin_color | tuple | (255, 0, 255) | VisionEye pinpoint color
thickness | int | 2 | Pinpoint to object line thickness
pins_radius | int | 10 | Pinpoint and object centroid circle radius
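All of these arguments are optional and fall back to the defaults listed above. A minimal sketch of passing them explicitly is shown below; the values simply restate the table defaults, and the optional keyword arguments may differ between Ultralytics releases, so check the signature in your installed version.

# Hedged sketch: call visioneye with the documented optional arguments spelled out.
annotator.visioneye(
    box,                       # bounding box of the detected object (xyxy format)
    center_point,              # fixed viewpoint the line is drawn from
    color=(235, 219, 11),      # line and object centroid color
    pin_color=(255, 0, 255),   # VisionEye pinpoint color
    thickness=2,               # pinpoint to object line thickness
    pins_radius=10,            # pinpoint and object centroid circle radius
)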

Note

๋ฌธ์˜ ์‚ฌํ•ญ์ด ์žˆ์œผ์‹œ๋ฉด Ultralytics ์ด์Šˆ ์„น์…˜ ๋˜๋Š” ์•„๋ž˜์— ์–ธ๊ธ‰๋œ ํ† ๋ก  ์„น์…˜์— ์ž์œ ๋กญ๊ฒŒ ์งˆ๋ฌธ์„ ๊ฒŒ์‹œํ•ด ์ฃผ์„ธ์š”.



์ƒ์„ฑ 2023-12-18, ์—…๋ฐ์ดํŠธ 2024-03-03
์ž‘์„ฑ์ž: glenn-jocher (6), RizwanMunawar (1)

๋Œ“๊ธ€