์ฝ˜ํ…์ธ ๋กœ ๊ฑด๋„ˆ๋›ฐ๊ธฐ

Object Counting using Ultralytics YOLO11

์˜ค๋ธŒ์ ํŠธ ์นด์šดํŒ…์ด๋ž€ ๋ฌด์—‡์ธ๊ฐ€์š”?

Object counting with Ultralytics YOLO11 involves accurate identification and counting of specific objects in videos and camera streams. YOLO11 excels in real-time applications, providing efficient and precise object counting for various scenarios like crowd analysis and surveillance, thanks to its state-of-the-art algorithms and deep learning capabilities.


Watch: Object Counting using Ultralytics YOLO11

Watch: Class-wise Object Counting using Ultralytics YOLO11

์˜ค๋ธŒ์ ํŠธ ์นด์šดํŒ…์˜ ์žฅ์ ์€?

  • ๋ฆฌ์†Œ์Šค ์ตœ์ ํ™”: ๊ฐœ์ฒด ์นด์šดํŒ…์€ ์ •ํ™•ํ•œ ๊ฐœ์ˆ˜๋ฅผ ์ œ๊ณตํ•˜๊ณ  ์žฌ๊ณ  ๊ด€๋ฆฌ์™€ ๊ฐ™์€ ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜์—์„œ ๋ฆฌ์†Œ์Šค ํ• ๋‹น์„ ์ตœ์ ํ™”ํ•˜์—ฌ ํšจ์œจ์ ์ธ ๋ฆฌ์†Œ์Šค ๊ด€๋ฆฌ๋ฅผ ์šฉ์ดํ•˜๊ฒŒ ํ•ฉ๋‹ˆ๋‹ค.
  • ๋ณด์•ˆ ๊ฐ•ํ™”: ๊ฐœ์ฒด ์นด์šดํŒ…์€ ๊ฐœ์ฒด๋ฅผ ์ •ํ™•ํ•˜๊ฒŒ ์ถ”์ ํ•˜๊ณ  ๊ณ„์‚ฐํ•˜์—ฌ ์‚ฌ์ „ ์œ„ํ˜‘ ํƒ์ง€๋ฅผ ์ง€์›ํ•จ์œผ๋กœ์จ ๋ณด์•ˆ๊ณผ ๊ฐ์‹œ๋ฅผ ๊ฐ•ํ™”ํ•ฉ๋‹ˆ๋‹ค.
  • ์ •๋ณด์— ๊ธฐ๋ฐ˜ํ•œ ์˜์‚ฌ ๊ฒฐ์ •: ๊ฐ์ฒด ์นด์šดํŒ…์€ ์˜์‚ฌ ๊ฒฐ์ •, ์†Œ๋งค์—…, ๊ตํ†ต ๊ด€๋ฆฌ ๋ฐ ๊ธฐํƒ€ ๋‹ค์–‘ํ•œ ์˜์—ญ์˜ ํ”„๋กœ์„ธ์Šค ์ตœ์ ํ™”๋ฅผ ์œ„ํ•œ ๊ท€์ค‘ํ•œ ์ธ์‚ฌ์ดํŠธ๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค.

์‹ค์ œ ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜

๋ฌผ๋ฅ˜ ์–‘์‹์—…
Conveyor Belt Packets Counting Using Ultralytics YOLO11 Fish Counting in Sea using Ultralytics YOLO11
Conveyor Belt Packets Counting Using Ultralytics YOLO11 Fish Counting in Sea using Ultralytics YOLO11

Object Counting using YOLO11 Example
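
# Example: count objects entering or leaving a rectangular region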

import cv2

from ultralytics import solutions

cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

# Define region points
region_points = [(20, 400), (1080, 404), (1080, 360), (20, 360)]

# Video writer
video_writer = cv2.VideoWriter("object_counting_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

# Init Object Counter
counter = solutions.ObjectCounter(
    show=True,
    region=region_points,
    model="yolo11n.pt",
)

# Process video
while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        print("Video frame is empty or video processing has been successfully completed.")
        break
    im0 = counter.count(im0)
    video_writer.write(im0)

cap.release()
video_writer.release()
cv2.destroyAllWindows()
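
# Example: count objects crossing a line using an oriented bounding box (OBB) model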
import cv2

from ultralytics import solutions

cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

# Define line points
line_points = [(20, 400), (1080, 400)]

# Video writer
video_writer = cv2.VideoWriter("object_counting_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

# Init Object Counter
counter = solutions.ObjectCounter(
    show=True,
    region=line_points,
    model="yolo11n-obb.pt",
)

# Process video
while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        print("Video frame is empty or video processing has been successfully completed.")
        break
    im0 = counter.count(im0)
    video_writer.write(im0)

cap.release()
video_writer.release()
cv2.destroyAllWindows()
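
# Example: count objects inside a closed polygon region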
import cv2

from ultralytics import solutions

cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

# Define region points
region_points = [(20, 400), (1080, 404), (1080, 360), (20, 360), (20, 400)]

# Video writer
video_writer = cv2.VideoWriter("object_counting_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

# Init Object Counter
counter = solutions.ObjectCounter(
    show=True,
    region=region_points,
    model="yolo11n.pt",
)

# Process video
while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        print("Video frame is empty or video processing has been successfully completed.")
        break
    im0 = counter.count(im0)
    video_writer.write(im0)

cap.release()
video_writer.release()
cv2.destroyAllWindows()
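
# Example: count objects crossing a line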
import cv2

from ultralytics import solutions

cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

# Define line points
line_points = [(20, 400), (1080, 400)]

# Video writer
video_writer = cv2.VideoWriter("object_counting_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

# Init Object Counter
counter = solutions.ObjectCounter(
    show=True,
    region=line_points,
    model="yolo11n.pt",
)

# Process video
while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        print("Video frame is empty or video processing has been successfully completed.")
        break
    im0 = counter.count(im0)
    video_writer.write(im0)

cap.release()
video_writer.release()
cv2.destroyAllWindows()
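
# Example: count only specific classes (class indices 0 and 1)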
import cv2

from ultralytics import solutions

cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

# Video writer
video_writer = cv2.VideoWriter("object_counting_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

# Init Object Counter
counter = solutions.ObjectCounter(
    show=True,
    model="yolo11n.pt",
    classes=[0, 1],
)

# Process video
while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        print("Video frame is empty or video processing has been successfully completed.")
        break
    im0 = counter.count(im0)
    video_writer.write(im0)

cap.release()
video_writer.release()
cv2.destroyAllWindows()

ObjectCounter Arguments

Here is a table of the ObjectCounter arguments:

Name Type Default Description
model str None Path to the Ultralytics YOLO model file.
region list [(20, 400), (1260, 400)] List of points defining the counting region.
line_width int 2 Line thickness for bounding boxes.
show bool False Flag to control whether to display the video stream.
show_in bool True Flag to control whether to display the in counts on the video stream.
show_out bool True Flag to control whether to display the out counts on the video stream.
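
The table above maps directly onto the ObjectCounter constructor. A minimal sketch, with illustrative (not default) values:

from ultralytics import solutions

# Configure the counter with the arguments listed in the table above
counter = solutions.ObjectCounter(
    model="yolo11n.pt",  # path to the Ultralytics YOLO model file
    region=[(20, 400), (1260, 400)],  # two points act as a counting line; more points define a region
    line_width=2,  # thickness of drawn boxes and region boundaries
    show=True,  # display the annotated video stream
    show_in=True,  # overlay the "in" counts
    show_out=True,  # overlay the "out" counts
)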

model.track Arguments

Argument Type Default Description
source str None Specifies the source directory for images or videos. Supports file paths and URLs.
persist bool False Enables persistent tracking of objects between frames, maintaining IDs across video sequences.
tracker str botsort.yaml Specifies the tracking algorithm to use, e.g., bytetrack.yaml or botsort.yaml.
conf float 0.3 Sets the confidence threshold for detections; lower values allow more objects to be tracked but may include false positives.
iou float 0.5 Sets the Intersection over Union (IoU) threshold for filtering overlapping detections.
classes list None Filters results by class index. For example, classes=[0, 2, 3] only tracks the specified classes.
verbose bool True Controls the display of tracking results, providing a visual output of tracked objects.
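
Because counting relies on tracking, these arguments can also be exercised directly through model.track. A minimal sketch with illustrative values (the video path is a placeholder):

from ultralytics import YOLO

model = YOLO("yolo11n.pt")

# Track only persons (class 0) and cars (class 2) with a custom confidence threshold
results = model.track(
    source="path/to/video/file.mp4",  # file path or URL
    persist=True,  # keep track IDs consistent across frames
    tracker="bytetrack.yaml",  # or "botsort.yaml" (the default)
    conf=0.3,
    iou=0.5,
    classes=[0, 2],
    verbose=True,
)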

์ž์ฃผ ๋ฌป๋Š” ์งˆ๋ฌธ

How do I count objects in a video using Ultralytics YOLO11?

To count objects in a video using Ultralytics YOLO11, you can follow these steps:

  1. Import the required libraries (cv2, ultralytics).
  2. Define the counting region (e.g., a polygon, line, etc.).
  3. Set up the video capture and initialize the object counter.
  4. Process each frame to track objects and count them within the defined region.

๋‹ค์Œ์€ ํ•œ ์ง€์—ญ์—์„œ ๊ณ„์‚ฐํ•˜๋Š” ๊ฐ„๋‹จํ•œ ์˜ˆ์ž…๋‹ˆ๋‹ค:

import cv2

from ultralytics import solutions


def count_objects_in_region(video_path, output_video_path, model_path):
    """Count objects in a specific region within a video."""
    cap = cv2.VideoCapture(video_path)
    assert cap.isOpened(), "Error reading video file"
    w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
    video_writer = cv2.VideoWriter(output_video_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

    region_points = [(20, 400), (1080, 404), (1080, 360), (20, 360)]
    counter = solutions.ObjectCounter(show=True, region=region_points, model=model_path)

    while cap.isOpened():
        success, im0 = cap.read()
        if not success:
            print("Video frame is empty or video processing has been successfully completed.")
            break
        im0 = counter.count(im0)
        video_writer.write(im0)

    cap.release()
    video_writer.release()
    cv2.destroyAllWindows()


count_objects_in_region("path/to/video.mp4", "output_video.avi", "yolo11n.pt")

Explore more configurations and options in the Object Counting section.

What are the advantages of using Ultralytics YOLO11 for object counting?

Using Ultralytics YOLO11 for object counting offers several advantages:

  1. Resource Optimization: Object counting facilitates efficient resource management by providing accurate counts, helping optimize resource allocation in industries like inventory management.
  2. Enhanced Security: Accurately tracking and counting entities strengthens security and surveillance, aiding in proactive threat detection.
  3. Informed Decision-Making: Object counting offers valuable insights for decision-making and process optimization in domains like retail and traffic management.

For real-world applications and code examples, see the Advantages of Object Counting section.

How can I count specific classes of objects using Ultralytics YOLO11?

To count specific classes of objects using Ultralytics YOLO11, you need to specify the classes you are interested in during the tracking phase. Below is a Python example:

import cv2

from ultralytics import solutions


def count_specific_classes(video_path, output_video_path, model_path, classes_to_count):
    """Count specific classes of objects in a video."""
    cap = cv2.VideoCapture(video_path)
    assert cap.isOpened(), "Error reading video file"
    w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
    video_writer = cv2.VideoWriter(output_video_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

    line_points = [(20, 400), (1080, 400)]
    counter = solutions.ObjectCounter(show=True, region=line_points, model=model_path, classes=classes_to_count)

    while cap.isOpened():
        success, im0 = cap.read()
        if not success:
            print("Video frame is empty or video processing has been successfully completed.")
            break
        im0 = counter.count(im0)
        video_writer.write(im0)

    cap.release()
    video_writer.release()
    cv2.destroyAllWindows()


count_specific_classes("path/to/video.mp4", "output_specific_classes.avi", "yolo11n.pt", [0, 2])

์ด ์˜ˆ์ œ์—์„œ๋Š” classes_to_count=[0, 2]ํด๋ž˜์Šค์˜ ๊ฐ์ฒด๋ฅผ ๊ณ„์‚ฐํ•œ๋‹ค๋Š” ์˜๋ฏธ์ž…๋‹ˆ๋‹ค. 0 ๊ทธ๋ฆฌ๊ณ  2 (์˜ˆ: ์‚ฌ๋žŒ ๋ฐ ์ž๋™์ฐจ).

Why should I use YOLO11 over other object detection models for real-time applications?

Ultralytics YOLO11 provides several advantages over other object detection models like Faster R-CNN, SSD, and previous YOLO versions:

  1. Speed and Efficiency: YOLO11 offers real-time processing capabilities, making it ideal for applications requiring high-speed inference, such as surveillance and autonomous driving.
  2. Accuracy: It provides state-of-the-art accuracy for object detection and tracking tasks, reducing the number of false positives and improving overall system reliability.
  3. Ease of Integration: YOLO11 offers seamless integration with various platforms and devices, including mobile and edge devices, which is crucial for modern AI applications.
  4. Flexibility: Supports various tasks like object detection, segmentation, and tracking through configurable models, meeting the requirements of specific use cases.

Check out Ultralytics YOLO11 Documentation for a deeper dive into its features and performance comparisons.

Can I use YOLO11 for advanced applications like crowd analysis and traffic management?

Yes, Ultralytics YOLO11 is perfectly suited for advanced applications like crowd analysis and traffic management due to its real-time detection capabilities, scalability, and integration flexibility. Its advanced features allow for high-accuracy object tracking, counting, and classification in dynamic environments. Example use cases include:

  • Crowd Analysis: Monitor and manage large gatherings, ensuring safety and optimizing crowd flow.
  • Traffic Management: Track and count vehicles, analyze traffic patterns, and manage congestion in real time.

For more information and implementation details, refer to the guide on Real World Applications of object counting with YOLO11.


