
Object Counting using Ultralytics YOLO11

What is Object Counting?

Object counting with Ultralytics YOLO11 involves accurate identification and counting of specific objects in videos and camera streams. YOLO11 excels in real-time applications, providing efficient and precise object counting for various scenarios like crowd analysis and surveillance, thanks to its state-of-the-art algorithms and deep learning capabilities.


Watch: Object Counting using Ultralytics YOLOv8

Watch: Class-wise Object Counting using Ultralytics YOLO11

Advantages of Object Counting

  • Resource Optimization: Object counting facilitates efficient resource management by providing accurate counts and optimizing resource allocation in applications like inventory management.
  • Enhanced Security: Object counting enhances security and surveillance by accurately tracking and counting entities, aiding in proactive threat detection.
  • Informed Decision-Making: Object counting offers valuable insights for decision-making, optimizing processes in retail, traffic management, and various other domains.

Real World Applications

  • Logistics: Conveyor Belt Packets Counting Using Ultralytics YOLO11
  • Aquaculture: Fish Counting in Sea using Ultralytics YOLO11

Object Counting using YOLO11 Example

# Run a counting example
yolo solutions count show=True

# Pass a source video
yolo solutions count source="path/to/video/file.mp4"

# Pass region coordinates
yolo solutions count region=[(20, 400), (1080, 400), (1080, 360), (20, 360)]

import cv2

from ultralytics import solutions

cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

# Define region points
# region_points = [(20, 400), (1080, 400)]  # For line counting
region_points = [(20, 400), (1080, 400), (1080, 360), (20, 360)]  # For rectangle region counting
# region_points = [(20, 400), (1080, 400), (1080, 360), (20, 360), (20, 400)]  # For polygon region counting

# Video writer
video_writer = cv2.VideoWriter("object_counting_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

# Init Object Counter
counter = solutions.ObjectCounter(
    show=True,  # Display the output
    region=region_points,  # Pass region points
    model="yolo11n.pt",  # model="yolo11n-obb.pt" for object counting using YOLO11 OBB model.
    # classes=[0, 2],  # Count only specific classes, e.g., person and car with a COCO-pretrained model
    # show_in=True,  # Display in counts
    # show_out=True,  # Display out counts
    # line_width=2,  # Adjust the line width for bounding boxes and text display
)

# Process video
while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        print("Video frame is empty or video processing has been successfully completed.")
        break
    im0 = counter.count(im0)
    video_writer.write(im0)

cap.release()
video_writer.release()
cv2.destroyAllWindows()
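
The same pipeline also works on a live camera or RTSP stream. The snippet below is a minimal sketch based on the example above, assuming a webcam at device index 0; swap in your own stream URL or device index as needed.

import cv2

from ultralytics import solutions

# Open a live source instead of a video file (0 = default webcam; an RTSP URL string also works)
cap = cv2.VideoCapture(0)
assert cap.isOpened(), "Error reading video stream"

# Init Object Counter with a simple counting line
counter = solutions.ObjectCounter(
    show=True,  # Display the annotated stream
    region=[(20, 400), (1080, 400)],  # Two points define a counting line
    model="yolo11n.pt",
)

# Process the live stream frame by frame
while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        break
    im0 = counter.count(im0)  # Annotated frame with in/out counts

cap.release()
cv2.destroyAllWindows()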

ObjectCounter Arguments

Here's a table with the ObjectCounter arguments:

| Name | Type | Default | Description |
| --- | --- | --- | --- |
| model | str | None | Path to the Ultralytics YOLO model file. |
| region | list | [(20, 400), (1260, 400)] | List of points defining the counting region. |
| line_width | int | 2 | Line thickness for bounding boxes. |
| show | bool | False | Flag to control whether to display the video stream. |
| show_in | bool | True | Flag to control whether to display the in counts on the video stream. |
| show_out | bool | True | Flag to control whether to display the out counts on the video stream. |
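
For reference, here is a minimal sketch that wires up every argument from the table above; the values are illustrative, not recommendations.

from ultralytics import solutions

# Illustrative ObjectCounter configuration covering the documented arguments
counter = solutions.ObjectCounter(
    model="yolo11n.pt",  # Path to an Ultralytics YOLO model file
    region=[(20, 400), (1260, 400)],  # Two points = counting line; three or more = counting region
    line_width=2,  # Line thickness for bounding boxes and text
    show=True,  # Display the video stream
    show_in=True,  # Display in counts
    show_out=True,  # Display out counts
)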

model.track Arguments

| Argument | Type | Default | Description |
| --- | --- | --- | --- |
| source | str | None | Specifies the source directory for images or videos. Supports file paths and URLs. |
| persist | bool | False | Enables persistent tracking of objects between frames, maintaining IDs across video sequences. |
| tracker | str | botsort.yaml | Specifies the tracking algorithm to use, e.g., bytetrack.yaml or botsort.yaml. |
| conf | float | 0.3 | Sets the confidence threshold for detections; lower values allow more objects to be tracked but may include false positives. |
| iou | float | 0.5 | Sets the Intersection over Union (IoU) threshold for filtering overlapping detections. |
| classes | list | None | Filters results by class index. For example, classes=[0, 2, 3] only tracks the specified classes. |
| verbose | bool | True | Controls the display of tracking results, providing a visual output of tracked objects. |
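
Object counting uses Ultralytics tracking under the hood, which is why these arguments are documented here. For reference, the sketch below shows a direct model.track call using the same options; the video path is a placeholder.

from ultralytics import YOLO

# Load a detection model and run tracking with the arguments from the table above
model = YOLO("yolo11n.pt")
results = model.track(
    source="path/to/video/file.mp4",  # File path or URL
    persist=True,  # Keep track IDs across frames
    tracker="bytetrack.yaml",  # Or "botsort.yaml" (default)
    conf=0.3,  # Detection confidence threshold
    iou=0.5,  # IoU threshold for filtering overlapping detections
    classes=[0, 2],  # Track only persons and cars (COCO indices)
    verbose=True,  # Print tracking results
)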

FAQ

How do I count objects in a video using Ultralytics YOLO11?

To count objects in a video using Ultralytics YOLO11, you can follow these steps:

  1. Import the necessary libraries (cv2, ultralytics).
  2. Define the counting region (e.g., a line or polygon).
  3. Set up the video capture and initialize the object counter.
  4. Process each frame to track objects and count them within the defined region.

Here's a simple example for counting in a region:

import cv2

from ultralytics import solutions


def count_objects_in_region(video_path, output_video_path, model_path):
    """Count objects in a specific region within a video."""
    cap = cv2.VideoCapture(video_path)
    assert cap.isOpened(), "Error reading video file"
    w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
    video_writer = cv2.VideoWriter(output_video_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

    region_points = [(20, 400), (1080, 400), (1080, 360), (20, 360)]
    counter = solutions.ObjectCounter(show=True, region=region_points, model=model_path)

    while cap.isOpened():
        success, im0 = cap.read()
        if not success:
            print("Video frame is empty or video processing has been successfully completed.")
            break
        im0 = counter.count(im0)
        video_writer.write(im0)

    cap.release()
    video_writer.release()
    cv2.destroyAllWindows()


count_objects_in_region("path/to/video.mp4", "output_video.avi", "yolo11n.pt")

Explore more configurations and options in the Object Counting section.

What are the advantages of using Ultralytics YOLO11 for object counting?

Using Ultralytics YOLO11 for object counting offers several advantages:

  1. Resource Optimization: It facilitates efficient resource management by providing accurate counts, helping optimize resource allocation in industries like inventory management.
  2. Enhanced Security: It enhances security and surveillance by accurately tracking and counting entities, aiding in proactive threat detection.
  3. Informed Decision-Making: It offers valuable insights for decision-making, optimizing processes in domains like retail, traffic management, and more.

For real-world applications and code examples, visit the Advantages of Object Counting section.

How can I count specific classes of objects using Ultralytics YOLO11?

To count specific classes of objects using Ultralytics YOLO11, you need to specify the classes you are interested in during the tracking phase. Below is a Python example:

import cv2

from ultralytics import solutions


def count_specific_classes(video_path, output_video_path, model_path, classes_to_count):
    """Count specific classes of objects in a video."""
    cap = cv2.VideoCapture(video_path)
    assert cap.isOpened(), "Error reading video file"
    w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
    video_writer = cv2.VideoWriter(output_video_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

    line_points = [(20, 400), (1080, 400)]
    counter = solutions.ObjectCounter(show=True, region=line_points, model=model_path, classes=classes_to_count)

    while cap.isOpened():
        success, im0 = cap.read()
        if not success:
            print("Video frame is empty or video processing has been successfully completed.")
            break
        im0 = counter.count(im0)
        video_writer.write(im0)

    cap.release()
    video_writer.release()
    cv2.destroyAllWindows()


count_specific_classes("path/to/video.mp4", "output_specific_classes.avi", "yolo11n.pt", [0, 2])

In this example, classes_to_count=[0, 2], which means it counts objects of classes 0 and 2 (person and car with a COCO-pretrained model).
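
If you are unsure which index corresponds to which class for your model, you can print the model's class-name mapping; a quick check, assuming a COCO-pretrained yolo11n.pt:

from ultralytics import YOLO

# Inspect the class index-to-name mapping of the loaded model
model = YOLO("yolo11n.pt")
print(model.names)  # e.g., {0: 'person', 1: 'bicycle', 2: 'car', ...}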

Why should I use YOLO11 over other object detection models for real-time applications?

Ultralytics YOLO11 provides several advantages over other object detection models like Faster R-CNN, SSD, and previous YOLO versions:

  1. Speed and Efficiency: YOLO11 offers real-time processing capabilities, making it ideal for applications requiring high-speed inference, such as surveillance and autonomous driving.
  2. Accuracy: It provides state-of-the-art accuracy for object detection and tracking tasks, reducing the number of false positives and improving overall system reliability.
  3. Ease of Integration: YOLO11 offers seamless integration with various platforms and devices, including mobile and edge devices, which is crucial for modern AI applications.
  4. Flexibility: Supports various tasks like object detection, segmentation, and tracking with configurable models to meet specific use-case requirements.

Check out Ultralytics YOLO11 Documentation for a deeper dive into its features and performance comparisons.

Can I use YOLO11 for advanced applications like crowd analysis and traffic management?

Yes, Ultralytics YOLO11 is perfectly suited for advanced applications like crowd analysis and traffic management due to its real-time detection capabilities, scalability, and integration flexibility. Its advanced features allow for high-accuracy object tracking, counting, and classification in dynamic environments. Example use cases include:

  • Crowd Analysis: Monitor and manage large gatherings, ensuring safety and optimizing crowd flow.
  • Traffic Management: Track and count vehicles, analyze traffic patterns, and manage congestion in real time (see the sketch below).

For more information and implementation details, refer to the guide on Real World Applications of object counting with YOLO11.
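
As a concrete illustration of the traffic management use case, the sketch below counts vehicles crossing a line, assuming a COCO-pretrained model where indices 2, 3, 5, and 7 correspond to car, motorcycle, bus, and truck; the video path and line coordinates are placeholders.

import cv2

from ultralytics import solutions

cap = cv2.VideoCapture("path/to/traffic/video.mp4")
assert cap.isOpened(), "Error reading video file"

# Count vehicles crossing a horizontal line across the roadway
counter = solutions.ObjectCounter(
    show=True,
    region=[(0, 360), (1280, 360)],  # Placeholder line coordinates
    model="yolo11n.pt",
    classes=[2, 3, 5, 7],  # car, motorcycle, bus, truck (COCO indices)
)

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        break
    im0 = counter.count(im0)

cap.release()
cv2.destroyAllWindows()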
