Object Blurring using Ultralytics YOLO11 🚀

What is Object Blurring?

Object blurring with Ultralytics YOLO11 involves applying a blurring effect to specific detected objects in an image or video. This can be achieved using the YOLO11 model capabilities to identify and manipulate objects within a given scene.



Watch: Object Blurring using Ultralytics YOLO11

Advantages of Object Blurring

  • Privacy Protection: Object blurring is an effective tool for safeguarding privacy by concealing sensitive or personally identifiable information in images or videos.
  • Selective Focus: YOLO11 allows for selective blurring, enabling users to target specific objects and strike a balance between privacy and retaining relevant visual information (see the class-filtering sketch after this list).
  • Real-time Processing: YOLO11's efficiency enables object blurring in real-time, making it suitable for applications requiring on-the-fly privacy enhancements in dynamic environments.
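
Selective blurring can be implemented by passing the classes argument to model.predict so that only the chosen categories are obscured. The minimal sketch below assumes the COCO-pretrained yolo11n.pt weights, where class ID 0 corresponds to "person"; the file paths are placeholders:

import cv2

from ultralytics import YOLO

model = YOLO("yolo11n.pt")  # COCO-pretrained weights; class 0 is "person"
im0 = cv2.imread("path/to/image.jpg")

# Restrict detection to persons only, so everything else stays sharp
results = model.predict(im0, classes=[0], show=False)

for box in results[0].boxes.xyxy.cpu().tolist():
    x1, y1, x2, y2 = map(int, box)
    im0[y1:y2, x1:x2] = cv2.blur(im0[y1:y2, x1:x2], (50, 50))

cv2.imwrite("selective_blur_output.jpg", im0)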

Object Blurring using YOLO11 Example

import cv2

from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors

model = YOLO("yolo11n.pt")
names = model.names

cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

# Blur kernel size (larger values produce a stronger blur)
blur_ratio = 50

# Video writer
video_writer = cv2.VideoWriter("object_blurring_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        print("Video frame is empty or video processing has been successfully completed.")
        break

    # Run YOLO11 inference and collect boxes and class IDs
    results = model.predict(im0, show=False)
    boxes = results[0].boxes.xyxy.cpu().tolist()
    clss = results[0].boxes.cls.cpu().tolist()
    annotator = Annotator(im0, line_width=2, example=names)

    if boxes:
        for box, cls in zip(boxes, clss):
            annotator.box_label(box, color=colors(int(cls), True), label=names[int(cls)])

            # Crop the detected region, blur it, and write it back into the frame
            obj = im0[int(box[1]) : int(box[3]), int(box[0]) : int(box[2])]
            blur_obj = cv2.blur(obj, (blur_ratio, blur_ratio))

            im0[int(box[1]) : int(box[3]), int(box[0]) : int(box[2])] = blur_obj

    cv2.imshow("ultralytics", im0)
    video_writer.write(im0)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
video_writer.release()
cv2.destroyAllWindows()
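
cv2.blur applies a simple box filter. For stronger anonymization, pixelation is a common alternative; below is a minimal sketch of a pixelate helper that could replace the cv2.blur call in the loop above (the helper name and pixel_size value are illustrative, not part of the Ultralytics API):

import cv2

def pixelate(region, pixel_size=16):
    """Downscale then upscale a region with nearest-neighbor interpolation to pixelate it."""
    h, w = region.shape[:2]
    small = cv2.resize(region, (max(1, w // pixel_size), max(1, h // pixel_size)), interpolation=cv2.INTER_LINEAR)
    return cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)

# Usage inside the loop above, instead of cv2.blur:
# im0[int(box[1]) : int(box[3]), int(box[0]) : int(box[2])] = pixelate(obj)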

model.predict Arguments

| Argument | Type | Default | Description |
|----------|------|---------|-------------|
| source | str | 'ultralytics/assets' | Specifies the data source for inference. Can be an image path, video file, directory, URL, or device ID for live feeds. Supports a wide range of formats and sources, enabling flexible application across different types of input. |
| conf | float | 0.25 | Sets the minimum confidence threshold for detections. Objects detected with confidence below this threshold are disregarded. Adjusting this value can help reduce false positives. |
| iou | float | 0.7 | Intersection over Union (IoU) threshold for Non-Maximum Suppression (NMS). Lower values result in fewer detections by eliminating overlapping boxes, useful for reducing duplicates. |
| imgsz | int or tuple | 640 | Defines the image size for inference. Can be a single integer (e.g., 640) for square resizing or a (height, width) tuple. Proper sizing can improve detection accuracy and processing speed. |
| half | bool | False | Enables half-precision (FP16) inference, which can speed up model inference on supported GPUs with minimal impact on accuracy. |
| device | str | None | Specifies the device for inference (e.g., cpu, cuda:0 or 0). Allows users to select between CPU, a specific GPU, or other compute devices for model execution. |
| batch | int | 1 | Specifies the batch size for inference (only works when the source is a directory, video file, or .txt file). A larger batch size can provide higher throughput, shortening the total time required for inference. |
| max_det | int | 300 | Maximum number of detections allowed per image. Limits the total number of objects the model can detect in a single inference, preventing excessive outputs in dense scenes. |
| vid_stride | int | 1 | Frame stride for video inputs. Allows skipping frames in videos to speed up processing at the cost of temporal resolution. A value of 1 processes every frame; higher values skip frames. |
| stream_buffer | bool | False | Determines whether to queue incoming frames for video streams. If False, old frames are dropped to accommodate new frames (optimized for real-time applications). If True, new frames are queued in a buffer, ensuring no frames are skipped, but this causes latency if inference FPS is lower than stream FPS. |
| visualize | bool | False | Activates visualization of model features during inference, providing insights into what the model is "seeing". Useful for debugging and model interpretation. |
| augment | bool | False | Enables test-time augmentation (TTA) for predictions, potentially improving detection robustness at the cost of inference speed. |
| agnostic_nms | bool | False | Enables class-agnostic Non-Maximum Suppression (NMS), which merges overlapping boxes of different classes. Useful in multi-class detection scenarios where class overlap is common. |
| classes | list[int] | None | Filters predictions to a set of class IDs. Only detections belonging to the specified classes are returned. Useful for focusing on relevant objects in multi-class detection tasks. |
| retina_masks | bool | False | Returns high-resolution segmentation masks. The returned masks (masks.data) will match the original image size if enabled. If disabled, they have the image size used during inference. |
| embed | list[int] | None | Specifies the layers from which to extract feature vectors or embeddings. Useful for downstream tasks like clustering or similarity search. |
| project | str | None | Name of the project directory where prediction outputs are saved if save is enabled. |
| name | str | None | Name of the prediction run. Used for creating a subdirectory within the project folder, where prediction outputs are stored if save is enabled. |
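
These arguments can be combined to tune a blurring pipeline, for example raising conf to cut false positives, filtering to specific classes, and skipping frames with vid_stride. A brief sketch (the threshold and class IDs are illustrative):

from ultralytics import YOLO

model = YOLO("yolo11n.pt")

# Higher confidence threshold, persons only, every 2nd frame of the video
results = model.predict(
    source="path/to/video/file.mp4",
    conf=0.4,  # discard low-confidence detections
    classes=[0],  # COCO class 0 = person
    vid_stride=2,  # process every other frame
    stream=True,  # yield results frame by frame to save memory
)

for r in results:
    print(r.boxes.xyxy)  # boxes for each processed frame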

FAQ

What is object blurring with Ultralytics YOLO11?

Object blurring with Ultralytics YOLO11 involves automatically detecting and applying a blurring effect to specific objects in images or videos. This technique enhances privacy by concealing sensitive information while retaining relevant visual data. YOLO11's real-time processing capabilities make it suitable for applications requiring immediate privacy protection and selective focus adjustments.

How can I implement real-time object blurring using YOLO11?

To implement real-time object blurring with YOLO11, follow the provided Python example. This involves using YOLO11 for object detection and OpenCV for applying the blur effect. Here's a simplified version:

import cv2

from ultralytics import YOLO

model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        break

    results = model.predict(im0, show=False)
    # Blur every detected object directly in the frame
    for box in results[0].boxes.xyxy.cpu().tolist():
        obj = im0[int(box[1]) : int(box[3]), int(box[0]) : int(box[2])]
        im0[int(box[1]) : int(box[3]), int(box[0]) : int(box[2])] = cv2.blur(obj, (50, 50))

    cv2.imshow("YOLO11 Blurring", im0)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()

What are the benefits of using Ultralytics YOLO11 for object blurring?

Ultralytics YOLO11 offers several advantages for object blurring:

  • Privacy Protection: Effectively obscure sensitive or identifiable information.
  • Selective Focus: Target specific objects for blurring, maintaining essential visual content.
  • Real-time Processing: Execute object blurring efficiently in dynamic environments, suitable for instant privacy enhancements.

For more detailed applications, check the advantages of object blurring section.

Can I use Ultralytics YOLO11 to blur faces in a video for privacy reasons?

Yes, Ultralytics YOLO11 can be configured to detect and blur faces in videos to protect privacy. By training or using a pre-trained model to specifically recognize faces, the detection results can be processed with OpenCV to apply a blur effect. Refer to our guide on object detection with YOLO11 and modify the code to target face detection.
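A minimal sketch, assuming a face-detection model is available. The weights path below is a placeholder for any YOLO11 model trained on a face dataset; the standard COCO-pretrained yolo11n.pt detects whole persons, not faces:

import cv2

from ultralytics import YOLO

# Placeholder weights: substitute a YOLO11 model trained to detect faces
model = YOLO("path/to/face-model.pt")

im0 = cv2.imread("path/to/image.jpg")
for box in model.predict(im0, show=False)[0].boxes.xyxy.cpu().tolist():
    x1, y1, x2, y2 = map(int, box)
    im0[y1:y2, x1:x2] = cv2.blur(im0[y1:y2, x1:x2], (50, 50))
cv2.imwrite("blurred_faces.jpg", im0)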

How does YOLO11 compare to other object detection models like Faster R-CNN for object blurring?

Ultralytics YOLO11 typically outperforms models like Faster R-CNN in terms of speed, making it more suitable for real-time applications. While both models offer accurate detection, YOLO11's architecture is optimized for rapid inference, which is critical for tasks like real-time object blurring. Learn more about the technical differences and performance metrics in our YOLO11 documentation.
