Object Blurring using Ultralytics YOLO11 🚀
What is Object Blurring?
Object blurring with Ultralytics YOLO11 involves applying a blurring effect to specific detected objects in an image or video. This can be achieved using the YOLO11 model capabilities to identify and manipulate objects within a given scene.
Watch: Object Blurring using Ultralytics YOLO11
Advantages of Object Blurring?
- Privacy Protection: Object blurring is an effective tool for safeguarding privacy by concealing sensitive or personally identifiable information in images or videos.
- Selective Focus: YOLO11 allows for selective blurring, enabling users to target specific objects, ensuring a balance between privacy and retaining relevant visual information.
- Real-time Processing: YOLO11's efficiency enables object blurring in real-time, making it suitable for applications requiring on-the-fly privacy enhancements in dynamic environments.
Object Blurring using YOLO11 Example
```python
import cv2

from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors

model = YOLO("yolo11n.pt")
names = model.names

cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

# Blur ratio (kernel size for cv2.blur)
blur_ratio = 50

# Video writer
video_writer = cv2.VideoWriter("object_blurring_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        print("Video frame is empty or video processing has been successfully completed.")
        break

    results = model.predict(im0, show=False)
    boxes = results[0].boxes.xyxy.cpu().tolist()
    clss = results[0].boxes.cls.cpu().tolist()
    annotator = Annotator(im0, line_width=2, example=names)

    if boxes is not None:
        for box, cls in zip(boxes, clss):
            annotator.box_label(box, color=colors(int(cls), True), label=names[int(cls)])

            # Crop the detected object and apply a box blur
            obj = im0[int(box[1]) : int(box[3]), int(box[0]) : int(box[2])]
            blur_obj = cv2.blur(obj, (blur_ratio, blur_ratio))
            im0[int(box[1]) : int(box[3]), int(box[0]) : int(box[2])] = blur_obj

    cv2.imshow("ultralytics", im0)
    video_writer.write(im0)

    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
video_writer.release()
cv2.destroyAllWindows()
```
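The example anonymizes objects with `cv2.blur`. Pixelation is another common anonymization effect; as a rough, NumPy-only sketch (my addition, not part of the Ultralytics API), this helper could stand in for the `cv2.blur` call, since the cropped `obj` is a NumPy array:

```python
import numpy as np


def pixelate(roi: np.ndarray, block: int = 16) -> np.ndarray:
    """Pixelate a region by tiling every `block`-th pixel (a blur alternative)."""
    h, w = roi.shape[:2]
    small = roi[::block, ::block]  # subsample one pixel per block
    # Expand each sampled pixel back into a block x block tile, then crop to size
    return np.repeat(np.repeat(small, block, axis=0), block, axis=1)[:h, :w]
```

In the loop above, `blur_obj = pixelate(obj)` would then replace the `cv2.blur(...)` line; larger `block` values give coarser, more anonymous tiles.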
`model.predict` Arguments

| Argument | Type | Default | Description |
| --- | --- | --- | --- |
| `source` | `str` | `'ultralytics/assets'` | Specifies the data source for inference. Can be an image path, video file, directory, URL, or device ID for live feeds. Supports a wide range of formats and sources, enabling flexible application across different types of input. |
| `conf` | `float` | `0.25` | Sets the minimum confidence threshold for detections. Objects detected with confidence below this threshold are disregarded. Adjusting this value can help reduce false positives. |
| `iou` | `float` | `0.7` | Intersection Over Union (IoU) threshold for Non-Maximum Suppression (NMS). Lower values result in fewer detections by eliminating overlapping boxes, useful for reducing duplicates. |
| `imgsz` | `int or tuple` | `640` | Defines the image size for inference. Can be a single integer `640` for square resizing or a `(height, width)` tuple. Proper sizing can improve detection accuracy and processing speed. |
| `half` | `bool` | `False` | Enables half-precision (FP16) inference, which can speed up model inference on supported GPUs with minimal impact on accuracy. |
| `device` | `str` | `None` | Specifies the device for inference (e.g., `cpu`, `cuda:0` or `0`). Allows users to select between CPU, a specific GPU, or other compute devices for model execution. |
| `max_det` | `int` | `300` | Maximum number of detections allowed per image. Limits the total number of objects the model can detect in a single inference, preventing excessive outputs in dense scenes. |
| `vid_stride` | `int` | `1` | Frame stride for video inputs. Allows skipping frames in videos to speed up processing at the cost of temporal resolution. A value of 1 processes every frame, higher values skip frames. |
| `stream_buffer` | `bool` | `False` | Determines whether to queue incoming frames for video streams. If `False`, old frames get dropped to accommodate new frames (optimized for real-time applications). If `True`, queues new frames in a buffer, ensuring no frames get skipped, but will cause latency if inference FPS is lower than stream FPS. |
| `visualize` | `bool` | `False` | Activates visualization of model features during inference, providing insights into what the model is "seeing". Useful for debugging and model interpretation. |
| `augment` | `bool` | `False` | Enables test-time augmentation (TTA) for predictions, potentially improving detection robustness at the cost of inference speed. |
| `agnostic_nms` | `bool` | `False` | Enables class-agnostic Non-Maximum Suppression (NMS), which merges overlapping boxes of different classes. Useful in multi-class detection scenarios where class overlap is common. |
| `classes` | `list[int]` | `None` | Filters predictions to a set of class IDs. Only detections belonging to the specified classes are returned. Useful for focusing on relevant objects in multi-class detection tasks. |
| `retina_masks` | `bool` | `False` | Uses high-resolution segmentation masks if available in the model. This can enhance mask quality for segmentation tasks, providing finer detail. |
| `embed` | `list[int]` | `None` | Specifies the layers from which to extract feature vectors or embeddings. Useful for downstream tasks like clustering or similarity search. |
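To make the interplay of `conf` and `classes` concrete, here is a hedged, pure-Python sketch of the filtering these two arguments perform. The `(class_id, confidence)` pairs are illustrative only; the real filtering happens inside `model.predict`:

```python
def filter_detections(dets, conf=0.25, classes=None):
    """Mimic predict's conf/classes filtering on (class_id, confidence) pairs."""
    kept = [d for d in dets if d[1] >= conf]  # drop low-confidence detections
    if classes is not None:
        kept = [d for d in kept if d[0] in classes]  # keep only requested class IDs
    return kept


# Illustrative detections: (class_id, confidence); class 0 is "person" in COCO models
dets = [(0, 0.92), (2, 0.31), (0, 0.12)]
print(filter_detections(dets))               # [(0, 0.92), (2, 0.31)]
print(filter_detections(dets, classes=[0]))  # [(0, 0.92)]
```

Passing `conf=0.25, classes=[0]` to `model.predict` would have the analogous effect of blurring only confidently detected people.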
FAQ
What is object blurring with Ultralytics YOLO11?
Object blurring with Ultralytics YOLO11 involves automatically detecting and applying a blurring effect to specific objects in images or videos. This technique enhances privacy by concealing sensitive information while retaining relevant visual data. YOLO11's real-time processing capabilities make it suitable for applications requiring immediate privacy protection and selective focus adjustments.
How can I implement real-time object blurring using YOLO11?
To implement real-time object blurring with YOLO11, follow the provided Python example. This involves using YOLO11 for object detection and OpenCV for applying the blur effect. Here's a simplified version:
```python
import cv2

from ultralytics import YOLO

model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        break

    results = model.predict(im0, show=False)
    for box in results[0].boxes.xyxy.cpu().tolist():
        obj = im0[int(box[1]) : int(box[3]), int(box[0]) : int(box[2])]
        im0[int(box[1]) : int(box[3]), int(box[0]) : int(box[2])] = cv2.blur(obj, (50, 50))

    cv2.imshow("YOLO11 Blurring", im0)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```
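One caveat with the slicing above: detection boxes can extend a few pixels past the frame edges, and the float coordinates must be truncated consistently. A small helper (my addition, not part of the Ultralytics API) keeps the slice in bounds:

```python
def clamp_box(box, w, h):
    """Clamp an (x1, y1, x2, y2) box to a w x h frame so slicing stays valid."""
    x1, y1, x2, y2 = (int(v) for v in box)
    x1, x2 = max(0, x1), min(w, x2)
    y1, y2 = max(0, y1), min(h, y2)
    return x1, y1, x2, y2
```

In the loop, `x1, y1, x2, y2 = clamp_box(box, im0.shape[1], im0.shape[0])` would then give safe indices for cropping and writing back the blurred region.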
What are the benefits of using Ultralytics YOLO11 for object blurring?
Ultralytics YOLO11 offers several advantages for object blurring:
- Privacy Protection: Effectively obscures sensitive or personally identifiable information.
- Selective Focus: Targets specific objects for blurring while preserving essential visual content.
- Real-time Processing: Executes object blurring efficiently in dynamic environments, making it suitable for instant privacy enhancements.
For detailed applications, check the advantages of object blurring section above.
Can I use Ultralytics YOLO11 to blur faces in a video for privacy reasons?
Yes, Ultralytics YOLO11 can be configured to detect and blur faces in videos to protect privacy. By training or using a pre-trained model to specifically recognize faces, the detection results can be processed with OpenCV to apply a blur effect. Refer to our guide on object detection with YOLO11 and modify the code to target face detection.
How does YOLO11 compare to other object detection models like Faster R-CNN for object blurring?
Ultralytics YOLO11 typically outperforms models like Faster R-CNN in terms of speed, making it more suitable for real-time applications. While both models offer accurate detection, YOLO11's architecture is optimized for rapid inference, which is critical for tasks like real-time object blurring. Learn more about the technical differences and performance metrics in our YOLO11 documentation.