μ½˜ν…μΈ λ‘œ κ±΄λ„ˆλ›°κΈ°

Speed Estimation using Ultralytics YOLO11 πŸš€

속도 μΆ”μ •μ΄λž€ λ¬΄μ—‡μΈκ°€μš”?

Speed estimation is the process of calculating the rate of movement of an object within a given context, often employed in computer vision applications. Using Ultralytics YOLO11, you can calculate the speed of objects by combining object tracking with distance and time data, which is crucial for tasks like traffic monitoring and surveillance. The accuracy of speed estimation directly influences the efficiency and reliability of various applications, making it a key component in the advancement of intelligent systems and real-time decision-making processes.
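The underlying idea can be sketched without any model at all: given a tracked object's pixel displacement between frames, a (hypothetical) camera calibration in meters per pixel, and the video frame rate, speed follows directly. The function name and scale factor below are illustrative, not part of the Ultralytics API:

```python
def estimate_speed_kmh(pixel_displacement: float, meters_per_pixel: float, fps: float) -> float:
    """Estimate speed from per-frame pixel displacement.

    Assumes a known meters-per-pixel scale (from camera calibration)
    and a constant frame rate; both are hypothetical inputs here.
    """
    meters_per_frame = pixel_displacement * meters_per_pixel
    meters_per_second = meters_per_frame * fps
    return meters_per_second * 3.6  # convert m/s to km/h


# An object moving 5 px/frame, with 0.05 m/px calibration, at 30 FPS:
print(estimate_speed_kmh(5, 0.05, 30))  # 27.0 km/h
```

`SpeedEstimator` handles the tracking and timing for you, but the estimate is only as good as the pixel-to-world scale assumption, which is why a calibrated region matters.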



Watch: Speed Estimation using Ultralytics YOLO11

Check out the blog

For deeper insights into speed estimation, check out our blog post: Ultralytics YOLO11 for Speed Estimation in Computer Vision Projects

Advantages of Speed Estimation?

  • Efficient Traffic Control: Accurate speed estimation helps manage traffic flow, enhance safety, and reduce road congestion.
  • Precise Autonomous Navigation: In autonomous systems such as self-driving cars, reliable speed estimation ensures safe and accurate vehicle navigation.
  • Enhanced Security Surveillance: Speed estimation in surveillance analytics helps identify unusual behavior or potential threats, improving the effectiveness of security measures.

Real-World Applications

κ΅ν†΅νŽΈ κ΅ν†΅νŽΈ
Speed Estimation on Road using Ultralytics YOLO11 Speed Estimation on Bridge using Ultralytics YOLO11
Speed Estimation on Road using Ultralytics YOLO11 Speed Estimation on Bridge using Ultralytics YOLO11

Speed Estimation using YOLO11 Example

import cv2

from ultralytics import solutions

cap = cv2.VideoCapture("path/to/video/file.mp4")

assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

video_writer = cv2.VideoWriter("speed_management.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

speed_region = [(20, 400), (1080, 404), (1080, 360), (20, 360)]

speed = solutions.SpeedEstimator(model="yolo11n.pt", region=speed_region, show=True)

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        print("Video frame is empty or video processing has been successfully completed.")
        break
    im0 = speed.estimate_speed(im0)  # returns the annotated frame
    video_writer.write(im0)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
video_writer.release()
cv2.destroyAllWindows()

μ†λ„λŠ” μΆ”μ •μΉ˜μž…λ‹ˆλ‹€.

μ†λ„λŠ” μΆ”μ •μΉ˜μ΄λ©° μ™„μ „νžˆ μ •ν™•ν•˜μ§€ μ•Šμ„ 수 μžˆμŠ΅λ‹ˆλ‹€. λ˜ν•œ GPU 속도에 따라 μΆ”μ •μΉ˜κ°€ λ‹¬λΌμ§ˆ 수 μžˆμŠ΅λ‹ˆλ‹€.

Arguments `SpeedEstimator`

Name Type Default Description
model str None Path to the Ultralytics YOLO model file.
region list [(20, 400), (1260, 400)] List of points defining the counting region.
line_width int 2 Line thickness of the bounding boxes.
show bool False Flag to control whether to display the video stream.

Arguments `model.track`

Argument Type Default Description
source str None Specifies the source directory for images or videos. Supports file paths and URLs.
persist bool False Enables persistent tracking of objects between frames, maintaining IDs across video sequences.
tracker str botsort.yaml Specifies the tracking algorithm to use, e.g., bytetrack.yaml or botsort.yaml.
conf float 0.3 Sets the confidence threshold for detections; lower values allow more objects to be tracked but may include false positives.
iou float 0.5 Sets the Intersection over Union (IoU) threshold for filtering overlapping detections.
classes list None Filters results by class index. For example, classes=[0, 2, 3] only tracks the specified classes.
verbose bool True Controls the display of tracking results, providing a visual output of tracked objects.
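The `iou` argument above filters overlapping detections by Intersection over Union. As a quick reference for what that threshold compares, IoU for two axis-aligned boxes can be computed as follows (a standalone sketch, not the library's internal implementation):

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) axis-aligned boxes."""
    # Intersection rectangle corners
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)


print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.1429
```

A pair of detections with IoU above the threshold is treated as the same object for suppression purposes; raising `iou` keeps more overlapping boxes, lowering it suppresses more aggressively.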

FAQ

How do I estimate object speed using Ultralytics YOLO11?

Estimating object speed with Ultralytics YOLO11 involves combining object detection and tracking techniques. First, you need to detect objects in each frame using the YOLO11 model. Then, track these objects across frames to calculate their movement over time. Finally, use the distance traveled by the object between frames and the frame rate to estimate its speed.

Example:

import cv2

from ultralytics import solutions

cap = cv2.VideoCapture("path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
video_writer = cv2.VideoWriter("speed_estimation.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

# Initialize SpeedEstimator
speed_obj = solutions.SpeedEstimator(
    region=[(0, 360), (1280, 360)],
    model="yolo11n.pt",
    show=True,
)

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        break
    im0 = speed_obj.estimate_speed(im0)
    video_writer.write(im0)

cap.release()
video_writer.release()
cv2.destroyAllWindows()

μžμ„Έν•œ λ‚΄μš©μ€ 곡식 λΈ”λ‘œκ·Έ κ²Œμ‹œλ¬Όμ„ μ°Έμ‘°ν•˜μ„Έμš”.

What are the benefits of using Ultralytics YOLO11 for speed estimation in traffic management?

Using Ultralytics YOLO11 for speed estimation offers significant advantages in traffic management:

  • ν–₯μƒλœ μ•ˆμ „μ„±: μ°¨λŸ‰ 속도λ₯Ό μ •ν™•ν•˜κ²Œ μ˜ˆμΈ‘ν•˜μ—¬ 과속을 κ°μ§€ν•˜κ³  λ„λ‘œ μ•ˆμ „μ„ κ°œμ„ ν•©λ‹ˆλ‹€.
  • Real-Time Monitoring: Benefit from YOLO11's real-time object detection capability to monitor traffic flow and congestion effectively.
  • Scalability: Deploy the model on various hardware setups, from edge devices to servers, ensuring a flexible and scalable solution for large-scale implementations.

더 λ§Žμ€ μ• ν”Œλ¦¬μΌ€μ΄μ…˜μ— λŒ€ν•œ μžμ„Έν•œ λ‚΄μš©μ€ 속도 μΆ”μ •μ˜ μž₯점을 μ°Έμ‘°ν•˜μ„Έμš”.

Can YOLO11 be integrated with other AI frameworks like TensorFlow or PyTorch?

Yes, YOLO11 can be integrated with other AI frameworks like TensorFlow and PyTorch. Ultralytics provides support for exporting YOLO11 models to various formats like ONNX, TensorRT, and CoreML, ensuring smooth interoperability with other ML frameworks.

To export a YOLO11 model to ONNX format:

yolo export model=yolo11n.pt format=onnx

Learn more about exporting models in our export guide.

How accurate is the speed estimation using Ultralytics YOLO11?

The accuracy of speed estimation using Ultralytics YOLO11 depends on several factors, including the quality of the object tracking, the resolution and frame rate of the video, and environmental variables. While the speed estimator provides reliable estimates, it may not be 100% accurate due to variances in frame processing speed and object occlusion.
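Since per-frame estimates jitter with frame processing speed and occlusion, a common mitigation (a general technique, not part of the `SpeedEstimator` API) is to smooth each track's speed with a moving average over recent frames; the class below is an illustrative sketch:

```python
from collections import deque


class SpeedSmoother:
    """Moving average over the last `window` per-frame speed estimates."""

    def __init__(self, window: int = 10):
        self.samples = deque(maxlen=window)  # old samples drop off automatically

    def update(self, speed: float) -> float:
        """Add a raw per-frame estimate and return the smoothed value."""
        self.samples.append(speed)
        return sum(self.samples) / len(self.samples)


smoother = SpeedSmoother(window=3)
for raw in (30.0, 34.0, 29.0, 31.0):
    smoothed = smoother.update(raw)
print(round(smoothed, 2))  # mean of the last 3 raw estimates
```

In practice you would keep one smoother per track ID so that estimates from different objects are never averaged together.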

Note: Always account for a margin of error, and validate estimates against ground-truth data when possible.

For additional tips on improving accuracy, see the Arguments `SpeedEstimator` section.

Why choose Ultralytics YOLO11 over other object detection models like TensorFlow Object Detection API?

Ultralytics YOLO11 offers several advantages over other object detection models, such as the TensorFlow Object Detection API:

  • Real-Time Performance: YOLO11 is optimized for real-time detection, providing high speed and accuracy.
  • Ease of Use: Designed with a user-friendly interface, YOLO11 simplifies model training and deployment.
  • λ‹€λͺ©μ μ„±: 물체 감지, μ„ΈλΆ„ν™”, 포즈 μΆ”μ • λ“± λ‹€μ–‘ν•œ μž‘μ—…μ„ μ§€μ›ν•©λ‹ˆλ‹€.
  • Community and Support: YOLO11 is backed by an active community and extensive documentation, ensuring developers have the resources they need.

For more information on the benefits of YOLO11, explore our detailed model page.


