์ฝ˜ํ…์ธ ๋กœ ๊ฑด๋„ˆ๋›ฐ๊ธฐ

PyTorch Hub

์ด ๊ฐ€์ด๋“œ๋Š” ๋‹ค์Œ์—์„œ YOLOv5 ๐Ÿš€๋ฅผ ๋กœ๋“œํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. PyTorch https:// pytorch.org/hub/ultralytics_yolov5.

Before You Start

Install requirements.txt in a Python>=3.8.0 environment, including PyTorch>=1.8. Models and datasets download automatically from the latest YOLOv5 release.

pip install -r https://raw.githubusercontent.com/ultralytics/yolov5/master/requirements.txt

ํ”„๋กœํŒ: ๋ณต์ œ๋Š” https://github.com/ultralytics / yolov5 ๋ถˆํ•„์š” ๐Ÿ˜ƒ

Load YOLOv5 with PyTorch Hub

Simple Example

This example loads a pretrained YOLOv5s model from PyTorch Hub as model and passes an image for inference. 'yolov5s' is the lightest and fastest YOLOv5 model. For details on all available models please see the README.

import torch

# Model
model = torch.hub.load("ultralytics/yolov5", "yolov5s")

# Image
im = "https://ultralytics.com/images/zidane.jpg"

# Inference
results = model(im)

results.pandas().xyxy[0]
#      xmin    ymin    xmax   ymax  confidence  class    name
# 0  749.50   43.50  1148.0  704.5    0.874023      0  person
# 1  433.50  433.50   517.5  714.5    0.687988     27     tie
# 2  114.75  195.75  1095.0  708.0    0.624512      0  person
# 3  986.00  304.00  1028.0  420.0    0.286865     27     tie

์ž์„ธํ•œ ์˜ˆ์‹œ

์ด ์˜ˆ๋Š” ๋‹ค์Œ์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค. ์ผ๊ด„ ์ถ”๋ก  ์™€ ํ•จ๊ป˜ PIL ๊ทธ๋ฆฌ๊ณ  OpenCV ์ด๋ฏธ์ง€ ์†Œ์Šค. results ๋Š” ์ธ์‡„ ๋ฅผ ์ฝ˜์†”๋กœ ์ด๋™ํ•ฉ๋‹ˆ๋‹ค, ์ €์žฅ ์— runs/hub, ํ‘œ์‹œ ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ง€์›๋˜๋Š” ํ™˜๊ฒฝ์—์„œ ํ™”๋ฉด์„ ํ‘œ์‹œํ•˜๊ณ  ํ…์„œ ๋˜๋Š” ํŒ๋‹ค ๋ฐ์ดํ„ฐํ”„๋ ˆ์ž„.

import cv2
import torch
from PIL import Image

# Model
model = torch.hub.load("ultralytics/yolov5", "yolov5s")

# Images
for f in "zidane.jpg", "bus.jpg":
    torch.hub.download_url_to_file("https://ultralytics.com/images/" + f, f)  # download 2 images
im1 = Image.open("zidane.jpg")  # PIL image
im2 = cv2.imread("bus.jpg")[..., ::-1]  # OpenCV image (BGR to RGB)

# Inference
results = model([im1, im2], size=640)  # batch of images

# Results
results.print()
results.save()  # or .show()

results.xyxy[0]  # im1 predictions (tensor)
results.pandas().xyxy[0]  # im1 predictions (pandas)
#      xmin    ymin    xmax   ymax  confidence  class    name
# 0  749.50   43.50  1148.0  704.5    0.874023      0  person
# 1  433.50  433.50   517.5  714.5    0.687988     27     tie
# 2  114.75  195.75  1095.0  708.0    0.624512      0  person
# 3  986.00  304.00  1028.0  420.0    0.286865     27     tie

[YOLO inference results on zidane.jpg and bus.jpg]

For all inference options see the YOLOv5 AutoShape() forward method.
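
For example, test-time augmentation can be requested directly in the forward call. A minimal sketch (assuming the AutoShape forward method accepts the augment keyword, as in the YOLOv5 source):

results = model(im, size=640, augment=True)  # inference with test-time augmentation (slower, potentially more accurate)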

Inference Settings

YOLOv5 ๋ชจ๋ธ์—๋Š” ์‹ ๋ขฐ๋„ ์ž„๊ณ„๊ฐ’, IoU ์ž„๊ณ„๊ฐ’ ๋“ฑ๊ณผ ๊ฐ™์€ ๋‹ค์–‘ํ•œ ์ถ”๋ก  ์†์„ฑ์ด ํฌํ•จ๋˜์–ด ์žˆ์œผ๋ฉฐ, ์ด๋ฅผ ์„ค์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

model.conf = 0.25  # NMS confidence threshold
model.iou = 0.45  # NMS IoU threshold
model.agnostic = False  # NMS class-agnostic
model.multi_label = False  # NMS multiple labels per box
model.classes = None  # (optional list) filter by class, i.e. = [0, 15, 16] for COCO persons, cats and dogs
model.max_det = 1000  # maximum number of detections per image
model.amp = False  # Automatic Mixed Precision (AMP) inference

results = model(im, size=320)  # custom inference size

Device

๋ชจ๋ธ์€ ์ƒ์„ฑ ํ›„ ๋ชจ๋“  ๋””๋ฐ”์ด์Šค๋กœ ์ „์†กํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

model.cpu()  # CPU
model.cuda()  # GPU
model.to(device)  # i.e. device=torch.device(0)

๋ชจ๋ธ์„ ์ง์ ‘ ๋งŒ๋“ค ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. device:

model = torch.hub.load("ultralytics/yolov5", "yolov5s", device="cpu")  # load on CPU

ํ”„๋กœํŒ: ์ž…๋ ฅ ์ด๋ฏธ์ง€๋Š” ์ถ”๋ก  ์ „์— ์˜ฌ๋ฐ”๋ฅธ ๋ชจ๋ธ ์žฅ์น˜๋กœ ์ž๋™ ์ „์†ก๋ฉ๋‹ˆ๋‹ค.

๋ฌด์Œ ์ถœ๋ ฅ

๋‹ค์Œ์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ์ž๋™์œผ๋กœ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. _verbose=False:

model = torch.hub.load("ultralytics/yolov5", "yolov5s", _verbose=False)  # load silently

Input Channels

To load a pretrained YOLOv5s model with 4 input channels rather than the default 3:

model = torch.hub.load("ultralytics/yolov5", "yolov5s", channels=4)

์ด ๊ฒฝ์šฐ ๋ชจ๋ธ์€ ๋” ์ด์ƒ ์‚ฌ์ „ ํ•™์Šต๋œ ์ž…๋ ฅ ๋ ˆ์ด์–ด์™€ ๋™์ผํ•œ ๋ชจ์–‘์ด ์•„๋‹Œ ์ฒซ ๋ฒˆ์งธ ์ž…๋ ฅ ๋ ˆ์ด์–ด๋ฅผ ์ œ์™ธํ•˜๊ณ  ์‚ฌ์ „ ํ•™์Šต๋œ ๊ฐ€์ค‘์น˜๋กœ ๊ตฌ์„ฑ๋ฉ๋‹ˆ๋‹ค. ์ž…๋ ฅ ๋ ˆ์ด์–ด๋Š” ๋ฌด์ž‘์œ„ ๊ฐ€์ค‘์น˜๋กœ ์ดˆ๊ธฐํ™”๋œ ์ƒํƒœ๋กœ ์œ ์ง€๋ฉ๋‹ˆ๋‹ค.

Number of Classes

๊ธฐ๋ณธ๊ฐ’ 80์ด ์•„๋‹Œ 10๊ฐœ์˜ ์ถœ๋ ฅ ํด๋ž˜์Šค๋กœ ์‚ฌ์ „ ํ•™์Šต๋œ YOLOv5s ๋ชจ๋ธ์„ ๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค:

model = torch.hub.load("ultralytics/yolov5", "yolov5s", classes=10)

์ด ๊ฒฝ์šฐ ๋ชจ๋ธ์€ ๋” ์ด์ƒ ์‚ฌ์ „ ํ•™์Šต๋œ ์ถœ๋ ฅ ๋ ˆ์ด์–ด์™€ ๋™์ผํ•œ ๋ชจ์–‘์ด ์•„๋‹Œ ์ถœ๋ ฅ ๋ ˆ์ด์–ด๋ฅผ ์ œ์™ธํ•œ ์‚ฌ์ „ ํ•™์Šต๋œ ๊ฐ€์ค‘์น˜๋กœ ๊ตฌ์„ฑ๋ฉ๋‹ˆ๋‹ค. ์ถœ๋ ฅ ๋ ˆ์ด์–ด๋Š” ๋ฌด์ž‘์œ„ ๊ฐ€์ค‘์น˜๋กœ ์ดˆ๊ธฐํ™”๋œ ์ƒํƒœ๋กœ ์œ ์ง€๋ฉ๋‹ˆ๋‹ค.

Force Reload

์œ„์˜ ๋‹จ๊ณ„์—์„œ ๋ฌธ์ œ๊ฐ€ ๋ฐœ์ƒํ•˜๋ฉด ์„ค์ • force_reload=True ๊ธฐ์กด ์บ์‹œ๋ฅผ ์‚ญ์ œํ•˜๊ณ  PyTorch ํ—ˆ๋ธŒ์—์„œ ์ตœ์‹  YOLOv5 ๋ฒ„์ „์„ ๊ฐ•์ œ๋กœ ์ƒˆ๋กœ ๋‹ค์šด๋กœ๋“œํ•˜๋ฉด ๋„์›€์ด ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

model = torch.hub.load("ultralytics/yolov5", "yolov5s", force_reload=True)  # force reload

์Šคํฌ๋ฆฐ์ƒท ์ถ”๋ก 

๋ฐ์Šคํฌํ†ฑ ํ™”๋ฉด์—์„œ ์ถ”๋ก ์„ ์‹คํ–‰ํ•˜๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ•˜์„ธ์š”:

import torch
from PIL import ImageGrab

# Model
model = torch.hub.load("ultralytics/yolov5", "yolov5s")

# Image
im = ImageGrab.grab()  # take a screenshot

# Inference
results = model(im)

Multi-GPU Inference

YOLOv5 ๋ชจ๋ธ์„ ์Šค๋ ˆ๋“œ ์ถ”๋ก ์„ ํ†ตํ•ด ์—ฌ๋Ÿฌ GPU์— ๋ณ‘๋ ฌ๋กœ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

import threading

import torch


def run(model, im):
    """Performs inference on an image using a given model and saves the output; model must support `.save()` method."""
    results = model(im)
    results.save()


# Models
model0 = torch.hub.load("ultralytics/yolov5", "yolov5s", device=0)
model1 = torch.hub.load("ultralytics/yolov5", "yolov5s", device=1)

# Inference
threading.Thread(target=run, args=[model0, "https://ultralytics.com/images/zidane.jpg"], daemon=True).start()
threading.Thread(target=run, args=[model1, "https://ultralytics.com/images/bus.jpg"], daemon=True).start()

๊ต์œก

์ถ”๋ก ์ด ์•„๋‹Œ ํŠธ๋ ˆ์ด๋‹์„ ์œ„ํ•ด YOLOv5 ๋ชจ๋ธ์„ ๋กœ๋“œํ•˜๋ ค๋ฉด autoshape=False. ๋ฌด์ž‘์œ„๋กœ ์ดˆ๊ธฐํ™”๋œ ๊ฐ€์ค‘์น˜๋กœ ๋ชจ๋ธ์„ ๋กœ๋“œํ•˜๋ ค๋ฉด(์ฒ˜์Œ๋ถ€ํ„ฐ ํ•™์Šตํ•˜๋ ค๋ฉด) ๋‹ค์Œ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. pretrained=False. ์ด ๊ฒฝ์šฐ ์ž์ฒด ๊ต์œก ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ œ๊ณตํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋˜๋Š” YOLOv5 ์‚ฌ์šฉ์ž ์ง€์ • ๋ฐ์ดํ„ฐ ์ž์Šต์„œ ๊ต์œก ๋ชจ๋ธ ํŠธ๋ ˆ์ด๋‹์„ ์œ„ํ•ด

import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", autoshape=False)  # load pretrained
model = torch.hub.load("ultralytics/yolov5", "yolov5s", autoshape=False, pretrained=False)  # load scratch

Base64 Results

For use with API services. See https://github.com/ultralytics/yolov5/pull/2291 and the Flask REST API example for details.

import base64
from io import BytesIO

from PIL import Image

results = model(im)  # inference

results.ims  # array of original images (as np array) passed to model for inference
results.render()  # updates results.ims with boxes and labels
for im in results.ims:
    buffered = BytesIO()
    im_base64 = Image.fromarray(im)
    im_base64.save(buffered, format="JPEG")
    print(base64.b64encode(buffered.getvalue()).decode("utf-8"))  # base64 encoded image with results

Cropped Results

Results can be returned and saved as detection crops:

results = model(im)  # inference
crops = results.crop(save=True)  # cropped detections dictionary
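
Each element of the returned list describes one detection. A short sketch of iterating over it (the 'label' and 'im' keys are assumptions based on the YOLOv5 Detections.crop() implementation):

for crop in crops:
    print(crop["label"])  # e.g. "person 0.87"
    crop_im = crop["im"]  # cropped image region as a numpy array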

ํŒ๋‹ค ๊ฒฐ๊ณผ

๊ฒฐ๊ณผ๋Š” ํŒ๋‹ค์Šค ๋ฐ์ดํ„ฐํ”„๋ ˆ์ž„์œผ๋กœ ๋ฐ˜ํ™˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

results = model(im)  # inference
results.pandas().xyxy[0]  # Pandas DataFrame
ํŒ๋‹ค ์ถœ๋ ฅ(ํ™•์žฅํ•˜๋ ค๋ฉด ํด๋ฆญ)
print(results.pandas().xyxy[0])
#      xmin    ymin    xmax   ymax  confidence  class    name
# 0  749.50   43.50  1148.0  704.5    0.874023      0  person
# 1  433.50  433.50   517.5  714.5    0.687988     27     tie
# 2  114.75  195.75  1095.0  708.0    0.624512      0  person
# 3  986.00  304.00  1028.0  420.0    0.286865     27     tie

์ •๋ ฌ๋œ ๊ฒฐ๊ณผ

Results can be sorted by column, i.e. to sort license plate digit detections left-to-right (x-axis):

results = model(im)  # inference
results.pandas().xyxy[0].sort_values("xmin")  # sorted left-right

๋ฐ•์Šค ์ž๋ฅด๊ธฐ ๊ฒฐ๊ณผ

Results can be returned and saved as detection crops:

results = model(im)  # inference
crops = results.crop(save=True)  # cropped detections dictionary

JSON Results

๊ฒฐ๊ณผ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๋ณ€ํ™˜๋œ ํ›„ JSON ํ˜•์‹์œผ๋กœ ๋ฐ˜ํ™˜๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. .pandas() ๋ฐ์ดํ„ฐ ํ”„๋ ˆ์ž„์„ ์‚ฌ์šฉํ•˜๋Š” .to_json() ๋ฉ”์„œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. JSON ํ˜•์‹์€ ๋‹ค์Œ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ˆ˜์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. orient ์ธ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ํŒ๋‹ค ๋ณด๊ธฐ .to_json() ๋ฌธ์„œ ๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”.

results = model(ims)  # inference
results.pandas().xyxy[0].to_json(orient="records")  # JSON img1 predictions
JSON Output:
[
    {
        "xmin": 749.5,
        "ymin": 43.5,
        "xmax": 1148.0,
        "ymax": 704.5,
        "confidence": 0.8740234375,
        "class": 0,
        "name": "person"
    },
    {
        "xmin": 433.5,
        "ymin": 433.5,
        "xmax": 517.5,
        "ymax": 714.5,
        "confidence": 0.6879882812,
        "class": 27,
        "name": "tie"
    },
    {
        "xmin": 115.25,
        "ymin": 195.75,
        "xmax": 1096.0,
        "ymax": 708.0,
        "confidence": 0.6254882812,
        "class": 0,
        "name": "person"
    },
    {
        "xmin": 986.0,
        "ymin": 304.0,
        "xmax": 1028.0,
        "ymax": 420.0,
        "confidence": 0.2873535156,
        "class": 27,
        "name": "tie"
    }
]

์‚ฌ์šฉ์ž ์ง€์ • ๋ชจ๋ธ

์ด ์˜ˆ์ œ์—์„œ๋Š” ์‚ฌ์šฉ์ž ์ง€์ • 20ํด๋ž˜์Šค VOC-ํ›ˆ๋ จ๋œ YOLOv5s ๋ชจ๋ธ 'best.pt' PyTorch ํ—ˆ๋ธŒ.

import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="path/to/best.pt")  # local model
model = torch.hub.load("path/to/yolov5", "custom", path="path/to/best.pt", source="local")  # local repo

TensorRT, ONNX and OpenVINO Models

PyTorch Hub supports inference on most YOLOv5 export formats, including custom trained models. See the TFLite, ONNX, CoreML, TensorRT Export tutorial for details on exporting models.

๐Ÿ’ก ProTip: TensorRT ๋Š” PyTorch ๋ณด๋‹ค ์ตœ๋Œ€ 2~5๋ฐฐ ๋น ๋ฅผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. GPU ๋ฒค์น˜๋งˆํฌ ๐Ÿ’ก ProTip: ONNX ๋ฐ OpenVINO ๋Š” PyTorch ๋ณด๋‹ค ์ตœ๋Œ€ 2-3๋ฐฐ ๋น ๋ฅผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. CPU ๋ฒค์น˜๋งˆํฌ

import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="yolov5s.pt")  # PyTorch
model = torch.hub.load("ultralytics/yolov5", "custom", path="yolov5s.torchscript")  # TorchScript
model = torch.hub.load("ultralytics/yolov5", "custom", path="yolov5s.onnx")  # ONNX
model = torch.hub.load("ultralytics/yolov5", "custom", path="yolov5s_openvino_model/")  # OpenVINO
model = torch.hub.load("ultralytics/yolov5", "custom", path="yolov5s.engine")  # TensorRT
model = torch.hub.load("ultralytics/yolov5", "custom", path="yolov5s.mlmodel")  # CoreML (macOS-only)
model = torch.hub.load("ultralytics/yolov5", "custom", path="yolov5s.tflite")  # TFLite
model = torch.hub.load("ultralytics/yolov5", "custom", path="yolov5s_paddle_model/")  # PaddlePaddle

์ง€์› ํ™˜๊ฒฝ

Ultralytics provides a range of ready-to-use environments, each pre-installed with essential dependencies such as CUDA, cuDNN, Python, and PyTorch, to kickstart your projects.

ํ”„๋กœ์ ํŠธ ์ƒํƒœ

YOLOv5 CI

This badge indicates that all YOLOv5 GitHub Actions Continuous Integration (CI) tests are successfully passing. These CI tests rigorously check the functionality and performance of YOLOv5 across key aspects: training, validation, inference, export and benchmarks. They ensure consistent and reliable operation on macOS, Windows and Ubuntu, with tests run every 24 hours and on every new commit.

๐Ÿ“…1 ๋…„ ์ „ ์ƒ์„ฑ๋จ โœ๏ธ 1๊ฐœ์›” ์ „ ์—…๋ฐ์ดํŠธ๋จ

๋Œ“๊ธ€