μ½˜ν…μΈ λ‘œ κ±΄λ„ˆλ›°κΈ°

Python Usage

Welcome to the YOLOv8 Python usage documentation! This guide is designed to help you seamlessly integrate YOLOv8 into your Python projects for object detection, segmentation, and classification. Here you will learn how to load and use pretrained models, train new models, and run predictions on images. The easy-to-use Python interface is a valuable resource for anyone looking to incorporate YOLOv8 into their Python projects and quickly implement advanced object detection capabilities. Let's get started!



Watch: Mastering Ultralytics YOLOv8: Python

For example, users can load a model, train it, evaluate its performance on a validation set, and even export it to ONNX format with just a few lines of code.

Python

from ultralytics import YOLO

# Create a new YOLO model from scratch
model = YOLO('yolov8n.yaml')

# Load a pretrained YOLO model (recommended for training)
model = YOLO('yolov8n.pt')

# Train the model using the 'coco8.yaml' dataset for 3 epochs
results = model.train(data='coco8.yaml', epochs=3)

# Evaluate the model's performance on the validation set
results = model.val()

# Perform object detection on an image using the model
results = model('https://ultralytics.com/images/bus.jpg')

# Export the model to ONNX format
success = model.export(format='onnx')

Train

Train mode is used to train a YOLOv8 model on a custom dataset. In this mode, the model is trained using the specified dataset and hyperparameters. The training process involves optimizing the model's parameters so that it can accurately predict the classes and locations of objects in an image.

Train

from ultralytics import YOLO

# From a pretrained model (recommended for training)
model = YOLO('yolov8n.pt')  # pass any model type
results = model.train(epochs=5)

# From scratch
model = YOLO('yolov8n.yaml')
results = model.train(data='coco8.yaml', epochs=5)

# Resume an interrupted training run
model = YOLO("last.pt")
results = model.train(resume=True)
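Training runs can be shaped further through keyword arguments such as imgsz, batch, and device. A minimal sketch; the specific values below are illustrative choices, not required defaults:

```python
from ultralytics import YOLO

# Illustrative settings: 100 epochs at 640 px on GPU 0 with batch size 16
model = YOLO('yolov8n.pt')
results = model.train(data='coco8.yaml', epochs=100, imgsz=640, batch=16, device=0)
```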

Train Examples

Val

Val mode is used to validate a YOLOv8 model after it has been trained. In this mode, the model is evaluated on a validation set to measure its accuracy and generalization performance. This mode can be used to tune the model's hyperparameters to improve its performance.

Val

from ultralytics import YOLO

# Val after training
model = YOLO('yolov8n.yaml')
model.train(data='coco8.yaml', epochs=5)
model.val()  # It'll automatically evaluate the data you trained on.

# Val independently
model = YOLO("model.pt")
# It'll use the data YAML file in model.pt if you don't set data.
model.val()
# or you can set the data you want to val
model.val(data='coco8.yaml')
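The object returned by model.val() exposes the computed metrics for inspection. A minimal sketch for a detection model, assuming the standard box-metric attributes:

```python
from ultralytics import YOLO

model = YOLO('yolov8n.pt')
metrics = model.val(data='coco8.yaml')

print(metrics.box.map)    # mAP50-95
print(metrics.box.map50)  # mAP50
print(metrics.box.map75)  # mAP75
print(metrics.box.maps)   # mAP50-95 for each category
```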

Val Examples

Predict

Predict mode is used to make predictions with a trained YOLOv8 model on new images or videos. In this mode, the model is loaded from a checkpoint file, and the user can provide images or videos to run inference on. The model predicts the classes and locations of objects in the input images or videos.

Predict

from ultralytics import YOLO
from PIL import Image
import cv2

model = YOLO("model.pt")
# accepts all formats - image/dir/Path/URL/video/PIL/ndarray. 0 for webcam
results = model.predict(source="0")
results = model.predict(source="folder", show=True) # Display preds. Accepts all YOLO predict arguments

# from PIL
im1 = Image.open("bus.jpg")
results = model.predict(source=im1, save=True)  # save plotted images

# from ndarray
im2 = cv2.imread("bus.jpg")
results = model.predict(source=im2, save=True, save_txt=True)  # save predictions as labels

# from list of PIL/ndarray
results = model.predict(source=[im1, im2])
# results is a list of Results objects containing all predictions by default,
# but be careful, as it can occupy a lot of memory when there are many images,
# especially when the task is segmentation.
# 1. return as a list
results = model.predict(source="folder")

# results is a generator, which is more memory-friendly, when stream=True is set
# 2. return as a generator
results = model.predict(source=0, stream=True)

for result in results:
    # Detection
    result.boxes.xyxy   # box with xyxy format, (N, 4)
    result.boxes.xywh   # box with xywh format, (N, 4)
    result.boxes.xyxyn  # box with xyxy format but normalized, (N, 4)
    result.boxes.xywhn  # box with xywh format but normalized, (N, 4)
    result.boxes.conf   # confidence score, (N, 1)
    result.boxes.cls    # cls, (N, 1)

    # Segmentation
    result.masks.data      # masks, (N, H, W)
    result.masks.xy        # x,y segments (pixels), List[segment] * N
    result.masks.xyn       # x,y segments (normalized), List[segment] * N

    # Classification
    result.probs     # cls prob, (num_class, )

# Each result is composed of torch.Tensor by default,
# so you can easily use the following functionality:
result = result.cuda()
result = result.cpu()
result = result.to("cpu")
result = result.numpy()
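As a follow-up, the Results object can also be rendered and read out directly. A minimal sketch of post-processing a single prediction, assuming the plot() helper and the model's names mapping:

```python
from ultralytics import YOLO
import cv2

model = YOLO('yolov8n.pt')
results = model('https://ultralytics.com/images/bus.jpg')

for result in results:
    # Print class name and confidence for each detected box
    for box in result.boxes:
        cls_id = int(box.cls)
        conf = float(box.conf)
        print(f"{model.names[cls_id]}: {conf:.2f}")

    # Draw the predictions on the image and save it
    annotated = result.plot()  # BGR numpy array
    cv2.imwrite('annotated_bus.jpg', annotated)
```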

Predict Examples

Export

Export mode is used to export a YOLOv8 model to a format that can be used for deployment. In this mode, the model is converted to a format that can be used by other software applications or hardware devices. This mode is useful when deploying the model to production environments.

Export

Export an official YOLOv8n model to ONNX with dynamic batch size and image size.

from ultralytics import YOLO

model = YOLO('yolov8n.pt')
model.export(format='onnx', dynamic=True)
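The exported file can be loaded straight back into the YOLO class for inference. A minimal sketch, assuming the export above produced yolov8n.onnx in the working directory:

```python
from ultralytics import YOLO

# Load the exported ONNX model and run inference with it
onnx_model = YOLO('yolov8n.onnx')
results = onnx_model('https://ultralytics.com/images/bus.jpg')
```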

Export an official YOLOv8n model to TensorRT on device=0 for acceleration on CUDA devices.

from ultralytics import YOLO

model = YOLO('yolov8n.pt')
model.export(format='engine', device=0)  # 'engine' is the TensorRT export format

Export Examples

Track

Track mode is used to track objects in real time using a YOLOv8 model. In this mode, the model is loaded from a checkpoint file, and the user can provide a live video stream to perform real-time object tracking. This mode is useful for applications such as surveillance systems or self-driving cars.

Track

from ultralytics import YOLO

# Load a model
model = YOLO('yolov8n.pt')  # load an official detection model
model = YOLO('yolov8n-seg.pt')  # load an official segmentation model
model = YOLO('path/to/best.pt')  # load a custom model

# Track with the model
results = model.track(source="https://youtu.be/LNwODJXcvt4", show=True)
results = model.track(source="https://youtu.be/LNwODJXcvt4", show=True, tracker="bytetrack.yaml")
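Tracking can also be consumed frame by frame as a stream, with persistent track IDs read from each result. A minimal sketch, assuming boxes.id is populated by the tracker:

```python
from ultralytics import YOLO

model = YOLO('yolov8n.pt')

# Stream results frame by frame to keep memory usage low
for result in model.track(source="https://youtu.be/LNwODJXcvt4", stream=True):
    boxes = result.boxes
    if boxes.id is not None:  # id can be None when nothing is tracked in a frame
        print(boxes.id.int().tolist())  # persistent track IDs for this frame
```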

Track Examples

Benchmark

Benchmark mode is used to profile the speed and accuracy of various export formats for YOLOv8. The benchmarks provide information on the size of each exported format, its mAP50-95 metrics (for object detection and segmentation) or accuracy_top5 metrics (for classification), and the inference time in milliseconds per image across export formats such as ONNX, OpenVINO, and TensorRT. This information can help users choose the optimal export format for their specific use case based on their requirements for speed and accuracy.

Benchmark

Benchmark an official YOLOv8n model across all export formats.

from ultralytics.utils.benchmarks import benchmark

# Benchmark
benchmark(model='yolov8n.pt', data='coco8.yaml', imgsz=640, half=False, device=0)

Benchmark Examples

Explorer

The Explorer API can be used to explore datasets with advanced semantic, vector-similarity, and SQL search, among other features. It also enables searching for images based on their content using natural language. With the Explorer API you can write your own dataset exploration notebooks or scripts to get insights into your datasets.

Semantic Search Using Explorer

# Similarity search using images
from ultralytics import Explorer

# create an Explorer object
exp = Explorer(data='coco8.yaml', model='yolov8n.pt')
exp.create_embeddings_table()

similar = exp.get_similar(img='https://ultralytics.com/images/bus.jpg', limit=10)
print(similar.head())

# Search using multiple images
similar = exp.get_similar(
                        img=['https://ultralytics.com/images/bus.jpg',
                             'https://ultralytics.com/images/bus.jpg'],
                        limit=10
                        )
print(similar.head())
# Similarity search using dataset indices
from ultralytics import Explorer

# create an Explorer object
exp = Explorer(data='coco8.yaml', model='yolov8n.pt')
exp.create_embeddings_table()

similar = exp.get_similar(idx=1, limit=10)
print(similar.head())

# Search using multiple indices
similar = exp.get_similar(idx=[1,10], limit=10)
print(similar.head())
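The description above also mentions SQL search. A minimal sketch, assuming the Explorer.sql_query() method and the default labels column of the embeddings table:

```python
from ultralytics import Explorer

exp = Explorer(data='coco8.yaml', model='yolov8n.pt')
exp.create_embeddings_table()

# SQL-style filter over the embeddings table (assumed sql_query API)
table = exp.sql_query("WHERE labels LIKE '%person%' LIMIT 10")
print(table)
```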

Explorer

νŠΈλ ˆμ΄λ„ˆ μ‚¬μš©

YOLO λͺ¨λΈ ν΄λž˜μŠ€λŠ” νŠΈλ ˆμ΄λ„ˆ 클래슀의 μƒμœ„ 레벨 λž˜νΌμž…λ‹ˆλ‹€. 각 YOLO μž‘μ—…μ—λŠ” λ‹€μŒμ„ μƒμ†ν•˜λŠ” 자체 νŠΈλ ˆμ΄λ„ˆκ°€ μžˆμŠ΅λ‹ˆλ‹€. BaseTrainer.

탐지 νŠΈλ ˆμ΄λ„ˆ μ˜ˆμ‹œ

```python
from ultralytics.models.yolo.detect import DetectionPredictor, DetectionTrainer, DetectionValidator

overrides = {}  # training/inference argument overrides

# trainer
trainer = DetectionTrainer(overrides=overrides)
trainer.train()
trained_model = trainer.best

# validator
val = DetectionValidator(args=...)
val(model=trained_model)

# predictor
pred = DetectionPredictor(overrides=overrides)
pred(source=SOURCE, model=trained_model)  # SOURCE: any image/video path or URL

# resume from the last saved weights
overrides["resume"] = trainer.last
trainer = DetectionTrainer(overrides=overrides)
```

νŠΈλ ˆμ΄λ„ˆλ₯Ό μ‰½κ²Œ μ‚¬μš©μž μ§€μ •ν•˜μ—¬ λ§žμΆ€ν˜• μž‘μ—…μ„ μ§€μ›ν•˜κ±°λ‚˜ R&D 아이디어λ₯Ό 탐색할 수 μžˆμŠ΅λ‹ˆλ‹€. μ‚¬μš©μž 지정에 λŒ€ν•΄ μžμ„Ένžˆ μ•Œμ•„λ³΄κΈ° Trainers, Validators 그리고 Predictors μ‚¬μš©μž 지정 μ„Ήμ…˜μ—μ„œ ν”„λ‘œμ νŠΈ μš”κ΅¬ 사항에 맞게 μ‘°μ •ν•  수 μžˆμŠ΅λ‹ˆλ‹€.

μ‚¬μš©μž 지정 νŠœν† λ¦¬μ–Ό


