Detect

Object detection is a task that involves identifying the location and class of objects in an image or video stream.

The output of an object detector is a set of bounding boxes that enclose the objects in the image, along with class labels and confidence scores for each box. Object detection is a good choice when you need to identify objects of interest in a scene, but don't need to know exactly where the object is or its exact shape.

Tip

YOLOv8 Detect models are the default YOLOv8 models, i.e. yolov8n.pt, and are pretrained on the COCO dataset.

Models

YOLOv8 pretrained Detect models are shown here. Detect, Segment and Pose models are pretrained on the COCO dataset, while Classify models are pretrained on the ImageNet dataset.

Models download automatically from the latest Ultralytics release on first use.

Model     size      mAPval    Speed CPU    Speed A100       params    FLOPs
          (pixels)  50-95     ONNX (ms)    TensorRT (ms)    (M)       (B)
YOLOv8n   640       37.3      80.4         0.99             3.2       8.7
YOLOv8s   640       44.9      128.4        1.20             11.2      28.6
YOLOv8m   640       50.2      234.7        1.83             25.9      78.9
YOLOv8l   640       52.9      375.2        2.39             43.7      165.2
YOLOv8x   640       53.9      479.1        3.53             68.2      257.8
  • mAPval values are for single-model single-scale on COCO val2017 dataset.
    Reproduce by yolo val detect data=coco.yaml device=0
  • Speed averaged over COCO val images using an Amazon EC2 P4d instance.
    Reproduce by yolo val detect data=coco128.yaml batch=1 device=0|cpu
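
As noted above, pretrained weights download automatically on first use. A minimal sketch of loading one of the Detect models from the table and inspecting its class names (the print call is only illustrative):

from ultralytics import YOLO

# Passing a weights filename downloads it from the latest release if it is not cached locally
model = YOLO('yolov8n.pt')

# Pretrained Detect models carry the 80 COCO class names as a model attribute
print(model.names)  # {0: 'person', 1: 'bicycle', ...}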

Train

Train YOLOv8n on the COCO128 dataset for 100 epochs at image size 640. For a full list of available arguments see the Configuration page.

from ultralytics import YOLO

# Load a model
model = YOLO('yolov8n.yaml')  # build a new model from YAML
model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for training)
model = YOLO('yolov8n.yaml').load('yolov8n.pt')  # build from YAML and transfer weights

# Train the model
model.train(data='coco128.yaml', epochs=100, imgsz=640)

# Build a new model from YAML and start training from scratch
yolo detect train data=coco128.yaml model=yolov8n.yaml epochs=100 imgsz=640

# Start training from a pretrained *.pt model
yolo detect train data=coco128.yaml model=yolov8n.pt epochs=100 imgsz=640

# Build a new model from YAML, transfer pretrained weights to it and start training
yolo detect train data=coco128.yaml model=yolov8n.yaml pretrained=yolov8n.pt epochs=100 imgsz=640
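
Further optional training arguments are listed on the Configuration page. A short sketch, assuming a single GPU at index 0, with batch and device passed as keyword overrides:

from ultralytics import YOLO

# Load the pretrained nano model and fine-tune it with a few optional overrides
model = YOLO('yolov8n.pt')
results = model.train(
    data='coco128.yaml',  # dataset config
    epochs=100,
    imgsz=640,
    batch=16,   # batch size
    device=0,   # GPU index, or 'cpu'
)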

Val

Validate trained YOLOv8n model accuracy on the COCO128 dataset. No arguments need to be passed as the model retains its training data and arguments as model attributes.

from ultralytics import YOLO

# Load a model
model = YOLO('yolov8n.pt')  # load an official model
model = YOLO('path/to/best.pt')  # load a custom model

# Validate the model
metrics = model.val()  # no arguments needed, dataset and settings remembered
metrics.box.map    # mAP50-95
metrics.box.map50  # mAP50
metrics.box.map75  # mAP75
metrics.box.maps   # a list containing mAP50-95 for each category

yolo detect val model=yolov8n.pt  # val official model
yolo detect val model=path/to/best.pt  # val custom model
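
The per-category values in metrics.box.maps can be related back to class names via model.names; a short sketch, assuming maps is indexed by class id (the pairing logic is only illustrative):

from ultralytics import YOLO

model = YOLO('yolov8n.pt')
metrics = model.val()

# Pair each class name with its mAP50-95 (assumes maps is indexed by class id)
for idx, name in model.names.items():
    print(f'{name}: {metrics.box.maps[idx]:.3f}')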

Predict

Use a trained YOLOv8n model to run predictions on images.

from ultralytics import YOLO

# Load a model
model = YOLO('yolov8n.pt')  # load an official model
model = YOLO('path/to/best.pt')  # load a custom model

# Predict with the model
results = model('https://ultralytics.com/images/bus.jpg')  # predict on an image

yolo detect predict model=yolov8n.pt source='https://ultralytics.com/images/bus.jpg'  # predict with official model
yolo detect predict model=path/to/best.pt source='https://ultralytics.com/images/bus.jpg'  # predict with custom model
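
Each element of results holds the detections for one image. A minimal sketch of reading boxes, classes and confidences from the first result, using the Results/Boxes attributes:

from ultralytics import YOLO

model = YOLO('yolov8n.pt')
results = model('https://ultralytics.com/images/bus.jpg')

# Inspect the detections for the first (and only) image
boxes = results[0].boxes
print(boxes.xyxy)  # bounding boxes in (x1, y1, x2, y2) pixel coordinates
print(boxes.cls)   # class indices
print(boxes.conf)  # confidence scores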

See full details of predict mode on the Predict page.

Export

Export a YOLOv8n model to a different format like ONNX, CoreML, etc.

from ultralytics import YOLO

# Load a model
model = YOLO('yolov8n.pt')  # load an official model
model = YOLO('path/to/best.pt')  # load a custom trained model

# Export the model
model.export(format='onnx')

yolo export model=yolov8n.pt format=onnx  # export official model
yolo export model=path/to/best.pt format=onnx  # export custom trained model

Available YOLOv8 export formats are in the table below. You can predict or validate directly on exported models, i.e. yolo predict model=yolov8n.onnx.

Format          format Argument    Model
PyTorch         -                  yolov8n.pt
TorchScript     torchscript        yolov8n.torchscript
ONNX            onnx               yolov8n.onnx
OpenVINO        openvino           yolov8n_openvino_model/
TensorRT        engine             yolov8n.engine
CoreML          coreml             yolov8n.mlmodel
TF SavedModel   saved_model        yolov8n_saved_model/
TF GraphDef     pb                 yolov8n.pb
TF Lite         tflite             yolov8n.tflite
TF Edge TPU     edgetpu            yolov8n_edgetpu.tflite
TF.js           tfjs               yolov8n_web_model/
PaddlePaddle    paddle             yolov8n_paddle_model/
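
As noted above, exported models can be used directly for prediction or validation. A minimal sketch of loading the ONNX export back through the same Python API, assuming the export step above has produced yolov8n.onnx:

from ultralytics import YOLO

# Load the exported ONNX model and run a prediction with it
onnx_model = YOLO('yolov8n.onnx')
results = onnx_model('https://ultralytics.com/images/bus.jpg')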