Oriented Bounding Boxes Object Detection
Oriented object detection goes a step further than standard object detection by introducing an extra angle to locate objects more accurately in an image.
The output of an oriented object detector is a set of rotated bounding boxes that exactly enclose the objects in the image, along with class labels and confidence scores for each box. Object detection is a good choice when you need to identify objects of interest in a scene but don't need to know exactly where the object is or its exact shape.
Tip
YOLO11 OBB models use the `-obb` suffix, i.e. `yolo11n-obb.pt`, and are pretrained on DOTAv1.
Watch: Object Detection using Ultralytics YOLO Oriented Bounding Boxes (YOLO-OBB)
Visual Samples

| Ships Detection using OBB | Vehicle Detection using OBB |
| --- | --- |
Models

YOLO11 pretrained OBB models, trained on the DOTAv1 dataset, are shown here.

Models download automatically from the latest Ultralytics release on first use.
| Model | Size (pixels) | mAP test 50 | Speed CPU ONNX (ms) | Speed T4 TensorRT10 (ms) | Params (M) | FLOPs (B) |
| --- | --- | --- | --- | --- | --- | --- |
| YOLO11n-obb | 1024 | 78.4 | 117.6 ± 0.8 | 4.4 ± 0.0 | 2.7 | 17.2 |
| YOLO11s-obb | 1024 | 79.5 | 219.4 ± 4.0 | 5.1 ± 0.0 | 9.7 | 57.5 |
| YOLO11m-obb | 1024 | 80.9 | 562.8 ± 2.9 | 10.1 ± 0.4 | 20.9 | 183.5 |
| YOLO11l-obb | 1024 | 81.0 | 712.5 ± 5.0 | 13.5 ± 0.6 | 26.2 | 232.0 |
| YOLO11x-obb | 1024 | 81.3 | 1408.6 ± 7.7 | 28.6 ± 1.0 | 58.8 | 520.2 |
- mAP test values are for single-model multiscale on the DOTAv1 dataset. Reproduce with `yolo val obb data=DOTAv1.yaml device=0 split=test` and submit the merged results to the DOTA evaluation server.
- Speed is averaged over DOTAv1 val images on an Amazon EC2 P4d instance. Reproduce with `yolo val obb data=DOTAv1.yaml batch=1 device=0|cpu`.
Train

Train YOLO11n-obb on the `dota8.yaml` dataset for 100 epochs at image size 640. For a full list of available arguments see the Configuration page.
Example
Python:

```python
from ultralytics import YOLO

# Load a model
model = YOLO("yolo11n-obb.yaml")  # build a new model from YAML
model = YOLO("yolo11n-obb.pt")  # load a pretrained model (recommended for training)
model = YOLO("yolo11n-obb.yaml").load("yolo11n.pt")  # build from YAML and transfer weights

# Train the model
results = model.train(data="dota8.yaml", epochs=100, imgsz=640)
```

CLI:

```bash
# Build a new model from YAML and start training from scratch
yolo obb train data=dota8.yaml model=yolo11n-obb.yaml epochs=100 imgsz=640

# Start training from a pretrained *.pt model
yolo obb train data=dota8.yaml model=yolo11n-obb.pt epochs=100 imgsz=640

# Build a new model from YAML, transfer pretrained weights to it and start training
yolo obb train data=dota8.yaml model=yolo11n-obb.yaml pretrained=yolo11n-obb.pt epochs=100 imgsz=640
```
Watch: How to Train Ultralytics YOLO-OBB (Oriented Bounding Boxes) Models on DOTA Dataset using Ultralytics HUB
Dataset Format

Details of the OBB dataset format can be found in the Dataset Guide.
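As a quick, non-authoritative illustration of that format, each label line stores a class index followed by the four corner points of the rotated box, with all coordinates normalized to the image size, for example a line such as `0 0.78 0.74 0.80 0.75 0.79 0.78 0.77 0.77` (the numbers here are made up for illustration; see the Dataset Guide for the exact specification).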
Val

Validate trained YOLO11n-obb model accuracy on the DOTA8 dataset. No arguments are needed as the model retains its training data and arguments as model attributes.
Example
```python
from ultralytics import YOLO

# Load a model
model = YOLO("yolo11n-obb.pt")  # load an official model
model = YOLO("path/to/best.pt")  # load a custom model

# Validate the model
metrics = model.val(data="dota8.yaml")  # no arguments needed, dataset and settings remembered
metrics.box.map  # mAP50-95(B)
metrics.box.map50  # mAP50(B)
metrics.box.map75  # mAP75(B)
metrics.box.maps  # a list containing mAP50-95(B) for each category
```
Predict
Use a trained YOLO11n-obb model to run predictions on images.
Example
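Below is a minimal prediction sketch, assuming a local `yolo11n-obb.pt` checkpoint; the image URL is illustrative and any local image path works as well. Rotated results are exposed through the `obb` attribute of each result.

```python
from ultralytics import YOLO

# Load a pretrained OBB model
model = YOLO("yolo11n-obb.pt")

# Run inference (the URL is illustrative; a local image path works too)
results = model("https://ultralytics.com/images/boats.jpg")

# Inspect the rotated boxes, classes, and confidences of each result
for result in results:
    print(result.obb.xywhr)  # boxes as (x_center, y_center, width, height, rotation)
    print(result.obb.cls)    # class indices
    print(result.obb.conf)   # confidence scores
```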
Watch: How to Detect and Track Storage Tanks using Ultralytics YOLO-OBB | Oriented Bounding Boxes | DOTA
See full `predict` mode details on the Predict page.
Export
Export a YOLO11n-obb model to a different format like ONNX, CoreML, etc.
Example
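A minimal sketch of the export call, assuming a pretrained checkpoint is available; swap `format="onnx"` for any format listed in the table below.

```python
from ultralytics import YOLO

# Load an official or custom OBB model
model = YOLO("yolo11n-obb.pt")  # or YOLO("path/to/best.pt")

# Export the model to ONNX format
model.export(format="onnx")
```

```bash
# CLI equivalent
yolo export model=yolo11n-obb.pt format=onnx
```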
Available YOLO11-obb export formats are in the table below. You can export to any format using the `format` argument, i.e. `format='onnx'` or `format='engine'`. You can predict or validate directly on exported models, i.e. `yolo predict model=yolo11n-obb.onnx`. Usage examples are shown for your model after export completes.
| Format | `format` Argument | Model | Metadata | Arguments |
| --- | --- | --- | --- | --- |
| PyTorch | - | `yolo11n-obb.pt` | ✅ | - |
| TorchScript | `torchscript` | `yolo11n-obb.torchscript` | ✅ | `imgsz`, `optimize`, `batch` |
| ONNX | `onnx` | `yolo11n-obb.onnx` | ✅ | `imgsz`, `half`, `dynamic`, `simplify`, `opset`, `batch` |
| OpenVINO | `openvino` | `yolo11n-obb_openvino_model/` | ✅ | `imgsz`, `half`, `int8`, `batch` |
| TensorRT | `engine` | `yolo11n-obb.engine` | ✅ | `imgsz`, `half`, `dynamic`, `simplify`, `workspace`, `int8`, `batch` |
| CoreML | `coreml` | `yolo11n-obb.mlpackage` | ✅ | `imgsz`, `half`, `int8`, `nms`, `batch` |
| TF SavedModel | `saved_model` | `yolo11n-obb_saved_model/` | ✅ | `imgsz`, `keras`, `int8`, `batch` |
| TF GraphDef | `pb` | `yolo11n-obb.pb` | ❌ | `imgsz`, `batch` |
| TF Lite | `tflite` | `yolo11n-obb.tflite` | ✅ | `imgsz`, `half`, `int8`, `batch` |
| TF Edge TPU | `edgetpu` | `yolo11n-obb_edgetpu.tflite` | ✅ | `imgsz` |
| TF.js | `tfjs` | `yolo11n-obb_web_model/` | ✅ | `imgsz`, `half`, `int8`, `batch` |
| PaddlePaddle | `paddle` | `yolo11n-obb_paddle_model/` | ✅ | `imgsz`, `batch` |
| NCNN | `ncnn` | `yolo11n-obb_ncnn_model/` | ✅ | `imgsz`, `half`, `batch` |
See full `export` details on the Export page.
FAQ

What are Oriented Bounding Boxes (OBB) and how do they differ from regular bounding boxes?

Oriented Bounding Boxes (OBB) include an additional angle to enhance object localization accuracy in images. Unlike regular bounding boxes, which are axis-aligned rectangles, OBBs can rotate to better fit the orientation of an object. This is particularly useful for applications requiring precise object placement, such as aerial or satellite imagery (see the Dataset Guide).
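As a minimal geometric sketch (not part of the Ultralytics API), the extra angle turns an axis-aligned (x, y, w, h) box into (x, y, w, h, θ); the hypothetical helper below derives the rotated corner points with a standard rotation matrix.

```python
import math

def obb_corners(xc, yc, w, h, theta):
    """Corner points of a box centered at (xc, yc) rotated by theta radians; purely illustrative."""
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    return [
        (xc + dx * cos_t - dy * sin_t, yc + dx * sin_t + dy * cos_t)
        for dx, dy in ((-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2))
    ]

print(obb_corners(100, 50, 40, 20, 0.0))          # theta = 0 reduces to a regular axis-aligned box
print(obb_corners(100, 50, 40, 20, math.pi / 6))  # the same box rotated by 30 degrees
```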
How do I train a YOLO11n-obb model using a custom dataset?
To train a YOLO11n-obb model with a custom dataset, follow the example below using Python or CLI:
Example
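The sketch below assumes a hypothetical dataset YAML at `path/to/custom_dataset.yaml` that follows the OBB dataset format described above.

```python
from ultralytics import YOLO

# Load a pretrained OBB model (recommended starting point for custom training)
model = YOLO("yolo11n-obb.pt")

# Train on a custom OBB dataset (the YAML path is a placeholder)
results = model.train(data="path/to/custom_dataset.yaml", epochs=100, imgsz=640)
```

```bash
# CLI equivalent (the YAML path is a placeholder)
yolo obb train data=path/to/custom_dataset.yaml model=yolo11n-obb.pt epochs=100 imgsz=640
```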
For more training arguments, check the Configuration section.
What datasets can I use for training YOLO11-OBB models?
YOLO11-OBB models are pretrained on datasets like DOTAv1, but you can use any dataset formatted for OBB. Detailed information on OBB dataset formats can be found in the Dataset Guide.
How can I export a YOLO11-OBB model to ONNX format?
Exporting a YOLO11-OBB model to ONNX format is straightforward using either Python or CLI:
Example
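A minimal sketch, assuming a local `yolo11n-obb.pt` checkpoint; the exported `.onnx` file can then be used directly for prediction (the image path is a placeholder).

```python
from ultralytics import YOLO

# Load the model and export it to ONNX
model = YOLO("yolo11n-obb.pt")
model.export(format="onnx")  # writes yolo11n-obb.onnx alongside the checkpoint
```

```bash
# CLI equivalent, followed by inference with the exported model
yolo export model=yolo11n-obb.pt format=onnx
yolo predict model=yolo11n-obb.onnx source=path/to/image.jpg
```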
For more export formats and details, refer to the Export page.
How do I validate the accuracy of a YOLO11n-obb model?
To validate a YOLO11n-obb model, you can use Python or CLI commands as shown below:
Example
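A minimal sketch; `path/to/best.pt` is a placeholder for your trained weights, and no dataset argument is needed because the model remembers its training settings.

```python
from ultralytics import YOLO

# Load trained weights and validate on the dataset remembered from training
model = YOLO("path/to/best.pt")
metrics = model.val()
print(metrics.box.map50)  # mAP at IoU 0.50
```

```bash
# CLI equivalent
yolo obb val model=path/to/best.pt
```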
Refer to the Val section for full validation details.