Insights on Model Evaluation and Fine-Tuning
Introduction
Once you've trained your computer vision model, evaluating and refining it is essential for getting the best performance. Simply training the model isn't enough. You need to make sure your model is accurate, efficient, and fulfills the objective of your computer vision project. By evaluating and fine-tuning your model, you can identify weaknesses, improve its accuracy, and boost overall performance.
In this guide, we'll share insights on model evaluation and fine-tuning that will make this step of a computer vision project more approachable. We'll discuss how to understand evaluation metrics and implement fine-tuning techniques, giving you the knowledge to elevate your model's capabilities.
Evaluating Model Performance Using Metrics
Evaluating how well a model performs helps us understand how effectively it works. Various metrics are used to measure performance. These performance metrics provide clear, numerical insights that can guide improvements and make sure the model meets its intended goals. Let's take a closer look at a few key metrics.
Confidence Score
The confidence score represents the model's certainty that a detected object belongs to a particular class. It ranges from 0 to 1, with higher scores indicating greater confidence. The confidence score helps filter predictions; only detections with confidence scores above a specified threshold are considered valid.
Quick Tip: When running inference, if you aren't seeing any predictions and you've checked everything else, try lowering the confidence score. Sometimes the threshold is too high, causing the model to ignore valid predictions. Lowering the score allows the model to consider more possibilities. This might not meet your project goals, but it's a good way to see what the model can do and decide how to fine-tune it.
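To see this in practice, here is a minimal sketch of running inference with a lowered confidence threshold using the Ultralytics Python API; the image path is a placeholder and the 0.25 threshold is only an illustrative value, not a recommendation.

from ultralytics import YOLO

# Load a pretrained model and run inference with a lower confidence threshold
model = YOLO("yolo11n.pt")
results = model.predict("path/to/image.jpg", conf=0.25)  # detections below 0.25 are filtered out

# Inspect the surviving detections and their confidence scores
for box in results[0].boxes:
    print(box.cls, box.conf)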
Intersection over Union
Intersection over Union (IoU) is a metric in object detection that measures how well the predicted bounding box overlaps with the ground truth bounding box. IoU values range from 0 to 1, where 1 indicates a perfect match. IoU is essential because it measures how closely the predicted boundaries match the actual object boundaries.
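To make the definition concrete, here is a small self-contained sketch of computing IoU for two axis-aligned boxes given as (x1, y1, x2, y2) corners; the iou helper name and box format are assumptions for illustration only.

def iou(box_a, box_b):
    """Compute Intersection over Union for two (x1, y1, x2, y2) boxes."""
    # Intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)

    # Union = area A + area B - intersection
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 100, 100), (50, 50, 150, 150)))  # ~0.14, a modest overlap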
Mean Average Precision
Mean Average Precision (mAP) is a way to measure how well an object detection model performs. It looks at the precision of detecting each object class, averages these scores, and gives an overall number that shows how accurately the model can identify and classify objects.
Let's focus on two specific mAP metrics:
- mAP@.5: Measures the average precision at a single IoU (Intersection over Union) threshold of 0.5. This metric checks if the model can correctly find objects with a looser accuracy requirement. It focuses on whether the object is roughly in the right place, not needing perfect placement. It helps see if the model is generally good at spotting objects.
- mAP@.5:.95: Averages the mAP values calculated at multiple IoU thresholds, from 0.5 to 0.95 in increments of 0.05. This metric is more detailed and strict. It gives a fuller picture of how accurately the model can find objects at different levels of strictness and is especially useful for applications that need precise object detection.
Other mAP metrics include mAP@0.75, which uses a stricter IoU threshold of 0.75, and mAP@small, medium, and large, which evaluate precision across objects of different sizes.
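If you want to see these mAP variants for your own model, they can be read directly off the validation results object introduced later in this guide; a minimal sketch, assuming the small coco8.yaml sample dataset used elsewhere in this guide:

from ultralytics import YOLO

model = YOLO("yolo11n.pt")
results = model.val(data="coco8.yaml")

print("mAP@0.5:     ", results.box.map50)  # looser, single-threshold metric
print("mAP@0.75:    ", results.box.map75)  # stricter single threshold
print("mAP@0.5:0.95:", results.box.map)    # averaged over IoU thresholds 0.5 to 0.95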
Evaluating YOLO11 Model Performance
With respect to YOLO11, you can use the validation mode to evaluate the model. Also, be sure to take a look at our guide that goes in-depth into YOLO11 performance metrics and how they can be interpreted.
Common Community Questions
When evaluating your YOLO11 model, you might run into a few hiccups. Based on common community questions, here are some tips to help you get the most out of your YOLO11 model:
Handling Variable Image Sizes
Evaluating your YOLO11 model with images of different sizes can help you understand its performance on diverse datasets. Using the rect=true validation parameter, YOLO11 adjusts the network's stride for each batch based on the image sizes, allowing the model to handle rectangular images without forcing them to a single size.

The imgsz validation parameter sets the maximum dimension for image resizing, which is 640 by default. You can adjust this based on your dataset's maximum dimensions and the GPU memory available. Even with imgsz set, rect=true lets the model manage varying image sizes effectively by dynamically adjusting the stride.
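For reference, here is a minimal sketch of passing both parameters to validation; imgsz=640 simply mirrors the default mentioned above, and coco8.yaml is the small sample dataset used elsewhere in this guide.

from ultralytics import YOLO

model = YOLO("yolo11n.pt")
# rect=True lets each batch keep its rectangular shape; imgsz caps the longest side at 640
results = model.val(data="coco8.yaml", imgsz=640, rect=True)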
Accessing YOLO11 Metrics
If you want to get a deeper understanding of your YOLO11 model's performance, you can easily access specific evaluation metrics with a few lines of Python code. The code snippet below will let you load your model, run an evaluation, and print out various metrics that show how well your model is doing.
Usage
from ultralytics import YOLO
# Load the model
model = YOLO("yolo11n.pt")
# Run the evaluation
results = model.val(data="coco8.yaml")
# Print specific metrics
print("Class indices with average precision:", results.ap_class_index)
print("Average precision for all classes:", results.box.all_ap)
print("Average precision:", results.box.ap)
print("Average precision at IoU=0.50:", results.box.ap50)
print("Class indices for average precision:", results.box.ap_class_index)
print("Class-specific results:", results.box.class_result)
print("F1 score:", results.box.f1)
print("F1 score curve:", results.box.f1_curve)
print("Overall fitness score:", results.box.fitness)
print("Mean average precision:", results.box.map)
print("Mean average precision at IoU=0.50:", results.box.map50)
print("Mean average precision at IoU=0.75:", results.box.map75)
print("Mean average precision for different IoU thresholds:", results.box.maps)
print("Mean results for different metrics:", results.box.mean_results)
print("Mean precision:", results.box.mp)
print("Mean recall:", results.box.mr)
print("Precision:", results.box.p)
print("Precision curve:", results.box.p_curve)
print("Precision values:", results.box.prec_values)
print("Specific precision metrics:", results.box.px)
print("Recall:", results.box.r)
print("Recall curve:", results.box.r_curve)
The results object also includes speed metrics like preprocess time, inference time, loss, and postprocess time. By analyzing these metrics, you can fine-tune and optimize your YOLO11 model for better performance, making it more effective for your specific use case.
How Does Fine-Tuning Work?
Fine-tuning involves taking a pre-trained model and adjusting its parameters to improve performance on a specific task or dataset. This process, also known as model retraining, allows the model to better understand and predict outcomes for the specific data it will encounter in real-world applications. You can retrain your model based on your model evaluation to achieve optimal results.
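In practice, retraining with the Ultralytics Python API means starting from pretrained weights and continuing training on your own data; a minimal sketch, where the dataset YAML path and epoch count are placeholders to replace with your own values:

from ultralytics import YOLO

# Start from pretrained weights and retrain on your own dataset
model = YOLO("yolo11n.pt")
model.train(data="path/to/your_dataset.yaml", epochs=50)  # placeholder dataset and epochs

# Re-evaluate after retraining to see whether the metrics improved
metrics = model.val()
print(metrics.box.map50)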
Tips for Fine-Tuning Your Model
Fine-tuning a model means paying close attention to several vital parameters and techniques to achieve optimal performance. Here are some essential tips to guide you through the process.
Starting With a Higher Learning Rate
Usually, during the initial training epochs, the learning rate starts low and gradually increases to stabilize the training process. However, since your model has already learned some features from the previous dataset, starting with a higher learning rate right away can be more beneficial.
When fine-tuning your YOLO11 model, you can set the warmup_epochs parameter to warmup_epochs=0 to prevent the learning rate from starting too high. By following this process, the training will continue from the provided weights, adjusting to the nuances of your new data.
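A minimal sketch of applying this tip during retraining; warmup_epochs=0 skips the warmup ramp so training starts directly at the initial learning rate, and the lr0 and epochs values here are only illustrative.

from ultralytics import YOLO

model = YOLO("yolo11n.pt")
# Skip the warmup ramp; training starts directly at the initial learning rate lr0
model.train(data="path/to/your_dataset.yaml", epochs=20, warmup_epochs=0, lr0=0.01)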
Image Tiling for Small Objects
Image tiling can improve detection accuracy for small objects. By dividing larger images into smaller segments, such as splitting 1280x1280 images into multiple 640x640 segments, you maintain the original resolution, and the model can learn from high-resolution fragments. When using YOLO11, make sure to adjust your labels for these new segments correctly.
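Below is a small self-contained sketch of splitting an image into 640x640 tiles with Pillow; the tile_image helper is a hypothetical name used for illustration, and note that bounding-box labels must also be remapped into each tile's coordinates, which is not shown here.

from PIL import Image

def tile_image(path, tile=640):
    """Split an image into non-overlapping tile x tile crops (edge tiles may be smaller)."""
    img = Image.open(path)
    width, height = img.size
    tiles = []
    for top in range(0, height, tile):
        for left in range(0, width, tile):
            box = (left, top, min(left + tile, width), min(top + tile, height))
            tiles.append(img.crop(box))
    return tiles

# A 1280x1280 image yields four 640x640 tiles
crops = tile_image("path/to/large_image.jpg")  # placeholder path
print(len(crops))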
Engaging with the Community
Sharing your ideas and questions with other computer vision enthusiasts can inspire creative solutions to roadblocks in your projects. Here are some excellent ways to learn, troubleshoot, and connect.
Finding Help and Support
- GitHub Issues: Explore the YOLO11 GitHub repository and use the Issues tab to ask questions, report bugs, and suggest features. The community and maintainers are available to assist with any issues you encounter.
- Ultralytics Discord Server: Join the Ultralytics Discord server to connect with other users and developers, get support, share knowledge, and brainstorm ideas.
Official Documentation
- Ultralytics YOLO11 Documentation: Check out the official YOLO11 documentation for comprehensive guides and valuable insights on various computer vision tasks and projects.
Final Thoughts
Evaluating and fine-tuning your computer vision model are important steps for successful model deployment. These steps help make sure that your model is accurate, efficient, and suited to your overall application. The key to training the best model possible is continuous experimentation and learning. Don't hesitate to tweak parameters, try new techniques, and explore different datasets. Keep experimenting and pushing the boundaries of what's possible!
FAQ
What are the key metrics for evaluating YOLO11 model performance?
To evaluate YOLO11 model performance, important metrics include Confidence Score, Intersection over Union (IoU), and Mean Average Precision (mAP). Confidence Score measures the model's certainty for each detected object class. IoU evaluates how well the predicted bounding box overlaps with the ground truth. Mean Average Precision (mAP) aggregates precision scores across classes, with mAP@.5 and mAP@.5:.95 being two common types for varying IoU thresholds. Learn more about these metrics in our YOLO11 performance metrics guide.
How can I fine-tune a pre-trained YOLO11 model for my specific dataset?
Fine-tuning a pre-trained YOLO11 model involves adjusting its parameters to improve performance on a specific task or dataset. Start by evaluating your model using metrics, then set a higher initial learning rate by adjusting the warmup_epochs parameter to 0 for immediate stability. Use parameters like rect=true to handle varied image sizes effectively. For more detailed guidance, refer to our section on fine-tuning YOLO11 models.
How can I handle variable image sizes when evaluating my YOLO11 model?
To handle variable image sizes during evaluation, use the rect=true parameter in YOLO11, which adjusts the network's stride for each batch based on image sizes. The imgsz parameter sets the maximum dimension for image resizing, defaulting to 640. Adjust imgsz to suit your dataset and GPU memory. For more details, visit our section on handling variable image sizes.
What practical steps can I take to improve mean average precision for my YOLO11 model?
Improving mean average precision (mAP) for a YOLO11 model involves several steps:
- Tuning Hyperparameters: Experiment with different learning rates, batch sizes, and image augmentations.
- Data Augmentation: Use techniques like Mosaic and MixUp to create diverse training samples, as shown in the sketch after this list.
- Image Tiling: Split larger images into smaller tiles to improve detection accuracy for small objects. Refer to our detailed guide on model fine-tuning for specific strategies.
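As a starting point for the first two items, here is a minimal sketch of a single training call that combines hyperparameter and augmentation settings; all values and the dataset YAML path are illustrative placeholders to tune for your own data.

from ultralytics import YOLO

model = YOLO("yolo11n.pt")
# Illustrative hyperparameter and augmentation settings; tune these for your own dataset
model.train(
    data="path/to/your_dataset.yaml",  # placeholder dataset YAML
    epochs=100,
    batch=32,
    lr0=0.005,
    mosaic=1.0,  # Mosaic augmentation probability
    mixup=0.1,   # MixUp augmentation probability
)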
How do I access YOLO11 model evaluation metrics in Python?
You can access YOLO11 model evaluation metrics using Python with the following steps:
Usage
from ultralytics import YOLO
# Load the model
model = YOLO("yolo11n.pt")
# Run the evaluation
results = model.val(data="coco8.yaml")
# Print specific metrics
print("Class indices with average precision:", results.ap_class_index)
print("Average precision for all classes:", results.box.all_ap)
print("Mean average precision at IoU=0.50:", results.box.map50)
print("Mean recall:", results.box.mr)
Analyzing these metrics helps fine-tune and optimize your YOLO11 model. For a deeper dive, check out our guide on YOLO11 metrics.