Reference for ultralytics/engine/validator.py
Note
This file is available at https://github.com/ultralytics/ultralytics/blob/main/ultralytics/engine/validator.py. If you spot a problem please help fix it by contributing a Pull Request 🛠️. Thank you 🙏!
ultralytics.engine.validator.BaseValidator
A base class for creating validators.
This class provides the foundation for validation processes, including model evaluation, metric computation, and result visualization.
Attributes:

| Name | Type | Description |
|---|---|---|
| `args` | `SimpleNamespace` | Configuration for the validator. |
| `dataloader` | `DataLoader` | Dataloader to use for validation. |
| `pbar` | `tqdm` | Progress bar to update during validation. |
| `model` | `Module` | Model to validate. |
| `data` | `dict` | Data dictionary containing dataset information. |
| `device` | `device` | Device to use for validation. |
| `batch_i` | `int` | Current batch index. |
| `training` | `bool` | Whether the model is in training mode. |
| `names` | `dict` | Class names mapping. |
| `seen` | `int` | Number of images seen so far during validation. |
| `stats` | `dict` | Statistics collected during validation. |
| `confusion_matrix` | `ConfusionMatrix` | Confusion matrix for classification evaluation. |
| `nc` | `int` | Number of classes. |
| `iouv` | `Tensor` | IoU thresholds from 0.50 to 0.95 in steps of 0.05. |
| `jdict` | `list` | List to store JSON validation results. |
| `speed` | `dict` | Dictionary with keys 'preprocess', 'inference', 'loss', 'postprocess' and their respective batch processing times in milliseconds. |
| `save_dir` | `Path` | Directory to save results. |
| `plots` | `dict` | Dictionary to store plots for visualization. |
| `callbacks` | `dict` | Dictionary to store various callback functions. |
Methods:

| Name | Description |
|---|---|
| `__call__` | Execute validation process, running inference on dataloader and computing performance metrics. |
| `match_predictions` | Match predictions to ground truth objects using IoU. |
| `add_callback` | Append the given callback to the specified event. |
| `run_callbacks` | Run all callbacks associated with a specified event. |
| `get_dataloader` | Get data loader from dataset path and batch size. |
| `build_dataset` | Build dataset from image path. |
| `preprocess` | Preprocess an input batch. |
| `postprocess` | Postprocess the predictions. |
| `init_metrics` | Initialize performance metrics for the YOLO model. |
| `update_metrics` | Update metrics based on predictions and batch. |
| `finalize_metrics` | Finalize and return all metrics. |
| `get_stats` | Return statistics about the model's performance. |
| `check_stats` | Check statistics. |
| `print_results` | Print the results of the model's predictions. |
| `get_desc` | Get description of the YOLO model. |
| `on_plot` | Register plots (e.g. to be consumed in callbacks). |
| `plot_val_samples` | Plot validation samples during training. |
| `plot_predictions` | Plot YOLO model predictions on batch images. |
| `pred_to_json` | Convert predictions to JSON format. |
| `eval_json` | Evaluate and return JSON format of prediction statistics. |
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `dataloader` | `DataLoader` | Dataloader to be used for validation. | `None` |
| `save_dir` | `Path` | Directory to save results. | `None` |
| `pbar` | `tqdm` | Progress bar for displaying progress. | `None` |
| `args` | `SimpleNamespace` | Configuration for the validator. | `None` |
| `_callbacks` | `dict` | Dictionary to store various callback functions. | `None` |
Source code in ultralytics/engine/validator.py
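For orientation, here is a rough sketch of the subclassing pattern this base class is designed for; it is an illustration under assumptions, not code from this file. The hook names (`init_metrics`, `preprocess`, `postprocess`, `update_metrics`, `get_stats`) come from the Methods table above, while the batch layout (`batch["img"]`) and the running metric are hypothetical placeholders.

```python
# Sketch of a task-specific validator built on BaseValidator; the overridden
# hook names are documented above, but the batch layout and metric are
# hypothetical placeholders for this example.
from ultralytics.engine.validator import BaseValidator


class MyValidator(BaseValidator):
    def init_metrics(self, model):
        # Reset task-specific accumulators before validation starts
        self.correct = 0

    def preprocess(self, batch):
        # Assumes an image batch stored under batch["img"] (as in the detection validators)
        batch["img"] = batch["img"].to(self.device, non_blocking=True).float() / 255
        return batch

    def postprocess(self, preds):
        # e.g. apply NMS or argmax here; passthrough for this sketch
        return preds

    def update_metrics(self, preds, batch):
        # Compare predictions against labels and accumulate statistics
        self.correct += 0  # placeholder accumulation

    def get_stats(self):
        # `seen` is maintained by the base class during validation
        return {"my_metric": self.correct / max(self.seen, 1)}
```

A concrete subclass would also supply `build_dataset`/`get_dataloader` so that `__call__` can run end to end; the sketch only shows where the metric hooks plug in.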
__call__
Execute validation process, running inference on dataloader and computing performance metrics.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `trainer` | `object` | Trainer object that contains the model to validate. | `None` |
| `model` | `Module` | Model to validate if not using a trainer. | `None` |
Returns:

| Name | Type | Description |
|---|---|---|
| `stats` | `dict` | Dictionary containing validation statistics. |
Source code in ultralytics/engine/validator.py
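As a hedged usage sketch (not taken from this file), standalone validation is typically run through a concrete subclass such as `DetectionValidator`: with no trainer passed, the model is loaded from the weights path in `args`, and the returned dict holds the validation statistics. The weight file `yolo11n.pt`, the dataset `coco8.yaml`, and the `on_val_end` callback event are assumed example names, not requirements of the API.

```python
# Hedged usage sketch: standalone validation via __call__ (trainer=None).
# Assumes the ultralytics package is installed; the weights and dataset names
# below are examples only.
from ultralytics.models.yolo.detect import DetectionValidator

validator = DetectionValidator(args=dict(model="yolo11n.pt", data="coco8.yaml"))
validator.add_callback("on_val_end", lambda v: print(f"Validated {v.seen} images"))
stats = validator()  # runs inference over the dataloader and returns the metrics dict
print(stats)
```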
add_callback
Append the given callback to the specified event.
build_dataset
Build dataset from image path.
check_stats
Check statistics.
eval_json
Evaluate and return JSON format of prediction statistics.
finalize_metrics
Finalize and return all metrics.
get_dataloader
Get data loader from dataset path and batch size.
get_desc
Get description of the YOLO model.
get_stats
Return statistics about the model's performance.
init_metrics
Initialize performance metrics for the YOLO model.
match_predictions
match_predictions(
pred_classes: Tensor,
true_classes: Tensor,
iou: Tensor,
use_scipy: bool = False,
) -> torch.Tensor
Match predictions to ground truth objects using IoU.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `pred_classes` | `Tensor` | Predicted class indices of shape (N,). | *required* |
| `true_classes` | `Tensor` | Target class indices of shape (M,). | *required* |
| `iou` | `Tensor` | An NxM tensor containing the pairwise IoU values for predictions and ground truth. | *required* |
| `use_scipy` | `bool` | Whether to use scipy for matching (more precise). | `False` |
Returns:

| Type | Description |
|---|---|
| `Tensor` | Correct tensor of shape (N, 10) for 10 IoU thresholds. |
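To make the matching behaviour concrete, the snippet below is a simplified, self-contained re-implementation of the same idea: greedy, class-aware IoU matching evaluated at the 10 thresholds from 0.50 to 0.95. It is an illustrative sketch, not the library's exact code, it omits the optional `use_scipy` assignment path, and the function name `match_by_iou` is made up for this example.

```python
# Illustrative re-implementation of greedy class-aware IoU matching; not the
# library's exact code (the scipy-based assignment path is omitted).
import numpy as np
import torch


def match_by_iou(pred_classes: torch.Tensor, true_classes: torch.Tensor, iou: torch.Tensor) -> torch.Tensor:
    """Return a (N, 10) bool tensor: prediction i is correct at threshold t if it
    matched a same-class ground-truth box with IoU >= t.

    pred_classes: (N,) predicted class indices.
    true_classes: (M,) ground-truth class indices.
    iou:          (N, M) pairwise IoU between predictions and ground-truth boxes.
    """
    thresholds = torch.linspace(0.5, 0.95, 10)  # same 0.50-0.95 range in 0.05 steps as iouv
    correct = np.zeros((pred_classes.shape[0], len(thresholds)), dtype=bool)
    same_class = pred_classes[:, None] == true_classes[None, :]  # (N, M) class agreement mask
    iou = (iou * same_class).cpu().numpy()  # zero out cross-class pairs
    for t_i, t in enumerate(thresholds.tolist()):
        pred_idx, gt_idx = np.nonzero(iou >= t)  # candidate matches above this threshold
        if pred_idx.size:
            order = iou[pred_idx, gt_idx].argsort()[::-1]  # consider highest IoU first
            pred_idx, gt_idx = pred_idx[order], gt_idx[order]
            # Greedy one-to-one matching: keep each ground-truth box and each prediction at most once
            _, keep = np.unique(gt_idx, return_index=True)
            pred_idx, gt_idx = pred_idx[keep], gt_idx[keep]
            _, keep = np.unique(pred_idx, return_index=True)
            correct[pred_idx[keep], t_i] = True
    return torch.tensor(correct, dtype=torch.bool, device=pred_classes.device)
```

Evaluated over all 10 thresholds, this produces the (N, 10) correctness matrix described in the Returns table, which downstream metric code can aggregate into mAP@0.50:0.95-style statistics.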