Reference for ultralytics/models/yolo/classify/val.py
Note
This file is available at https://github.com/ultralytics/ultralytics/blob/main/ultralytics/models/yolo/classify/val.py. If you spot a problem please help fix it by contributing a Pull Request 🛠️. Thank you 🙏!
ultralytics.models.yolo.classify.val.ClassificationValidator
Bases: BaseValidator
A class extending the BaseValidator class for validation based on a classification model.
This validator handles the validation process for classification models, including metrics calculation, confusion matrix generation, and visualization of results.
Attributes:

| Name | Type | Description |
|---|---|---|
| `targets` | `List[Tensor]` | Ground truth class labels. |
| `pred` | `List[Tensor]` | Model predictions. |
| `metrics` | `ClassifyMetrics` | Object to calculate and store classification metrics. |
| `names` | `dict` | Mapping of class indices to class names. |
| `nc` | `int` | Number of classes. |
| `confusion_matrix` | `ConfusionMatrix` | Matrix to evaluate model performance across classes. |
Methods:

| Name | Description |
|---|---|
| `get_desc` | Return a formatted string summarizing classification metrics. |
| `init_metrics` | Initialize confusion matrix, class names, and tracking containers. |
| `preprocess` | Preprocess input batch by moving data to device. |
| `update_metrics` | Update running metrics with model predictions and batch targets. |
| `finalize_metrics` | Finalize metrics including confusion matrix and processing speed. |
| `postprocess` | Extract the primary prediction from model output. |
| `get_stats` | Calculate and return a dictionary of metrics. |
| `build_dataset` | Create a ClassificationDataset instance for validation. |
| `get_dataloader` | Build and return a data loader for classification validation. |
| `print_results` | Print evaluation metrics for the classification model. |
| `plot_val_samples` | Plot validation image samples with their ground truth labels. |
| `plot_predictions` | Plot images with their predicted class labels. |
Examples:
>>> from ultralytics.models.yolo.classify import ClassificationValidator
>>> args = dict(model="yolo11n-cls.pt", data="imagenet10")
>>> validator = ClassificationValidator(args=args)
>>> validator()
Notes
Torchvision classification models can also be passed to the 'model' argument, e.g. model='resnet18'.
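For example, a hedged sketch that reuses the dataset from the example above but swaps in a torchvision backbone:

>>> from ultralytics.models.yolo.classify import ClassificationValidator
>>> args = dict(model="resnet18", data="imagenet10")  # torchvision model name passed as 'model'
>>> validator = ClassificationValidator(args=args)
>>> validator()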
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `dataloader` | `DataLoader` | Dataloader to use for validation. | `None` |
| `save_dir` | `str \| Path` | Directory to save results. | `None` |
| `pbar` | `bool` | Display a progress bar. | `None` |
| `args` | `dict` | Arguments containing model and validation configuration. | `None` |
| `_callbacks` | `list` | List of callback functions to be called during validation. | `None` |
build_dataset
Create a ClassificationDataset instance for validation.
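A minimal usage sketch, assuming the method accepts the image directory of the split to validate; the path below is a placeholder:

>>> validator = ClassificationValidator(args=dict(model="yolo11n-cls.pt", data="imagenet10"))
>>> dataset = validator.build_dataset("path/to/imagenet10/val")  # ClassificationDataset without training augmentation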
finalize_metrics
Finalize metrics including confusion matrix and processing speed.
This method processes the accumulated predictions and targets to generate the confusion matrix, optionally plots it, and updates the metrics object with speed information.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `*args` | `Any` | Variable length argument list. | `()` |
| `**kwargs` | `Any` | Arbitrary keyword arguments. | `{}` |
Examples:
>>> validator = ClassificationValidator()
>>> validator.pred = [torch.tensor([[0, 1, 2]])] # Top-3 predictions for one sample
>>> validator.targets = [torch.tensor([0])] # Ground truth class
>>> validator.finalize_metrics()
>>> print(validator.metrics.confusion_matrix) # Access the confusion matrix
get_dataloader
Build and return a data loader for classification validation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `dataset_path` | `str \| Path` | Path to the dataset directory. | required |
| `batch_size` | `int` | Number of samples per batch. | required |

Returns:

| Type | Description |
|---|---|
| `DataLoader` | DataLoader object for the classification validation dataset. |
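A usage sketch with the parameters documented above; the dataset path is a placeholder:

>>> dataloader = validator.get_dataloader("path/to/imagenet10/val", batch_size=32)
>>> batch = next(iter(dataloader))  # batches are dicts with 'img' and 'cls' keys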
get_desc
Return a formatted string summarizing classification metrics.
get_stats
Calculate and return a dictionary of metrics by processing targets and predictions.
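A hedged usage sketch; the exact key names in the returned dictionary are assumptions based on the top-1/top-5 accuracy metrics:

>>> stats = validator.get_stats()
>>> print(stats)  # e.g. {'metrics/accuracy_top1': ..., 'metrics/accuracy_top5': ..., 'fitness': ...}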
init_metrics
Initialize confusion matrix, class names, and tracking containers for predictions and targets.
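A sketch of how this is typically invoked at the start of validation, assuming it accepts the classification model being validated:

>>> validator.init_metrics(model)  # sets names, nc, confusion_matrix and empty pred/targets lists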
plot_predictions
Plot images with their predicted class labels and save the visualization.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `batch` | `dict` | Batch data containing images and other information. | required |
| `preds` | `Tensor` | Model predictions with shape (batch_size, num_classes). | required |
| `ni` | `int` | Batch index used for naming the output file. | required |
Examples:
>>> validator = ClassificationValidator()
>>> batch = {"img": torch.rand(16, 3, 224, 224)}
>>> preds = torch.rand(16, 10) # 16 images, 10 classes
>>> validator.plot_predictions(batch, preds, 0)
plot_val_samples
Plot validation image samples with their ground truth labels.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `batch` | `dict` | Dictionary containing batch data with 'img' (images) and 'cls' (class labels). | required |
| `ni` | `int` | Batch index used for naming the output file. | required |
Examples:
>>> validator = ClassificationValidator()
>>> batch = {"img": torch.rand(16, 3, 224, 224), "cls": torch.randint(0, 10, (16,))}
>>> validator.plot_val_samples(batch, 0)
postprocess
Extract the primary prediction from model output if it's in a list or tuple format.
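A hedged sketch of the described behavior: when the model returns a list or tuple, only the first element (the class predictions) is kept.

>>> raw = (torch.rand(16, 10), None)  # hypothetical (predictions, extra) output
>>> preds = validator.postprocess(raw)
>>> preds.shape
torch.Size([16, 10])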
preprocess
Preprocess input batch by moving data to device and converting to appropriate dtype.
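A hedged sketch of this step, assuming a batch dict with 'img' and 'cls' keys as produced by the validation dataloader:

>>> batch = {"img": torch.rand(16, 3, 224, 224), "cls": torch.randint(0, 10, (16,))}
>>> batch = validator.preprocess(batch)  # tensors moved to validator.device; dtype follows the half/float setting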
print_results
Print evaluation metrics for the classification model.
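A brief usage sketch, assuming validation statistics have already been computed via get_stats:

>>> validator.print_results()  # logs a summary line with the overall accuracy metrics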
update_metrics
Update running metrics with model predictions and batch targets.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `preds` | `Tensor` | Model predictions, typically logits or probabilities for each class. | required |
| `batch` | `dict` | Batch data containing images and class labels. | required |
This method appends the top-N predictions (sorted by confidence in descending order) to the prediction list for later evaluation. N is limited to the minimum of 5 and the number of classes.
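A hedged sketch of that top-N bookkeeping, assuming init_metrics has already created the pred and targets lists; the argsort-based selection illustrates the described behavior rather than the exact source:

>>> preds = torch.rand(16, 10)  # logits for 16 images over 10 classes
>>> batch = {"cls": torch.randint(0, 10, (16,))}
>>> n = min(5, preds.shape[1])  # N is capped at 5 or the number of classes
>>> validator.pred.append(preds.argsort(1, descending=True)[:, :n].cpu())  # top-N class indices per image
>>> validator.targets.append(batch["cls"].cpu())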