Dog-Pose Dataset
Introduction
The Ultralytics Dog-pose dataset is a high-quality and extensive dataset specifically curated for dog keypoint estimation. With 6,773 training images and 1,703 validation images, this dataset provides a solid foundation for training robust pose estimation models. Each annotated image includes 24 keypoints with 3 dimensions per keypoint (x, y, visibility), making it a valuable resource for advanced research and development in computer vision.
This dataset is intended for use with Ultralytics HUB and YOLO11.
Dataset YAML
A YAML (YAML Ain't Markup Language) file is used to define the dataset configuration. It includes paths, keypoint details, and other relevant information. For the Dog-pose dataset, the dog-pose.yaml file is available at https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/dog-pose.yaml.
ultralytics/cfg/datasets/dog-pose.yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# Dogs dataset http://vision.stanford.edu/aditya86/ImageNetDogs/ by Stanford
# Documentation: https://docs.ultralytics.com/datasets/pose/dog-pose/
# Example usage: yolo train data=dog-pose.yaml
# parent
# ├── ultralytics
# └── datasets
#     └── dog-pose ← downloads here (337 MB)
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/dog-pose # dataset root dir
train: train # train images (relative to 'path') 6773 images
val: val # val images (relative to 'path') 1703 images
# Keypoints
kpt_shape: [24, 3] # number of keypoints, number of dims (2 for x,y or 3 for x,y,visible)
# Classes
names:
  0: dog
# Download script/URL (optional)
download: https://github.com/ultralytics/assets/releases/download/v0.0.0/dog-pose.zip
Usage
To train a YOLO11n-pose model on the Dog-pose dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model Training page.
Train Example
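A minimal sketch using the Ultralytics Python API is shown below; it assumes the ultralytics package is installed and that the pretrained yolo11n-pose.pt weights can be downloaded automatically:

from ultralytics import YOLO

# Load a pretrained YOLO11n-pose model
model = YOLO("yolo11n-pose.pt")

# Train on the Dog-pose dataset; the dataset auto-downloads via the YAML's download URL if missing
results = model.train(data="dog-pose.yaml", epochs=100, imgsz=640)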
Sample Images and Annotations
Here are some examples of images from the Dog-pose dataset, along with their corresponding annotations:
- Mosaiced Image: This image demonstrates a training batch composed of mosaiced dataset images. Mosaicing is a technique used during training that combines multiple images into a single image to increase the variety of objects and scenes within each training batch. This helps improve the model's ability to generalize to different object sizes, aspect ratios, and contexts.
The example showcases the variety and complexity of the images in the Dog-pose dataset and the benefits of using mosaicing during the training process.
Citations and Acknowledgments
If you use the Dog-pose dataset in your research or development work, please cite the following paper:
@inproceedings{khosla2011fgvc,
title={Novel dataset for Fine-Grained Image Categorization},
author={Aditya Khosla and Nityananda Jayadevaprakash and Bangpeng Yao and Li Fei-Fei},
booktitle={First Workshop on Fine-Grained Visual Categorization (FGVC), IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2011}
}
@inproceedings{deng2009imagenet,
title={ImageNet: A Large-Scale Hierarchical Image Database},
author={Jia Deng and Wei Dong and Richard Socher and Li-Jia Li and Kai Li and Li Fei-Fei},
booktitle={IEEE Computer Vision and Pattern Recognition (CVPR)},
year={2009}
}
We would like to acknowledge the Stanford team for creating and maintaining this valuable resource for the computer vision community. For more information about the Dog-pose dataset and its creators, visit the Stanford Dogs Dataset website.
FAQ
What is the Dog-pose dataset, and how is it used with Ultralytics YOLO11?
The Dog-pose dataset features 6,773 training images and 1,703 validation images annotated with 24 keypoints for dog pose estimation. Ideal for training and validating models with Ultralytics YOLO11, it supports applications like animal behavior analysis and veterinary studies.
How do I train a YOLO11 model using the Dog-pose dataset in Ultralytics?
To train a YOLO11n-pose model on the Dog-pose dataset for 100 epochs with an image size of 640, follow these examples:
Train Example
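The sketch below repeats the Python training call and adds a validation step afterwards; it assumes the same yolo11n-pose.pt starting weights as above:

from ultralytics import YOLO

# Start from pretrained pose weights and fine-tune on Dog-pose
model = YOLO("yolo11n-pose.pt")
model.train(data="dog-pose.yaml", epochs=100, imgsz=640)

# Evaluate the fine-tuned model on the validation split defined in dog-pose.yaml
metrics = model.val()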
For a comprehensive list of training arguments, refer to the model Training page.
What are the benefits of using the Dog-pose dataset?
The Dog-pose dataset offers several benefits:
Large and Diverse Dataset: With 6,773 training images and 1,703 validation images, it provides a substantial amount of data covering a wide range of dog poses, breeds, and contexts, enabling robust model training and evaluation.
Pose-specific Annotations: Offers detailed annotations for pose estimation, ensuring high-quality data for training pose detection models.
Real-World Scenarios: Includes images from varied environments, enhancing the model's ability to generalize to real-world applications.
Model Performance Improvement: The diversity and scale of the dataset help improve model accuracy and robustness, particularly for tasks involving fine-grained pose estimation.
For more about its features and usage, see the Dataset Introduction section.
How does mosaicing benefit the YOLO11 training process using the Dog-pose dataset?
Mosaicing, as illustrated in the sample images from the Dog-pose dataset, merges multiple images into a single composite, enriching the diversity of objects and scenes in each training batch. This approach enhances the model's capacity to generalize across different object sizes, aspect ratios, and contexts, leading to improved performance. For example images, refer to the Sample Images and Annotations section.
Where can I find the Dog-pose dataset YAML file and how do I use it?
The Dog-pose dataset YAML file can be found at https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/dog-pose.yaml. This file defines the dataset configuration, including paths, keypoint shape, class names, and the download URL. Use this file with the YOLO11 training scripts as mentioned in the Train Example section.
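As an illustrative sketch, the data argument accepts either the bundled configuration name or a path to a local copy of the YAML; the local path below is a hypothetical placeholder:

from ultralytics import YOLO

model = YOLO("yolo11n-pose.pt")

# Reference the bundled config by name...
model.train(data="dog-pose.yaml", epochs=100, imgsz=640)

# ...or point to a local copy of the YAML (hypothetical path, for illustration only)
# model.train(data="path/to/dog-pose.yaml", epochs=100, imgsz=640)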
For more FAQs and detailed documentation, visit the Ultralytics Documentation.