
Datasets Overview

Ultralytics provides support for various datasets to facilitate computer vision tasks such as detection, instance segmentation, pose estimation, classification, and multi-object tracking. Below is a list of the main Ultralytics datasets, followed by a summary of each computer vision task and the respective datasets.



Watch: Ultralytics Datasets Overview

Ultralytics Explorer

Community Note ⚠️

As of ultralytics>=8.3.10, Ultralytics Explorer support has been deprecated. But don't worry! You can now access similar and even enhanced functionality through Ultralytics HUB, our intuitive no-code platform designed to streamline your workflow. With Ultralytics HUB, you can continue exploring, visualizing, and managing your data effortlessly, all without writing a single line of code. Make sure to check it out and take advantage of its powerful features!🚀

Create embeddings for your dataset, search for similar images, run SQL queries, perform semantic search and even search using natural language! You can get started with our GUI app or build your own using the API. Learn more here.


Object Detection

Bounding box object detection is a computer vision technique that involves detecting and localizing objects in an image by drawing a bounding box around each object.

  • Argoverse: A dataset containing 3D tracking and motion forecasting data from urban environments with rich annotations.
  • COCO: Common Objects in Context (COCO) is a large-scale object detection, segmentation, and captioning dataset with 80 object categories.
  • LVIS: A large-scale object detection, segmentation, and captioning dataset with 1203 object categories.
  • COCO8: A small subset of the first 8 images from the COCO train 2017 set, split 4 for training and 4 for validation, suitable for quick tests.
  • COCO128: A small subset of the first 128 images from the COCO train 2017 set, used for both training and validation, suitable for quick tests.
  • Global Wheat 2020: A dataset containing images of wheat heads for the Global Wheat Challenge 2020.
  • Objects365: A high-quality, large-scale dataset for object detection with 365 object categories and over 600K annotated images.
  • OpenImagesV7: A comprehensive dataset by Google with 1.7M train images and 42k validation images.
  • SKU-110K: A dataset featuring dense object detection in retail environments with over 11K images and 1.7 million bounding boxes.
  • VisDrone: A dataset containing object detection and multi-object tracking data from drone-captured imagery with over 10K images and video sequences.
  • VOC: The Pascal Visual Object Classes (VOC) dataset for object detection and segmentation with 20 object classes and over 11K images.
  • xView: A dataset for object detection in overhead imagery with 60 object categories and over 1 million annotated objects.
  • RF100: A diverse object detection benchmark with 100 datasets spanning seven imagery domains for comprehensive model evaluation.
  • Brain-tumor: A dataset for detecting brain tumors that includes MRI and CT scan images with details on tumor presence, location, and characteristics.
  • African-wildlife: A dataset featuring images of African wildlife, including buffalo, elephant, rhino, and zebra.
  • Signature: A dataset featuring images of various documents with annotated signatures, supporting document verification and fraud detection research.
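All of the detection datasets above use (or are converted to) the YOLO label format: one object per line in a *.txt file, with a class index followed by the box's normalized center coordinates, width, and height. As a minimal sketch of the conversion from pixel-space corner coordinates, assuming a hypothetical helper name (`xyxy_to_yolo` is our own illustration, not an ultralytics function):

```python
def xyxy_to_yolo(x1, y1, x2, y2, cls, img_w, img_h):
    """Convert a pixel-space corner box to a YOLO label line: class x_center y_center width height, all normalized to 0-1."""
    xc = (x1 + x2) / 2 / img_w
    yc = (y1 + y2) / 2 / img_h
    w = (x2 - x1) / img_w
    h = (y2 - y1) / img_h
    return f"{cls} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# A 100x200 box with top-left corner at (50, 100) in a 640x480 image
print(xyxy_to_yolo(50, 100, 150, 300, 0, 640, 480))
```

One such line is written per object, with each image's labels stored in a `.txt` file of the same name under `labels/`.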

Instance Segmentation

Instance segmentation is a computer vision technique that involves identifying and localizing objects in an image at the pixel level.

  • COCO: A large-scale dataset designed for object detection, segmentation, and captioning tasks with over 200K labeled images.
  • COCO8-seg: A smaller dataset for instance segmentation tasks, containing a subset of 8 COCO images with segmentation annotations.
  • COCO128-seg: A smaller dataset for instance segmentation tasks, containing a subset of 128 COCO images with segmentation annotations.
  • Crack-seg: Specifically crafted dataset for detecting cracks on roads and walls, applicable for both object detection and segmentation tasks.
  • Package-seg: Tailored dataset for identifying packages in warehouses or industrial settings, suitable for both object detection and segmentation applications.
  • Carparts-seg: Purpose-built dataset for identifying vehicle parts, catering to design, manufacturing, and research needs. It serves for both object detection and segmentation tasks.
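Instance segmentation labels in the YOLO format replace the box with a normalized polygon: a class index followed by x y pairs tracing the object outline. A minimal sketch, where the `polygon_to_yolo_seg` helper is our own illustration rather than an ultralytics function:

```python
def polygon_to_yolo_seg(points, cls, img_w, img_h):
    """Format a pixel-space polygon as a YOLO segmentation label: class index followed by normalized x y pairs."""
    coords = " ".join(f"{x / img_w:.6f} {y / img_h:.6f}" for x, y in points)
    return f"{cls} {coords}"

# A triangular mask outline for class 3 in a 640x480 image
triangle = [(64, 48), (320, 240), (64, 240)]
print(polygon_to_yolo_seg(triangle, 3, 640, 480))
```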

Pose Estimation

Pose estimation is a technique used to determine the pose of an object relative to the camera or the world coordinate system, typically by predicting the locations of a set of keypoints.

  • COCO: A large-scale dataset with human pose annotations designed for pose estimation tasks.
  • COCO8-pose: A smaller dataset for pose estimation tasks, containing a subset of 8 COCO images with human pose annotations.
  • Tiger-pose: A compact dataset consisting of 263 images focused on tigers, annotated with 12 keypoints per tiger for pose estimation tasks.
  • Hand-Keypoints: A concise dataset featuring over 26,000 images centered on human hands, annotated with 21 keypoints per hand, designed for pose estimation tasks.
  • Dog-pose: A comprehensive dataset featuring approximately 6,000 images focused on dogs, annotated with 24 keypoints per dog, tailored for pose estimation tasks.
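The pose datasets above use the YOLO pose label layout: one object per line with a class index, a normalized bounding box, and an x, y, visibility triple per keypoint. A minimal sketch of building such a line; the `pose_label` helper and the sample values are our own illustration, not part of the ultralytics API:

```python
def pose_label(cls, box, keypoints, img_w, img_h):
    """Build a YOLO pose label: class, normalized box (xc yc w h), then x y visibility per keypoint."""
    xc, yc, w, h = box
    parts = [str(cls), f"{xc / img_w:.6f}", f"{yc / img_h:.6f}", f"{w / img_w:.6f}", f"{h / img_h:.6f}"]
    for x, y, v in keypoints:  # v: 0 = not labeled, 1 = labeled but occluded, 2 = visible
        parts += [f"{x / img_w:.6f}", f"{y / img_h:.6f}", str(v)]
    return " ".join(parts)

# One object with a single visible keypoint at (320, 200) in a 640x480 image
print(pose_label(0, (320, 240, 64, 48), [(320, 200, 2)], 640, 480))
```

Every object in a dataset must carry the same number of keypoints, which is declared in the dataset's data.yaml.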

Classification

Image classification is a computer vision task that involves categorizing an image into one or more predefined classes or categories based on its visual content.

  • Caltech 101: A dataset containing images of 101 object categories for image classification tasks.
  • Caltech 256: An extended version of Caltech 101 with 256 object categories and more challenging images.
  • CIFAR-10: A dataset of 60K 32x32 color images in 10 classes, with 6K images per class.
  • CIFAR-100: An extended version of CIFAR-10 with 100 object categories and 600 images per class.
  • Fashion-MNIST: A dataset consisting of 70,000 grayscale images of 10 fashion categories for image classification tasks.
  • ImageNet: A large-scale dataset for object detection and image classification with over 14 million images and 20,000 categories.
  • ImageNet-10: A smaller subset of ImageNet with 10 categories for faster experimentation and testing.
  • Imagenette: A smaller subset of ImageNet that contains 10 easily distinguishable classes for quicker training and testing.
  • Imagewoof: A more challenging subset of ImageNet containing 10 dog breed categories for image classification tasks.
  • MNIST: A dataset of 70,000 grayscale images of handwritten digits for image classification tasks.
  • MNIST160: First 8 images of each MNIST category from the MNIST dataset. Dataset contains 160 images total.

Oriented Bounding Boxes (OBB)

Oriented bounding box (OBB) detection is a computer vision method for detecting angled objects in images using rotated bounding boxes, often applied to aerial and satellite imagery.

  • DOTA-v2: A popular OBB aerial imagery dataset with 1.7 million instances and 11,268 images.
  • DOTA8: A small subset of the first 8 images from the DOTAv1 set, 4 for training and 4 for validation, suitable for quick tests.
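OBB labels store a class index followed by the four corner points of the rotated box, normalized by image size. As a sketch of how such corners relate to a center, size, and angle, and how the label line is assembled (the helper names here are our own illustration, not ultralytics functions):

```python
import math


def obb_corners(xc, yc, w, h, angle_deg):
    """Corner points of a w-by-h box rotated angle_deg about its center (pixel coordinates)."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    return [
        (xc + dx * cos_a - dy * sin_a, yc + dx * sin_a + dy * cos_a)
        for dx, dy in ((-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2))
    ]


def obb_label(cls, corners, img_w, img_h):
    """Format an OBB label: class index followed by the 4 normalized corner points."""
    coords = " ".join(f"{x / img_w:.6f} {y / img_h:.6f}" for x, y in corners)
    return f"{cls} {coords}"


# A 100x40 box centered at (320, 240), rotated 30 degrees, in a 640x480 image
print(obb_label(0, obb_corners(320, 240, 100, 40, 30), 640, 480))
```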

Multi-Object Tracking

Multi-object tracking is a computer vision technique that involves detecting and tracking multiple objects over time in a video sequence.

  • Argoverse: A dataset containing 3D tracking and motion forecasting data from urban environments with rich annotations for multi-object tracking tasks.
  • VisDrone: A dataset containing object detection and multi-object tracking data from drone-captured imagery with over 10K images and video sequences.

Contribute New Datasets

Contributing a new dataset involves several steps to ensure that it aligns well with the existing infrastructure. Below are the necessary steps:

Steps to Contribute a New Dataset

  1. Collect Images: Gather the images that belong to the dataset. These could be collected from various sources, such as public databases or your own collection.
  2. Annotate Images: Annotate these images with bounding boxes, segments, or keypoints, depending on the task.
  3. Export Annotations: Convert these annotations into the YOLO *.txt file format which Ultralytics supports.
  4. Organize Dataset: Arrange your dataset into the correct folder structure. You should have train/ and val/ top-level directories, and within each, an images/ and labels/ subdirectory.

    dataset/
    ├── train/
    │   ├── images/
    │   └── labels/
    └── val/
        ├── images/
        └── labels/
    
  5. Create a data.yaml File: In your dataset's root directory, create a data.yaml file that describes the dataset, classes, and other necessary information.

  6. Optimize Images (Optional): If you want to reduce the size of the dataset for more efficient processing, you can optimize the images using the code below. This is not required, but is recommended because it produces a smaller dataset and faster download speeds.
  7. Zip Dataset: Compress the entire dataset folder into a zip file.
  8. Document and PR: Create a documentation page describing your dataset and how it fits into the existing framework. After that, submit a Pull Request (PR). Refer to Ultralytics Contribution Guidelines for more details on how to submit a PR.
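For step 5, a minimal data.yaml sketch for a hypothetical two-class detection dataset; the class names here are placeholders, and the dataset-specific guides list additional supported fields:

```yaml
# data.yaml — minimal sketch, assuming the folder structure from step 4
path: dataset  # dataset root directory
train: train/images  # train images, relative to path
val: val/images  # val images, relative to path

names:
  0: cat
  1: dog
```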

Example Code to Optimize and Zip a Dataset

Optimize and Zip a Dataset

from pathlib import Path

from ultralytics.data.utils import compress_one_image
from ultralytics.utils.downloads import zip_directory

# Define dataset directory
path = Path("path/to/dataset")

# Optimize images in dataset (optional)
for f in path.rglob("*.jpg"):
    compress_one_image(f)

# Zip dataset into 'path/to/dataset.zip'
zip_directory(path)

By following these steps, you can contribute a new dataset that integrates well with Ultralytics' existing structure.

FAQ

What datasets does Ultralytics support for object detection?

Ultralytics supports a wide variety of datasets for object detection, including:

  • COCO: A large-scale object detection, segmentation, and captioning dataset with 80 object categories.
  • LVIS: An extensive dataset with 1203 object categories, designed for more fine-grained object detection and segmentation.
  • Argoverse: A dataset containing 3D tracking and motion forecasting data from urban environments with rich annotations.
  • VisDrone: A dataset with object detection and multi-object tracking data from drone-captured imagery.
  • SKU-110K: Featuring dense object detection in retail environments with over 11K images.

These datasets facilitate training robust models for various object detection applications.

How do I contribute a new dataset to Ultralytics?

Contributing a new dataset involves several steps:

  1. Collect Images: Gather images from public databases or personal collections.
  2. Annotate Images: Apply bounding boxes, segments, or keypoints, depending on the task.
  3. Export Annotations: Convert annotations into the YOLO *.txt format.
  4. Organize Dataset: Use the folder structure with train/ and val/ directories, each containing images/ and labels/ subdirectories.
  5. Create a data.yaml File: Include dataset descriptions, classes, and other relevant information.
  6. Optimize Images (Optional): Reduce dataset size for efficiency.
  7. Zip Dataset: Compress the dataset into a zip file.
  8. Document and PR: Describe your dataset and submit a Pull Request following Ultralytics Contribution Guidelines.

Visit Contribute New Datasets for a comprehensive guide.

Why should I use Ultralytics Explorer for my dataset?

Ultralytics Explorer offers powerful features for dataset analysis, including:

  • Embeddings Generation: Create vector embeddings for images.
  • Semantic Search: Search for similar images using embeddings or AI.
  • SQL Queries: Run advanced SQL queries for detailed data analysis.
  • Natural Language Search: Search using plain language queries for ease of use.

Explore the Ultralytics Explorer for more information and to try the GUI Demo.

What are the unique features of Ultralytics YOLO models for computer vision?

Ultralytics YOLO models provide several unique features:

  • Real-time Performance: High-speed inference and training.
  • Versatility: Suitable for detection, segmentation, classification, and pose estimation tasks.
  • Pretrained Models: Access to high-performing, pretrained models for various applications.
  • Extensive Community Support: Active community and comprehensive documentation for troubleshooting and development.

Discover more about YOLO on the Ultralytics YOLO page.

How can I optimize and zip a dataset using Ultralytics tools?

To optimize and zip a dataset using Ultralytics tools, follow this example code:

Optimize and Zip a Dataset

from pathlib import Path

from ultralytics.data.utils import compress_one_image
from ultralytics.utils.downloads import zip_directory

# Define dataset directory
path = Path("path/to/dataset")

# Optimize images in dataset (optional)
for f in path.rglob("*.jpg"):
    compress_one_image(f)

# Zip dataset into 'path/to/dataset.zip'
zip_directory(path)

Learn more on how to Optimize and Zip a Dataset.
