
TT100K Dataset

The Tsinghua-Tencent 100K (TT100K) is a large-scale traffic sign benchmark dataset created from 100,000 Tencent Street View panoramas. This dataset is specifically designed for traffic sign detection and classification in real-world conditions, providing researchers and developers with a comprehensive resource for building robust traffic sign recognition systems.

The dataset contains 100,000 images with over 30,000 traffic sign instances across 221 different categories. These images capture large variations in illuminance, weather conditions, viewing angles, and distances, making it ideal for training models that need to perform reliably in diverse real-world scenarios.

This dataset is particularly valuable for:

  • Autonomous driving systems
  • Advanced driver assistance systems (ADAS)
  • Traffic monitoring applications
  • Urban planning and traffic analysis
  • Computer vision research in real-world conditions

Key Features

The TT100K dataset provides several key advantages:

  • Scale: 100,000 high-resolution images (2048×2048 pixels)
  • Diversity: 221 traffic sign categories covering Chinese traffic signs
  • Real-world conditions: Large variations in weather, illumination, and viewing angles
  • Rich annotations: Each sign instance includes a class label, bounding box, and pixel mask (see the annotation sketch after this list)
  • Comprehensive coverage: Includes prohibitory, warning, mandatory, and informative signs
  • Train/Test split: Pre-defined splits for consistent evaluation
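
The rich annotations ship as a single annotations.json file in the raw download. The snippet below is a minimal sketch of inspecting one entry; it assumes the archive has been extracted to datasets/TT100K/data (the layout used by the conversion script further down this page) and only touches the class label and bounding box fields, not the pixel masks.

import json
from pathlib import Path

# Path assumes the raw archive was extracted to datasets/TT100K/data; adjust as needed
anno_file = Path("datasets/TT100K/data/annotations.json")

with open(anno_file, encoding="utf-8") as f:
    data = json.load(f)

# data["imgs"] maps an image id to its path and a list of sign objects, each carrying
# a class "category" and a "bbox" with xmin/ymin/xmax/ymax pixel coordinates
img_id, img_data = next(iter(data["imgs"].items()))
print(img_id, img_data["path"])
for obj in img_data.get("objects", []):
    print(obj["category"], obj["bbox"])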

Dataset Structure

The TT100K dataset is split into three subsets (a short sketch after the list shows how to verify the converted layout):

  1. Training Set: The primary collection of traffic-scene images used to train models for detecting and classifying different types of traffic signs.
  2. Validation Set: A subset used during model development to monitor performance and tune hyperparameters.
  3. Test Set: A held-out collection of images used to evaluate the final model's ability to detect and classify traffic signs in real-world scenarios.
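
After the automatic conversion described later on this page, each split is stored under images/{split} and labels/{split}. The following sketch, which assumes the default datasets/TT100K location, simply counts the files in each split as a sanity check.

from pathlib import Path

root = Path("datasets/TT100K")  # default location used by TT100K.yaml

for split in ("train", "val", "test"):
    n_imgs = len(list((root / "images" / split).glob("*.jpg")))  # TT100K images are JPEGs
    n_lbls = len(list((root / "labels" / split).glob("*.txt")))
    print(f"{split}: {n_imgs} images, {n_lbls} label files")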

The TT100K dataset includes 221 traffic sign categories organized into several major groups, identified by their class-name prefixes (a short grouping sketch follows the list below):

Speed and Weight Limit Signs (pl, pm)

  1. pl_: Prohibitory speed limit signs (pl5, pl10, pl20, pl30, pl40, pl50, pl60, pl70, pl80, pl100, pl120)
  2. pm_: Weight limit signs in tonnes (pm5, pm10, pm20, pm30, pm40, pm50, pm55)

Prohibitory Signs (p, pn, pr_)

  1. p1-p28: General prohibitory signs (no entry, no parking, no stopping, etc.)
  2. pn/pne: No parking (pn) and no entry (pne) signs
  3. pr_: End-of-restriction signs that cancel a posted speed limit (pr10, pr20, pr30, pr40, pr50, etc.)

Warning Signs (w_)

  1. w1-w66: Warning signs for various road hazards, conditions, and situations
  2. Includes pedestrian crossings, sharp turns, slippery roads, animals, construction, etc.

Height/Width Limit Signs (ph, pb)

  1. ph_: Height limit signs (ph2, ph2.5, ph3, ph3.5, ph4, ph4.5, ph5, etc.)
  2. pb_: Width limit signs

Mandatory and Informative Signs (i, il, io, ip)

  1. i1-i15: General mandatory/indication signs
  2. il_: Minimum speed limit signs (il60, il80, il100, il110)
  3. io: Other informative signs
  4. ip: Information plates
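
The groups above follow the prefixes of the class names. The sketch below is a rough, unofficial illustration of bucketing class names by prefix; the group labels mirror the descriptions in this section and are not part of the dataset itself.

# Longer prefixes are checked before shorter ones so that, for example,
# "pl40" is not swallowed by the bare "p" prefix
GROUP_PREFIXES = [
    ("pne", "no entry"),
    ("pn", "no parking"),
    ("pl", "speed limit"),
    ("pm", "weight limit"),
    ("pr", "end of speed limit"),
    ("ph", "height limit"),
    ("pb", "width limit"),
    ("pa", "axle-load limit"),  # assumption based on the pa8-pa14 class names
    ("p", "other prohibitory"),
    ("il", "minimum speed limit"),
    ("ip", "information plate"),
    ("io", "other informative"),
    ("i", "mandatory/informative"),
    ("w", "warning"),
]


def sign_group(name: str) -> str:
    """Return the coarse group for a TT100K class name such as 'pl40' or 'w57'."""
    for prefix, group in GROUP_PREFIXES:
        if name.startswith(prefix):
            return group
    return "unknown"


for name in ("pl40", "pm55", "pne", "ph2.5", "w57", "il80", "i5"):
    print(name, "->", sign_group(name))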

Dataset YAML

A YAML (YAML Ain't Markup Language) file is used to define the dataset configuration. It contains the dataset's paths, class names, and other configuration details. For the TT100K dataset, the TT100K.yaml file includes automatic download and conversion functionality.

ultralytics/cfg/datasets/TT100K.yaml

# Ultralytics 🚀 AGPL-3.0 License - https://ultralytics.com/license

# Tsinghua-Tencent 100K (TT100K) dataset https://cg.cs.tsinghua.edu.cn/traffic-sign/ by Tsinghua University
# Documentation: https://cg.cs.tsinghua.edu.cn/traffic-sign/tutorial.html
# Paper: Traffic-Sign Detection and Classification in the Wild (CVPR 2016)
# License: CC BY-NC 2.0 license for non-commercial use only
# Example usage: yolo train data=TT100K.yaml
# parent
# ├── ultralytics
# └── datasets
#     └── TT100K ← downloads here (~18 GB)

# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: TT100K # dataset root dir
train: images/train # train images (relative to 'path') 6105 images
val: images/val # val images (relative to 'path') 7641 images (original 'other' split)
test: images/test # test images (relative to 'path') 3071 images

# Classes (221 traffic sign categories, 45 with sufficient training instances)
names:
  0: pl5
  1: pl10
  2: pl15
  3: pl20
  4: pl25
  5: pl30
  6: pl40
  7: pl50
  8: pl60
  9: pl70
  10: pl80
  11: pl90
  12: pl100
  13: pl110
  14: pl120
  15: pm5
  16: pm10
  17: pm13
  18: pm15
  19: pm20
  20: pm25
  21: pm30
  22: pm35
  23: pm40
  24: pm46
  25: pm50
  26: pm55
  27: pm8
  28: pn
  29: pne
  30: ph4
  31: ph4.5
  32: ph5
  33: ps
  34: pg
  35: ph1.5
  36: ph2
  37: ph2.1
  38: ph2.2
  39: ph2.4
  40: ph2.5
  41: ph2.8
  42: ph2.9
  43: ph3
  44: ph3.2
  45: ph3.5
  46: ph3.8
  47: ph4.2
  48: ph4.3
  49: ph4.8
  50: ph5.3
  51: ph5.5
  52: pb
  53: pr10
  54: pr100
  55: pr20
  56: pr30
  57: pr40
  58: pr45
  59: pr50
  60: pr60
  61: pr70
  62: pr80
  63: pr90
  64: p1
  65: p2
  66: p3
  67: p4
  68: p5
  69: p6
  70: p7
  71: p8
  72: p9
  73: p10
  74: p11
  75: p12
  76: p13
  77: p14
  78: p15
  79: p16
  80: p17
  81: p18
  82: p19
  83: p20
  84: p21
  85: p22
  86: p23
  87: p24
  88: p25
  89: p26
  90: p27
  91: p28
  92: pa8
  93: pa10
  94: pa12
  95: pa13
  96: pa14
  97: pb5
  98: pc
  99: pg
  100: ph1
  101: ph1.3
  102: ph1.5
  103: ph2
  104: ph3
  105: ph4
  106: ph5
  107: pi
  108: pl0
  109: pl4
  110: pl5
  111: pl8
  112: pl10
  113: pl15
  114: pl20
  115: pl25
  116: pl30
  117: pl35
  118: pl40
  119: pl50
  120: pl60
  121: pl65
  122: pl70
  123: pl80
  124: pl90
  125: pl100
  126: pl110
  127: pl120
  128: pm2
  129: pm8
  130: pm10
  131: pm13
  132: pm15
  133: pm20
  134: pm25
  135: pm30
  136: pm35
  137: pm40
  138: pm46
  139: pm50
  140: pm55
  141: pn
  142: pne
  143: po
  144: pr10
  145: pr100
  146: pr20
  147: pr30
  148: pr40
  149: pr45
  150: pr50
  151: pr60
  152: pr70
  153: pr80
  154: ps
  155: w1
  156: w2
  157: w3
  158: w5
  159: w8
  160: w10
  161: w12
  162: w13
  163: w16
  164: w18
  165: w20
  166: w21
  167: w22
  168: w24
  169: w28
  170: w30
  171: w31
  172: w32
  173: w34
  174: w35
  175: w37
  176: w38
  177: w41
  178: w42
  179: w43
  180: w44
  181: w45
  182: w46
  183: w47
  184: w48
  185: w49
  186: w50
  187: w51
  188: w52
  189: w53
  190: w54
  191: w55
  192: w56
  193: w57
  194: w58
  195: w59
  196: w60
  197: w62
  198: w63
  199: w66
  200: i1
  201: i2
  202: i3
  203: i4
  204: i5
  205: i6
  206: i7
  207: i8
  208: i9
  209: i10
  210: i11
  211: i12
  212: i13
  213: i14
  214: i15
  215: il60
  216: il80
  217: il100
  218: il110
  219: io
  220: ip

# Download script/URL (optional) ---------------------------------------------------------------------------------------
download: |
  import json
  import shutil
  from pathlib import Path

  from PIL import Image

  from ultralytics.utils import TQDM
  from ultralytics.utils.downloads import download


  def tt100k2yolo(dir):
      """Convert TT100K annotations to YOLO format with images/{split} and labels/{split} structure."""
      data_dir = dir / "data"
      anno_file = data_dir / "annotations.json"

      print("Loading annotations...")
      with open(anno_file, encoding="utf-8") as f:
          data = json.load(f)

      # Build class name to index mapping from yaml
      names = yaml["names"]
      class_to_idx = {v: k for k, v in names.items()}

      # Create directories
      for split in ["train", "val", "test"]:
          (dir / "images" / split).mkdir(parents=True, exist_ok=True)
          (dir / "labels" / split).mkdir(parents=True, exist_ok=True)

      print("Converting annotations to YOLO format...")
      skipped = 0
      for img_id, img_data in TQDM(data["imgs"].items(), desc="Processing"):
          img_path_str = img_data["path"]
          if "train" in img_path_str:
              split = "train"
          elif "test" in img_path_str:
              split = "test"
          else:
              split = "val"

          # Source and destination paths
          src_img = data_dir / img_path_str
          if not src_img.exists():
              continue

          dst_img = dir / "images" / split / src_img.name

          # Get image dimensions
          try:
              with Image.open(src_img) as img:
                  img_width, img_height = img.size
          except Exception as e:
              print(f"Error reading {src_img}: {e}")
              continue

          # Copy image to destination
          shutil.copy2(src_img, dst_img)

          # Convert annotations
          label_file = dir / "labels" / split / f"{src_img.stem}.txt"
          lines = []

          for obj in img_data.get("objects", []):
              category = obj["category"]
              if category not in class_to_idx:
                  skipped += 1
                  continue

              bbox = obj["bbox"]
              xmin, ymin = bbox["xmin"], bbox["ymin"]
              xmax, ymax = bbox["xmax"], bbox["ymax"]

              # Convert to YOLO format (normalized center coordinates and dimensions)
              x_center = ((xmin + xmax) / 2.0) / img_width
              y_center = ((ymin + ymax) / 2.0) / img_height
              width = (xmax - xmin) / img_width
              height = (ymax - ymin) / img_height

              # Clip to valid range
              x_center = max(0, min(1, x_center))
              y_center = max(0, min(1, y_center))
              width = max(0, min(1, width))
              height = max(0, min(1, height))

              cls_idx = class_to_idx[category]
              lines.append(f"{cls_idx} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}\n")

          # Write label file
          if lines:
              label_file.write_text("".join(lines), encoding="utf-8")

      if skipped:
          print(f"Skipped {skipped} annotations with unknown categories")
      print("Conversion complete!")


  # Download
  dir = Path(yaml["path"])  # dataset root dir
  urls = ["https://cg.cs.tsinghua.edu.cn/traffic-sign/data_model_code/data.zip"]
  download(urls, dir=dir, curl=True, threads=1)

  # Convert
  tt100k2yolo(dir)

Usage

To train a YOLO11 model on the TT100K dataset for 100 epochs with an image size of 640, you can use the following code snippets. The dataset will be automatically downloaded and converted to YOLO format on first use.

Train Example

from ultralytics import YOLO

# Load a model
model = YOLO("yolo11n.pt")  # load a pretrained model (recommended for training)

# Train the model - dataset will auto-download and convert on first run
results = model.train(data="TT100K.yaml", epochs=100, imgsz=640)

# CLI alternative: start training from a pretrained *.pt model
# Dataset will auto-download and convert on first run
yolo detect train data=TT100K.yaml model=yolo11n.pt epochs=100 imgsz=640
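
If you want to stage the dataset before launching a training run, a minimal sketch using the dataset check utility is shown below; it assumes a recent ultralytics release where check_det_dataset is available in ultralytics.data.utils.

from ultralytics.data.utils import check_det_dataset

# Resolves TT100K.yaml and, if the dataset is missing, runs the download/conversion
# script defined in the YAML before returning the parsed dataset dictionary
info = check_det_dataset("TT100K.yaml")
print(info["train"], info["val"], info["test"])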

Sample Images and Annotations

Here are typical examples from the TT100K dataset:

  1. Urban environments: Street scenes with multiple traffic signs at various distances
  2. Highway scenes: High-speed road signs including speed limits and direction indicators
  3. Complex intersections: Multiple signs in close proximity with varying orientations
  4. Challenging conditions: Signs under different lighting (day/night), weather (rain/fog), and viewing angles

The dataset includes:

  1. Close-up signs: Large, clearly visible signs occupying significant image area
  2. Distant signs: Small signs requiring fine-grained detection capabilities
  3. Partially occluded signs: Signs partially blocked by vehicles, trees, or other objects
  4. Multiple signs per image: Images containing several different sign types

Citations and Acknowledgments

If you use the TT100K dataset in your research or development work, please cite the following paper:

@InProceedings{Zhu_2016_CVPR,
    author = {Zhu, Zhe and Liang, Dun and Zhang, Songhai and Huang, Xiaolei and Li, Baoli and Hu, Shimin},
    title = {Traffic-Sign Detection and Classification in the Wild},
    booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {June},
    year = {2016}
}

We would like to thank Tsinghua University and Tencent for creating and maintaining this valuable resource for the computer vision and autonomous driving communities. For more information about the TT100K dataset, visit the official dataset website.

FAQ

What is the TT100K dataset used for?

The Tsinghua-Tencent 100K (TT100K) dataset is specifically designed for traffic sign detection and classification in real-world conditions. It's primarily used for:

  1. Training autonomous driving perception systems
  2. Developing Advanced Driver Assistance Systems (ADAS)
  3. Research in robust object detection under varying conditions
  4. Benchmarking traffic sign recognition algorithms
  5. Testing model performance on small objects in large images

With 100,000 diverse street view images and 221 traffic sign categories, it provides a comprehensive testbed for real-world traffic sign detection.

How many traffic sign categories are in TT100K?

The TT100K dataset contains 221 different traffic sign categories, including:

  1. Speed limits: pl5 through pl120 (prohibitory limits) and pm5 through pm55 (minimum speeds)
  2. Prohibitory signs: 28+ general prohibition types (p1-p28) plus restrictions (pr*, pn, pne)
  3. Warning signs: 60+ warning categories (w1-w66)
  4. Height/width limits: ph and pb series for physical restrictions
  5. Informative signs: i1-i15, il*, io, ip for guidance and information

This comprehensive coverage includes most traffic signs found in Chinese road networks.

How can I train a YOLO11n model using the TT100K dataset?

To train a YOLO11n model on the TT100K dataset for 100 epochs with an image size of 640, use the example below.

Train Example

from ultralytics import YOLO

# Load a model
model = YOLO("yolo11n.pt")  # load a pretrained model (recommended for training)

# Train the model
results = model.train(data="TT100K.yaml", epochs=100, imgsz=640)

# CLI alternative: start training from a pretrained *.pt model
yolo detect train data=TT100K.yaml model=yolo11n.pt epochs=100 imgsz=640

For detailed training configurations, refer to the Training documentation.

What makes TT100K challenging compared to other datasets?

TT100K presents several unique challenges:

  1. Scale variation: Signs range from very small (distant highway signs) to large (close-up urban signs)
  2. Real-world conditions: Extreme variations in lighting, weather, and viewing angles
  3. High resolution: 2048×2048 pixel images require significant processing power
  4. Class imbalance: Some sign types are far more common than others (see the counting sketch after this list)
  5. Dense scenes: Multiple signs may appear in a single image
  6. Partial occlusion: Signs may be partially blocked by vehicles, vegetation, or structures

These challenges make TT100K a valuable benchmark for developing robust detection algorithms.
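
To see the class imbalance for yourself, the rough sketch below counts training instances per class from the converted label files; it assumes the dataset sits under datasets/TT100K and that a local copy of TT100K.yaml is available for the class names.

from collections import Counter
from pathlib import Path

import yaml

# Assumes a local copy of TT100K.yaml for the class names and the converted labels
# under datasets/TT100K/labels/train; adjust both paths for your setup
names = yaml.safe_load(Path("TT100K.yaml").read_text())["names"]

counts = Counter()
for label_file in Path("datasets/TT100K/labels/train").glob("*.txt"):
    for line in label_file.read_text().splitlines():
        counts[int(line.split()[0])] += 1  # first token of each row is the class index

print("Most common classes:")
for cls_idx, n in counts.most_common(10):
    print(f"  {names[cls_idx]}: {n}")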

How do I handle the large image sizes in TT100K?

The TT100K dataset uses 2048×2048 pixel images, which can be resource-intensive. Here are recommended strategies:

For Training:

# Option 1: Resize to standard YOLO size
model.train(data="TT100K.yaml", imgsz=640, batch=16)

# Option 2: Use larger size for better small object detection
model.train(data="TT100K.yaml", imgsz=1280, batch=4)

# Option 3: Scale augmentation to vary object sizes during training
model.train(data="TT100K.yaml", imgsz=640, scale=0.5)  # random scale gain of ±50%

Recommendations:

  • Start with imgsz=640 for initial experiments
  • Use imgsz=1280 if you have sufficient GPU memory (24GB+)
  • Consider tiling strategies for very small signs (a minimal tiling sketch follows these recommendations)
  • Use gradient accumulation to simulate larger batch sizes
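
The following is a minimal tiling sketch, not an Ultralytics utility: it splits a 2048×2048 frame into overlapping 1280×1280 crops so that small, distant signs occupy more pixels per crop. The image path is a placeholder.

from PIL import Image


def tile_starts(size, tile, step):
    """Tile start offsets covering [0, size), with the final tile flush to the edge."""
    starts = list(range(0, max(size - tile, 0) + 1, step))
    if starts[-1] + tile < size:
        starts.append(max(size - tile, 0))
    return starts


def tile_image(path, tile=1280, overlap=256):
    """Yield (crop, x0, y0) tiles covering the whole image with the given overlap."""
    img = Image.open(path)
    w, h = img.size
    step = tile - overlap
    for y0 in tile_starts(h, tile, step):
        for x0 in tile_starts(w, tile, step):
            yield img.crop((x0, y0, x0 + tile, y0 + tile)), x0, y0


# Placeholder path: point this at any 2048x2048 frame from the converted dataset
for crop, x0, y0 in tile_image("datasets/TT100K/images/val/example.jpg"):
    print(f"tile at ({x0}, {y0}): {crop.size}")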

