Annotation Editor

Ultralytics Platform includes a powerful annotation editor for labeling images with bounding boxes, polygons, keypoints, oriented boxes, and classifications. The editor supports manual annotation, SAM-powered smart annotation, and YOLO auto-labeling.

```mermaid
graph TB
    subgraph Manual["✏️ Manual Tools"]
        A[Box] & B[Polygon] & C[Keypoint] & D[OBB] & E[Classify]
    end
    subgraph AI["🤖 AI-Assisted"]
        F[SAM Smart] & G[Auto-Annotate]
    end
    Manual --> H[📁 Save Labels]
    AI --> H
```

Supported Task Types

The annotation editor supports all 5 YOLO task types:

| Task     | Tool           | Annotation Format                       |
|----------|----------------|-----------------------------------------|
| Detect   | Rectangle      | Bounding boxes (x, y, width, height)    |
| Segment  | Polygon        | Pixel-precise masks (polygon vertices)  |
| Pose     | Keypoint       | 17-point COCO skeleton                  |
| OBB      | Oriented Box   | Rotated bounding boxes (4 corners)      |
| Classify | Class Selector | Image-level labels                      |

Task Details

Object Detection

What it does: Identifies objects and their locations with axis-aligned bounding boxes.

Label format: class_id center_x center_y width height (all normalized 0-1)

Example: 0 0.5 0.5 0.2 0.3 — Class 0 centered at (50%, 50%) with 20% width and 30% height
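To make the normalized format concrete, here is a minimal Python sketch that converts such a label line into pixel corner coordinates; the 640x480 image size is an arbitrary assumption for illustration.

```python
# Convert a normalized YOLO detect label line to pixel coordinates.
# The 640x480 image size is an example value, not a Platform default.
line = "0 0.5 0.5 0.2 0.3"
img_w, img_h = 640, 480

class_id, cx, cy, w, h = line.split()
cx, cy, w, h = float(cx) * img_w, float(cy) * img_h, float(w) * img_w, float(h) * img_h

# Convert center/size to corner coordinates (x1, y1, x2, y2)
x1, y1 = cx - w / 2, cy - h / 2
x2, y2 = cx + w / 2, cy + h / 2
print(int(class_id), (x1, y1, x2, y2))  # 0 (256.0, 168.0, 384.0, 312.0)
```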

Use cases: Inventory counting, traffic monitoring, wildlife detection, security systems

Instance Segmentation

What it does: Creates pixel-precise masks for each object instance.

Label format: class_id x1 y1 x2 y2 x3 y3 ... (polygon vertices, normalized 0-1)

Example: 0 0.1 0.1 0.9 0.1 0.9 0.9 0.1 0.9 — Quadrilateral mask
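The coordinates alternate x, y, x, y along the polygon. A minimal sketch of turning such a line into pixel-space vertices (again assuming an arbitrary 640x480 image):

```python
# Parse a segmentation label line into pixel-space (x, y) vertices.
line = "0 0.1 0.1 0.9 0.1 0.9 0.9 0.1 0.9"
img_w, img_h = 640, 480  # example image size

parts = line.split()
class_id, coords = int(parts[0]), list(map(float, parts[1:]))

# Pair alternating x, y values and scale to pixels
vertices = [(x * img_w, y * img_h) for x, y in zip(coords[0::2], coords[1::2])]
print(class_id, vertices)  # 0 [(64.0, 48.0), (576.0, 48.0), (576.0, 432.0), (64.0, 432.0)]
```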

Use cases: Medical imaging, autonomous vehicles, photo editing, agricultural analysis

Pose Estimation

What it does: Detects body keypoints for skeleton tracking.

Label format: class_id cx cy w h kx1 ky1 v1 kx2 ky2 v2 ...

  • Visibility flags: 0=not labeled, 1=labeled but occluded, 2=labeled and visible

Example: 0 0.5 0.5 0.2 0.3 0.6 0.7 2 0.4 0.8 1 — Person box with two keypoints shown (a full COCO label has 17)
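A minimal sketch of splitting such a line into the box and its (x, y, visibility) keypoint triplets:

```python
# Parse a pose label line: class, box, then keypoints in (x, y, visibility) triplets.
line = "0 0.5 0.5 0.2 0.3 0.6 0.7 2 0.4 0.8 1"

values = list(map(float, line.split()))
class_id, box, kpt_values = int(values[0]), values[1:5], values[5:]

keypoints = [tuple(kpt_values[i:i + 3]) for i in range(0, len(kpt_values), 3)]
print(class_id, box)  # 0 [0.5, 0.5, 0.2, 0.3]
print(keypoints)      # [(0.6, 0.7, 2.0), (0.4, 0.8, 1.0)]
```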

Use cases: Sports analysis, physical therapy, animation, gesture recognition

Oriented Bounding Box (OBB)

What it does: Detects rotated objects with angle-aware bounding boxes.

Label format: class_id x1 y1 x2 y2 x3 y3 x4 y4 (four corner points, normalized)

Example: 0 0.1 0.1 0.9 0.1 0.9 0.9 0.1 0.9 — Rectangle defined by four corner points (axis-aligned in this example; the corners shift when the box is rotated)
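A minimal sketch of reading the four corners and estimating the rotation angle from the first edge (zero for this axis-aligned example):

```python
import math

# Parse an OBB label line into four (x, y) corners and estimate its rotation angle.
line = "0 0.1 0.1 0.9 0.1 0.9 0.9 0.1 0.9"

values = list(map(float, line.split()))
class_id, coords = int(values[0]), values[1:]
corners = list(zip(coords[0::2], coords[1::2]))

(x1, y1), (x2, y2) = corners[0], corners[1]
angle_deg = math.degrees(math.atan2(y2 - y1, x2 - x1))
print(class_id, corners)  # 0 [(0.1, 0.1), (0.9, 0.1), (0.9, 0.9), (0.1, 0.9)]
print(angle_deg)          # 0.0 for this axis-aligned example
```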

Use cases: Aerial imagery, document analysis, manufacturing inspection, ship detection

Image Classification

What it does: Assigns a single label to the entire image.

Label format: Folder-based — images organized by class name (train/cats/, train/dogs/)
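Since classification labels come from the directory structure, a minimal sketch of listing classes and image counts from such a layout (the dataset root path is a placeholder):

```python
from pathlib import Path

# Derive classification classes from a folder layout like train/cats/, train/dogs/
root = Path("dataset/train")  # placeholder path

for class_dir in sorted(p for p in root.iterdir() if p.is_dir()):
    images = list(class_dir.glob("*.jpg"))
    print(f"{class_dir.name}: {len(images)} images")
```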

Use cases: Content moderation, quality control, medical diagnosis, scene recognition

Getting Started

To annotate images:

  1. Navigate to your dataset
  2. Click on an image to open the fullscreen viewer
  3. Click Edit to enter annotation mode
  4. Select your annotation tool
  5. Draw annotations on the image
  6. Click Save when finished

Manual Annotation Tools

Bounding Box (Detect)

Draw rectangular boxes around objects:

  1. Select the Box tool or press B
  2. Click and drag to draw a rectangle
  3. Release to complete the box
  4. Select a class from the dropdown

Resize and Move

  • Drag corners or edges to resize
  • Drag the center to move
  • Press Delete to remove selected annotation

Polygon (Segment)

Draw precise polygon masks:

  1. Select the Polygon tool or press P
  2. Click to add vertices
  3. Double-click or press Enter to close the polygon
  4. Select a class from the dropdown

Edit Vertices

  • Drag individual vertices to adjust
  • Drag the entire polygon to move
  • Click on a vertex and press Delete to remove it

Keypoint (Pose)

Place 17 COCO keypoints for human pose:

  1. Select the Keypoint tool or press K
  2. Click to place keypoints in sequence
  3. Follow the COCO skeleton order

The 17 COCO keypoints are:

| #  | Keypoint       | #  | Keypoint    |
|----|----------------|----|-------------|
| 1  | Nose           | 10 | Left wrist  |
| 2  | Left eye       | 11 | Right wrist |
| 3  | Right eye      | 12 | Left hip    |
| 4  | Left ear       | 13 | Right hip   |
| 5  | Right ear      | 14 | Left knee   |
| 6  | Left shoulder  | 15 | Right knee  |
| 7  | Right shoulder | 16 | Left ankle  |
| 8  | Left elbow     | 17 | Right ankle |
| 9  | Right elbow    |    |             |

Oriented Bounding Box (OBB)

Draw rotated boxes for angled objects:

  1. Select the OBB tool or press O
  2. Click and drag to draw an initial box
  3. Use the rotation handle to adjust angle
  4. Select a class from the dropdown

Classification (Classify)

Assign image-level class labels:

  1. Select the Classify mode
  2. Click on class buttons or press number keys 1-9
  3. Multiple classes can be assigned per image

SAM Smart Annotation

Segment Anything Model (SAM) enables intelligent annotation with just a few clicks:

  1. Select SAM mode or press S
  2. Left-click to add positive points (include this area)
  3. Right-click to add negative points (exclude this area)
  4. SAM generates a precise mask in real-time
  5. Click Accept to convert to annotation

SAM Tips

  • Start with a positive click on the object center
  • Add negative clicks to exclude background
  • Works best for distinct objects with clear edges

SAM smart annotation can generate:

  • Polygons for segmentation tasks
  • Bounding boxes for detection tasks
  • Oriented boxes for OBB tasks
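The editor runs SAM interactively in the browser. The same positive/negative point-prompt idea can be sketched with the ultralytics Python package; the weight file and image path below are placeholders, not values the Platform uses.

```python
from ultralytics import SAM

# Point-prompted segmentation, mirroring the editor's positive/negative clicks
model = SAM("sam2.1_b.pt")  # placeholder SAM weights

# One positive point (label 1) on the object, one negative point (label 0) on background
results = model("image.jpg", points=[[450, 300], [100, 100]], labels=[1, 0])
results[0].show()  # visualize the generated mask
```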

YOLO Auto-Annotation

Use trained YOLO models to automatically label images:

  1. Select Auto-Annotate mode or press A
  2. Choose a model (official or your trained models)
  3. Set confidence threshold
  4. Click Run to generate predictions
  5. Review and edit results as needed

Auto-Annotation Models

You can use:

  • Official Ultralytics models (YOLO26n, YOLO26s, etc.)
  • Your own trained models from the Platform
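For reference, a minimal offline sketch of the same idea with the ultralytics package: run a trained model over a folder and write YOLO-format label files. The model name and image directory are placeholders, not Platform defaults.

```python
from ultralytics import YOLO

# Auto-label a folder of images with a trained detection model
model = YOLO("yolo11n.pt")  # placeholder weights; official or custom-trained models both work

# Run inference above a confidence threshold and save YOLO-format .txt labels
results = model.predict(source="datasets/my_images", conf=0.25, save_txt=True)
print(f"Labeled {len(results)} images")
```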

Class Management

Creating Classes

Define annotation classes for your dataset:

  1. Click Add Class in the class panel
  2. Enter the class name
  3. A color is assigned automatically

Add New Class During Annotation

You can create new classes directly while annotating without leaving the editor:

  1. Draw an annotation on the image
  2. In the class dropdown, click Add New Class
  3. Enter the class name
  4. Press Enter to create and assign

This allows for a seamless workflow where you can define classes as you encounter new object types in your data.

Unified Classes Table

All classes across your dataset are managed in a unified table. Changes to class names or colors apply throughout the entire dataset automatically.

Editing Classes

  • Click on a class to select it for new annotations
  • Double-click to rename
  • Drag to reorder
  • Right-click for more options

Class Colors

Each class is assigned a color from the Ultralytics palette. Colors are consistent across the Platform for easy recognition.

Keyboard Shortcuts

Efficient annotation with keyboard shortcuts:

| Shortcut | Action                     |
|----------|----------------------------|
| B        | Box tool (detect)          |
| P        | Polygon tool (segment)     |
| K        | Keypoint tool (pose)       |
| O        | OBB tool                   |
| S        | SAM smart annotation       |
| A        | Auto-annotate              |
| V        | Select/move mode           |
| 1-9      | Select class 1-9           |
| Delete   | Delete selected annotation |
| Ctrl+Z   | Undo                       |
| Ctrl+Y   | Redo                       |
| Escape   | Cancel current operation   |
| Enter    | Complete polygon           |
| ←/→      | Previous/next image        |
View All Shortcuts

Press ? to open the keyboard shortcuts dialog.

Undo/Redo

The annotation editor maintains a full history:

  • Undo: Ctrl+Z (Cmd+Z on Mac)
  • Redo: Ctrl+Y (Cmd+Y on Mac)

History includes:

  • Adding annotations
  • Editing annotations
  • Deleting annotations
  • Changing classes

Saving Annotations

Annotations are saved when you click Save; navigating away without saving triggers a warning:

  • Save: Click the save button or press Ctrl+S
  • Cancel: Click cancel to discard changes
  • Auto-save warning: Unsaved changes prompt before leaving

Save Your Work

Always save before navigating to another image. Unsaved changes will be lost.

FAQ

How accurate is SAM annotation?

SAM provides high-quality masks for most objects. Accuracy depends on:

  • Object distinctiveness from background
  • Image quality and resolution
  • Number of positive/negative points provided

For best results, start with a positive point on the object center and add negative points to exclude nearby objects.

Can I import existing annotations?

Yes, upload your dataset with YOLO-format label files. The Platform automatically parses and displays them in the editor.
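Before uploading, it can help to check that every image has a matching label file. A minimal sketch, assuming the common images/ and labels/ directory convention (the paths are placeholders):

```python
from pathlib import Path

# Verify image/label pairing in a YOLO-format dataset before upload
images = Path("dataset/images/train")  # placeholder paths
labels = Path("dataset/labels/train")

for img in sorted(images.glob("*.jpg")):
    label = labels / f"{img.stem}.txt"
    status = "ok" if label.exists() else "missing label"
    print(f"{img.name}: {status}")
```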

How do I annotate multiple objects of the same class?

After drawing an annotation:

  1. Keep the same class selected
  2. Draw the next annotation
  3. Repeat until all objects are labeled

The keyboard shortcuts 1-9 quickly select classes.

What's the difference between SAM and auto-annotate?

| Feature   | SAM                           | Auto-Annotate                 |
|-----------|-------------------------------|-------------------------------|
| Method    | Interactive point prompts     | Model inference               |
| Speed     | One object at a time          | All objects at once           |
| Precision | Very high with guidance       | Depends on model              |
| Best for  | Complex objects, fine details | Bulk labeling, simple objects |

Can I train on partially annotated datasets?

Yes, but for best results:

  • Label all objects of your target classes in each image
  • Use the unknown split for unlabeled images
  • Exclude unlabeled images from training configuration

