
Annotation Editor

Ultralytics Platform includes a powerful annotation editor for labeling images with bounding boxes, polygons, keypoints, oriented boxes, and classifications. The editor supports manual annotation, SAM-powered smart annotation, and YOLO auto-labeling.

```mermaid
graph TB
    subgraph Manual["✏️ Manual Tools"]
        A[Box] & B[Polygon] & C[Keypoint] & D[OBB] & E[Classify]
    end
    subgraph AI["🤖 AI-Assisted"]
        F[SAM Smart] & G[Auto-Annotate]
    end
    Manual --> H[📁 Save Labels]
    AI --> H
```

Supported Task Types

The annotation editor supports all 5 YOLO task types:

| Task | Tool | Annotation Format |
|----------|----------------|----------------------------------------|
| Detect | Rectangle | Bounding boxes (x, y, width, height) |
| Segment | Polygon | Pixel-precise masks (polygon vertices) |
| Pose | Keypoint | 17-point COCO skeleton |
| OBB | Oriented Box | Rotated bounding boxes (4 corners) |
| Classify | Class Selector | Image-level labels |
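
As a rough sketch (illustrative shapes only, not the Platform's internal code), the YOLO label lines behind each task look like this; all coordinates are normalized to [0, 1] by image width and height:

```python
# Illustrative YOLO label-line shapes per task (not Platform source code).
detect_line = "0 0.5 0.5 0.25 0.40"             # class cx cy w h
segment_line = "0 0.1 0.1 0.9 0.1 0.5 0.9"      # class x1 y1 x2 y2 ... (polygon vertices)
obb_line = "0 0.1 0.1 0.9 0.1 0.9 0.9 0.1 0.9"  # class x1 y1 x2 y2 x3 y3 x4 y4 (4 corners)

# A pose line carries the class, a bounding box, then (x, y, visibility) per keypoint.
pose_fields = 1 + 4 + 17 * 3                    # 56 fields for a 17-keypoint skeleton
```

Classification is the exception: it uses image-level labels (typically a folder-per-class layout) rather than per-image label lines.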

Getting Started

To annotate images:

  1. Navigate to your dataset
  2. Click on an image to open the fullscreen viewer
  3. Click Edit to enter annotation mode
  4. Select your annotation tool
  5. Draw annotations on the image
  6. Click Save when finished

Manual Annotation Tools

Bounding Box (Detect)

Draw rectangular boxes around objects:

  1. Select the Box tool or press B
  2. Click and drag to draw a rectangle
  3. Release to complete the box
  4. Select a class from the dropdown
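
The saved detect annotation is a normalized, center-based box. The conversion from a drawn pixel rectangle can be sketched as follows (a hypothetical helper, not the editor's actual code):

```python
def box_to_yolo(x1, y1, x2, y2, img_w, img_h):
    """Convert a drawn pixel rectangle to normalized YOLO (cx, cy, w, h)."""
    cx = (x1 + x2) / 2 / img_w   # box center, as a fraction of image width
    cy = (y1 + y2) / 2 / img_h   # box center, as a fraction of image height
    w = abs(x2 - x1) / img_w     # box width fraction
    h = abs(y2 - y1) / img_h     # box height fraction
    return cx, cy, w, h
```

For example, a 100x200 px box drawn at (50, 50) in a 200x500 image becomes `(0.5, 0.3, 0.5, 0.4)`.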

Resize and Move

  • Drag corners or edges to resize
  • Drag the center to move
  • Press Delete to remove selected annotation

Polygon (Segment)

Draw precise polygon masks:

  1. Select the Polygon tool or press P
  2. Click to add vertices
  3. Double-click or press Enter to close the polygon
  4. Select a class from the dropdown

Edit Vertices

  • Drag individual vertices to adjust
  • Drag the entire polygon to move
  • Click on a vertex and press Delete to remove it
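
A closed polygon is stored as a flat list of normalized vertices. Serializing one to a YOLO segment label line can be sketched like this (hypothetical helper, not the editor's code):

```python
def polygon_to_yolo(vertices, img_w, img_h, cls=0):
    """Serialize closed-polygon vertices to a YOLO segment label line.

    vertices: list of (x, y) pixel points in drawing order.
    """
    coords = []
    for x, y in vertices:
        coords += [x / img_w, y / img_h]  # normalize each vertex to [0, 1]
    return " ".join([str(cls)] + [f"{c:.6g}" for c in coords])
```

A triangle drawn at (0, 0), (100, 0), (100, 100) in a 200x200 image serializes to `"0 0 0 0.5 0 0.5 0.5"`.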

Keypoint (Pose)

Place 17 COCO keypoints for human pose:

  1. Select the Keypoint tool or press K
  2. Click to place keypoints in sequence
  3. Follow the COCO skeleton order

The 17 COCO keypoints are:

| # | Keypoint | # | Keypoint |
|---|----------------|----|-------------|
| 1 | Nose | 10 | Left wrist |
| 2 | Left eye | 11 | Right wrist |
| 3 | Right eye | 12 | Left hip |
| 4 | Left ear | 13 | Right hip |
| 5 | Right ear | 14 | Left knee |
| 6 | Left shoulder | 15 | Right knee |
| 7 | Right shoulder | 16 | Left ankle |
| 8 | Left elbow | 17 | Right ankle |
| 9 | Right elbow | | |
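
In code, the standard COCO keypoint order and the resulting YOLO pose label size can be sketched as:

```python
# Standard COCO keypoint order (left/right from the subject's perspective).
COCO_KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

def pose_label_fields(n_keypoints=17):
    """A YOLO pose line holds: class + box (cx cy w h) + (x, y, visibility) per keypoint."""
    return 1 + 4 + n_keypoints * 3
```

The visibility flag follows the COCO convention: 0 = not labeled, 1 = labeled but occluded, 2 = visible.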

Oriented Bounding Box (OBB)

Draw rotated boxes for angled objects:

  1. Select the OBB tool or press O
  2. Click and drag to draw an initial box
  3. Use the rotation handle to adjust angle
  4. Select a class from the dropdown
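
The stored OBB annotation is the box's four corners. Deriving them from a center, size, and rotation angle can be sketched as (hypothetical helper, not the editor's code):

```python
import math

def obb_corners(cx, cy, w, h, angle_deg):
    """Rotate an axis-aligned box (center cx, cy; size w, h) by angle_deg.

    Returns the 4 corner points in order, as used in YOLO OBB labels.
    """
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    corners = []
    for dx, dy in ((-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)):
        # Standard 2D rotation of each corner offset around the box center.
        corners.append((cx + dx * cos_a - dy * sin_a, cy + dx * sin_a + dy * cos_a))
    return corners
```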

Classification (Classify)

Assign image-level class labels:

  1. Select the Classify mode
  2. Click on class buttons or press number keys 1-9
  3. Multiple classes can be assigned per image

SAM Smart Annotation

Segment Anything Model (SAM) enables intelligent annotation with just a few clicks:

  1. Select SAM mode or press S
  2. Left-click to add positive points (include this area)
  3. Right-click to add negative points (exclude this area)
  4. SAM generates a precise mask in real-time
  5. Click Accept to convert to annotation

SAM Tips

  • Start with a positive click on the object center
  • Add negative clicks to exclude background
  • Works best for distinct objects with clear edges

SAM smart annotation can generate:

  • Polygons for segmentation tasks
  • Bounding boxes for detection tasks
  • Oriented boxes for OBB tasks
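
SAM's raw output is a binary mask; converting that mask to a detect box can be sketched as follows (illustrative only, using a plain list-of-lists mask rather than the editor's actual representation):

```python
def mask_to_box(mask):
    """Tightest bounding box (x1, y1, x2, y2) around the truthy pixels of a 2D mask."""
    xs = [x for row in mask for x, v in enumerate(row) if v]   # columns with mask pixels
    ys = [y for y, row in enumerate(mask) if any(row)]         # rows with mask pixels
    return min(xs), min(ys), max(xs), max(ys)
```

Polygon output works analogously by tracing the mask's contour instead of taking its extremes.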

YOLO Auto-Annotation

Use trained YOLO models to automatically label images:

  1. Select Auto-Annotate mode or press A
  2. Choose a model (official or your trained models)
  3. Set confidence threshold
  4. Click Run to generate predictions
  5. Review and edit results as needed
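
The confidence threshold simply discards low-confidence predictions before they become annotations. A minimal sketch, using a mock prediction shape rather than the Platform's actual schema:

```python
def filter_predictions(preds, conf_threshold=0.25):
    """Keep only predictions at or above the confidence threshold.

    preds: list of dicts like {"cls": 0, "conf": 0.91} (mock shape, not the
    Platform's real prediction schema).
    """
    return [p for p in preds if p["conf"] >= conf_threshold]
```

Raising the threshold yields fewer, more reliable labels to review; lowering it catches more objects at the cost of more cleanup.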

Auto-Annotation Models

You can use:

  • Official Ultralytics models (YOLO11n, YOLO11s, etc.)
  • Your own trained models from the Platform

Class Management

Creating Classes

Define annotation classes for your dataset:

  1. Click Add Class in the class panel
  2. Enter the class name
  3. A color is assigned automatically

Editing Classes

  • Click on a class to select it for new annotations
  • Double-click to rename
  • Drag to reorder
  • Right-click for more options

Class Colors

Each class is assigned a color from the Ultralytics palette. Colors are consistent across the Platform for easy recognition.
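
Deterministic assignment is what keeps colors consistent: each class index maps to a fixed palette slot, wrapping around when there are more classes than colors. A sketch with a hypothetical palette (the actual Ultralytics color values differ):

```python
# Hypothetical palette for illustration; the real Ultralytics palette differs.
PALETTE = ["#042AFF", "#0BDBEB", "#00DFB7", "#111F68", "#FF6FDD", "#FF444F"]

def class_color(class_index):
    """Map a class index to a palette color deterministically, cycling if needed."""
    return PALETTE[class_index % len(PALETTE)]
```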

Keyboard Shortcuts

Efficient annotation with keyboard shortcuts:

| Shortcut | Action |
|----------|-----------------------------|
| B | Box tool (detect) |
| P | Polygon tool (segment) |
| K | Keypoint tool (pose) |
| O | OBB tool |
| S | SAM smart annotation |
| A | Auto-annotate |
| V | Select/move mode |
| 1-9 | Select class 1-9 |
| Delete | Delete selected annotation |
| Ctrl+Z | Undo |
| Ctrl+Y | Redo |
| Escape | Cancel current operation |
| Enter | Complete polygon |
| ←/→ | Previous/next image |

View All Shortcuts

Press ? to open the keyboard shortcuts dialog.

Undo/Redo

The annotation editor maintains a full history:

  • Undo: Ctrl+Z (Cmd+Z on Mac)
  • Redo: Ctrl+Y (Cmd+Y on Mac)

History includes:

  • Adding annotations
  • Editing annotations
  • Deleting annotations
  • Changing classes
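
This kind of history is commonly implemented with two stacks; a simplified sketch (not the editor's actual implementation):

```python
class History:
    """Two-stack undo/redo over annotation states (simplified sketch)."""

    def __init__(self, initial):
        self.undo_stack = [initial]   # every state ever committed, oldest first
        self.redo_stack = []          # states popped off by undo

    def apply(self, state):
        self.undo_stack.append(state)
        self.redo_stack.clear()       # a new edit invalidates the redo branch

    def undo(self):
        if len(self.undo_stack) > 1:
            self.redo_stack.append(self.undo_stack.pop())
        return self.undo_stack[-1]

    def redo(self):
        if self.redo_stack:
            self.undo_stack.append(self.redo_stack.pop())
        return self.undo_stack[-1]
```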

Saving Annotations

Annotations are saved when you click Save or navigate away:

  • Save: Click the save button or press Ctrl+S
  • Cancel: Click cancel to discard changes
  • Auto-save warning: Unsaved changes prompt before leaving

Save Your Work

Always save before navigating to another image. Unsaved changes will be lost.

FAQ

How accurate is SAM annotation?

SAM provides high-quality masks for most objects. Accuracy depends on:

  • Object distinctiveness from background
  • Image quality and resolution
  • Number of positive/negative points provided

For best results, start with a positive point on the object center and add negative points to exclude nearby objects.

Can I import existing annotations?

Yes, upload your dataset with YOLO-format label files. The Platform automatically parses and displays them in the editor.
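
Parsing a YOLO detect label line is straightforward; a minimal sketch of what such a parser does (hypothetical helper, not the Platform's parser):

```python
def parse_detect_line(line):
    """Parse one YOLO detect label line: 'class cx cy w h' with normalized floats."""
    fields = line.split()
    return int(fields[0]), tuple(float(v) for v in fields[1:5])
```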

How do I annotate multiple objects of the same class?

After drawing an annotation:

  1. Keep the same class selected
  2. Draw the next annotation
  3. Repeat until all objects are labeled

The number keys 1-9 quickly select the corresponding class.

What's the difference between SAM and auto-annotate?

| Feature | SAM | Auto-Annotate |
|-----------|-------------------------------|--------------------------------|
| Method | Interactive point prompts | Model inference |
| Speed | One object at a time | All objects at once |
| Precision | Very high with guidance | Depends on model |
| Best for | Complex objects, fine details | Bulk labeling, simple objects |

Can I train on partially annotated datasets?

Yes, but for best results:

  • Label all objects of your target classes in each image
  • Use the unknown split for unlabeled images
  • Exclude unlabeled images from training configuration


glenn-jocher