Annotation Editor
Ultralytics Platform includes a powerful annotation editor for labeling images with bounding boxes, polygons, keypoints, oriented boxes, and classifications. The editor supports manual annotation, SAM-powered smart annotation, and YOLO auto-labeling.
```mermaid
graph TB
    subgraph Manual["✏️ Manual Tools"]
        A[Box] & B[Polygon] & C[Keypoint] & D[OBB] & E[Classify]
    end
    subgraph AI["🤖 AI-Assisted"]
        F[SAM Smart] & G[Auto-Annotate]
    end
    Manual --> H[📁 Save Labels]
    AI --> H
```
Supported Task Types
The annotation editor supports all 5 YOLO task types:
| Task | Tool | Annotation Format |
|---|---|---|
| Detect | Rectangle | Bounding boxes (x, y, width, height) |
| Segment | Polygon | Pixel-precise masks (polygon vertices) |
| Pose | Keypoint | 17-point COCO skeleton |
| OBB | Oriented Box | Rotated bounding boxes (4 corners) |
| Classify | Class Selector | Image-level labels |
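Each tool writes annotations in the corresponding YOLO label format: one object per line in a `.txt` file, with coordinates normalized to the 0-1 range. The lines below are an illustrative sketch (values and class indices are made up), not output from a real dataset:

```python
# Illustrative YOLO label lines, one object per line, coordinates normalized to 0-1.
# The class index refers to the dataset's class list, e.g. 0 = person.

detect = "0 0.512 0.430 0.210 0.380"
# class x_center y_center width height

segment = "0 0.10 0.20 0.35 0.22 0.40 0.55 0.12 0.60"
# class x1 y1 x2 y2 ... xn yn  (polygon vertices)

pose = "0 0.50 0.45 0.20 0.60 0.48 0.30 2 0.46 0.28 2 0.50 0.28 1"
# class x_c y_c w h, then "x y visibility" per keypoint
# (only 3 of the 17 keypoints shown here for brevity)

obb = "0 0.30 0.20 0.70 0.25 0.65 0.60 0.25 0.55"
# class x1 y1 x2 y2 x3 y3 x4 y4  (four corner points)

# Classification datasets typically use one folder per class
# instead of per-image .txt label files.
```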
Getting Started
To annotate images:
- Navigate to your dataset
- Click on an image to open the fullscreen viewer
- Click Edit to enter annotation mode
- Select your annotation tool
- Draw annotations on the image
- Click Save when finished
Manual Annotation Tools
Bounding Box (Detect)
Draw rectangular boxes around objects:
- Select the Box tool or press B
- Click and drag to draw a rectangle
- Release to complete the box
- Select a class from the dropdown
Resize and Move
- Drag corners or edges to resize
- Drag the center to move
- Press Delete to remove the selected annotation
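When the box is saved, the pixel rectangle you drew is converted to the normalized center-based detect format. A minimal sketch of that conversion (the function name and variables are illustrative, not Platform code):

```python
def pixels_to_yolo_box(x1, y1, x2, y2, img_w, img_h):
    """Convert a pixel-space rectangle (top-left and bottom-right corners)
    to normalized YOLO detect format: x_center, y_center, width, height."""
    x_c = (x1 + x2) / 2 / img_w
    y_c = (y1 + y2) / 2 / img_h
    w = (x2 - x1) / img_w
    h = (y2 - y1) / img_h
    return x_c, y_c, w, h

# Example: a 200x300 px box with its top-left corner at (100, 150) in a 1280x720 image
print(pixels_to_yolo_box(100, 150, 300, 450, 1280, 720))
```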
Polygon (Segment)
Draw precise polygon masks:
- Select the Polygon tool or press P
- Click to add vertices
- Double-click or press Enter to close the polygon
- Select a class from the dropdown
Edit Vertices
- Drag individual vertices to adjust
- Drag the entire polygon to move
- Click on a vertex and press Delete to remove it
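The clicked vertices are stored as a flat list of normalized x, y pairs in the segmentation label line. A minimal sketch of that conversion (names and values are illustrative):

```python
def vertices_to_segment_label(class_id, vertices, img_w, img_h):
    """Convert clicked polygon vertices (pixel coordinates) into a
    YOLO segmentation label line: 'class x1 y1 x2 y2 ... xn yn'."""
    coords = []
    for x, y in vertices:
        coords += [x / img_w, y / img_h]
    return " ".join([str(class_id)] + [f"{c:.6f}" for c in coords])

# Example: a triangle annotated on a 1280x720 image
print(vertices_to_segment_label(0, [(100, 100), (400, 120), (250, 500)], 1280, 720))
```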
Keypoint (Pose)
Place 17 COCO keypoints for human pose:
- Select the Keypoint tool or press K
- Click to place keypoints in sequence
- Follow the COCO skeleton order
The 17 COCO keypoints are:
| # | Keypoint | # | Keypoint |
|---|---|---|---|
| 1 | Nose | 10 | Left wrist |
| 2 | Left eye | 11 | Right wrist |
| 3 | Right eye | 12 | Left hip |
| 4 | Left ear | 13 | Right hip |
| 5 | Right ear | 14 | Left knee |
| 6 | Left shoulder | 15 | Right knee |
| 7 | Right shoulder | 16 | Left ankle |
| 8 | Left elbow | 17 | Right ankle |
| 9 | Right elbow | | |
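If you need the same order programmatically, for example to check exported pose labels, the 17 names in annotation order as a Python list (a reference sketch, not a Platform API):

```python
# The 17 COCO keypoints in annotation order (index 0 corresponds to keypoint 1 above).
COCO_KEYPOINTS = [
    "nose",
    "left_eye", "right_eye",
    "left_ear", "right_ear",
    "left_shoulder", "right_shoulder",
    "left_elbow", "right_elbow",
    "left_wrist", "right_wrist",
    "left_hip", "right_hip",
    "left_knee", "right_knee",
    "left_ankle", "right_ankle",
]

# In YOLO pose labels each keypoint is stored as "x y visibility",
# where visibility is 0 (not labeled), 1 (labeled but occluded), or 2 (visible).
```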
Oriented Bounding Box (OBB)
Draw rotated boxes for angled objects:
- Select the OBB tool or press O
- Click and drag to draw an initial box
- Use the rotation handle to adjust angle
- Select a class from the dropdown
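An oriented box is stored as four normalized corner points, which can be derived from the box center, size, and rotation angle. A minimal sketch of that geometry (not Platform code, names are illustrative):

```python
import math

def obb_corners(cx, cy, w, h, angle_rad):
    """Return the four corner points of a box centered at (cx, cy)
    with width w and height h, rotated by angle_rad around its center."""
    cos_a, sin_a = math.cos(angle_rad), math.sin(angle_rad)
    corners = []
    for dx, dy in [(-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)]:
        corners.append((cx + dx * cos_a - dy * sin_a, cy + dx * sin_a + dy * cos_a))
    return corners

# Example: a 0.4 x 0.2 box at the image center, rotated 30 degrees
print(obb_corners(0.5, 0.5, 0.4, 0.2, math.radians(30)))
```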
Classification (Classify)
Assign image-level class labels:
- Select the Classify mode
- Click on class buttons or press number keys 1-9
- Multiple classes can be assigned per image
SAM Smart Annotation
Segment Anything Model (SAM) enables intelligent annotation with just a few clicks:
- Select SAM mode or press S
- Left-click to add positive points (include this area)
- Right-click to add negative points (exclude this area)
- SAM generates a precise mask in real-time
- Click Accept to convert to annotation
SAM Tips
- Start with a positive click on the object center
- Add negative clicks to exclude background
- Works best for distinct objects with clear edges
SAM smart annotation can generate:
- Polygons for segmentation tasks
- Bounding boxes for detection tasks
- Oriented boxes for OBB tasks
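The same point-prompt workflow is also available through the Ultralytics Python package, which can be handy for scripting or sanity-checking masks outside the editor. A sketch under the assumption that the image path and point coordinates are your own:

```python
from ultralytics import SAM

# Load a Segment Anything model checkpoint (downloaded automatically if missing).
model = SAM("sam_b.pt")

# One positive point (label 1) on the object; label 0 points would exclude regions.
results = model("image.jpg", points=[[500, 370]], labels=[1])

# Display the predicted mask for review.
results[0].show()
```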
YOLO Auto-Annotation
Use trained YOLO models to automatically label images:
- Select Auto-Annotate mode or press A
- Choose a model (official or your trained models)
- Set confidence threshold
- Click Run to generate predictions
- Review and edit results as needed
Auto-Annotation Models
You can use:
- Official Ultralytics models (YOLO11n, YOLO11s, etc.)
- Your own trained models from the Platform
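Auto-annotation is essentially model inference with a confidence threshold. A rough equivalent with the Ultralytics Python package, assuming illustrative model and file paths:

```python
from ultralytics import YOLO

# Any official or custom-trained detection model works here.
model = YOLO("yolo11n.pt")

# Run prediction with a confidence threshold, like the editor's Auto-Annotate setting.
results = model.predict("images/sample.jpg", conf=0.25)

# Save the predictions as a YOLO-format label file for later review and editing.
results[0].save_txt("labels/sample.txt")
```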
Class Management
Creating Classes
Define annotation classes for your dataset:
- Click Add Class in the class panel
- Enter the class name
- A color is assigned automatically
Editing Classes
- Click on a class to select it for new annotations
- Double-click to rename
- Drag to reorder
- Right-click for more options
Class Colors
Each class is assigned a color from the Ultralytics palette. Colors are consistent across the Platform for easy recognition.
Keyboard Shortcuts
Efficient annotation with keyboard shortcuts:
| Shortcut | Action |
|---|---|
| B | Box tool (detect) |
| P | Polygon tool (segment) |
| K | Keypoint tool (pose) |
| O | OBB tool |
| S | SAM smart annotation |
| A | Auto-annotate |
| V | Select/move mode |
| 1-9 | Select class 1-9 |
| Delete | Delete selected annotation |
| Ctrl+Z | Undo |
| Ctrl+Y | Redo |
| Escape | Cancel current operation |
| Enter | Complete polygon |
| ←/→ | Previous/next image |
View All Shortcuts
Press ? to open the keyboard shortcuts dialog.
Undo/Redo
The annotation editor maintains a full history:
- Undo: Ctrl+Z (Cmd+Z on Mac)
- Redo: Ctrl+Y (Cmd+Y on Mac)
History includes:
- Adding annotations
- Editing annotations
- Deleting annotations
- Changing classes
Saving Annotations
Annotations are saved when you click Save; navigating away with unsaved changes triggers a prompt first:
- Save: Click the save button or press Ctrl+S
- Cancel: Click cancel to discard changes
- Auto-save warning: Unsaved changes prompt before leaving
Save Your Work
Always save before navigating to another image. Unsaved changes will be lost.
FAQ
How accurate is SAM annotation?
SAM provides high-quality masks for most objects. Accuracy depends on:
- Object distinctiveness from background
- Image quality and resolution
- Number of positive/negative points provided
For best results, start with a positive point on the object center and add negative points to exclude nearby objects.
Can I import existing annotations?
Yes, upload your dataset with YOLO-format label files. The Platform automatically parses and displays them in the editor.
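Uploaded datasets follow the usual YOLO layout: `images/` and `labels/` directories whose files share base names (e.g. `images/train/img1.jpg` pairs with `labels/train/img1.txt`). A small sketch for checking that pairing before upload (the dataset path is illustrative):

```python
from pathlib import Path

dataset = Path("my_dataset")
images = sorted((dataset / "images" / "train").glob("*.jpg"))

# Every image should have a matching YOLO label file with the same stem.
missing = [img.name for img in images
           if not (dataset / "labels" / "train" / f"{img.stem}.txt").exists()]
print(f"{len(images)} images, {len(missing)} missing label files: {missing[:5]}")
```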
How do I annotate multiple objects of the same class?
After drawing an annotation:
- Keep the same class selected
- Draw the next annotation
- Repeat until all objects are labeled
The keyboard shortcuts 1-9 quickly select classes.
What's the difference between SAM and auto-annotate?
| Feature | SAM | Auto-Annotate |
|---|---|---|
| Method | Interactive point prompts | Model inference |
| Speed | One object at a time | All objects at once |
| Precision | Very high with guidance | Depends on model |
| Best for | Complex objects, fine details | Bulk labeling, simple objects |
Can I train on partially annotated datasets?
Yes, but for best results:
- Label all objects of your target classes in each image
- Use the unknown split for unlabeled images
- Exclude unlabeled images from training configuration