
Reference for ultralytics/data/annotator.py

Note

This file is available at https://github.com/ultralytics/ultralytics/blob/main/ultralytics/data/annotator.py. If you spot a problem, please help fix it by contributing a Pull Request 🛠️. Thank you 🙏!


ultralytics.data.annotator.auto_annotate

auto_annotate(
    data,
    det_model="yolo11x.pt",
    sam_model="sam_b.pt",
    device="",
    conf=0.25,
    iou=0.45,
    imgsz=640,
    max_det=300,
    classes=None,
    output_dir=None,
)

Automatically annotates images using a YOLO object detection model and a SAM segmentation model.

This function processes images in a specified directory, detects objects using a YOLO model, and then generates segmentation masks using a SAM model. The resulting annotations are saved as text files.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `data` | `str` | Path to a folder containing images to be annotated. | *required* |
| `det_model` | `str` | Path or name of the pre-trained YOLO detection model. | `'yolo11x.pt'` |
| `sam_model` | `str` | Path or name of the pre-trained SAM segmentation model. | `'sam_b.pt'` |
| `device` | `str` | Device to run the models on (e.g., 'cpu', 'cuda', '0'). | `''` |
| `conf` | `float` | Confidence threshold for the detection model. | `0.25` |
| `iou` | `float` | IoU threshold for filtering overlapping boxes in detection results. | `0.45` |
| `imgsz` | `int` | Input image resize dimension. | `640` |
| `max_det` | `int` | Maximum number of detections per image, to control output in dense scenes. | `300` |
| `classes` | `list` | Class IDs to keep; if None, detections for all classes are returned. | `None` |
| `output_dir` | `str \| None` | Directory to save the annotated results. If None, a default directory is created. | `None` |

Examples:

>>> from ultralytics.data.annotator import auto_annotate
>>> auto_annotate(data="ultralytics/assets", det_model="yolo11n.pt", sam_model="mobile_sam.pt")
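
A variant call restricting annotation to selected classes on the CPU (the class IDs and output directory below are illustrative, not defaults; every keyword argument is a documented parameter of auto_annotate):

>>> auto_annotate(
...     data="ultralytics/assets",
...     det_model="yolo11n.pt",
...     sam_model="sam_b.pt",
...     device="cpu",
...     classes=[0, 2],  # illustrative class IDs to keep
...     output_dir="auto_labels",  # hypothetical output folder
... )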

Notes:

- The function creates a new output directory if one is not specified.
- Annotation results are saved as text files with the same names as the input images.
- Each line in an output text file represents a detected object with its class ID and segmentation points (see the example after this list).
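
For illustration, a single line in a generated label file takes the form below: a class ID followed by x y pairs of polygon coordinates normalized to the image size (the values shown are made up):

0 0.681 0.320 0.679 0.357 0.665 0.399 0.658 0.442
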
Source code in ultralytics/data/annotator.py
from pathlib import Path

from ultralytics import SAM, YOLO


def auto_annotate(
    data,
    det_model="yolo11x.pt",
    sam_model="sam_b.pt",
    device="",
    conf=0.25,
    iou=0.45,
    imgsz=640,
    max_det=300,
    classes=None,
    output_dir=None,
):
    """
    Automatically annotates images using a YOLO object detection model and a SAM segmentation model.

    This function processes images in a specified directory, detects objects using a YOLO model, and then generates
    segmentation masks using a SAM model. The resulting annotations are saved as text files.

    Args:
        data (str): Path to a folder containing images to be annotated.
        det_model (str): Path or name of the pre-trained YOLO detection model.
        sam_model (str): Path or name of the pre-trained SAM segmentation model.
        device (str): Device to run the models on (e.g., 'cpu', 'cuda', '0').
        conf (float): Confidence threshold for detection model; default is 0.25.
        iou (float): IoU threshold for filtering overlapping boxes in detection results; default is 0.45.
        imgsz (int): Input image resize dimension; default is 640.
        max_det (int): Limits detections per image to control outputs in dense scenes.
        classes (list): Filters predictions to specified class IDs, returning only relevant detections.
        output_dir (str | None): Directory to save the annotated results. If None, a default directory is created.

    Examples:
        >>> from ultralytics.data.annotator import auto_annotate
        >>> auto_annotate(data="ultralytics/assets", det_model="yolo11n.pt", sam_model="mobile_sam.pt")

    Notes:
        - The function creates a new directory for output if not specified.
        - Annotation results are saved as text files with the same names as the input images.
        - Each line in the output text file represents a detected object with its class ID and segmentation points.
    """
    det_model = YOLO(det_model)
    sam_model = SAM(sam_model)

    data = Path(data)
    if not output_dir:
        # Default: a sibling folder named "<data>_auto_annotate_labels"
        output_dir = data.parent / f"{data.stem}_auto_annotate_labels"
    Path(output_dir).mkdir(exist_ok=True, parents=True)

    det_results = det_model(
        data, stream=True, device=device, conf=conf, iou=iou, imgsz=imgsz, max_det=max_det, classes=classes
    )

    for result in det_results:
        class_ids = result.boxes.cls.int().tolist()  # noqa
        if len(class_ids):
            boxes = result.boxes.xyxy  # Boxes object for bbox outputs
            sam_results = sam_model(result.orig_img, bboxes=boxes, verbose=False, save=False, device=device)
            segments = sam_results[0].masks.xyn  # noqa

            with open(f"{Path(output_dir) / Path(result.path).stem}.txt", "w") as f:
                for i in range(len(segments)):
                    s = segments[i]
                    if len(s) == 0:
                        continue
                    segment = map(str, segments[i].reshape(-1).tolist())
                    f.write(f"{class_ids[i]} " + " ".join(segment) + "\n")


