
Reference for ultralytics/data/annotator.py

Note

This file is available at https://github.com/ultralytics/ultralytics/blob/main/ultralytics/data/annotator.py. If you spot a problem, please help fix it by contributing a Pull Request 🛠️. Thank you 🙏!



ultralytics.data.annotator.auto_annotate(data, det_model='yolov8x.pt', sam_model='sam_b.pt', device='', output_dir=None)

Automatically annotates images using a YOLO object detection model and a SAM segmentation model.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `data` | `str` | Path to a folder containing images to be annotated. | *required* |
| `det_model` | `str` | Pre-trained YOLO detection model. | `'yolov8x.pt'` |
| `sam_model` | `str` | Pre-trained SAM segmentation model. | `'sam_b.pt'` |
| `device` | `str` | Device to run the models on. An empty string selects CPU, or GPU if available. | `''` |
| `output_dir` | `str \| None` | Directory to save the annotated results. Defaults to a `'{data.stem}_auto_annotate_labels'` folder in the same directory as `data`. | `None` |
Example
from ultralytics.data.annotator import auto_annotate

auto_annotate(data='ultralytics/assets', det_model='yolov8n.pt', sam_model='mobile_sam.pt')
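The generated `.txt` files use the YOLO segmentation label format: each line holds a class id followed by normalized `(x, y)` polygon coordinates, as written by the loop at the end of `auto_annotate`. As a minimal sketch, a hypothetical helper for reading such a file back could look like this (`parse_segmentation_label` is not part of the library):

```python
from pathlib import Path


def parse_segmentation_label(path):
    """Parse a YOLO-format segmentation label file into (class_id, polygon) pairs."""
    annotations = []
    for line in Path(path).read_text().splitlines():
        parts = line.split()
        class_id = int(parts[0])
        coords = list(map(float, parts[1:]))
        # Re-pair the flat coordinate list into normalized (x, y) polygon points
        polygon = list(zip(coords[0::2], coords[1::2]))
        annotations.append((class_id, polygon))
    return annotations
```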
Source code in ultralytics/data/annotator.py
from pathlib import Path

from ultralytics import SAM, YOLO


def auto_annotate(data, det_model="yolov8x.pt", sam_model="sam_b.pt", device="", output_dir=None):
    """
    Automatically annotates images using a YOLO object detection model and a SAM segmentation model.

    Args:
        data (str): Path to a folder containing images to be annotated.
        det_model (str, optional): Pre-trained YOLO detection model. Defaults to 'yolov8x.pt'.
        sam_model (str, optional): Pre-trained SAM segmentation model. Defaults to 'sam_b.pt'.
        device (str, optional): Device to run the models on. Defaults to an empty string (CPU or GPU, if available).
        output_dir (str | None, optional): Directory to save the annotated results.
            Defaults to a '{data.stem}_auto_annotate_labels' folder in the same directory as 'data'.

    Example:
        ```python
        from ultralytics.data.annotator import auto_annotate

        auto_annotate(data='ultralytics/assets', det_model='yolov8n.pt', sam_model='mobile_sam.pt')
        ```
    """
    det_model = YOLO(det_model)
    sam_model = SAM(sam_model)

    data = Path(data)
    if not output_dir:
        output_dir = data.parent / f"{data.stem}_auto_annotate_labels"
    Path(output_dir).mkdir(exist_ok=True, parents=True)

    det_results = det_model(data, stream=True, device=device)

    for result in det_results:
        class_ids = result.boxes.cls.int().tolist()  # noqa
        if len(class_ids):
            boxes = result.boxes.xyxy  # Boxes object for bbox outputs
            sam_results = sam_model(result.orig_img, bboxes=boxes, verbose=False, save=False, device=device)
            segments = sam_results[0].masks.xyn  # noqa

            with open(f"{Path(output_dir) / Path(result.path).stem}.txt", "w") as f:
                for i in range(len(segments)):
                    s = segments[i]
                    if len(s) == 0:
                        continue
                    segment = map(str, segments[i].reshape(-1).tolist())
                    f.write(f"{class_ids[i]} " + " ".join(segment) + "\n")





Created 2023-11-12, Updated 2023-11-25
Authors: glenn-jocher (3)