Reference for ultralytics/models/sam/predict.py

Note

This file is available at https://github.com/ultralytics/ultralytics/blob/main/ultralytics/models/sam/predict.py. If you spot a problem, please help fix it by contributing a Pull Request 🛠️. Thank you 🙏!


ultralytics.models.sam.predict.Predictor

Predictor(cfg=DEFAULT_CFG, overrides=None, _callbacks=None)

Bases: BasePredictor

Predictor class for SAM, enabling real-time image segmentation with promptable capabilities.

This class extends BasePredictor and implements the Segment Anything Model (SAM) for advanced image segmentation tasks. It supports various input prompts like points, bounding boxes, and masks for fine-grained control over segmentation results.

Attributes:

args (SimpleNamespace): Configuration arguments for the predictor.
model (torch.nn.Module): The loaded SAM model.
device (torch.device): The device (CPU or GPU) on which the model is loaded.
im (torch.Tensor): The preprocessed input image.
features (torch.Tensor): Extracted image features.
prompts (Dict[str, Any]): Dictionary to store various types of prompts (e.g., bboxes, points, masks).
segment_all (bool): Flag to indicate if full image segmentation should be performed.
mean (torch.Tensor): Mean values for image normalization.
std (torch.Tensor): Standard deviation values for image normalization.

Methods:

preprocess: Prepare input images for model inference.
pre_transform: Perform initial transformations on the input image.
inference: Perform segmentation inference based on input prompts.
prompt_inference: Internal function for prompt-based segmentation inference.
generate: Generate segmentation masks for an entire image.
setup_model: Initialize the SAM model for inference.
get_model: Build and return a SAM model.
postprocess: Post-process model outputs to generate final results.
setup_source: Set up the data source for inference.
set_image: Set and preprocess a single image for inference.
get_im_features: Extract image features using the SAM image encoder.
set_prompts: Set prompts for subsequent inference.
reset_image: Reset the current image and its features.
remove_small_regions: Remove small disconnected regions and holes from masks.

Examples:

>>> predictor = Predictor()
>>> predictor.setup_model(model_path="sam_model.pt")
>>> predictor.set_image("image.jpg")
>>> bboxes = [[100, 100, 200, 200]]
>>> results = predictor(bboxes=bboxes)

Sets up the Predictor object for SAM (Segment Anything Model) and applies any configuration overrides or callbacks provided. Initializes task-specific settings for SAM, such as retina_masks being set to True for optimal results.

Parameters:

cfg (dict): Configuration dictionary containing default settings. Default: DEFAULT_CFG.
overrides (dict | None): Dictionary of values to override default configuration. Default: None.
_callbacks (dict | None): Dictionary of callback functions to customize behavior. Default: None.

Examples:

>>> predictor_example = Predictor(cfg=DEFAULT_CFG)
>>> predictor_example_with_imgsz = Predictor(overrides={"imgsz": 640})
>>> predictor_example_with_callback = Predictor(_callbacks={"on_predict_start": custom_callback})
Source code in ultralytics/models/sam/predict.py
def __init__(self, cfg=DEFAULT_CFG, overrides=None, _callbacks=None):
    """
    Initialize the Predictor with configuration, overrides, and callbacks.

    Sets up the Predictor object for SAM (Segment Anything Model) and applies any configuration overrides or
    callbacks provided. Initializes task-specific settings for SAM, such as retina_masks being set to True
    for optimal results.

    Args:
        cfg (dict): Configuration dictionary containing default settings.
        overrides (dict | None): Dictionary of values to override default configuration.
        _callbacks (dict | None): Dictionary of callback functions to customize behavior.

    Examples:
        >>> predictor_example = Predictor(cfg=DEFAULT_CFG)
        >>> predictor_example_with_imgsz = Predictor(overrides={"imgsz": 640})
        >>> predictor_example_with_callback = Predictor(_callbacks={"on_predict_start": custom_callback})
    """
    if overrides is None:
        overrides = {}
    overrides.update(dict(task="segment", mode="predict", batch=1))
    super().__init__(cfg, overrides, _callbacks)
    self.args.retina_masks = True
    self.im = None
    self.features = None
    self.prompts = {}
    self.segment_all = False

generate

generate(
    im,
    crop_n_layers=0,
    crop_overlap_ratio=512 / 1500,
    crop_downscale_factor=1,
    point_grids=None,
    points_stride=32,
    points_batch_size=64,
    conf_thres=0.88,
    stability_score_thresh=0.95,
    stability_score_offset=0.95,
    crop_nms_thresh=0.7,
)

Perform image segmentation using the Segment Anything Model (SAM).

This method segments an entire image into constituent parts by leveraging SAM's advanced architecture and real-time performance capabilities. It can optionally work on image crops for finer segmentation.

Parameters:

im (torch.Tensor): Input tensor representing the preprocessed image with shape (N, C, H, W). Required.
crop_n_layers (int): Number of layers for additional mask predictions on image crops. Default: 0.
crop_overlap_ratio (float): Overlap between crops, scaled down in subsequent layers. Default: 512 / 1500.
crop_downscale_factor (int): Scaling factor for sampled points-per-side in each layer. Default: 1.
point_grids (List[np.ndarray] | None): Custom grids for point sampling normalized to [0, 1]. Default: None.
points_stride (int): Number of points to sample along each side of the image. Default: 32.
points_batch_size (int): Batch size for the number of points processed simultaneously. Default: 64.
conf_thres (float): Confidence threshold [0, 1] for filtering based on mask quality prediction. Default: 0.88.
stability_score_thresh (float): Stability threshold [0, 1] for mask filtering based on stability. Default: 0.95.
stability_score_offset (float): Offset value for calculating the stability score. Default: 0.95.
crop_nms_thresh (float): IoU cutoff for NMS to remove duplicate masks between crops. Default: 0.7.

Returns:

pred_masks (torch.Tensor): Segmented masks with shape (N, H, W).
pred_scores (torch.Tensor): Confidence scores for each mask with shape (N,).
pred_bboxes (torch.Tensor): Bounding boxes for each mask with shape (N, 4).

Examples:

>>> predictor = Predictor()
>>> im = torch.rand(1, 3, 1024, 1024)  # Example input image
>>> masks, scores, boxes = predictor.generate(im)
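
For finer masks on large or cluttered images, the crop arguments can be combined. The following is a minimal sketch (the values shown are illustrative assumptions, not tuned recommendations), assuming predictor.setup_model has already been called:

>>> im = torch.rand(1, 3, 1024, 1024)  # assumed preprocessed square input
>>> masks, scores, boxes = predictor.generate(im, crop_n_layers=1, points_stride=16, conf_thres=0.9)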
Source code in ultralytics/models/sam/predict.py
def generate(
    self,
    im,
    crop_n_layers=0,
    crop_overlap_ratio=512 / 1500,
    crop_downscale_factor=1,
    point_grids=None,
    points_stride=32,
    points_batch_size=64,
    conf_thres=0.88,
    stability_score_thresh=0.95,
    stability_score_offset=0.95,
    crop_nms_thresh=0.7,
):
    """
    Perform image segmentation using the Segment Anything Model (SAM).

    This method segments an entire image into constituent parts by leveraging SAM's advanced architecture
    and real-time performance capabilities. It can optionally work on image crops for finer segmentation.

    Args:
        im (torch.Tensor): Input tensor representing the preprocessed image with shape (N, C, H, W).
        crop_n_layers (int): Number of layers for additional mask predictions on image crops.
        crop_overlap_ratio (float): Overlap between crops, scaled down in subsequent layers.
        crop_downscale_factor (int): Scaling factor for sampled points-per-side in each layer.
        point_grids (List[np.ndarray] | None): Custom grids for point sampling normalized to [0,1].
        points_stride (int): Number of points to sample along each side of the image.
        points_batch_size (int): Batch size for the number of points processed simultaneously.
        conf_thres (float): Confidence threshold [0,1] for filtering based on mask quality prediction.
        stability_score_thresh (float): Stability threshold [0,1] for mask filtering based on stability.
        stability_score_offset (float): Offset value for calculating stability score.
        crop_nms_thresh (float): IoU cutoff for NMS to remove duplicate masks between crops.

    Returns:
        pred_masks (torch.Tensor): Segmented masks with shape (N, H, W).
        pred_scores (torch.Tensor): Confidence scores for each mask with shape (N,).
        pred_bboxes (torch.Tensor): Bounding boxes for each mask with shape (N, 4).

    Examples:
        >>> predictor = Predictor()
        >>> im = torch.rand(1, 3, 1024, 1024)  # Example input image
        >>> masks, scores, boxes = predictor.generate(im)
    """
    import torchvision  # scope for faster 'import ultralytics'

    self.segment_all = True
    ih, iw = im.shape[2:]
    crop_regions, layer_idxs = generate_crop_boxes((ih, iw), crop_n_layers, crop_overlap_ratio)
    if point_grids is None:
        point_grids = build_all_layer_point_grids(points_stride, crop_n_layers, crop_downscale_factor)
    pred_masks, pred_scores, pred_bboxes, region_areas = [], [], [], []
    for crop_region, layer_idx in zip(crop_regions, layer_idxs):
        x1, y1, x2, y2 = crop_region
        w, h = x2 - x1, y2 - y1
        area = torch.tensor(w * h, device=im.device)
        points_scale = np.array([[w, h]])  # w, h
        # Crop image and interpolate to input size
        crop_im = F.interpolate(im[..., y1:y2, x1:x2], (ih, iw), mode="bilinear", align_corners=False)
        # (num_points, 2)
        points_for_image = point_grids[layer_idx] * points_scale
        crop_masks, crop_scores, crop_bboxes = [], [], []
        for (points,) in batch_iterator(points_batch_size, points_for_image):
            pred_mask, pred_score = self.prompt_inference(crop_im, points=points, multimask_output=True)
            # Interpolate predicted masks to input size
            pred_mask = F.interpolate(pred_mask[None], (h, w), mode="bilinear", align_corners=False)[0]
            idx = pred_score > conf_thres
            pred_mask, pred_score = pred_mask[idx], pred_score[idx]

            stability_score = calculate_stability_score(
                pred_mask, self.model.mask_threshold, stability_score_offset
            )
            idx = stability_score > stability_score_thresh
            pred_mask, pred_score = pred_mask[idx], pred_score[idx]
            # Bool type is much more memory-efficient.
            pred_mask = pred_mask > self.model.mask_threshold
            # (N, 4)
            pred_bbox = batched_mask_to_box(pred_mask).float()
            keep_mask = ~is_box_near_crop_edge(pred_bbox, crop_region, [0, 0, iw, ih])
            if not torch.all(keep_mask):
                pred_bbox, pred_mask, pred_score = pred_bbox[keep_mask], pred_mask[keep_mask], pred_score[keep_mask]

            crop_masks.append(pred_mask)
            crop_bboxes.append(pred_bbox)
            crop_scores.append(pred_score)

        # Do nms within this crop
        crop_masks = torch.cat(crop_masks)
        crop_bboxes = torch.cat(crop_bboxes)
        crop_scores = torch.cat(crop_scores)
        keep = torchvision.ops.nms(crop_bboxes, crop_scores, self.args.iou)  # NMS
        crop_bboxes = uncrop_boxes_xyxy(crop_bboxes[keep], crop_region)
        crop_masks = uncrop_masks(crop_masks[keep], crop_region, ih, iw)
        crop_scores = crop_scores[keep]

        pred_masks.append(crop_masks)
        pred_bboxes.append(crop_bboxes)
        pred_scores.append(crop_scores)
        region_areas.append(area.expand(len(crop_masks)))

    pred_masks = torch.cat(pred_masks)
    pred_bboxes = torch.cat(pred_bboxes)
    pred_scores = torch.cat(pred_scores)
    region_areas = torch.cat(region_areas)

    # Remove duplicate masks between crops
    if len(crop_regions) > 1:
        scores = 1 / region_areas
        keep = torchvision.ops.nms(pred_bboxes, scores, crop_nms_thresh)
        pred_masks, pred_bboxes, pred_scores = pred_masks[keep], pred_bboxes[keep], pred_scores[keep]

    return pred_masks, pred_scores, pred_bboxes

get_im_features

get_im_features(im)

Extract image features using the SAM model's image encoder for subsequent mask prediction.
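
In typical use these features are produced and cached by set_image. A hedged usage sketch, assuming predictor is a Predictor whose model and square imgsz are already configured:

>>> predictor.set_image("image.jpg")  # preprocesses the image and calls get_im_features internally
>>> features = predictor.features  # cached embeddings reused by subsequent prompt inference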

Source code in ultralytics/models/sam/predict.py
def get_im_features(self, im):
    """Extract image features using the SAM model's image encoder for subsequent mask prediction."""
    assert isinstance(self.imgsz, (tuple, list)) and self.imgsz[0] == self.imgsz[1], (
        f"SAM models only support square image size, but got {self.imgsz}."
    )
    self.model.set_imgsz(self.imgsz)
    return self.model.image_encoder(im)

get_model

get_model()

Retrieve or build the Segment Anything Model (SAM) for image segmentation tasks.

Source code in ultralytics/models/sam/predict.py
def get_model(self):
    """Retrieve or build the Segment Anything Model (SAM) for image segmentation tasks."""
    from .build import build_sam  # slow import

    return build_sam(self.args.model)

inference

inference(
    im,
    bboxes=None,
    points=None,
    labels=None,
    masks=None,
    multimask_output=False,
    *args,
    **kwargs
)

Perform image segmentation inference based on the given input cues, using the currently loaded image.

This method leverages SAM's (Segment Anything Model) architecture consisting of image encoder, prompt encoder, and mask decoder for real-time and promptable segmentation tasks.

Parameters:

im (torch.Tensor): The preprocessed input image in tensor format, with shape (N, C, H, W). Required.
bboxes (np.ndarray | List | None): Bounding boxes with shape (N, 4), in XYXY format. Default: None.
points (np.ndarray | List | None): Points indicating object locations with shape (N, 2), in pixels. Default: None.
labels (np.ndarray | List | None): Labels for point prompts, shape (N,). 1 = foreground, 0 = background. Default: None.
masks (np.ndarray | None): Low-resolution masks from previous predictions, shape (N, H, W). For SAM, H=W=256. Default: None.
multimask_output (bool): Flag to return multiple masks. Helpful for ambiguous prompts. Default: False.
*args (Any): Additional positional arguments.
**kwargs (Any): Additional keyword arguments.

Returns:

pred_masks (torch.Tensor): The output masks in shape (C, H, W), where C is the number of generated masks.
pred_scores (torch.Tensor): An array of length C containing quality scores predicted by the model for each mask.

Examples:

>>> predictor = Predictor()
>>> predictor.setup_model(model_path="sam_model.pt")
>>> predictor.set_image("image.jpg")
>>> results = predictor(bboxes=[[0, 0, 100, 100]])
Source code in ultralytics/models/sam/predict.py
def inference(self, im, bboxes=None, points=None, labels=None, masks=None, multimask_output=False, *args, **kwargs):
    """
    Perform image segmentation inference based on the given input cues, using the currently loaded image.

    This method leverages SAM's (Segment Anything Model) architecture consisting of image encoder, prompt
    encoder, and mask decoder for real-time and promptable segmentation tasks.

    Args:
        im (torch.Tensor): The preprocessed input image in tensor format, with shape (N, C, H, W).
        bboxes (np.ndarray | List | None): Bounding boxes with shape (N, 4), in XYXY format.
        points (np.ndarray | List | None): Points indicating object locations with shape (N, 2), in pixels.
        labels (np.ndarray | List | None): Labels for point prompts, shape (N,). 1 = foreground, 0 = background.
        masks (np.ndarray | None): Low-resolution masks from previous predictions, shape (N, H, W). For SAM H=W=256.
        multimask_output (bool): Flag to return multiple masks. Helpful for ambiguous prompts.
        *args (Any): Additional positional arguments.
        **kwargs (Any): Additional keyword arguments.

    Returns:
        pred_masks (torch.Tensor): The output masks in shape (C, H, W), where C is the number of generated masks.
        pred_scores (torch.Tensor): An array of length C containing quality scores predicted by the model for each mask.

    Examples:
        >>> predictor = Predictor()
        >>> predictor.setup_model(model_path="sam_model.pt")
        >>> predictor.set_image("image.jpg")
        >>> results = predictor(bboxes=[[0, 0, 100, 100]])
    """
    # Override prompts if any stored in self.prompts
    bboxes = self.prompts.pop("bboxes", bboxes)
    points = self.prompts.pop("points", points)
    masks = self.prompts.pop("masks", masks)
    labels = self.prompts.pop("labels", labels)

    if all(i is None for i in [bboxes, points, masks]):
        return self.generate(im, *args, **kwargs)

    return self.prompt_inference(im, bboxes, points, labels, masks, multimask_output)

inference_features

inference_features(
    features,
    src_shape,
    dst_shape=None,
    bboxes=None,
    points=None,
    labels=None,
    masks=None,
    multimask_output=False,
)

Perform prompts preprocessing and inference on provided image features using the SAM model.

Parameters:

features (torch.Tensor | Dict[str, Any]): Extracted image features from the SAM/SAM2 model image encoder. Required.
src_shape (Tuple[int, int]): The source shape (height, width) of the input image. Required.
dst_shape (Tuple[int, int] | None): The target shape (height, width) for the prompts. If None, defaults to (imgsz, imgsz). Default: None.
bboxes (np.ndarray | List[List[float]] | None): Bounding boxes in xyxy format with shape (N, 4). Default: None.
points (np.ndarray | List[List[float]] | None): Points indicating object locations with shape (N, 2), in pixels. Default: None.
labels (np.ndarray | List[int] | None): Point prompt labels with shape (N,). Default: None.
masks (List[np.ndarray] | np.ndarray | None): Masks for the objects, where each mask is a 2D array. Default: None.
multimask_output (bool): Flag to return multiple masks for ambiguous prompts. Default: False.

Returns:

pred_masks (torch.Tensor): The output masks in shape (C, H, W), where C is the number of generated masks.
pred_bboxes (torch.Tensor): Bounding boxes for each mask with shape (N, 6), where N is the number of boxes. Each box is in xyxy format with additional columns for score and class.

Notes
  • The features argument is a torch.Tensor of shape (B, C, H, W) when running SAM, or a Dict[str, Any] when running SAM2.
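
A hedged usage sketch, assuming the encoder features were cached earlier via set_image and that the original image is 720x1280 (height, width):

>>> predictor.set_image("image.jpg")  # caches encoder features in predictor.features
>>> masks, boxes = predictor.inference_features(predictor.features, src_shape=(720, 1280), bboxes=[[100, 100, 200, 200]])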
Source code in ultralytics/models/sam/predict.py
@smart_inference_mode()
def inference_features(
    self,
    features,
    src_shape,
    dst_shape=None,
    bboxes=None,
    points=None,
    labels=None,
    masks=None,
    multimask_output=False,
):
    """
    Perform prompts preprocessing and inference on provided image features using the SAM model.

    Args:
        features (torch.Tensor | Dict[str, Any]): Extracted image features from the SAM/SAM2 model image encoder.
        src_shape (Tuple[int, int]): The source shape (height, width) of the input image.
        dst_shape (Tuple[int, int] | None): The target shape (height, width) for the prompts. If None, defaults to (imgsz, imgsz).
        bboxes (np.ndarray | List[List[float]] | None): Bounding boxes in xyxy format with shape (N, 4).
        points (np.ndarray | List[List[float]] | None): Points indicating object locations with shape (N, 2), in pixels.
        labels (np.ndarray | List[int] | None): Point prompt labels with shape (N, ).
        masks (List[np.ndarray] | np.ndarray | None): Masks for the objects, where each mask is a 2D array.
        multimask_output (bool): Flag to return multiple masks for ambiguous prompts.

    Returns:
        pred_masks (torch.Tensor): The output masks in shape (C, H, W), where C is the number of generated masks.
        pred_bboxes (torch.Tensor): Bounding boxes for each mask with shape (N, 6), where N is the number of boxes.
            Each box is in xyxy format with additional columns for score and class.

    Notes:
        - The input features is a torch.Tensor of shape (B, C, H, W) if performing on SAM, or a Dict[str, Any] if performing on SAM2.
    """
    dst_shape = dst_shape or (self.args.imgsz, self.args.imgsz)
    prompts = self._prepare_prompts(dst_shape, src_shape, bboxes, points, labels, masks)
    pred_masks, pred_scores = self._inference_features(features, *prompts, multimask_output)
    if len(pred_masks) == 0:
        pred_masks, pred_bboxes = None, torch.zeros((0, 6), device=pred_masks.device)
    else:
        pred_masks = ops.scale_masks(pred_masks[None].float(), src_shape, padding=False)[0]
        pred_masks = pred_masks > self.model.mask_threshold  # to bool
        pred_bboxes = batched_mask_to_box(pred_masks)
        # NOTE: SAM models do not return cls info. This `cls` here is just a placeholder for consistency.
        cls = torch.arange(len(pred_masks), dtype=torch.int32, device=pred_masks.device)
        pred_bboxes = torch.cat([pred_bboxes, pred_scores[:, None], cls[:, None]], dim=-1)
    return pred_masks, pred_bboxes

postprocess

postprocess(preds, img, orig_imgs)

Post-process SAM's inference outputs to generate object detection masks and bounding boxes.

This method scales masks and boxes to the original image size and applies a threshold to the mask predictions. It leverages SAM's advanced architecture for real-time, promptable segmentation tasks.

Parameters:

preds (tuple): The output from SAM model inference, containing:
    - pred_masks (torch.Tensor): Predicted masks with shape (N, 1, H, W).
    - pred_scores (torch.Tensor): Confidence scores for each mask with shape (N, 1).
    - pred_bboxes (torch.Tensor, optional): Predicted bounding boxes if segment_all is True.
img (torch.Tensor): The processed input image tensor with shape (C, H, W).
orig_imgs (List[np.ndarray] | torch.Tensor): The original, unprocessed images.

Returns:

(List[Results]): List of Results objects containing detection masks, bounding boxes, and other metadata for each processed image.

Examples:

>>> predictor = Predictor()
>>> preds = predictor.inference(img)
>>> results = predictor.postprocess(preds, img, orig_imgs)
Source code in ultralytics/models/sam/predict.py
def postprocess(self, preds, img, orig_imgs):
    """
    Post-process SAM's inference outputs to generate object detection masks and bounding boxes.

    This method scales masks and boxes to the original image size and applies a threshold to the mask
    predictions. It leverages SAM's advanced architecture for real-time, promptable segmentation tasks.

    Args:
        preds (tuple): The output from SAM model inference, containing:
            - pred_masks (torch.Tensor): Predicted masks with shape (N, 1, H, W).
            - pred_scores (torch.Tensor): Confidence scores for each mask with shape (N, 1).
            - pred_bboxes (torch.Tensor, optional): Predicted bounding boxes if segment_all is True.
        img (torch.Tensor): The processed input image tensor with shape (C, H, W).
        orig_imgs (List[np.ndarray] | torch.Tensor): The original, unprocessed images.

    Returns:
        (List[Results]): List of Results objects containing detection masks, bounding boxes, and other
            metadata for each processed image.

    Examples:
        >>> predictor = Predictor()
        >>> preds = predictor.inference(img)
        >>> results = predictor.postprocess(preds, img, orig_imgs)
    """
    # (N, 1, H, W), (N, 1)
    pred_masks, pred_scores = preds[:2]
    pred_bboxes = preds[2] if self.segment_all else None
    names = dict(enumerate(str(i) for i in range(len(pred_masks))))

    if not isinstance(orig_imgs, list):  # input images are a torch.Tensor, not a list
        orig_imgs = ops.convert_torch2numpy_batch(orig_imgs)

    results = []
    for masks, orig_img, img_path in zip([pred_masks], orig_imgs, self.batch[0]):
        if len(masks) == 0:
            masks, pred_bboxes = None, torch.zeros((0, 6), device=pred_masks.device)
        else:
            masks = ops.scale_masks(masks[None].float(), orig_img.shape[:2], padding=False)[0]
            masks = masks > self.model.mask_threshold  # to bool
            if pred_bboxes is not None:
                pred_bboxes = ops.scale_boxes(img.shape[2:], pred_bboxes.float(), orig_img.shape, padding=False)
            else:
                pred_bboxes = batched_mask_to_box(masks)
            # NOTE: SAM models do not return cls info. This `cls` here is just a placeholder for consistency.
            cls = torch.arange(len(pred_masks), dtype=torch.int32, device=pred_masks.device)
            idx = pred_scores > self.args.conf
            pred_bboxes = torch.cat([pred_bboxes, pred_scores[:, None], cls[:, None]], dim=-1)[idx]
            masks = masks[idx]
        results.append(Results(orig_img, path=img_path, names=names, masks=masks, boxes=pred_bboxes))
    # Reset segment-all mode.
    self.segment_all = False
    return results

pre_transform

pre_transform(im)

Perform initial transformations on the input image for preprocessing.

This method applies transformations such as resizing to prepare the image for further preprocessing. Currently, batched inference is not supported; hence the list length should be 1.

Parameters:

im (List[np.ndarray]): List containing a single image in HWC numpy array format.

Returns:

(List[np.ndarray]): List containing the transformed image.

Raises:

AssertionError: If the input list contains more than one image.

Examples:

>>> predictor = Predictor()
>>> image = np.random.rand(480, 640, 3)  # Single HWC image
>>> transformed = predictor.pre_transform([image])
>>> print(len(transformed))
1
Source code in ultralytics/models/sam/predict.py
def pre_transform(self, im):
    """
    Perform initial transformations on the input image for preprocessing.

    This method applies transformations such as resizing to prepare the image for further preprocessing.
    Currently, batched inference is not supported; hence the list length should be 1.

    Args:
        im (List[np.ndarray]): List containing a single image in HWC numpy array format.

    Returns:
        (List[np.ndarray]): List containing the transformed image.

    Raises:
        AssertionError: If the input list contains more than one image.

    Examples:
        >>> predictor = Predictor()
        >>> image = np.random.rand(480, 640, 3)  # Single HWC image
        >>> transformed = predictor.pre_transform([image])
        >>> print(len(transformed))
        1
    """
    assert len(im) == 1, "SAM model does not currently support batched inference"
    letterbox = LetterBox(self.args.imgsz, auto=False, center=False)
    return [letterbox(image=x) for x in im]

preprocess

preprocess(im)

Preprocess the input image for model inference.

This method prepares the input image by applying transformations and normalization. It supports both torch.Tensor and list of np.ndarray as input formats.

Parameters:

im (torch.Tensor | List[np.ndarray]): Input image(s) in BCHW tensor format or list of HWC numpy arrays.

Returns:

(torch.Tensor): The preprocessed image tensor, normalized and converted to the appropriate dtype.

Examples:

>>> predictor = Predictor()
>>> image = torch.rand(1, 3, 640, 640)
>>> preprocessed_image = predictor.preprocess(image)
Source code in ultralytics/models/sam/predict.py
def preprocess(self, im):
    """
    Preprocess the input image for model inference.

    This method prepares the input image by applying transformations and normalization. It supports both
    torch.Tensor and list of np.ndarray as input formats.

    Args:
        im (torch.Tensor | List[np.ndarray]): Input image(s) in BCHW tensor format or list of HWC numpy arrays.

    Returns:
        (torch.Tensor): The preprocessed image tensor, normalized and converted to the appropriate dtype.

    Examples:
        >>> predictor = Predictor()
        >>> image = torch.rand(1, 3, 640, 640)
        >>> preprocessed_image = predictor.preprocess(image)
    """
    if self.im is not None:
        return self.im
    not_tensor = not isinstance(im, torch.Tensor)
    if not_tensor:
        im = np.stack(self.pre_transform(im))
        im = im[..., ::-1].transpose((0, 3, 1, 2))
        im = np.ascontiguousarray(im)
        im = torch.from_numpy(im)

    im = im.to(self.device)
    if not_tensor:
        im = (im - self.mean) / self.std
    im = im.half() if self.model.fp16 else im.float()
    return im

prompt_inference

prompt_inference(
    im,
    bboxes=None,
    points=None,
    labels=None,
    masks=None,
    multimask_output=False,
)

Perform image segmentation inference based on input cues using SAM's specialized architecture.

This internal function leverages the Segment Anything Model (SAM) for prompt-based, real-time segmentation. It processes various input prompts such as bounding boxes, points, and masks to generate segmentation masks.

Parameters:

im (torch.Tensor): Preprocessed input image tensor with shape (N, C, H, W). Required.
bboxes (np.ndarray | List | None): Bounding boxes in XYXY format with shape (N, 4). Default: None.
points (np.ndarray | List | None): Points indicating object locations with shape (N, 2) or (N, num_points, 2), in pixels. Default: None.
labels (np.ndarray | List | None): Point prompt labels with shape (N,) or (N, num_points). 1 for foreground, 0 for background. Default: None.
masks (np.ndarray | None): Low-resolution masks from previous predictions with shape (N, H, W). For SAM, H=W=256. Default: None.
multimask_output (bool): Flag to return multiple masks for ambiguous prompts. Default: False.

Returns:

pred_masks (torch.Tensor): Output masks with shape (C, H, W), where C is the number of generated masks.
pred_scores (torch.Tensor): Quality scores predicted by the model for each mask, with length C.

Examples:

>>> predictor = Predictor()
>>> im = torch.rand(1, 3, 1024, 1024)
>>> bboxes = [[100, 100, 200, 200]]
>>> masks, scores = predictor.prompt_inference(im, bboxes=bboxes)
Source code in ultralytics/models/sam/predict.py
def prompt_inference(self, im, bboxes=None, points=None, labels=None, masks=None, multimask_output=False):
    """
    Perform image segmentation inference based on input cues using SAM's specialized architecture.

    This internal function leverages the Segment Anything Model (SAM) for prompt-based, real-time segmentation.
    It processes various input prompts such as bounding boxes, points, and masks to generate segmentation masks.

    Args:
        im (torch.Tensor): Preprocessed input image tensor with shape (N, C, H, W).
        bboxes (np.ndarray | List | None): Bounding boxes in XYXY format with shape (N, 4).
        points (np.ndarray | List | None): Points indicating object locations with shape (N, 2) or (N, num_points, 2), in pixels.
        labels (np.ndarray | List | None): Point prompt labels with shape (N) or (N, num_points). 1 for foreground, 0 for background.
        masks (np.ndarray | None): Low-res masks from previous predictions with shape (N, H, W). For SAM, H=W=256.
        multimask_output (bool): Flag to return multiple masks for ambiguous prompts.

    Returns:
        pred_masks (torch.Tensor): Output masks with shape (C, H, W), where C is the number of generated masks.
        pred_scores (torch.Tensor): Quality scores predicted by the model for each mask, with length C.

    Examples:
        >>> predictor = Predictor()
        >>> im = torch.rand(1, 3, 1024, 1024)
        >>> bboxes = [[100, 100, 200, 200]]
        >>> masks, scores = predictor.prompt_inference(im, bboxes=bboxes)
    """
    features = self.get_im_features(im) if self.features is None else self.features

    prompts = self._prepare_prompts(im.shape[2:], self.batch[1][0].shape[:2], bboxes, points, labels, masks)
    return self._inference_features(features, *prompts, multimask_output)

remove_small_regions staticmethod

remove_small_regions(masks, min_area=0, nms_thresh=0.7)

Remove small disconnected regions and holes from segmentation masks.

This function performs post-processing on segmentation masks generated by the Segment Anything Model (SAM). It removes small disconnected regions and holes from the input masks, and then performs Non-Maximum Suppression (NMS) to eliminate any newly created duplicate boxes.

Parameters:

masks (torch.Tensor): Segmentation masks to be processed, with shape (N, H, W) where N is the number of masks, H is height, and W is width. Required.
min_area (int): Minimum area threshold for removing disconnected regions and holes. Regions smaller than this will be removed. Default: 0.
nms_thresh (float): IoU threshold for the NMS algorithm to remove duplicate boxes. Default: 0.7.

Returns:

new_masks (torch.Tensor): Processed masks with small regions removed, shape (N, H, W).
keep (List[int]): Indices of remaining masks after NMS, for filtering corresponding boxes.

Examples:

>>> masks = torch.rand(5, 640, 640) > 0.5  # 5 random binary masks
>>> new_masks, keep = remove_small_regions(masks, min_area=100, nms_thresh=0.7)
>>> print(f"Original masks: {masks.shape}, Processed masks: {new_masks.shape}")
>>> print(f"Indices of kept masks: {keep}")
Source code in ultralytics/models/sam/predict.py
@staticmethod
def remove_small_regions(masks, min_area=0, nms_thresh=0.7):
    """
    Remove small disconnected regions and holes from segmentation masks.

    This function performs post-processing on segmentation masks generated by the Segment Anything Model (SAM).
    It removes small disconnected regions and holes from the input masks, and then performs Non-Maximum
    Suppression (NMS) to eliminate any newly created duplicate boxes.

    Args:
        masks (torch.Tensor): Segmentation masks to be processed, with shape (N, H, W) where N is the number of
            masks, H is height, and W is width.
        min_area (int): Minimum area threshold for removing disconnected regions and holes. Regions smaller than
            this will be removed.
        nms_thresh (float): IoU threshold for the NMS algorithm to remove duplicate boxes.

    Returns:
        new_masks (torch.Tensor): Processed masks with small regions removed, shape (N, H, W).
        keep (List[int]): Indices of remaining masks after NMS, for filtering corresponding boxes.

    Examples:
        >>> masks = torch.rand(5, 640, 640) > 0.5  # 5 random binary masks
        >>> new_masks, keep = remove_small_regions(masks, min_area=100, nms_thresh=0.7)
        >>> print(f"Original masks: {masks.shape}, Processed masks: {new_masks.shape}")
        >>> print(f"Indices of kept masks: {keep}")
    """
    import torchvision  # scope for faster 'import ultralytics'

    if len(masks) == 0:
        return masks

    # Filter small disconnected regions and holes
    new_masks = []
    scores = []
    for mask in masks:
        mask = mask.cpu().numpy().astype(np.uint8)
        mask, changed = remove_small_regions(mask, min_area, mode="holes")
        unchanged = not changed
        mask, changed = remove_small_regions(mask, min_area, mode="islands")
        unchanged = unchanged and not changed

        new_masks.append(torch.as_tensor(mask).unsqueeze(0))
        # Give score=0 to changed masks and 1 to unchanged masks so NMS prefers masks not needing postprocessing
        scores.append(float(unchanged))

    # Recalculate boxes and remove any new duplicates
    new_masks = torch.cat(new_masks, dim=0)
    boxes = batched_mask_to_box(new_masks)
    keep = torchvision.ops.nms(boxes.float(), torch.as_tensor(scores), nms_thresh)

    return new_masks[keep].to(device=masks.device, dtype=masks.dtype), keep

reset_image

reset_image()

Reset the current image and its features, clearing them for subsequent inference.
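
A hedged usage sketch: call this after finishing prompt-based inference on one image so the next set_image call does not reuse stale cached features:

>>> predictor.set_image("first.jpg")
>>> results = predictor(bboxes=[[10, 10, 100, 100]])
>>> predictor.reset_image()  # clear the cached image and features before the next image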

Source code in ultralytics/models/sam/predict.py
def reset_image(self):
    """Reset the current image and its features, clearing them for subsequent inference."""
    self.im = None
    self.features = None

set_image

set_image(image)

Preprocess and set a single image for inference.

This method prepares the model for inference on a single image by setting up the model if not already initialized, configuring the data source, and preprocessing the image for feature extraction. It ensures that only one image is set at a time and extracts image features for subsequent use.

Parameters:

image (str | np.ndarray): Path to the image file as a string, or a numpy array representing an image read by cv2.

Examples:

>>> predictor = Predictor()
>>> predictor.set_image("path/to/image.jpg")
>>> predictor.set_image(cv2.imread("path/to/image.jpg"))

Raises:

AssertionError: If more than one image is attempted to be set.

Notes
  • This method should be called before performing inference on a new image.
  • The extracted features are stored in the self.features attribute for later use.
Source code in ultralytics/models/sam/predict.py
def set_image(self, image):
    """
    Preprocess and set a single image for inference.

    This method prepares the model for inference on a single image by setting up the model if not already
    initialized, configuring the data source, and preprocessing the image for feature extraction. It
    ensures that only one image is set at a time and extracts image features for subsequent use.

    Args:
        image (str | np.ndarray): Path to the image file as a string, or a numpy array representing
            an image read by cv2.

    Examples:
        >>> predictor = Predictor()
        >>> predictor.set_image("path/to/image.jpg")
        >>> predictor.set_image(cv2.imread("path/to/image.jpg"))

    Raises:
        AssertionError: If more than one image is attempted to be set.

    Notes:
        - This method should be called before performing inference on a new image.
        - The extracted features are stored in the `self.features` attribute for later use.
    """
    if self.model is None:
        self.setup_model()
    self.setup_source(image)
    assert len(self.dataset) == 1, "`set_image` only supports setting one image!"
    for batch in self.dataset:
        im = self.preprocess(batch[1])
        self.features = self.get_im_features(im)
        break

set_prompts

set_prompts(prompts)

Set prompts for subsequent inference operations.
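
The stored dictionary is consumed on the next inference call; its keys mirror the prompt arguments (bboxes, points, labels, masks). A minimal sketch, assuming a configured predictor:

>>> predictor.set_prompts({"bboxes": [[50, 50, 200, 200]]})
>>> results = predictor("image.jpg")  # the stored prompts are popped and applied during this call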

Source code in ultralytics/models/sam/predict.py
def set_prompts(self, prompts):
    """Set prompts for subsequent inference operations."""
    self.prompts = prompts

setup_model

setup_model(model=None, verbose=True)

Initialize the Segment Anything Model (SAM) for inference.

This method sets up the SAM model by allocating it to the appropriate device and initializing the necessary parameters for image normalization and other Ultralytics compatibility settings.

Parameters:

model (torch.nn.Module | None): A pretrained SAM model. If None, a new model is built based on the config. Default: None.
verbose (bool): If True, prints selected device information. Default: True.

Examples:

>>> predictor = Predictor()
>>> predictor.setup_model(model=sam_model, verbose=True)
Source code in ultralytics/models/sam/predict.py
def setup_model(self, model=None, verbose=True):
    """
    Initialize the Segment Anything Model (SAM) for inference.

    This method sets up the SAM model by allocating it to the appropriate device and initializing the necessary
    parameters for image normalization and other Ultralytics compatibility settings.

    Args:
        model (torch.nn.Module | None): A pretrained SAM model. If None, a new model is built based on config.
        verbose (bool): If True, prints selected device information.

    Examples:
        >>> predictor = Predictor()
        >>> predictor.setup_model(model=sam_model, verbose=True)
    """
    device = select_device(self.args.device, verbose=verbose)
    if model is None:
        model = self.get_model()
    model.eval()
    model = model.to(device)
    self.model = model.half() if self.args.half else model.float()
    self.device = device
    self.mean = torch.tensor([123.675, 116.28, 103.53]).view(-1, 1, 1).to(device)
    self.std = torch.tensor([58.395, 57.12, 57.375]).view(-1, 1, 1).to(device)

    # Ultralytics compatibility settings
    self.model.pt = False
    self.model.triton = False
    self.model.stride = 32
    self.model.fp16 = self.args.half
    self.done_warmup = True
    self.torch_dtype = torch.float16 if self.model.fp16 else torch.float32

setup_source

setup_source(source)

Set up the data source for inference.

This method configures the data source from which images will be fetched for inference. It supports various input types such as image files, directories, video files, and other compatible data sources.

Parameters:

source (str | Path | None): The path or identifier for the image data source. Can be a file path, directory path, URL, or other supported source type.

Examples:

>>> predictor = Predictor()
>>> predictor.setup_source("path/to/images")
>>> predictor.setup_source("video.mp4")
>>> predictor.setup_source(None)  # Uses default source if available
Notes
  • If source is None, the method may use a default source if configured.
  • The method adapts to different source types and prepares them for subsequent inference steps.
  • Supported source types may include local files, directories, URLs, and video streams.
Source code in ultralytics/models/sam/predict.py
def setup_source(self, source):
    """
    Set up the data source for inference.

    This method configures the data source from which images will be fetched for inference. It supports
    various input types such as image files, directories, video files, and other compatible data sources.

    Args:
        source (str | Path | None): The path or identifier for the image data source. Can be a file path,
            directory path, URL, or other supported source types.

    Examples:
        >>> predictor = Predictor()
        >>> predictor.setup_source("path/to/images")
        >>> predictor.setup_source("video.mp4")
        >>> predictor.setup_source(None)  # Uses default source if available

    Notes:
        - If source is None, the method may use a default source if configured.
        - The method adapts to different source types and prepares them for subsequent inference steps.
        - Supported source types may include local files, directories, URLs, and video streams.
    """
    if source is not None:
        super().setup_source(source)





ultralytics.models.sam.predict.SAM2Predictor

SAM2Predictor(cfg=DEFAULT_CFG, overrides=None, _callbacks=None)

Bases: Predictor

SAM2Predictor class for advanced image segmentation using Segment Anything Model 2 architecture.

This class extends the base Predictor class to implement SAM2-specific functionality for image segmentation tasks. It provides methods for model initialization, feature extraction, and prompt-based inference.

Attributes:

_bb_feat_sizes (List[tuple]): Feature sizes for different backbone levels.
model (torch.nn.Module): The loaded SAM2 model.
device (torch.device): The device (CPU or GPU) on which the model is loaded.
features (dict): Cached image features for efficient inference.
segment_all (bool): Flag to indicate if all segments should be predicted.
prompts (Dict[str, Any]): Dictionary to store various types of prompts for inference.

Methods:

get_model: Retrieve and initialize the SAM2 model.
prompt_inference: Perform image segmentation inference based on various prompts.
set_image: Preprocess and set a single image for inference.
get_im_features: Extract and process image features using SAM2's image encoder.
Examples:

>>> predictor = SAM2Predictor(cfg)
>>> predictor.set_image("path/to/image.jpg")
>>> bboxes = [[100, 100, 200, 200]]
>>> result = predictor(bboxes=bboxes)[0]
>>> print(f"Predicted {len(result.masks)} masks with average score {result.boxes.conf.mean():.2f}")
Source code in ultralytics/models/sam/predict.py
def __init__(self, cfg=DEFAULT_CFG, overrides=None, _callbacks=None):
    """
    Initialize the Predictor with configuration, overrides, and callbacks.

    Sets up the Predictor object for SAM (Segment Anything Model) and applies any configuration overrides or
    callbacks provided. Initializes task-specific settings for SAM, such as retina_masks being set to True
    for optimal results.

    Args:
        cfg (dict): Configuration dictionary containing default settings.
        overrides (dict | None): Dictionary of values to override default configuration.
        _callbacks (dict | None): Dictionary of callback functions to customize behavior.

    Examples:
        >>> predictor_example = Predictor(cfg=DEFAULT_CFG)
        >>> predictor_example_with_imgsz = Predictor(overrides={"imgsz": 640})
        >>> predictor_example_with_callback = Predictor(_callbacks={"on_predict_start": custom_callback})
    """
    if overrides is None:
        overrides = {}
    overrides.update(dict(task="segment", mode="predict", batch=1))
    super().__init__(cfg, overrides, _callbacks)
    self.args.retina_masks = True
    self.im = None
    self.features = None
    self.prompts = {}
    self.segment_all = False

get_im_features

get_im_features(im)

Extract image features from the SAM image encoder for subsequent processing.

Source code in ultralytics/models/sam/predict.py
def get_im_features(self, im):
    """Extract image features from the SAM image encoder for subsequent processing."""
    assert isinstance(self.imgsz, (tuple, list)) and self.imgsz[0] == self.imgsz[1], (
        f"SAM 2 models only support square image size, but got {self.imgsz}."
    )
    self.model.set_imgsz(self.imgsz)
    self._bb_feat_sizes = [[x // (4 * i) for x in self.imgsz] for i in [1, 2, 4]]

    backbone_out = self.model.forward_image(im)
    _, vision_feats, _, _ = self.model._prepare_backbone_features(backbone_out)
    if self.model.directly_add_no_mem_embed:
        vision_feats[-1] = vision_feats[-1] + self.model.no_mem_embed
    feats = [
        feat.permute(1, 2, 0).view(1, -1, *feat_size) for feat, feat_size in zip(vision_feats, self._bb_feat_sizes)
    ]
    return {"image_embed": feats[-1], "high_res_feats": feats[:-1]}

get_model

get_model()

Retrieve and initialize the Segment Anything Model 2 (SAM2) for image segmentation tasks.

Source code in ultralytics/models/sam/predict.py
def get_model(self):
    """Retrieve and initialize the Segment Anything Model 2 (SAM2) for image segmentation tasks."""
    from .build import build_sam  # slow import

    return build_sam(self.args.model)

set_image

set_image(image)

Preprocess and set a single image for inference using the SAM2 model.

This method initializes the model if not already done, configures the data source to the specified image, and preprocesses the image for feature extraction. It supports setting only one image at a time.

Parameters:

image (str | np.ndarray): Path to the image file as a string, or a numpy array representing the image.

Examples:

>>> predictor = SAM2Predictor()
>>> predictor.set_image("path/to/image.jpg")
>>> predictor.set_image(np.array([...]))  # Using a numpy array

Raises:

AssertionError: If more than one image is attempted to be set.

Notes
  • This method must be called before performing any inference on a new image.
  • The method caches the extracted features for efficient subsequent inferences on the same image.
  • Only one image can be set at a time. To process multiple images, call this method for each new image.
Source code in ultralytics/models/sam/predict.py
def set_image(self, image):
    """
    Preprocess and set a single image for inference using the SAM2 model.

    This method initializes the model if not already done, configures the data source to the specified image,
    and preprocesses the image for feature extraction. It supports setting only one image at a time.

    Args:
        image (str | np.ndarray): Path to the image file as a string, or a numpy array representing the image.

    Examples:
        >>> predictor = SAM2Predictor()
        >>> predictor.set_image("path/to/image.jpg")
        >>> predictor.set_image(np.array([...]))  # Using a numpy array

    Raises:
        AssertionError: If more than one image is attempted to be set.

    Notes:
        - This method must be called before performing any inference on a new image.
        - The method caches the extracted features for efficient subsequent inferences on the same image.
        - Only one image can be set at a time. To process multiple images, call this method for each new image.
    """
    if self.model is None:
        self.setup_model(model=None)
    self.setup_source(image)
    assert len(self.dataset) == 1, "`set_image` only supports setting one image!"
    for batch in self.dataset:
        im = self.preprocess(batch[1])
        self.features = self.get_im_features(im)
        break





ultralytics.models.sam.predict.SAM2VideoPredictor

SAM2VideoPredictor(cfg=DEFAULT_CFG, overrides=None, _callbacks=None)

Bases: SAM2Predictor

SAM2VideoPredictor to handle user interactions with videos and manage inference states.

This class extends the functionality of SAM2Predictor to support video processing and maintains the state of inference operations. It includes configurations for managing non-overlapping masks, clearing memory for non-conditional inputs, and setting up callbacks for prediction events.

Attributes:

inference_state (dict): A dictionary to store the current state of inference operations.
non_overlap_masks (bool): A flag indicating whether masks should be non-overlapping.
clear_non_cond_mem_around_input (bool): A flag to control clearing non-conditional memory around inputs.
clear_non_cond_mem_for_multi_obj (bool): A flag to control clearing non-conditional memory for multi-object scenarios.
callbacks (dict): A dictionary of callbacks for various prediction lifecycle events.

Methods:

get_model: Retrieve and configure the model with binarization enabled.
inference: Perform image segmentation inference based on the given input cues.
postprocess: Post-process the predictions to apply non-overlapping constraints if required.
add_new_prompts: Add new points or masks to a specific frame for a given object ID.
propagate_in_video_preflight: Prepare inference_state and consolidate temporary outputs before tracking.
init_state: Initialize an inference state for the predictor.
get_im_features: Extract and process image features using SAM2's image encoder for subsequent segmentation tasks.

Examples:

>>> predictor = SAM2VideoPredictor(cfg=DEFAULT_CFG)
>>> predictor.set_image("path/to/video_frame.jpg")
>>> bboxes = [[100, 100, 200, 200]]
>>> results = predictor(bboxes=bboxes)
Note

The fill_hole_area attribute is defined but not used in the current implementation.

This constructor initializes the SAM2VideoPredictor with a given configuration, applies any specified overrides, and sets up the inference state along with certain flags that control the behavior of the predictor.

Parameters:

cfg (dict): Configuration dictionary containing default settings. Default: DEFAULT_CFG.
overrides (dict | None): Dictionary of values to override default configuration. Default: None.
_callbacks (dict | None): Dictionary of callback functions to customize behavior. Default: None.

Examples:

>>> predictor = SAM2VideoPredictor(cfg=DEFAULT_CFG)
>>> predictor_example_with_imgsz = SAM2VideoPredictor(overrides={"imgsz": 640})
>>> predictor_example_with_callback = SAM2VideoPredictor(_callbacks={"on_predict_start": custom_callback})
Source code in ultralytics/models/sam/predict.py
def __init__(self, cfg=DEFAULT_CFG, overrides=None, _callbacks=None):
    """
    Initialize the predictor with configuration and optional overrides.

    This constructor initializes the SAM2VideoPredictor with a given configuration, applies any
    specified overrides, and sets up the inference state along with certain flags
    that control the behavior of the predictor.

    Args:
        cfg (dict): Configuration dictionary containing default settings.
        overrides (dict | None): Dictionary of values to override default configuration.
        _callbacks (dict | None): Dictionary of callback functions to customize behavior.

    Examples:
        >>> predictor = SAM2VideoPredictor(cfg=DEFAULT_CFG)
        >>> predictor_example_with_imgsz = SAM2VideoPredictor(overrides={"imgsz": 640})
        >>> predictor_example_with_callback = SAM2VideoPredictor(_callbacks={"on_predict_start": custom_callback})
    """
    super().__init__(cfg, overrides, _callbacks)
    self.inference_state = {}
    self.non_overlap_masks = True
    self.clear_non_cond_mem_around_input = False
    self.clear_non_cond_mem_for_multi_obj = False
    self.callbacks["on_predict_start"].append(self.init_state)

add_new_prompts

add_new_prompts(obj_id, points=None, labels=None, masks=None, frame_idx=0)

Add new points or masks to a specific frame for a given object ID.

This method updates the inference state with new prompts (points or masks) for a specified object and frame index. It ensures that the prompts are either points or masks, but not both, and updates the internal state accordingly. It also handles the generation of new segmentations based on the provided prompts and the existing state.

Parameters:

obj_id (int): The ID of the object to which the prompts are associated. Required.
points (torch.Tensor, optional): The coordinates of the points of interest. Default: None.
labels (torch.Tensor, optional): The labels corresponding to the points. Default: None.
masks (torch.Tensor, optional): Binary masks for the object. Default: None.
frame_idx (int, optional): The index of the frame to which the prompts are applied. Default: 0.

Returns:

pred_masks (torch.Tensor): The flattened predicted masks.
pred_scores (torch.Tensor): A tensor of ones indicating the number of objects.

Raises:

AssertionError: If both masks and points are provided, or if neither is provided.

Note
  • Only one type of prompt (either points or masks) can be added per call.
  • If the frame is being tracked for the first time, it is treated as an initial conditioning frame.
  • The method handles the consolidation of outputs and resizing of masks to the original video resolution.
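
A hedged sketch of adding a single point prompt for one object on the first frame. The tensor shapes follow SAM 2's point_coords/point_labels convention of (batch, num_points, 2) and (batch, num_points); they are assumptions for illustration, not taken from the source:

>>> points = torch.tensor([[[150.0, 200.0]]])  # assumed shape (1, 1, 2), pixel coordinates
>>> labels = torch.tensor([[1]])  # assumed shape (1, 1); 1 = foreground
>>> masks, scores = predictor.add_new_prompts(obj_id=0, points=points, labels=labels, frame_idx=0)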
Source code in ultralytics/models/sam/predict.py
@smart_inference_mode()
def add_new_prompts(
    self,
    obj_id,
    points=None,
    labels=None,
    masks=None,
    frame_idx=0,
):
    """
    Add new points or masks to a specific frame for a given object ID.

    This method updates the inference state with new prompts (points or masks) for a specified
    object and frame index. It ensures that the prompts are either points or masks, but not both,
    and updates the internal state accordingly. It also handles the generation of new segmentations
    based on the provided prompts and the existing state.

    Args:
        obj_id (int): The ID of the object to which the prompts are associated.
        points (torch.Tensor, optional): The coordinates of the points of interest.
        labels (torch.Tensor, optional): The labels corresponding to the points.
        masks (torch.Tensor, optional): Binary masks for the object.
        frame_idx (int, optional): The index of the frame to which the prompts are applied.

    Returns:
        pred_masks (torch.Tensor): The flattened predicted masks.
        pred_scores (torch.Tensor): A tensor of ones indicating the number of objects.

    Raises:
        AssertionError: If both `masks` and `points` are provided, or neither is provided.

    Note:
        - Only one type of prompt (either points or masks) can be added per call.
        - If the frame is being tracked for the first time, it is treated as an initial conditioning frame.
        - The method handles the consolidation of outputs and resizing of masks to the original video resolution.
    """
    assert (masks is None) ^ (points is None), "'masks' and 'points' prompts are not compatible with each other."
    obj_idx = self._obj_id_to_idx(obj_id)

    point_inputs = None
    pop_key = "point_inputs_per_obj"
    if points is not None:
        point_inputs = {"point_coords": points, "point_labels": labels}
        self.inference_state["point_inputs_per_obj"][obj_idx][frame_idx] = point_inputs
        pop_key = "mask_inputs_per_obj"
    self.inference_state["mask_inputs_per_obj"][obj_idx][frame_idx] = masks
    self.inference_state[pop_key][obj_idx].pop(frame_idx, None)
    # If this frame hasn't been tracked before, we treat it as an initial conditioning
    # frame, meaning that the input points are used to generate segments on this frame without
    # using any memory from other frames, like in SAM. Otherwise (if it has been tracked),
    # the input points will be used to correct the already tracked masks.
    is_init_cond_frame = frame_idx not in self.inference_state["frames_already_tracked"]
    obj_output_dict = self.inference_state["output_dict_per_obj"][obj_idx]
    obj_temp_output_dict = self.inference_state["temp_output_dict_per_obj"][obj_idx]
    # Add a frame to conditioning output if it's an initial conditioning frame or
    # if the model sees all frames receiving clicks/mask as conditioning frames.
    is_cond = is_init_cond_frame or self.model.add_all_frames_to_correct_as_cond
    storage_key = "cond_frame_outputs" if is_cond else "non_cond_frame_outputs"

    # Get any previously predicted mask logits on this object and feed it along with
    # the new clicks into the SAM mask decoder.
    prev_sam_mask_logits = None
    # lookup temporary output dict first, which contains the most recent output
    # (if not found, then lookup conditioning and non-conditioning frame output)
    if point_inputs is not None:
        prev_out = (
            obj_temp_output_dict[storage_key].get(frame_idx)
            or obj_output_dict["cond_frame_outputs"].get(frame_idx)
            or obj_output_dict["non_cond_frame_outputs"].get(frame_idx)
        )

        if prev_out is not None and prev_out.get("pred_masks") is not None:
            prev_sam_mask_logits = prev_out["pred_masks"].to(device=self.device, non_blocking=True)
            # Clamp the scale of prev_sam_mask_logits to avoid rare numerical issues.
            prev_sam_mask_logits.clamp_(-32.0, 32.0)
    current_out = self._run_single_frame_inference(
        output_dict=obj_output_dict,  # run on the slice of a single object
        frame_idx=frame_idx,
        batch_size=1,  # run on the slice of a single object
        is_init_cond_frame=is_init_cond_frame,
        point_inputs=point_inputs,
        mask_inputs=masks,
        reverse=False,
        # Skip the memory encoder when adding clicks or mask. We execute the memory encoder
        # at the beginning of `propagate_in_video` (after user finalize their clicks). This
        # allows us to enforce non-overlapping constraints on all objects before encoding
        # them into memory.
        run_mem_encoder=False,
        prev_sam_mask_logits=prev_sam_mask_logits,
    )
    # Add the output to the output dict (to be used as future memory)
    obj_temp_output_dict[storage_key][frame_idx] = current_out

    # Resize the output mask to the original video resolution
    consolidated_out = self._consolidate_temp_output_across_obj(
        frame_idx,
        is_cond=is_cond,
        run_mem_encoder=False,
    )
    pred_masks = consolidated_out["pred_masks"].flatten(0, 1)
    return pred_masks.flatten(0, 1), torch.ones(1, dtype=pred_masks.dtype, device=pred_masks.device)

get_im_features

get_im_features(im, batch=1)

Extract and process image features using SAM2's image encoder for subsequent segmentation tasks.

Parameters:

Name Type Description Default
im Tensor

The input image tensor.

required
batch int

The batch size for expanding features if there are multiple prompts.

1

Returns:

Name Type Description
vis_feats Tensor

The visual features extracted from the image.

vis_pos_embed Tensor

The positional embeddings for the visual features.

feat_sizes List[tuple]

A list containing the sizes of the extracted features.

Note
  • If batch is greater than 1, the features are expanded to fit the batch size.
  • The method leverages the model's _prepare_backbone_features method to prepare the backbone features.
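
As an illustrative sketch (assuming im is a preprocessed image tensor already on the predictor's device), the returned values can be unpacked directly:

>>> vis_feats, vis_pos_embed, feat_sizes = predictor.get_im_features(im, batch=2)  # features expanded for 2 prompts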
Source code in ultralytics/models/sam/predict.py
def get_im_features(self, im, batch=1):
    """
    Extract and process image features using SAM2's image encoder for subsequent segmentation tasks.

    Args:
        im (torch.Tensor): The input image tensor.
        batch (int, optional): The batch size for expanding features if there are multiple prompts.

    Returns:
        vis_feats (torch.Tensor): The visual features extracted from the image.
        vis_pos_embed (torch.Tensor): The positional embeddings for the visual features.
        feat_sizes (List[tuple]): A list containing the sizes of the extracted features.

    Note:
        - If `batch` is greater than 1, the features are expanded to fit the batch size.
        - The method leverages the model's `_prepare_backbone_features` method to prepare the backbone features.
    """
    self.model.set_imgsz(self.imgsz)
    backbone_out = self.model.forward_image(im)
    if batch > 1:  # expand features if there's more than one prompt
        for i, feat in enumerate(backbone_out["backbone_fpn"]):
            backbone_out["backbone_fpn"][i] = feat.expand(batch, -1, -1, -1)
        for i, pos in enumerate(backbone_out["vision_pos_enc"]):
            pos = pos.expand(batch, -1, -1, -1)
            backbone_out["vision_pos_enc"][i] = pos
    _, vis_feats, vis_pos_embed, feat_sizes = self.model._prepare_backbone_features(backbone_out)
    return vis_feats, vis_pos_embed, feat_sizes

get_model

get_model()

Retrieve and configure the model with binarization enabled.

Note

This method overrides the base class implementation to set the binarize flag to True.

Source code in ultralytics/models/sam/predict.py
def get_model(self):
    """
    Retrieve and configure the model with binarization enabled.

    Note:
        This method overrides the base class implementation to set the binarize flag to True.
    """
    model = super().get_model()
    model.set_binarize(True)
    return model

inference

inference(im, bboxes=None, points=None, labels=None, masks=None)

Perform image segmentation inference based on the given input cues, using the currently loaded image. This method leverages SAM's (Segment Anything Model) architecture, consisting of an image encoder, a prompt encoder, and a mask decoder, for real-time, promptable segmentation.

Parameters:

Name Type Description Default
im Tensor

The preprocessed input image in tensor format, with shape (N, C, H, W).

required
bboxes ndarray | List

Bounding boxes with shape (N, 4), in XYXY format.

None
points ndarray | List

Points indicating object locations with shape (N, 2), in pixels.

None
labels ndarray | List

Labels for point prompts, shape (N, ). 1 = foreground, 0 = background.

None
masks ndarray

Low-resolution masks from previous predictions shape (N,H,W). For SAM H=W=256.

None

Returns:

Name Type Description
pred_masks Tensor

The output masks in shape CxHxW, where C is the number of generated masks.

pred_scores Tensor

An array of length C containing quality scores predicted by the model for each mask.
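
In practice this method is usually invoked indirectly by calling the predictor on a video source with prompts. A hedged end-to-end sketch (the weights filename, video path, and point coordinates are illustrative):

>>> from ultralytics.models.sam import SAM2VideoPredictor
>>> overrides = dict(conf=0.25, task="segment", mode="predict", imgsz=1024, model="sam2_b.pt")
>>> predictor = SAM2VideoPredictor(overrides=overrides)
>>> results = predictor(source="video.mp4", points=[920, 470], labels=[1])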

Source code in ultralytics/models/sam/predict.py
def inference(self, im, bboxes=None, points=None, labels=None, masks=None):
    """
    Perform image segmentation inference based on the given input cues, using the currently loaded image. This
    method leverages SAM's (Segment Anything Model) architecture consisting of image encoder, prompt encoder, and
    mask decoder for real-time and promptable segmentation tasks.

    Args:
        im (torch.Tensor): The preprocessed input image in tensor format, with shape (N, C, H, W).
        bboxes (np.ndarray | List, optional): Bounding boxes with shape (N, 4), in XYXY format.
        points (np.ndarray | List, optional): Points indicating object locations with shape (N, 2), in pixels.
        labels (np.ndarray | List, optional): Labels for point prompts, shape (N, ). 1 = foreground, 0 = background.
        masks (np.ndarray, optional): Low-resolution masks from previous predictions shape (N,H,W). For SAM H=W=256.

    Returns:
        pred_masks (torch.Tensor): The output masks in shape CxHxW, where C is the number of generated masks.
        pred_scores (torch.Tensor): An array of length C containing quality scores predicted by the model for each mask.
    """
    # Override prompts if any stored in self.prompts
    bboxes = self.prompts.pop("bboxes", bboxes)
    points = self.prompts.pop("points", points)
    masks = self.prompts.pop("masks", masks)

    frame = self.dataset.frame
    self.inference_state["im"] = im
    output_dict = self.inference_state["output_dict"]
    if len(output_dict["cond_frame_outputs"]) == 0:  # initialize prompts
        points, labels, masks = self._prepare_prompts(
            im.shape[2:], self.batch[1][0].shape[:2], bboxes, points, labels, masks
        )
        if points is not None:
            for i in range(len(points)):
                self.add_new_prompts(obj_id=i, points=points[[i]], labels=labels[[i]], frame_idx=frame)
        elif masks is not None:
            for i in range(len(masks)):
                self.add_new_prompts(obj_id=i, masks=masks[[i]], frame_idx=frame)
    self.propagate_in_video_preflight()

    consolidated_frame_inds = self.inference_state["consolidated_frame_inds"]
    batch_size = len(self.inference_state["obj_idx_to_id"])
    if len(output_dict["cond_frame_outputs"]) == 0:
        raise RuntimeError("No points are provided; please add points first")

    if frame in consolidated_frame_inds["cond_frame_outputs"]:
        storage_key = "cond_frame_outputs"
        current_out = output_dict[storage_key][frame]
        if self.clear_non_cond_mem_around_input and (self.clear_non_cond_mem_for_multi_obj or batch_size <= 1):
            # clear non-conditioning memory of the surrounding frames
            self._clear_non_cond_mem_around_input(frame)
    elif frame in consolidated_frame_inds["non_cond_frame_outputs"]:
        storage_key = "non_cond_frame_outputs"
        current_out = output_dict[storage_key][frame]
    else:
        storage_key = "non_cond_frame_outputs"
        current_out = self._run_single_frame_inference(
            output_dict=output_dict,
            frame_idx=frame,
            batch_size=batch_size,
            is_init_cond_frame=False,
            point_inputs=None,
            mask_inputs=None,
            reverse=False,
            run_mem_encoder=True,
        )
        output_dict[storage_key][frame] = current_out
    # Create slices of per-object outputs for subsequent interaction with each
    # individual object after tracking.
    self._add_output_per_object(frame, current_out, storage_key)
    self.inference_state["frames_already_tracked"].append(frame)
    pred_masks = current_out["pred_masks"].flatten(0, 1)
    pred_masks = pred_masks[(pred_masks > self.model.mask_threshold).sum((1, 2)) > 0]  # filter blank masks

    return pred_masks, torch.ones(len(pred_masks), dtype=pred_masks.dtype, device=pred_masks.device)

init_state staticmethod

init_state(predictor)

Initialize an inference state for the predictor.

This function sets up the initial state required for performing inference on video data. It includes initializing various dictionaries and ordered dictionaries that will store inputs, outputs, and other metadata relevant to the tracking process.

Parameters:

Name Type Description Default
predictor SAM2VideoPredictor

The predictor object for which to initialize the state.

required
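
Because init_state is registered as an on_predict_start callback in the constructor, it normally runs automatically when prediction begins; calling it on an already-initialized predictor returns immediately. A minimal sketch (assuming the predictor's dataset has been set up in video mode, which the method asserts):

>>> SAM2VideoPredictor.init_state(predictor)  # populates predictor.inference_state on first call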
Source code in ultralytics/models/sam/predict.py
@staticmethod
def init_state(predictor):
    """
    Initialize an inference state for the predictor.

    This function sets up the initial state required for performing inference on video data.
    It includes initializing various dictionaries and ordered dictionaries that will store
    inputs, outputs, and other metadata relevant to the tracking process.

    Args:
        predictor (SAM2VideoPredictor): The predictor object for which to initialize the state.
    """
    if len(predictor.inference_state) > 0:  # means initialized
        return
    assert predictor.dataset is not None
    assert predictor.dataset.mode == "video"

    inference_state = {
        "num_frames": predictor.dataset.frames,
        "point_inputs_per_obj": {},  # inputs points on each frame
        "mask_inputs_per_obj": {},  # inputs mask on each frame
        "constants": {},  # values that don't change across frames (so we only need to hold one copy of them)
        # mapping between client-side object id and model-side object index
        "obj_id_to_idx": OrderedDict(),
        "obj_idx_to_id": OrderedDict(),
        "obj_ids": [],
        # A storage to hold the model's tracking results and states on each frame
        "output_dict": {
            "cond_frame_outputs": {},  # dict containing {frame_idx: <out>}
            "non_cond_frame_outputs": {},  # dict containing {frame_idx: <out>}
        },
        # Slice (view) of each object tracking results, sharing the same memory with "output_dict"
        "output_dict_per_obj": {},
        # A temporary storage to hold new outputs when the user interacts with a frame
        # to add clicks or a mask (it's merged into "output_dict" before propagation starts)
        "temp_output_dict_per_obj": {},
        # Frames that already hold consolidated outputs from click or mask inputs
        # (we directly use their consolidated outputs during tracking)
        "consolidated_frame_inds": {
            "cond_frame_outputs": set(),  # set containing frame indices
            "non_cond_frame_outputs": set(),  # set containing frame indices
        },
        # metadata for each tracking frame (e.g. which direction it's tracked)
        "tracking_has_started": False,
        "frames_already_tracked": [],
    }
    predictor.inference_state = inference_state

postprocess

postprocess(preds, img, orig_imgs)

Post-process the predictions to apply non-overlapping constraints if required.

This method extends the post-processing functionality by applying non-overlapping constraints to the predicted masks if the non_overlap_masks flag is set to True. This ensures that the masks do not overlap, which can be useful for certain applications.

Parameters:

Name Type Description Default
preds Tuple[Tensor, Tensor]

The predicted masks and scores from the model.

required
img Tensor

The processed image tensor.

required
orig_imgs List[ndarray]

The original images before processing.

required

Returns:

Type Description
list

The post-processed predictions.

Note

If non_overlap_masks is True, the method applies constraints to ensure non-overlapping masks.
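
Conceptually, the non-overlapping constraint keeps, at each pixel, only the highest-scoring mask. The generic sketch below illustrates the idea on a stack of mask logits; it is not the library's internal _apply_non_overlapping_constraints implementation:

>>> import torch
>>> logits = torch.randn(3, 1, 256, 256)  # (num_objects, 1, H, W) mask logits
>>> keep = logits.argmax(dim=0, keepdim=True) == torch.arange(3).view(-1, 1, 1, 1)  # winner per pixel
>>> non_overlapping = torch.where(keep, logits, logits.clamp(max=-10.0))  # suppress losing masks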

Source code in ultralytics/models/sam/predict.py
def postprocess(self, preds, img, orig_imgs):
    """
    Post-process the predictions to apply non-overlapping constraints if required.

    This method extends the post-processing functionality by applying non-overlapping constraints
    to the predicted masks if the `non_overlap_masks` flag is set to True. This ensures that
    the masks do not overlap, which can be useful for certain applications.

    Args:
        preds (Tuple[torch.Tensor, torch.Tensor]): The predicted masks and scores from the model.
        img (torch.Tensor): The processed image tensor.
        orig_imgs (List[np.ndarray]): The original images before processing.

    Returns:
        (list): The post-processed predictions.

    Note:
        If `non_overlap_masks` is True, the method applies constraints to ensure non-overlapping masks.
    """
    results = super().postprocess(preds, img, orig_imgs)
    if self.non_overlap_masks:
        for result in results:
            if result.masks is None or len(result.masks) == 0:
                continue
            result.masks.data = self.model._apply_non_overlapping_constraints(result.masks.data.unsqueeze(0))[0]
    return results

propagate_in_video_preflight

propagate_in_video_preflight()

Prepare inference_state and consolidate temporary outputs before tracking.

This method marks the start of tracking, disallowing the addition of new objects until the session is reset. It consolidates temporary outputs from temp_output_dict_per_obj and merges them into output_dict. Additionally, it clears non-conditioning memory around input frames and ensures that the state is consistent with the provided inputs.

Source code in ultralytics/models/sam/predict.py
@smart_inference_mode()
def propagate_in_video_preflight(self):
    """
    Prepare inference_state and consolidate temporary outputs before tracking.

    This method marks the start of tracking, disallowing the addition of new objects until the session is reset.
    It consolidates temporary outputs from `temp_output_dict_per_obj` and merges them into `output_dict`.
    Additionally, it clears non-conditioning memory around input frames and ensures that the state is consistent
    with the provided inputs.
    """
    # Tracking has started and we don't allow adding new objects until session is reset.
    self.inference_state["tracking_has_started"] = True
    batch_size = len(self.inference_state["obj_idx_to_id"])

    # Consolidate per-object temporary outputs in "temp_output_dict_per_obj" and
    # add them into "output_dict".
    temp_output_dict_per_obj = self.inference_state["temp_output_dict_per_obj"]
    output_dict = self.inference_state["output_dict"]
    # "consolidated_frame_inds" contains indices of those frames where consolidated
    # temporary outputs have been added (either in this call or any previous calls
    # to `propagate_in_video_preflight`).
    consolidated_frame_inds = self.inference_state["consolidated_frame_inds"]
    for is_cond in {False, True}:
        # Separately consolidate conditioning and non-conditioning temp outputs
        storage_key = "cond_frame_outputs" if is_cond else "non_cond_frame_outputs"
        # Find all the frames that contain temporary outputs for any objects
        # (these should be the frames that have just received clicks for mask inputs
        # via `add_new_points` or `add_new_mask`)
        temp_frame_inds = set()
        for obj_temp_output_dict in temp_output_dict_per_obj.values():
            temp_frame_inds.update(obj_temp_output_dict[storage_key].keys())
        consolidated_frame_inds[storage_key].update(temp_frame_inds)
        # consolidate the temporary output across all objects on this frame
        for frame_idx in temp_frame_inds:
            consolidated_out = self._consolidate_temp_output_across_obj(
                frame_idx, is_cond=is_cond, run_mem_encoder=True
            )
            # merge them into "output_dict" and also create per-object slices
            output_dict[storage_key][frame_idx] = consolidated_out
            self._add_output_per_object(frame_idx, consolidated_out, storage_key)
            if self.clear_non_cond_mem_around_input and (self.clear_non_cond_mem_for_multi_obj or batch_size <= 1):
                # clear non-conditioning memory of the surrounding frames
                self._clear_non_cond_mem_around_input(frame_idx)

        # clear temporary outputs in `temp_output_dict_per_obj`
        for obj_temp_output_dict in temp_output_dict_per_obj.values():
            obj_temp_output_dict[storage_key].clear()

    # edge case: if an output is added to "cond_frame_outputs", we remove any prior
    # output on the same frame in "non_cond_frame_outputs"
    for frame_idx in output_dict["cond_frame_outputs"]:
        output_dict["non_cond_frame_outputs"].pop(frame_idx, None)
    for obj_output_dict in self.inference_state["output_dict_per_obj"].values():
        for frame_idx in obj_output_dict["cond_frame_outputs"]:
            obj_output_dict["non_cond_frame_outputs"].pop(frame_idx, None)
    for frame_idx in consolidated_frame_inds["cond_frame_outputs"]:
        assert frame_idx in output_dict["cond_frame_outputs"]
        consolidated_frame_inds["non_cond_frame_outputs"].discard(frame_idx)

    # Make sure that the frame indices in "consolidated_frame_inds" are exactly those frames
    # with either points or mask inputs (which should be true under a correct workflow).
    all_consolidated_frame_inds = (
        consolidated_frame_inds["cond_frame_outputs"] | consolidated_frame_inds["non_cond_frame_outputs"]
    )
    input_frames_inds = set()
    for point_inputs_per_frame in self.inference_state["point_inputs_per_obj"].values():
        input_frames_inds.update(point_inputs_per_frame.keys())
    for mask_inputs_per_frame in self.inference_state["mask_inputs_per_obj"].values():
        input_frames_inds.update(mask_inputs_per_frame.keys())
    assert all_consolidated_frame_inds == input_frames_inds





ultralytics.models.sam.predict.SAM2DynamicInteractivePredictor

SAM2DynamicInteractivePredictor(
    cfg: Any = DEFAULT_CFG,
    overrides: Optional[Dict[str, Any]] = None,
    max_obj_num: int = 3,
    _callbacks: Optional[Dict[str, Any]] = None,
)

Bases: SAM2Predictor

SAM2DynamicInteractivePredictor extends SAM2Predictor to support dynamic interactions with video frames or a sequence of images.

Attributes:

Name Type Description
memory_bank list

Stores the consolidated output state of each image that has received prompts.

obj_idx_set set

A set to keep track of the object indices that have been added.

obj_id_to_idx OrderedDict

Maps object IDs to their corresponding indices.

obj_idx_to_id OrderedDict

Maps object indices to their corresponding IDs.

Methods:

Name Description
get_model

Retrieves and configures the model with binarization enabled.

inference

Performs inference on a single image with optional prompts and object IDs.

postprocess

Post-processes the predictions to apply non-overlapping constraints if required.

update_memory

Append the consolidated image state to the memory_bank and update the memory for the model.

track_step

Tracking step for the current image state to predict masks.

get_maskmem_enc

Get memory and positional encoding from the memory bank.

Examples:

>>> predictor = SAM2DynamicInteractivePredictor(cfg=DEFAULT_CFG)
>>> predictor(source=support_img1, bboxes=bboxes1, obj_ids=labels1, update_memory=True)
>>> results1 = predictor(source=query_img1)
>>> predictor(source=support_img2, bboxes=bboxes2, obj_ids=labels2, update_memory=True)
>>> results2 = predictor(source=query_img2)

This constructor initializes the SAM2DynamicInteractivePredictor with a given configuration, applies any specified overrides, and sets up the memory bank and the object ID mappings used for dynamic interaction.

Parameters:

Name Type Description Default
cfg Dict[str, Any]

Configuration dictionary containing default settings.

DEFAULT_CFG
overrides Dict[str, Any] | None

Dictionary of values to override default configuration.

None
max_obj_num int

Maximum number of objects to track. Default is 3; this keeps a fixed feature size for the model.

3
_callbacks Dict[str, Any] | None

Dictionary of callback functions to customize behavior.

None

Examples:

>>> predictor = SAM2DynamicInteractivePredictor(cfg=DEFAULT_CFG)
>>> predictor_example_with_imgsz = SAM2DynamicInteractivePredictor(overrides={"imgsz": 640})
>>> predictor_example_with_callback = SAM2DynamicInteractivePredictor(
...     _callbacks={"on_predict_start": custom_callback}
... )
Source code in ultralytics/models/sam/predict.py
def __init__(
    self,
    cfg: Any = DEFAULT_CFG,
    overrides: Optional[Dict[str, Any]] = None,
    max_obj_num: int = 3,
    _callbacks: Optional[Dict[str, Any]] = None,
) -> None:
    """
    Initialize the predictor with configuration and optional overrides.

    This constructor initializes the SAM2DynamicInteractivePredictor with a given configuration, applies any
    specified overrides, and sets up the memory bank and object ID mappings used for dynamic interaction.

    Args:
        cfg (Dict[str, Any]): Configuration dictionary containing default settings.
        overrides (Dict[str, Any] | None): Dictionary of values to override default configuration.
        max_obj_num (int): Maximum number of objects to track. Default is 3; this keeps a fixed feature size for the model.
        _callbacks (Dict[str, Any] | None): Dictionary of callback functions to customize behavior.

    Examples:
        >>> predictor = SAM2DynamicInteractivePredictor(cfg=DEFAULT_CFG)
        >>> predictor_example_with_imgsz = SAM2DynamicInteractivePredictor(overrides={"imgsz": 640})
        >>> predictor_example_with_callback = SAM2DynamicInteractivePredictor(
        ...     _callbacks={"on_predict_start": custom_callback}
        ... )
    """
    super().__init__(cfg, overrides, _callbacks)
    self.non_overlap_masks = True

    # Initialize the memory bank to store image states
    # NOTE: probably need to use dict for better query
    self.memory_bank = []

    # Initialize the object index set and mappings
    self.obj_idx_set = set()
    self.obj_id_to_idx = OrderedDict()
    self.obj_idx_to_id = OrderedDict()
    self._max_obj_num = max_obj_num
    for i in range(self._max_obj_num):
        self.obj_id_to_idx[i + 1] = i
        self.obj_idx_to_id[i] = i + 1

get_im_features

get_im_features(img: Union[Tensor, ndarray]) -> None

Initialize the image state by processing the input image and extracting features.

Parameters:

Name Type Description Default
img Tensor | ndarray

The input image tensor or numpy array.

required
Source code in ultralytics/models/sam/predict.py
def get_im_features(self, img: Union[torch.Tensor, np.ndarray]) -> None:
    """
    Initialize the image state by processing the input image and extracting features.

    Args:
        img (torch.Tensor | np.ndarray): The input image tensor or numpy array.
    """
    vis_feats, vis_pos_embed, feat_sizes = SAM2VideoPredictor.get_im_features(self, img, batch=self._max_obj_num)
    self.high_res_features = [
        feat.permute(1, 2, 0).view(*feat.shape[1:], *feat_size)
        for feat, feat_size in zip(vis_feats[:-1], feat_sizes[:-1])
    ]

    self.vision_feats = vis_feats
    self.vision_pos_embeds = vis_pos_embed
    self.feat_sizes = feat_sizes

get_maskmem_enc

get_maskmem_enc() -> Tuple[torch.Tensor, torch.Tensor]

Get the memory and positional encoding from the memory bank, which are used to condition the current image features.

Source code in ultralytics/models/sam/predict.py
def get_maskmem_enc(self) -> Tuple[torch.Tensor, torch.Tensor]:
    """Get the memory and positional encoding from the memory, which is used to condition the current image
    features.
    """
    to_cat_memory, to_cat_memory_pos_embed = [], []
    for consolidated_out in self.memory_bank:
        to_cat_memory.append(consolidated_out["maskmem_features"].flatten(2).permute(2, 0, 1))  # (H*W, B, C)
        maskmem_enc = consolidated_out["maskmem_pos_enc"][-1].flatten(2).permute(2, 0, 1)
        maskmem_enc = maskmem_enc + self.model.maskmem_tpos_enc[self.model.num_maskmem - 1]
        to_cat_memory_pos_embed.append(maskmem_enc)

    memory = torch.cat(to_cat_memory, dim=0)
    memory_pos_embed = torch.cat(to_cat_memory_pos_embed, dim=0)
    return memory, memory_pos_embed

inference

inference(
    img: Union[Tensor, ndarray],
    bboxes: Optional[List[List[float]]] = None,
    masks: Optional[Union[Tensor, ndarray]] = None,
    points: Optional[List[List[float]]] = None,
    labels: Optional[List[int]] = None,
    obj_ids: Optional[List[int]] = None,
    update_memory: bool = False,
) -> Tuple[torch.Tensor, torch.Tensor]

Perform inference on a single image with optional bounding boxes, masks, points, and object IDs. The method has two modes: when update_memory is True, the provided prompts and obj_ids are first encoded into the memory bank and then prediction runs; when update_memory is False, inference runs on the image against the existing memory without modifying it.

Parameters:

Name Type Description Default
img Tensor | ndarray

The input image tensor or numpy array.

required
bboxes List[List[float]] | None

Optional list of bounding boxes to update the memory.

None
masks List[Tensor | ndarray] | None

Optional masks to update the memory.

None
points List[List[float]] | None

Optional list of points to update the memory, each point is [x, y].

None
labels List[int] | None

Optional list of point labels corresponding to the points (>0 for positive, 0 for negative).

None
obj_ids List[int] | None

Optional list of object IDs corresponding to the prompts.

None
update_memory bool

Flag to indicate whether to update the memory with new objects.

False

Returns:

Name Type Description
res_masks Tensor

The output masks in shape (C, H, W)

object_score_logits Tensor

Quality scores for each mask
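
A hedged usage sketch combining a memory update and a plain query (image paths, box coordinates, and object IDs are illustrative; the predictor is assumed to be set up as in the class-level example above):

>>> _ = predictor(source="support.jpg", bboxes=[[50, 50, 200, 200]], obj_ids=[1], update_memory=True)
>>> results = predictor(source="query.jpg")  # segments the remembered object in a new image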

Source code in ultralytics/models/sam/predict.py
@smart_inference_mode()
def inference(
    self,
    img: Union[torch.Tensor, np.ndarray],
    bboxes: Optional[List[List[float]]] = None,
    masks: Optional[Union[torch.Tensor, np.ndarray]] = None,
    points: Optional[List[List[float]]] = None,
    labels: Optional[List[int]] = None,
    obj_ids: Optional[List[int]] = None,
    update_memory: bool = False,
) -> Tuple[torch.Tensor, torch.Tensor]:
    """
    Perform inference on a single image with optional bounding boxes, masks, points and object IDs.
    It has two modes: one is to run inference on a single image without updating the memory,
    and the other is to update the memory with the provided prompts and object IDs.
    When update_memory is True, it will update the memory with the provided prompts and obj_ids.
    When update_memory is False, it will only run inference on the provided image without updating the memory.

    Args:
        img (torch.Tensor | np.ndarray): The input image tensor or numpy array.
        bboxes (List[List[float]] | None): Optional list of bounding boxes to update the memory.
        masks (List[torch.Tensor | np.ndarray] | None): Optional masks to update the memory.
        points (List[List[float]] | None): Optional list of points to update the memory, each point is [x, y].
        labels (List[int] | None): Optional list of object IDs corresponding to the points (>0 for positive, 0 for negative).
        obj_ids (List[int] | None): Optional list of object IDs corresponding to the prompts.
        update_memory (bool): Flag to indicate whether to update the memory with new objects.

    Returns:
        res_masks (torch.Tensor): The output masks in shape (C, H, W)
        object_score_logits (torch.Tensor): Quality scores for each mask
    """
    self.get_im_features(img)
    points, labels, masks = self._prepare_prompts(
        dst_shape=self.imgsz,
        src_shape=self.batch[1][0].shape[:2],
        points=points,
        bboxes=bboxes,
        labels=labels,
        masks=masks,
    )

    if update_memory:
        if isinstance(obj_ids, int):
            obj_ids = [obj_ids]
        assert obj_ids is not None, "obj_ids must be provided when update_memory is True"
        assert masks is not None or points is not None, (
            "bboxes, masks, or points must be provided when update_memory is True"
        )
        if points is None:  # placeholder
            points = torch.zeros((len(obj_ids), 0, 2), dtype=self.torch_dtype, device=self.device)
            labels = torch.zeros((len(obj_ids), 0), dtype=torch.int32, device=self.device)
        if masks is not None:
            assert len(masks) == len(obj_ids), "masks and obj_ids must have the same length."
        assert len(points) == len(obj_ids), "points and obj_ids must have the same length."
        self.update_memory(obj_ids, points, labels, masks)

    current_out = self.track_step()
    pred_masks, pred_scores = current_out["pred_masks"], current_out["object_score_logits"]
    # filter the masks and logits based on the object indices
    if len(self.obj_idx_set) == 0:
        raise RuntimeError("No objects have been added to the state. Please add objects before inference.")
    idx = list(self.obj_idx_set)  # cls id
    pred_masks, pred_scores = pred_masks[idx], pred_scores[idx]
    # The raw object score logits lie in [-32, 32]; a score above 0 means the object is present.
    # Scale to [-1, 1] and clamp at 0 so the scores become non-negative confidences in [0, 1].
    pred_scores = torch.clamp_(pred_scores / 32, min=0)
    return pred_masks.flatten(0, 1), pred_scores.flatten(0, 1)

track_step

track_step(
    obj_idx: Optional[int] = None,
    point: Optional[Tensor] = None,
    label: Optional[Tensor] = None,
    mask: Optional[Tensor] = None,
) -> Dict[str, Any]

Tracking step for the current image state to predict masks.

This method processes the image features and runs the SAM heads to predict masks. If obj_idx is provided, it processes the features for a specific prompted object in the image. If obj_idx is None, it processes the features for all objects in the image. The method supports both mask-based output without SAM and full SAM processing with memory-conditioned features.

Parameters:

Name Type Description Default
obj_idx int | None

The index of the object for which to predict masks. If None, it processes all objects.

None
point Tensor | None

The coordinates of the points of interest with shape (N, 2).

None
label Tensor | None

The labels corresponding to the points where 1 means positive clicks, 0 means negative clicks.

None
mask Tensor | None

The mask input for the object with shape (H, W).

None

Returns:

Name Type Description
current_out Dict[str, Any]

A dictionary containing the current output with mask predictions and object pointers. Keys include 'point_inputs', 'mask_inputs', 'pred_masks', 'pred_masks_high_res', 'obj_ptr', 'object_score_logits'.
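
For illustration, the prompt-free call pattern used during inference (assuming image features have already been extracted via get_im_features and the memory bank has been populated):

>>> out = predictor.track_step()  # predict masks for all objects conditioned on the memory bank
>>> low_res_masks = out["pred_masks"]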

Source code in ultralytics/models/sam/predict.py
def track_step(
    self,
    obj_idx: Optional[int] = None,
    point: Optional[torch.Tensor] = None,
    label: Optional[torch.Tensor] = None,
    mask: Optional[torch.Tensor] = None,
) -> Dict[str, Any]:
    """
    Tracking step for the current image state to predict masks.

    This method processes the image features and runs the SAM heads to predict masks. If obj_idx is provided, it
    processes the features for a specific prompted object in the image. If obj_idx is None, it processes the
    features for all objects in the image. The method supports both mask-based output without SAM and full
    SAM processing with memory-conditioned features.

    Args:
        obj_idx (int | None): The index of the object for which to predict masks. If None, it processes all objects.
        point (torch.Tensor | None): The coordinates of the points of interest with shape (N, 2).
        label (torch.Tensor | None): The labels corresponding to the points where 1 means positive clicks, 0 means negative clicks.
        mask (torch.Tensor | None): The mask input for the object with shape (H, W).

    Returns:
        current_out (Dict[str, Any]): A dictionary containing the current output with mask predictions and object pointers.
            Keys include 'point_inputs', 'mask_inputs', 'pred_masks', 'pred_masks_high_res', 'obj_ptr', 'object_score_logits'.
    """
    if mask is not None and self.model.use_mask_input_as_output_without_sam:
        # When use_mask_input_as_output_without_sam=True, we directly output the mask input
        # (see it as a GT mask) without using a SAM prompt encoder + mask decoder.
        pix_feat = self.vision_feats[-1].permute(1, 2, 0)
        pix_feat = pix_feat.view(-1, self.model.memory_attention.d_model, *self.feat_sizes[-1])
        _, _, _, low_res_masks, high_res_masks, obj_ptr, object_score_logits = self.model._use_mask_as_output(mask)
    else:
        # fused the visual feature with previous memory features in the memory bank
        pix_feat_with_mem = self._prepare_memory_conditioned_features(obj_idx)
        # calculate the first feature if adding obj_idx exists(means adding prompts)
        pix_feat_with_mem = pix_feat_with_mem[:1] if obj_idx is not None else pix_feat_with_mem
        _, _, _, low_res_masks, high_res_masks, obj_ptr, object_score_logits = self.model._forward_sam_heads(
            backbone_features=pix_feat_with_mem,
            point_inputs={"point_coords": point, "point_labels": label} if obj_idx is not None else None,
            mask_inputs=mask,
            multimask_output=False,
            high_res_features=[feat[: pix_feat_with_mem.size(0)] for feat in self.high_res_features],
        )
    return {
        "pred_masks": low_res_masks,
        "pred_masks_high_res": high_res_masks,
        "obj_ptr": obj_ptr,
        "object_score_logits": object_score_logits,
    }

update_memory

update_memory(
    obj_ids: List[int] = None,
    points: Optional[Tensor] = None,
    labels: Optional[Tensor] = None,
    masks: Optional[Tensor] = None,
) -> None

Append the consolidated image state to the memory_bank and update the memory for the model.

Parameters:

Name Type Description Default
obj_ids List[int]

List of object IDs corresponding to the prompts.

None
points Tensor | None

Tensor of shape (B, N, 2) representing the input points for N objects.

None
labels Tensor | None

Tensor of shape (B, N) representing the labels for the input points.

None
masks Tensor | None

Optional tensor of shape (N, H, W) representing the input masks for N objects.

None
Source code in ultralytics/models/sam/predict.py
@smart_inference_mode()
def update_memory(
    self,
    obj_ids: List[int] = None,
    points: Optional[torch.Tensor] = None,
    labels: Optional[torch.Tensor] = None,
    masks: Optional[torch.Tensor] = None,
) -> None:
    """
    Append the consolidated image state to the memory_bank and update the memory for the model.

    Args:
        obj_ids (List[int]): List of object IDs corresponding to the prompts.
        points (torch.Tensor | None): Tensor of shape (B, N, 2) representing the input points for N objects.
        labels (torch.Tensor | None): Tensor of shape (B, N) representing the labels for the input points.
        masks (torch.Tensor | None): Optional tensor of shape (N, H, W) representing the input masks for N objects.
    """
    consolidated_out = {
        "maskmem_features": None,
        "maskmem_pos_enc": None,
        "pred_masks": torch.full(
            size=(self._max_obj_num, 1, self.imgsz[0] // 4, self.imgsz[1] // 4),
            fill_value=-1024.0,
            dtype=self.torch_dtype,
            device=self.device,
        ),
        "obj_ptr": torch.full(
            size=(self._max_obj_num, self.model.hidden_dim),
            fill_value=-1024.0,
            dtype=self.torch_dtype,
            device=self.device,
        ),
        "object_score_logits": torch.full(
            size=(self._max_obj_num, 1),
            # Fill object_score_logits with -32 (sigmoid(-32) ≈ 0) so objects without prompts are
            # treated as absent; prompted objects are overwritten below. (A fill of 10.0 would instead
            # assume presence, as in `predict_masks` of `MaskDecoder`.)
            fill_value=-32,
            dtype=self.torch_dtype,
            device=self.device,
        ),
    }

    for i, obj_id in enumerate(obj_ids):
        assert obj_id < self._max_obj_num
        obj_idx = self._obj_id_to_idx(int(obj_id))
        self.obj_idx_set.add(obj_idx)
        point, label = points[[i]], labels[[i]]
        mask = masks[[i]][None] if masks is not None else None
        # Each object needs at least one prompt: points (possibly derived from bboxes) or a mask.
        assert point is not None or mask is not None, "Either bbox, points or mask is required"
        out = self.track_step(obj_idx, point, label, mask)
        if out is not None:
            obj_mask = out["pred_masks"]
            assert obj_mask.shape[-2:] == consolidated_out["pred_masks"].shape[-2:], (
                f"Expected mask shape {consolidated_out['pred_masks'].shape[-2:]} but got {obj_mask.shape[-2:]} for object {obj_idx}."
            )
            consolidated_out["pred_masks"][obj_idx : obj_idx + 1] = obj_mask
            consolidated_out["obj_ptr"][obj_idx : obj_idx + 1] = out["obj_ptr"]

            if "object_score_logits" in out.keys():
                consolidated_out["object_score_logits"][obj_idx : obj_idx + 1] = out["object_score_logits"]

    high_res_masks = F.interpolate(
        consolidated_out["pred_masks"].to(self.device, non_blocking=True),
        size=self.imgsz,
        mode="bilinear",
        align_corners=False,
    )

    if self.model.non_overlap_masks_for_mem_enc:
        high_res_masks = self.model._apply_non_overlapping_constraints(high_res_masks)
    maskmem_features, maskmem_pos_enc = self.model._encode_new_memory(
        current_vision_feats=self.vision_feats,
        feat_sizes=self.feat_sizes,
        pred_masks_high_res=high_res_masks,
        object_score_logits=consolidated_out["object_score_logits"],
        is_mask_from_pts=True,
    )
    consolidated_out["maskmem_features"] = maskmem_features
    consolidated_out["maskmem_pos_enc"] = maskmem_pos_enc
    self.memory_bank.append(consolidated_out)




