
Reference for ultralytics/models/sam/predict.py

Note

This file is available at https://github.com/ultralytics/ultralytics/blob/main/ultralytics/models/sam/predict.py. If you spot a problem, please help fix it by contributing a Pull Request 🛠️. Thank you 🙏!


ultralytics.models.sam.predict.Predictor

Predictor(cfg=DEFAULT_CFG, overrides=None, _callbacks=None)

Bases: BasePredictor

Predictor class for SAM, enabling real-time image segmentation with promptable capabilities.

This class extends BasePredictor and implements the Segment Anything Model (SAM) for advanced image segmentation tasks. It supports various input prompts like points, bounding boxes, and masks for fine-grained control over segmentation results.

Attributes:

  • args (SimpleNamespace): Configuration arguments for the predictor.
  • model (Module): The loaded SAM model.
  • device (device): The device (CPU or GPU) on which the model is loaded.
  • im (Tensor): The preprocessed input image.
  • features (Tensor): Extracted image features.
  • prompts (Dict): Dictionary to store various types of prompts (e.g., bboxes, points, masks).
  • segment_all (bool): Flag to indicate if full image segmentation should be performed.
  • mean (Tensor): Mean values for image normalization.
  • std (Tensor): Standard deviation values for image normalization.

Methods:

  • preprocess: Prepares input images for model inference.
  • pre_transform: Performs initial transformations on the input image.
  • inference: Performs segmentation inference based on input prompts.
  • prompt_inference: Internal function for prompt-based segmentation inference.
  • generate: Generates segmentation masks for an entire image.
  • setup_model: Initializes the SAM model for inference.
  • get_model: Builds and returns a SAM model.
  • postprocess: Post-processes model outputs to generate final results.
  • setup_source: Sets up the data source for inference.
  • set_image: Sets and preprocesses a single image for inference.
  • get_im_features: Extracts image features using the SAM image encoder.
  • set_prompts: Sets prompts for subsequent inference.
  • reset_image: Resets the current image and its features.
  • remove_small_regions: Removes small disconnected regions and holes from masks.
Examples:

>>> predictor = Predictor()
>>> predictor.setup_model(model_path="sam_model.pt")
>>> predictor.set_image("image.jpg")
>>> masks, scores, boxes = predictor.generate()
>>> results = predictor.postprocess((masks, scores, boxes), im, orig_img)

Sets up the Predictor object for SAM (Segment Anything Model) and applies any configuration overrides or callbacks provided. Initializes task-specific settings for SAM, such as retina_masks being set to True for optimal results.

Parameters:

  • cfg (Dict): Configuration dictionary containing default settings. Default: DEFAULT_CFG.
  • overrides (Dict | None): Dictionary of values to override default configuration. Default: None.
  • _callbacks (Dict | None): Dictionary of callback functions to customize behavior. Default: None.

Examples:

>>> predictor = Predictor(cfg=DEFAULT_CFG)
>>> predictor = Predictor(overrides={"imgsz": 640})
>>> predictor = Predictor(_callbacks={"on_predict_start": custom_callback})
Source code in ultralytics/models/sam/predict.py
def __init__(self, cfg=DEFAULT_CFG, overrides=None, _callbacks=None):
    """
    Initialize the Predictor with configuration, overrides, and callbacks.

    Sets up the Predictor object for SAM (Segment Anything Model) and applies any configuration overrides or
    callbacks provided. Initializes task-specific settings for SAM, such as retina_masks being set to True
    for optimal results.

    Args:
        cfg (Dict): Configuration dictionary containing default settings.
        overrides (Dict | None): Dictionary of values to override default configuration.
        _callbacks (Dict | None): Dictionary of callback functions to customize behavior.

    Examples:
        >>> predictor = Predictor(cfg=DEFAULT_CFG)
        >>> predictor = Predictor(overrides={"imgsz": 640})
        >>> predictor = Predictor(_callbacks={"on_predict_start": custom_callback})
    """
    if overrides is None:
        overrides = {}
    overrides.update(dict(task="segment", mode="predict", batch=1))
    super().__init__(cfg, overrides, _callbacks)
    self.args.retina_masks = True
    self.im = None
    self.features = None
    self.prompts = {}
    self.segment_all = False
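
The constructor itself loads no weights; it only merges configuration. A minimal sketch (the conf/imgsz/model override values below are illustrative) showing which settings __init__ forces regardless of what is passed in:

from ultralytics.models.sam.predict import Predictor

# task, mode and batch are forced by __init__; retina_masks is switched on for SAM
predictor = Predictor(overrides=dict(conf=0.25, imgsz=1024, model="mobile_sam.pt"))
print(predictor.args.task, predictor.args.mode, predictor.args.batch)  # segment predict 1
print(predictor.args.retina_masks)  # True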

generate

generate(
    im,
    crop_n_layers=0,
    crop_overlap_ratio=512 / 1500,
    crop_downscale_factor=1,
    point_grids=None,
    points_stride=32,
    points_batch_size=64,
    conf_thres=0.88,
    stability_score_thresh=0.95,
    stability_score_offset=0.95,
    crop_nms_thresh=0.7,
)

Perform image segmentation using the Segment Anything Model (SAM).

This method segments an entire image into constituent parts by leveraging SAM's advanced architecture and real-time performance capabilities. It can optionally work on image crops for finer segmentation.

Parameters:

  • im (Tensor): Input tensor representing the preprocessed image with shape (N, C, H, W). Required.
  • crop_n_layers (int): Number of layers for additional mask predictions on image crops. Default: 0.
  • crop_overlap_ratio (float): Overlap between crops, scaled down in subsequent layers. Default: 512 / 1500.
  • crop_downscale_factor (int): Scaling factor for sampled points-per-side in each layer. Default: 1.
  • point_grids (List[ndarray] | None): Custom grids for point sampling normalized to [0,1]. Default: None.
  • points_stride (int): Number of points to sample along each side of the image. Default: 32.
  • points_batch_size (int): Batch size for the number of points processed simultaneously. Default: 64.
  • conf_thres (float): Confidence threshold [0,1] for filtering based on mask quality prediction. Default: 0.88.
  • stability_score_thresh (float): Stability threshold [0,1] for mask filtering based on stability. Default: 0.95.
  • stability_score_offset (float): Offset value for calculating stability score. Default: 0.95.
  • crop_nms_thresh (float): IoU cutoff for NMS to remove duplicate masks between crops. Default: 0.7.

Returns:

  • pred_masks (Tensor): Segmented masks with shape (N, H, W).
  • pred_scores (Tensor): Confidence scores for each mask with shape (N,).
  • pred_bboxes (Tensor): Bounding boxes for each mask with shape (N, 4).

Examples:

>>> predictor = Predictor()
>>> im = torch.rand(1, 3, 1024, 1024)  # Example input image
>>> masks, scores, boxes = predictor.generate(im)
Source code in ultralytics/models/sam/predict.py
def generate(
    self,
    im,
    crop_n_layers=0,
    crop_overlap_ratio=512 / 1500,
    crop_downscale_factor=1,
    point_grids=None,
    points_stride=32,
    points_batch_size=64,
    conf_thres=0.88,
    stability_score_thresh=0.95,
    stability_score_offset=0.95,
    crop_nms_thresh=0.7,
):
    """
    Perform image segmentation using the Segment Anything Model (SAM).

    This method segments an entire image into constituent parts by leveraging SAM's advanced architecture
    and real-time performance capabilities. It can optionally work on image crops for finer segmentation.

    Args:
        im (torch.Tensor): Input tensor representing the preprocessed image with shape (N, C, H, W).
        crop_n_layers (int): Number of layers for additional mask predictions on image crops.
        crop_overlap_ratio (float): Overlap between crops, scaled down in subsequent layers.
        crop_downscale_factor (int): Scaling factor for sampled points-per-side in each layer.
        point_grids (List[np.ndarray] | None): Custom grids for point sampling normalized to [0,1].
        points_stride (int): Number of points to sample along each side of the image.
        points_batch_size (int): Batch size for the number of points processed simultaneously.
        conf_thres (float): Confidence threshold [0,1] for filtering based on mask quality prediction.
        stability_score_thresh (float): Stability threshold [0,1] for mask filtering based on stability.
        stability_score_offset (float): Offset value for calculating stability score.
        crop_nms_thresh (float): IoU cutoff for NMS to remove duplicate masks between crops.

    Returns:
        pred_masks (torch.Tensor): Segmented masks with shape (N, H, W).
        pred_scores (torch.Tensor): Confidence scores for each mask with shape (N,).
        pred_bboxes (torch.Tensor): Bounding boxes for each mask with shape (N, 4).

    Examples:
        >>> predictor = Predictor()
        >>> im = torch.rand(1, 3, 1024, 1024)  # Example input image
        >>> masks, scores, boxes = predictor.generate(im)
    """
    import torchvision  # scope for faster 'import ultralytics'

    self.segment_all = True
    ih, iw = im.shape[2:]
    crop_regions, layer_idxs = generate_crop_boxes((ih, iw), crop_n_layers, crop_overlap_ratio)
    if point_grids is None:
        point_grids = build_all_layer_point_grids(points_stride, crop_n_layers, crop_downscale_factor)
    pred_masks, pred_scores, pred_bboxes, region_areas = [], [], [], []
    for crop_region, layer_idx in zip(crop_regions, layer_idxs):
        x1, y1, x2, y2 = crop_region
        w, h = x2 - x1, y2 - y1
        area = torch.tensor(w * h, device=im.device)
        points_scale = np.array([[w, h]])  # w, h
        # Crop image and interpolate to input size
        crop_im = F.interpolate(im[..., y1:y2, x1:x2], (ih, iw), mode="bilinear", align_corners=False)
        # (num_points, 2)
        points_for_image = point_grids[layer_idx] * points_scale
        crop_masks, crop_scores, crop_bboxes = [], [], []
        for (points,) in batch_iterator(points_batch_size, points_for_image):
            pred_mask, pred_score = self.prompt_inference(crop_im, points=points, multimask_output=True)
            # Interpolate predicted masks to input size
            pred_mask = F.interpolate(pred_mask[None], (h, w), mode="bilinear", align_corners=False)[0]
            idx = pred_score > conf_thres
            pred_mask, pred_score = pred_mask[idx], pred_score[idx]

            stability_score = calculate_stability_score(
                pred_mask, self.model.mask_threshold, stability_score_offset
            )
            idx = stability_score > stability_score_thresh
            pred_mask, pred_score = pred_mask[idx], pred_score[idx]
            # Bool type is much more memory-efficient.
            pred_mask = pred_mask > self.model.mask_threshold
            # (N, 4)
            pred_bbox = batched_mask_to_box(pred_mask).float()
            keep_mask = ~is_box_near_crop_edge(pred_bbox, crop_region, [0, 0, iw, ih])
            if not torch.all(keep_mask):
                pred_bbox, pred_mask, pred_score = pred_bbox[keep_mask], pred_mask[keep_mask], pred_score[keep_mask]

            crop_masks.append(pred_mask)
            crop_bboxes.append(pred_bbox)
            crop_scores.append(pred_score)

        # Do nms within this crop
        crop_masks = torch.cat(crop_masks)
        crop_bboxes = torch.cat(crop_bboxes)
        crop_scores = torch.cat(crop_scores)
        keep = torchvision.ops.nms(crop_bboxes, crop_scores, self.args.iou)  # NMS
        crop_bboxes = uncrop_boxes_xyxy(crop_bboxes[keep], crop_region)
        crop_masks = uncrop_masks(crop_masks[keep], crop_region, ih, iw)
        crop_scores = crop_scores[keep]

        pred_masks.append(crop_masks)
        pred_bboxes.append(crop_bboxes)
        pred_scores.append(crop_scores)
        region_areas.append(area.expand(len(crop_masks)))

    pred_masks = torch.cat(pred_masks)
    pred_bboxes = torch.cat(pred_bboxes)
    pred_scores = torch.cat(pred_scores)
    region_areas = torch.cat(region_areas)

    # Remove duplicate masks between crops
    if len(crop_regions) > 1:
        scores = 1 / region_areas
        keep = torchvision.ops.nms(pred_bboxes, scores, crop_nms_thresh)
        pred_masks, pred_bboxes, pred_scores = pred_masks[keep], pred_bboxes[keep], pred_scores[keep]

    return pred_masks, pred_scores, pred_bboxes
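
In practice, generate is usually reached by calling the predictor without any prompts, which lets inference dispatch here and forward extra keyword arguments such as crop_n_layers and points_stride. A hedged sketch of that flow (the weight file and image path are placeholders):

from ultralytics.models.sam.predict import Predictor

predictor = Predictor(overrides=dict(task="segment", mode="predict", imgsz=1024, model="mobile_sam.pt"))

# No bboxes/points/masks are given, so inference() falls through to generate();
# crop_n_layers=1 adds one level of cropped predictions, points_stride=64 samples a denser point grid
results = predictor(source="image.jpg", crop_n_layers=1, points_stride=64)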

get_im_features

get_im_features(im)

Extracts image features using the SAM model's image encoder for subsequent mask prediction.

Source code in ultralytics/models/sam/predict.py
def get_im_features(self, im):
    """Extracts image features using the SAM model's image encoder for subsequent mask prediction."""
    assert (
        isinstance(self.imgsz, (tuple, list)) and self.imgsz[0] == self.imgsz[1]
    ), f"SAM models only support square image size, but got {self.imgsz}."
    self.model.set_imgsz(self.imgsz)
    return self.model.image_encoder(im)

get_model

get_model()

Retrieves or builds the Segment Anything Model (SAM) for image segmentation tasks.

Source code in ultralytics/models/sam/predict.py
def get_model(self):
    """Retrieves or builds the Segment Anything Model (SAM) for image segmentation tasks."""
    return build_sam(self.args.model)

inference

inference(
    im,
    bboxes=None,
    points=None,
    labels=None,
    masks=None,
    multimask_output=False,
    *args,
    **kwargs
)

Perform image segmentation inference based on the given input cues, using the currently loaded image.

This method leverages SAM's (Segment Anything Model) architecture consisting of image encoder, prompt encoder, and mask decoder for real-time and promptable segmentation tasks.

Parameters:

  • im (Tensor): The preprocessed input image in tensor format, with shape (N, C, H, W). Required.
  • bboxes (ndarray | List | None): Bounding boxes with shape (N, 4), in XYXY format. Default: None.
  • points (ndarray | List | None): Points indicating object locations with shape (N, 2), in pixels. Default: None.
  • labels (ndarray | List | None): Labels for point prompts, shape (N,). 1 = foreground, 0 = background. Default: None.
  • masks (ndarray | None): Low-resolution masks from previous predictions, shape (N, H, W). For SAM H=W=256. Default: None.
  • multimask_output (bool): Flag to return multiple masks. Helpful for ambiguous prompts. Default: False.
  • *args (Any): Additional positional arguments.
  • **kwargs (Any): Additional keyword arguments.

Returns:

  • (ndarray): The output masks in shape (C, H, W), where C is the number of generated masks.
  • (ndarray): An array of length C containing quality scores predicted by the model for each mask.
  • (ndarray): Low-resolution logits of shape (C, H, W) for subsequent inference, where H=W=256.

Examples:

>>> predictor = Predictor()
>>> predictor.setup_model(model_path="sam_model.pt")
>>> predictor.set_image("image.jpg")
>>> masks, scores, logits = predictor.inference(im, bboxes=[[0, 0, 100, 100]])
Source code in ultralytics/models/sam/predict.py
def inference(self, im, bboxes=None, points=None, labels=None, masks=None, multimask_output=False, *args, **kwargs):
    """
    Perform image segmentation inference based on the given input cues, using the currently loaded image.

    This method leverages SAM's (Segment Anything Model) architecture consisting of image encoder, prompt
    encoder, and mask decoder for real-time and promptable segmentation tasks.

    Args:
        im (torch.Tensor): The preprocessed input image in tensor format, with shape (N, C, H, W).
        bboxes (np.ndarray | List | None): Bounding boxes with shape (N, 4), in XYXY format.
        points (np.ndarray | List | None): Points indicating object locations with shape (N, 2), in pixels.
        labels (np.ndarray | List | None): Labels for point prompts, shape (N,). 1 = foreground, 0 = background.
        masks (np.ndarray | None): Low-resolution masks from previous predictions, shape (N, H, W). For SAM H=W=256.
        multimask_output (bool): Flag to return multiple masks. Helpful for ambiguous prompts.
        *args (Any): Additional positional arguments.
        **kwargs (Any): Additional keyword arguments.

    Returns:
        (np.ndarray): The output masks in shape (C, H, W), where C is the number of generated masks.
        (np.ndarray): An array of length C containing quality scores predicted by the model for each mask.
        (np.ndarray): Low-resolution logits of shape (C, H, W) for subsequent inference, where H=W=256.

    Examples:
        >>> predictor = Predictor()
        >>> predictor.setup_model(model_path="sam_model.pt")
        >>> predictor.set_image("image.jpg")
        >>> masks, scores, logits = predictor.inference(im, bboxes=[[0, 0, 100, 100]])
    """
    # Override prompts if any stored in self.prompts
    bboxes = self.prompts.pop("bboxes", bboxes)
    points = self.prompts.pop("points", points)
    masks = self.prompts.pop("masks", masks)
    labels = self.prompts.pop("labels", labels)

    if all(i is None for i in [bboxes, points, masks]):
        return self.generate(im, *args, **kwargs)

    return self.prompt_inference(im, bboxes, points, labels, masks, multimask_output)
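
The pop calls above mean stored prompts are consumed once and take precedence over keyword arguments. A sketch of both dispatch branches, under illustrative weights and paths:

from ultralytics.models.sam.predict import Predictor

predictor = Predictor(overrides=dict(task="segment", mode="predict", imgsz=1024, model="sam_b.pt"))

# Prompts registered with set_prompts() override the arguments of inference() and are
# popped on use, so this call is routed to prompt_inference()
predictor.set_prompts({"bboxes": [[100, 100, 200, 200]]})
prompted = predictor(source="image.jpg")

# With no bboxes/points/masks left, the next call falls back to generate()
everything = predictor(source="image.jpg")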

postprocess

postprocess(preds, img, orig_imgs)

Post-processes SAM's inference outputs to generate object detection masks and bounding boxes.

This method scales masks and boxes to the original image size and applies a threshold to the mask predictions. It leverages SAM's advanced architecture for real-time, promptable segmentation tasks.

Parameters:

  • preds (Tuple[Tensor]): The output from SAM model inference (required), containing:
      - pred_masks (Tensor): Predicted masks with shape (N, 1, H, W).
      - pred_scores (Tensor): Confidence scores for each mask with shape (N, 1).
      - pred_bboxes (Tensor, optional): Predicted bounding boxes if segment_all is True.
  • img (Tensor): The processed input image tensor with shape (C, H, W). Required.
  • orig_imgs (List[ndarray] | Tensor): The original, unprocessed images. Required.

Returns:

  • results (List[Results]): List of Results objects containing detection masks, bounding boxes, and other metadata for each processed image.

Examples:

>>> predictor = Predictor()
>>> preds = predictor.inference(img)
>>> results = predictor.postprocess(preds, img, orig_imgs)
Source code in ultralytics/models/sam/predict.py
def postprocess(self, preds, img, orig_imgs):
    """
    Post-processes SAM's inference outputs to generate object detection masks and bounding boxes.

    This method scales masks and boxes to the original image size and applies a threshold to the mask
    predictions. It leverages SAM's advanced architecture for real-time, promptable segmentation tasks.

    Args:
        preds (Tuple[torch.Tensor]): The output from SAM model inference, containing:
            - pred_masks (torch.Tensor): Predicted masks with shape (N, 1, H, W).
            - pred_scores (torch.Tensor): Confidence scores for each mask with shape (N, 1).
            - pred_bboxes (torch.Tensor, optional): Predicted bounding boxes if segment_all is True.
        img (torch.Tensor): The processed input image tensor with shape (C, H, W).
        orig_imgs (List[np.ndarray] | torch.Tensor): The original, unprocessed images.

    Returns:
        results (List[Results]): List of Results objects containing detection masks, bounding boxes, and other
            metadata for each processed image.

    Examples:
        >>> predictor = Predictor()
        >>> preds = predictor.inference(img)
        >>> results = predictor.postprocess(preds, img, orig_imgs)
    """
    # (N, 1, H, W), (N, 1)
    pred_masks, pred_scores = preds[:2]
    pred_bboxes = preds[2] if self.segment_all else None
    names = dict(enumerate(str(i) for i in range(len(pred_masks))))

    if not isinstance(orig_imgs, list):  # input images are a torch.Tensor, not a list
        orig_imgs = ops.convert_torch2numpy_batch(orig_imgs)

    results = []
    for masks, orig_img, img_path in zip([pred_masks], orig_imgs, self.batch[0]):
        if len(masks) == 0:
            masks, pred_bboxes = None, torch.zeros((0, 6), device=pred_masks.device)
        else:
            masks = ops.scale_masks(masks[None].float(), orig_img.shape[:2], padding=False)[0]
            masks = masks > self.model.mask_threshold  # to bool
            if pred_bboxes is not None:
                pred_bboxes = ops.scale_boxes(img.shape[2:], pred_bboxes.float(), orig_img.shape, padding=False)
            else:
                pred_bboxes = batched_mask_to_box(masks)
            # NOTE: SAM models do not return cls info. This `cls` here is just a placeholder for consistency.
            cls = torch.arange(len(pred_masks), dtype=torch.int32, device=pred_masks.device)
            pred_bboxes = torch.cat([pred_bboxes, pred_scores[:, None], cls[:, None]], dim=-1)
        results.append(Results(orig_img, path=img_path, names=names, masks=masks, boxes=pred_bboxes))
    # Reset segment-all mode.
    self.segment_all = False
    return results
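
The Results objects assembled here expose the scaled masks and the placeholder boxes through the usual Ultralytics accessors. A brief consumption sketch (weights and image path are placeholders):

from ultralytics.models.sam.predict import Predictor

predictor = Predictor(overrides=dict(task="segment", mode="predict", imgsz=1024, model="sam_b.pt"))
predictor.set_image("image.jpg")

r = predictor(bboxes=[[100, 100, 200, 200]])[0]
print(r.masks.data.shape)  # (N, H, W) boolean masks at the original image resolution
print(r.boxes.xyxy)        # (N, 4) boxes recomputed from the masks
print(r.boxes.conf)        # SAM mask quality scores; r.boxes.cls is only a placeholder index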

pre_transform

pre_transform(im)

Perform initial transformations on the input image for preprocessing.

This method applies transformations such as resizing to prepare the image for further preprocessing. Currently, batched inference is not supported; hence the list length should be 1.

Parameters:

  • im (List[ndarray]): List containing a single image in HWC numpy array format. Required.

Returns:

  • (List[ndarray]): List containing the transformed image.

Raises:

  • AssertionError: If the input list contains more than one image.

Examples:

>>> predictor = Predictor()
>>> image = np.random.rand(480, 640, 3)  # Single HWC image
>>> transformed = predictor.pre_transform([image])
>>> print(len(transformed))
1
Source code in ultralytics/models/sam/predict.py
def pre_transform(self, im):
    """
    Perform initial transformations on the input image for preprocessing.

    This method applies transformations such as resizing to prepare the image for further preprocessing.
    Currently, batched inference is not supported; hence the list length should be 1.

    Args:
        im (List[np.ndarray]): List containing a single image in HWC numpy array format.

    Returns:
        (List[np.ndarray]): List containing the transformed image.

    Raises:
        AssertionError: If the input list contains more than one image.

    Examples:
        >>> predictor = Predictor()
        >>> image = np.random.rand(480, 640, 3)  # Single HWC image
        >>> transformed = predictor.pre_transform([image])
        >>> print(len(transformed))
        1
    """
    assert len(im) == 1, "SAM model does not currently support batched inference"
    letterbox = LetterBox(self.args.imgsz, auto=False, center=False)
    return [letterbox(image=x) for x in im]
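
A small sketch of what this LetterBox configuration does to a non-square input, assuming the standard ultralytics.data.augment.LetterBox transform and a 1024-pixel target size:

import numpy as np
from ultralytics.data.augment import LetterBox

letterbox = LetterBox((1024, 1024), auto=False, center=False)  # same settings as pre_transform
image = np.zeros((480, 640, 3), dtype=np.uint8)                # HWC input, e.g. from cv2.imread
resized = letterbox(image=image)
print(resized.shape)  # (1024, 1024, 3): scaled to fit the long side, then padded bottom/right to a square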

preprocess

preprocess(im)

Preprocess the input image for model inference.

This method prepares the input image by applying transformations and normalization. It supports both torch.Tensor and list of np.ndarray as input formats.

Parameters:

  • im (Tensor | List[ndarray]): Input image(s) in BCHW tensor format or list of HWC numpy arrays. Required.

Returns:

  • im (Tensor): The preprocessed image tensor, normalized and converted to the appropriate dtype.

Examples:

>>> predictor = Predictor()
>>> image = torch.rand(1, 3, 640, 640)
>>> preprocessed_image = predictor.preprocess(image)
Source code in ultralytics/models/sam/predict.py
def preprocess(self, im):
    """
    Preprocess the input image for model inference.

    This method prepares the input image by applying transformations and normalization. It supports both
    torch.Tensor and list of np.ndarray as input formats.

    Args:
        im (torch.Tensor | List[np.ndarray]): Input image(s) in BCHW tensor format or list of HWC numpy arrays.

    Returns:
        im (torch.Tensor): The preprocessed image tensor, normalized and converted to the appropriate dtype.

    Examples:
        >>> predictor = Predictor()
        >>> image = torch.rand(1, 3, 640, 640)
        >>> preprocessed_image = predictor.preprocess(image)
    """
    if self.im is not None:
        return self.im
    not_tensor = not isinstance(im, torch.Tensor)
    if not_tensor:
        im = np.stack(self.pre_transform(im))
        im = im[..., ::-1].transpose((0, 3, 1, 2))
        im = np.ascontiguousarray(im)
        im = torch.from_numpy(im)

    im = im.to(self.device)
    im = im.half() if self.model.fp16 else im.float()
    if not_tensor:
        im = (im - self.mean) / self.std
    return im
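
The normalization step is a plain per-channel standardization with the pixel statistics assigned in setup_model. A self-contained sketch of the equivalent arithmetic on a single stand-in frame:

import numpy as np
import torch

frame = np.random.randint(0, 255, (1024, 1024, 3), dtype=np.uint8)  # stand-in HWC BGR image

x = frame[..., ::-1].transpose(2, 0, 1)                        # BGR -> RGB, HWC -> CHW
x = torch.from_numpy(np.ascontiguousarray(x)).float()[None]    # (1, 3, H, W)

mean = torch.tensor([123.675, 116.28, 103.53]).view(-1, 1, 1)  # values set in setup_model
std = torch.tensor([58.395, 57.12, 57.375]).view(-1, 1, 1)
x = (x - mean) / std                                           # same normalization preprocess applies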

prompt_inference

prompt_inference(
    im,
    bboxes=None,
    points=None,
    labels=None,
    masks=None,
    multimask_output=False,
)

Performs image segmentation inference based on input cues using SAM's specialized architecture.

This internal function leverages the Segment Anything Model (SAM) for prompt-based, real-time segmentation. It processes various input prompts such as bounding boxes, points, and masks to generate segmentation masks.

Parameters:

  • im (Tensor): Preprocessed input image tensor with shape (N, C, H, W). Required.
  • bboxes (ndarray | List | None): Bounding boxes in XYXY format with shape (N, 4). Default: None.
  • points (ndarray | List | None): Points indicating object locations with shape (N, 2) or (N, num_points, 2), in pixels. Default: None.
  • labels (ndarray | List | None): Point prompt labels with shape (N,) or (N, num_points). 1 for foreground, 0 for background. Default: None.
  • masks (ndarray | None): Low-res masks from previous predictions with shape (N, H, W). For SAM, H=W=256. Default: None.
  • multimask_output (bool): Flag to return multiple masks for ambiguous prompts. Default: False.

Raises:

  • AssertionError: If the number of points does not match the number of labels when labels are provided.

Returns:

  • (ndarray): Output masks with shape (C, H, W), where C is the number of generated masks.
  • (ndarray): Quality scores predicted by the model for each mask, with length C.

Examples:

>>> predictor = Predictor()
>>> im = torch.rand(1, 3, 1024, 1024)
>>> bboxes = [[100, 100, 200, 200]]
>>> masks, scores, logits = predictor.prompt_inference(im, bboxes=bboxes)
Source code in ultralytics/models/sam/predict.py
def prompt_inference(self, im, bboxes=None, points=None, labels=None, masks=None, multimask_output=False):
    """
    Performs image segmentation inference based on input cues using SAM's specialized architecture.

    This internal function leverages the Segment Anything Model (SAM) for prompt-based, real-time segmentation.
    It processes various input prompts such as bounding boxes, points, and masks to generate segmentation masks.

    Args:
        im (torch.Tensor): Preprocessed input image tensor with shape (N, C, H, W).
        bboxes (np.ndarray | List | None): Bounding boxes in XYXY format with shape (N, 4).
        points (np.ndarray | List | None): Points indicating object locations with shape (N, 2) or (N, num_points, 2), in pixels.
        labels (np.ndarray | List | None): Point prompt labels with shape (N,) or (N, num_points). 1 for foreground, 0 for background.
        masks (np.ndarray | None): Low-res masks from previous predictions with shape (N, H, W). For SAM, H=W=256.
        multimask_output (bool): Flag to return multiple masks for ambiguous prompts.

    Raises:
        AssertionError: If the number of points does not match the number of labels when labels are provided.

    Returns:
        (np.ndarray): Output masks with shape (C, H, W), where C is the number of generated masks.
        (np.ndarray): Quality scores predicted by the model for each mask, with length C.

    Examples:
        >>> predictor = Predictor()
        >>> im = torch.rand(1, 3, 1024, 1024)
        >>> bboxes = [[100, 100, 200, 200]]
        >>> masks, scores, logits = predictor.prompt_inference(im, bboxes=bboxes)
    """
    features = self.get_im_features(im) if self.features is None else self.features

    bboxes, points, labels, masks = self._prepare_prompts(im.shape[2:], bboxes, points, labels, masks)
    points = (points, labels) if points is not None else None
    # Embed prompts
    sparse_embeddings, dense_embeddings = self.model.prompt_encoder(points=points, boxes=bboxes, masks=masks)

    # Predict masks
    pred_masks, pred_scores = self.model.mask_decoder(
        image_embeddings=features,
        image_pe=self.model.prompt_encoder.get_dense_pe(),
        sparse_prompt_embeddings=sparse_embeddings,
        dense_prompt_embeddings=dense_embeddings,
        multimask_output=multimask_output,
    )

    # (N, d, H, W) --> (N*d, H, W), (N, d) --> (N*d, )
    # `d` could be 1 or 3 depends on `multimask_output`.
    return pred_masks.flatten(0, 1), pred_scores.flatten(0, 1)
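
A prompt-level sketch that ends up in this method, using a single positive click (weights, image path, and coordinates are illustrative):

from ultralytics.models.sam.predict import Predictor

predictor = Predictor(overrides=dict(task="segment", mode="predict", imgsz=1024, model="sam_b.pt"))
predictor.set_image("image.jpg")

# One point prompt in pixel coordinates; label 1 marks foreground, 0 would mark background
results = predictor(points=[[450, 300]], labels=[1])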

remove_small_regions staticmethod

remove_small_regions(masks, min_area=0, nms_thresh=0.7)

Remove small disconnected regions and holes from segmentation masks.

This function performs post-processing on segmentation masks generated by the Segment Anything Model (SAM). It removes small disconnected regions and holes from the input masks, and then performs Non-Maximum Suppression (NMS) to eliminate any newly created duplicate boxes.

Parameters:

  • masks (Tensor): Segmentation masks to be processed, with shape (N, H, W) where N is the number of masks, H is height, and W is width. Required.
  • min_area (int): Minimum area threshold for removing disconnected regions and holes. Regions smaller than this will be removed. Default: 0.
  • nms_thresh (float): IoU threshold for the NMS algorithm to remove duplicate boxes. Default: 0.7.

Returns:

  • new_masks (Tensor): Processed masks with small regions removed, shape (N, H, W).
  • keep (List[int]): Indices of remaining masks after NMS, for filtering corresponding boxes.

Examples:

>>> masks = torch.rand(5, 640, 640) > 0.5  # 5 random binary masks
>>> new_masks, keep = remove_small_regions(masks, min_area=100, nms_thresh=0.7)
>>> print(f"Original masks: {masks.shape}, Processed masks: {new_masks.shape}")
>>> print(f"Indices of kept masks: {keep}")
Source code in ultralytics/models/sam/predict.py
@staticmethod
def remove_small_regions(masks, min_area=0, nms_thresh=0.7):
    """
    Remove small disconnected regions and holes from segmentation masks.

    This function performs post-processing on segmentation masks generated by the Segment Anything Model (SAM).
    It removes small disconnected regions and holes from the input masks, and then performs Non-Maximum
    Suppression (NMS) to eliminate any newly created duplicate boxes.

    Args:
        masks (torch.Tensor): Segmentation masks to be processed, with shape (N, H, W) where N is the number of
            masks, H is height, and W is width.
        min_area (int): Minimum area threshold for removing disconnected regions and holes. Regions smaller than
            this will be removed.
        nms_thresh (float): IoU threshold for the NMS algorithm to remove duplicate boxes.

    Returns:
        new_masks (torch.Tensor): Processed masks with small regions removed, shape (N, H, W).
        keep (List[int]): Indices of remaining masks after NMS, for filtering corresponding boxes.

    Examples:
        >>> masks = torch.rand(5, 640, 640) > 0.5  # 5 random binary masks
        >>> new_masks, keep = remove_small_regions(masks, min_area=100, nms_thresh=0.7)
        >>> print(f"Original masks: {masks.shape}, Processed masks: {new_masks.shape}")
        >>> print(f"Indices of kept masks: {keep}")
    """
    import torchvision  # scope for faster 'import ultralytics'

    if len(masks) == 0:
        return masks

    # Filter small disconnected regions and holes
    new_masks = []
    scores = []
    for mask in masks:
        mask = mask.cpu().numpy().astype(np.uint8)
        mask, changed = remove_small_regions(mask, min_area, mode="holes")
        unchanged = not changed
        mask, changed = remove_small_regions(mask, min_area, mode="islands")
        unchanged = unchanged and not changed

        new_masks.append(torch.as_tensor(mask).unsqueeze(0))
        # Give score=0 to changed masks and 1 to unchanged masks so NMS prefers masks not needing postprocessing
        scores.append(float(unchanged))

    # Recalculate boxes and remove any new duplicates
    new_masks = torch.cat(new_masks, dim=0)
    boxes = batched_mask_to_box(new_masks)
    keep = torchvision.ops.nms(boxes.float(), torch.as_tensor(scores), nms_thresh)

    return new_masks[keep].to(device=masks.device, dtype=masks.dtype), keep
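
Because this is a static method, it can be applied directly to any boolean mask stack, for example masks taken from a Results object; the masks below are random placeholders:

import torch
from ultralytics.models.sam.predict import Predictor

masks = torch.rand(5, 640, 640) > 0.5  # stand-in binary masks, e.g. results[0].masks.data

# Fill holes / drop islands smaller than 100 px, then NMS the recomputed boxes
new_masks, keep = Predictor.remove_small_regions(masks, min_area=100, nms_thresh=0.7)
print(new_masks.shape, keep)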

reset_image

reset_image()

Resets the current image and its features, clearing them for subsequent inference.

Source code in ultralytics/models/sam/predict.py
def reset_image(self):
    """Resets the current image and its features, clearing them for subsequent inference."""
    self.im = None
    self.features = None

set_image

set_image(image)

Preprocesses and sets a single image for inference.

This method prepares the model for inference on a single image by setting up the model if not already initialized, configuring the data source, and preprocessing the image for feature extraction. It ensures that only one image is set at a time and extracts image features for subsequent use.

Parameters:

  • image (str | ndarray): Path to the image file as a string, or a numpy array representing an image read by cv2. Required.

Raises:

  • AssertionError: If more than one image is attempted to be set.

Examples:

>>> predictor = Predictor()
>>> predictor.set_image("path/to/image.jpg")
>>> predictor.set_image(cv2.imread("path/to/image.jpg"))
Notes
  • This method should be called before performing inference on a new image.
  • The extracted features are stored in the self.features attribute for later use.
Source code in ultralytics/models/sam/predict.py
def set_image(self, image):
    """
    Preprocesses and sets a single image for inference.

    This method prepares the model for inference on a single image by setting up the model if not already
    initialized, configuring the data source, and preprocessing the image for feature extraction. It
    ensures that only one image is set at a time and extracts image features for subsequent use.

    Args:
        image (str | np.ndarray): Path to the image file as a string, or a numpy array representing
            an image read by cv2.

    Raises:
        AssertionError: If more than one image is attempted to be set.

    Examples:
        >>> predictor = Predictor()
        >>> predictor.set_image("path/to/image.jpg")
        >>> predictor.set_image(cv2.imread("path/to/image.jpg"))

    Notes:
        - This method should be called before performing inference on a new image.
        - The extracted features are stored in the `self.features` attribute for later use.
    """
    if self.model is None:
        self.setup_model(model=None)
    self.setup_source(image)
    assert len(self.dataset) == 1, "`set_image` only supports setting one image!"
    for batch in self.dataset:
        im = self.preprocess(batch[1])
        self.features = self.get_im_features(im)
        break
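
set_image runs the image encoder once and caches the result in self.features, so repeated prompt calls on the same image skip the encoder. A sketch of that reuse pattern (weights, paths, and coordinates are placeholders):

from ultralytics.models.sam.predict import Predictor

predictor = Predictor(overrides=dict(task="segment", mode="predict", imgsz=1024, model="sam_b.pt"))

predictor.set_image("image.jpg")                  # encoder runs here, features are cached
r1 = predictor(bboxes=[[50, 60, 300, 400]])       # reuses the cached features
r2 = predictor(points=[[200, 200]], labels=[1])   # so does every further prompt call
predictor.reset_image()                           # clear the cache before switching images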

set_prompts

set_prompts(prompts)

Sets prompts for subsequent inference operations.

Source code in ultralytics/models/sam/predict.py
def set_prompts(self, prompts):
    """Sets prompts for subsequent inference operations."""
    self.prompts = prompts

setup_model

setup_model(model=None, verbose=True)

Initializes the Segment Anything Model (SAM) for inference.

This method sets up the SAM model by allocating it to the appropriate device and initializing the necessary parameters for image normalization and other Ultralytics compatibility settings.

Parameters:

  • model (Module | None): A pretrained SAM model. If None, a new model is built based on config. Default: None.
  • verbose (bool): If True, prints selected device information. Default: True.

Examples:

>>> predictor = Predictor()
>>> predictor.setup_model(model=sam_model, verbose=True)
Source code in ultralytics/models/sam/predict.py
def setup_model(self, model=None, verbose=True):
    """
    Initializes the Segment Anything Model (SAM) for inference.

    This method sets up the SAM model by allocating it to the appropriate device and initializing the necessary
    parameters for image normalization and other Ultralytics compatibility settings.

    Args:
        model (torch.nn.Module | None): A pretrained SAM model. If None, a new model is built based on config.
        verbose (bool): If True, prints selected device information.

    Examples:
        >>> predictor = Predictor()
        >>> predictor.setup_model(model=sam_model, verbose=True)
    """
    device = select_device(self.args.device, verbose=verbose)
    if model is None:
        model = self.get_model()
    model.eval()
    self.model = model.to(device)
    self.device = device
    self.mean = torch.tensor([123.675, 116.28, 103.53]).view(-1, 1, 1).to(device)
    self.std = torch.tensor([58.395, 57.12, 57.375]).view(-1, 1, 1).to(device)

    # Ultralytics compatibility settings
    self.model.pt = False
    self.model.triton = False
    self.model.stride = 32
    self.model.fp16 = False
    self.done_warmup = True

setup_source

setup_source(source)

Sets up the data source for inference.

This method configures the data source from which images will be fetched for inference. It supports various input types such as image files, directories, video files, and other compatible data sources.

Parameters:

  • source (str | Path | None): The path or identifier for the image data source. Can be a file path, directory path, URL, or other supported source types. Required.

Examples:

>>> predictor = Predictor()
>>> predictor.setup_source("path/to/images")
>>> predictor.setup_source("video.mp4")
>>> predictor.setup_source(None)  # Uses default source if available
Notes
  • If source is None, the method may use a default source if configured.
  • The method adapts to different source types and prepares them for subsequent inference steps.
  • Supported source types may include local files, directories, URLs, and video streams.
Source code in ultralytics/models/sam/predict.py
def setup_source(self, source):
    """
    Sets up the data source for inference.

    This method configures the data source from which images will be fetched for inference. It supports
    various input types such as image files, directories, video files, and other compatible data sources.

    Args:
        source (str | Path | None): The path or identifier for the image data source. Can be a file path,
            directory path, URL, or other supported source types.

    Examples:
        >>> predictor = Predictor()
        >>> predictor.setup_source("path/to/images")
        >>> predictor.setup_source("video.mp4")
        >>> predictor.setup_source(None)  # Uses default source if available

    Notes:
        - If source is None, the method may use a default source if configured.
        - The method adapts to different source types and prepares them for subsequent inference steps.
        - Supported source types may include local files, directories, URLs, and video streams.
    """
    if source is not None:
        super().setup_source(source)





ultralytics.models.sam.predict.SAM2Predictor

SAM2Predictor(cfg=DEFAULT_CFG, overrides=None, _callbacks=None)

Bases: Predictor

SAM2Predictor class for advanced image segmentation using Segment Anything Model 2 architecture.

This class extends the base Predictor class to implement SAM2-specific functionality for image segmentation tasks. It provides methods for model initialization, feature extraction, and prompt-based inference.

Attributes:

  • _bb_feat_sizes (List[Tuple[int, int]]): Feature sizes for different backbone levels.
  • model (Module): The loaded SAM2 model.
  • device (device): The device (CPU or GPU) on which the model is loaded.
  • features (Dict[str, Tensor]): Cached image features for efficient inference.
  • segment_all (bool): Flag to indicate if all segments should be predicted.
  • prompts (Dict): Dictionary to store various types of prompts for inference.

Methods:

  • get_model: Retrieves and initializes the SAM2 model.
  • prompt_inference: Performs image segmentation inference based on various prompts.
  • set_image: Preprocesses and sets a single image for inference.
  • get_im_features: Extracts and processes image features using SAM2's image encoder.

Examples:

>>> predictor = SAM2Predictor(cfg)
>>> predictor.set_image("path/to/image.jpg")
>>> bboxes = [[100, 100, 200, 200]]
>>> masks, scores, _ = predictor.prompt_inference(predictor.im, bboxes=bboxes)
>>> print(f"Predicted {len(masks)} masks with average score {scores.mean():.2f}")

Sets up the Predictor object for SAM (Segment Anything Model) and applies any configuration overrides or callbacks provided. Initializes task-specific settings for SAM, such as retina_masks being set to True for optimal results.

Parameters:

  • cfg (Dict): Configuration dictionary containing default settings. Default: DEFAULT_CFG.
  • overrides (Dict | None): Dictionary of values to override default configuration. Default: None.
  • _callbacks (Dict | None): Dictionary of callback functions to customize behavior. Default: None.

Examples:

>>> predictor = Predictor(cfg=DEFAULT_CFG)
>>> predictor = Predictor(overrides={"imgsz": 640})
>>> predictor = Predictor(_callbacks={"on_predict_start": custom_callback})
Source code in ultralytics/models/sam/predict.py
def __init__(self, cfg=DEFAULT_CFG, overrides=None, _callbacks=None):
    """
    Initialize the Predictor with configuration, overrides, and callbacks.

    Sets up the Predictor object for SAM (Segment Anything Model) and applies any configuration overrides or
    callbacks provided. Initializes task-specific settings for SAM, such as retina_masks being set to True
    for optimal results.

    Args:
        cfg (Dict): Configuration dictionary containing default settings.
        overrides (Dict | None): Dictionary of values to override default configuration.
        _callbacks (Dict | None): Dictionary of callback functions to customize behavior.

    Examples:
        >>> predictor = Predictor(cfg=DEFAULT_CFG)
        >>> predictor = Predictor(overrides={"imgsz": 640})
        >>> predictor = Predictor(_callbacks={"on_predict_start": custom_callback})
    """
    if overrides is None:
        overrides = {}
    overrides.update(dict(task="segment", mode="predict", batch=1))
    super().__init__(cfg, overrides, _callbacks)
    self.args.retina_masks = True
    self.im = None
    self.features = None
    self.prompts = {}
    self.segment_all = False

get_im_features

get_im_features(im)

Extracts image features from the SAM image encoder for subsequent processing.

Source code in ultralytics/models/sam/predict.py
def get_im_features(self, im):
    """Extracts image features from the SAM image encoder for subsequent processing."""
    assert (
        isinstance(self.imgsz, (tuple, list)) and self.imgsz[0] == self.imgsz[1]
    ), f"SAM 2 models only support square image size, but got {self.imgsz}."
    self.model.set_imgsz(self.imgsz)
    self._bb_feat_sizes = [[x // (4 * i) for x in self.imgsz] for i in [1, 2, 4]]

    backbone_out = self.model.forward_image(im)
    _, vision_feats, _, _ = self.model._prepare_backbone_features(backbone_out)
    if self.model.directly_add_no_mem_embed:
        vision_feats[-1] = vision_feats[-1] + self.model.no_mem_embed
    feats = [
        feat.permute(1, 2, 0).view(1, -1, *feat_size)
        for feat, feat_size in zip(vision_feats[::-1], self._bb_feat_sizes[::-1])
    ][::-1]
    return {"image_embed": feats[-1], "high_res_feats": feats[:-1]}

get_model

get_model()

Retrieves and initializes the Segment Anything Model 2 (SAM2) for image segmentation tasks.

Source code in ultralytics/models/sam/predict.py
def get_model(self):
    """Retrieves and initializes the Segment Anything Model 2 (SAM2) for image segmentation tasks."""
    return build_sam(self.args.model)

prompt_inference

prompt_inference(
    im,
    bboxes=None,
    points=None,
    labels=None,
    masks=None,
    multimask_output=False,
    img_idx=-1,
)

Performs image segmentation inference based on various prompts using SAM2 architecture.

This method leverages the Segment Anything Model 2 (SAM2) to generate segmentation masks for input images based on provided prompts such as bounding boxes, points, or existing masks. It supports both single and multi-object prediction scenarios.

Parameters:

  • im (Tensor): Preprocessed input image tensor with shape (N, C, H, W). Required.
  • bboxes (ndarray | List[List[float]] | None): Bounding boxes in XYXY format with shape (N, 4). Default: None.
  • points (ndarray | List[List[float]] | None): Object location points with shape (N, 2), in pixels. Default: None.
  • labels (ndarray | List[int] | None): Point prompt labels with shape (N,). 1 = foreground, 0 = background. Default: None.
  • masks (ndarray | None): Low-resolution masks from previous predictions with shape (N, H, W). Default: None.
  • multimask_output (bool): Flag to return multiple masks for ambiguous prompts. Default: False.
  • img_idx (int): Index of the image in the batch to process. Default: -1.

Returns:

  • (ndarray): Output masks with shape (C, H, W), where C is the number of generated masks.
  • (ndarray): Quality scores for each mask, with length C.

Examples:

>>> predictor = SAM2Predictor(cfg)
>>> image = torch.rand(1, 3, 640, 640)
>>> bboxes = [[100, 100, 200, 200]]
>>> masks, scores, logits = predictor.prompt_inference(image, bboxes=bboxes)
>>> print(f"Generated {masks.shape[0]} masks with average score {scores.mean():.2f}")
Notes
  • The method supports batched inference for multiple objects when points or bboxes are provided.
  • Input prompts (bboxes, points) are automatically scaled to match the input image dimensions.
  • When both bboxes and points are provided, they are merged into a single 'points' input for the model.
References
  • SAM2 Paper: [Add link to SAM2 paper when available]
Source code in ultralytics/models/sam/predict.py
def prompt_inference(
    self,
    im,
    bboxes=None,
    points=None,
    labels=None,
    masks=None,
    multimask_output=False,
    img_idx=-1,
):
    """
    Performs image segmentation inference based on various prompts using SAM2 architecture.

    This method leverages the Segment Anything Model 2 (SAM2) to generate segmentation masks for input images
    based on provided prompts such as bounding boxes, points, or existing masks. It supports both single and
    multi-object prediction scenarios.

    Args:
        im (torch.Tensor): Preprocessed input image tensor with shape (N, C, H, W).
        bboxes (np.ndarray | List[List[float]] | None): Bounding boxes in XYXY format with shape (N, 4).
        points (np.ndarray | List[List[float]] | None): Object location points with shape (N, 2), in pixels.
        labels (np.ndarray | List[int] | None): Point prompt labels with shape (N,). 1 = foreground, 0 = background.
        masks (np.ndarray | None): Low-resolution masks from previous predictions with shape (N, H, W).
        multimask_output (bool): Flag to return multiple masks for ambiguous prompts.
        img_idx (int): Index of the image in the batch to process.

    Returns:
        (np.ndarray): Output masks with shape (C, H, W), where C is the number of generated masks.
        (np.ndarray): Quality scores for each mask, with length C.

    Examples:
        >>> predictor = SAM2Predictor(cfg)
        >>> image = torch.rand(1, 3, 640, 640)
        >>> bboxes = [[100, 100, 200, 200]]
        >>> masks, scores, logits = predictor.prompt_inference(image, bboxes=bboxes)
        >>> print(f"Generated {masks.shape[0]} masks with average score {scores.mean():.2f}")

    Notes:
        - The method supports batched inference for multiple objects when points or bboxes are provided.
        - Input prompts (bboxes, points) are automatically scaled to match the input image dimensions.
        - When both bboxes and points are provided, they are merged into a single 'points' input for the model.

    References:
        - SAM2 Paper: [Add link to SAM2 paper when available]
    """
    features = self.get_im_features(im) if self.features is None else self.features

    points, labels, masks = self._prepare_prompts(im.shape[2:], bboxes, points, labels, masks)
    points = (points, labels) if points is not None else None

    sparse_embeddings, dense_embeddings = self.model.sam_prompt_encoder(
        points=points,
        boxes=None,
        masks=masks,
    )
    # Predict masks
    batched_mode = points is not None and points[0].shape[0] > 1  # multi object prediction
    high_res_features = [feat_level[img_idx].unsqueeze(0) for feat_level in features["high_res_feats"]]
    pred_masks, pred_scores, _, _ = self.model.sam_mask_decoder(
        image_embeddings=features["image_embed"][img_idx].unsqueeze(0),
        image_pe=self.model.sam_prompt_encoder.get_dense_pe(),
        sparse_prompt_embeddings=sparse_embeddings,
        dense_prompt_embeddings=dense_embeddings,
        multimask_output=multimask_output,
        repeat_image=batched_mode,
        high_res_features=high_res_features,
    )
    # (N, d, H, W) --> (N*d, H, W), (N, d) --> (N*d, )
    # `d` could be 1 or 3 depends on `multimask_output`.
    return pred_masks.flatten(0, 1), pred_scores.flatten(0, 1)
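
Usage mirrors the base Predictor; only the checkpoint differs. A hedged sketch with an illustrative SAM 2 weight file and image path:

from ultralytics.models.sam.predict import SAM2Predictor

predictor = SAM2Predictor(overrides=dict(task="segment", mode="predict", imgsz=1024, model="sam2_b.pt"))
predictor.set_image("image.jpg")
results = predictor(bboxes=[[100, 100, 200, 200]])  # routed through this prompt_inference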

set_image

set_image(image)

Preprocesses and sets a single image for inference using the SAM2 model.

This method initializes the model if not already done, configures the data source to the specified image, and preprocesses the image for feature extraction. It supports setting only one image at a time.

Parameters:

  • image (str | ndarray): Path to the image file as a string, or a numpy array representing the image. Required.

Raises:

  • AssertionError: If more than one image is attempted to be set.

Examples:

>>> predictor = SAM2Predictor()
>>> predictor.set_image("path/to/image.jpg")
>>> predictor.set_image(np.array([...]))  # Using a numpy array
Notes
  • This method must be called before performing any inference on a new image.
  • The method caches the extracted features for efficient subsequent inferences on the same image.
  • Only one image can be set at a time. To process multiple images, call this method for each new image.
Source code in ultralytics/models/sam/predict.py
def set_image(self, image):
    """
    Preprocesses and sets a single image for inference using the SAM2 model.

    This method initializes the model if not already done, configures the data source to the specified image,
    and preprocesses the image for feature extraction. It supports setting only one image at a time.

    Args:
        image (str | np.ndarray): Path to the image file as a string, or a numpy array representing the image.

    Raises:
        AssertionError: If more than one image is attempted to be set.

    Examples:
        >>> predictor = SAM2Predictor()
        >>> predictor.set_image("path/to/image.jpg")
        >>> predictor.set_image(np.array([...]))  # Using a numpy array

    Notes:
        - This method must be called before performing any inference on a new image.
        - The method caches the extracted features for efficient subsequent inferences on the same image.
        - Only one image can be set at a time. To process multiple images, call this method for each new image.
    """
    if self.model is None:
        self.setup_model(model=None)
    self.setup_source(image)
    assert len(self.dataset) == 1, "`set_image` only supports setting one image!"
    for batch in self.dataset:
        im = self.preprocess(batch[1])
        self.features = self.get_im_features(im)
        break





ultralytics.models.sam.predict.SAM2VideoPredictor

SAM2VideoPredictor(cfg=DEFAULT_CFG, overrides=None, _callbacks=None)

Bases: SAM2Predictor

SAM2VideoPredictor to handle user interactions with videos and manage inference states.

This class extends the functionality of SAM2Predictor to support video processing and maintains the state of inference operations. It includes configurations for managing non-overlapping masks, clearing memory for non-conditional inputs, and setting up callbacks for prediction events.

Attributes:

  • inference_state (Dict): A dictionary to store the current state of inference operations.
  • non_overlap_masks (bool): A flag indicating whether masks should be non-overlapping.
  • clear_non_cond_mem_around_input (bool): A flag to control clearing non-conditional memory around inputs.
  • clear_non_cond_mem_for_multi_obj (bool): A flag to control clearing non-conditional memory for multi-object scenarios.
  • callbacks (Dict): A dictionary of callbacks for various prediction lifecycle events.

Parameters:

  • cfg (Dict, optional): Configuration settings for the predictor. Defaults to DEFAULT_CFG.
  • overrides (Dict, optional): Additional configuration overrides. Defaults to None.
  • _callbacks (List, optional): Custom callbacks to be added. Defaults to None.
Note

The fill_hole_area attribute is defined but not used in the current implementation.

This constructor initializes the SAM2VideoPredictor with a given configuration, applies any specified overrides, and sets up the inference state along with certain flags that control the behavior of the predictor.

Parameters:

  • cfg (Dict): Configuration dictionary containing default settings. Default: DEFAULT_CFG.
  • overrides (Dict | None): Dictionary of values to override default configuration. Default: None.
  • _callbacks (Dict | None): Dictionary of callback functions to customize behavior. Default: None.

Examples:

>>> predictor = SAM2VideoPredictor(cfg=DEFAULT_CFG)
>>> predictor = SAM2VideoPredictor(overrides={"imgsz": 640})
>>> predictor = SAM2VideoPredictor(_callbacks={"on_predict_start": custom_callback})
Source code in ultralytics/models/sam/predict.py
def __init__(self, cfg=DEFAULT_CFG, overrides=None, _callbacks=None):
    """
    Initialize the predictor with configuration and optional overrides.

    This constructor initializes the SAM2VideoPredictor with a given configuration, applies any
    specified overrides, and sets up the inference state along with certain flags
    that control the behavior of the predictor.

    Args:
        cfg (Dict): Configuration dictionary containing default settings.
        overrides (Dict | None): Dictionary of values to override default configuration.
        _callbacks (Dict | None): Dictionary of callback functions to customize behavior.

    Examples:
        >>> predictor = SAM2VideoPredictor(cfg=DEFAULT_CFG)
        >>> predictor = SAM2VideoPredictor(overrides={"imgsz": 640})
        >>> predictor = SAM2VideoPredictor(_callbacks={"on_predict_start": custom_callback})
    """
    super().__init__(cfg, overrides, _callbacks)
    self.inference_state = {}
    self.non_overlap_masks = True
    self.clear_non_cond_mem_around_input = False
    self.clear_non_cond_mem_for_multi_obj = False
    self.callbacks["on_predict_start"].append(self.init_state)
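
A sketch of the typical video workflow, following the documented pattern of prompting the first frame and letting the predictor propagate the mask (weight file, video path, and coordinates are illustrative):

from ultralytics.models.sam.predict import SAM2VideoPredictor

overrides = dict(conf=0.25, task="segment", mode="predict", imgsz=1024, model="sam2_b.pt")
predictor = SAM2VideoPredictor(overrides=overrides)

# A single positive point on the target object in the first frame; the mask is tracked through the video
results = predictor(source="video.mp4", points=[920, 470], labels=[1])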

add_new_prompts

add_new_prompts(obj_id, points=None, labels=None, masks=None, frame_idx=0)

Adds new points or masks to a specific frame for a given object ID.

This method updates the inference state with new prompts (points or masks) for a specified object and frame index. It ensures that the prompts are either points or masks, but not both, and updates the internal state accordingly. It also handles the generation of new segmentations based on the provided prompts and the existing state.

Parameters:

  • obj_id (int): The ID of the object to which the prompts are associated. Required.
  • points (Tensor, optional): The coordinates of the points of interest. Defaults to None.
  • labels (Tensor, optional): The labels corresponding to the points. Defaults to None.
  • masks (Tensor, optional): Binary masks for the object. Defaults to None.
  • frame_idx (int, optional): The index of the frame to which the prompts are applied. Defaults to 0.

Returns:

  • (tuple): A tuple containing the flattened predicted masks and a tensor of ones indicating the number of objects.

Raises:

  • AssertionError: If both masks and points are provided, or neither is provided.

Note
  • Only one type of prompt (either points or masks) can be added per call.
  • If the frame is being tracked for the first time, it is treated as an initial conditioning frame.
  • The method handles the consolidation of outputs and resizing of masks to the original video resolution.
Source code in ultralytics/models/sam/predict.py
@smart_inference_mode()
def add_new_prompts(
    self,
    obj_id,
    points=None,
    labels=None,
    masks=None,
    frame_idx=0,
):
    """
    Adds new points or masks to a specific frame for a given object ID.

    This method updates the inference state with new prompts (points or masks) for a specified
    object and frame index. It ensures that the prompts are either points or masks, but not both,
    and updates the internal state accordingly. It also handles the generation of new segmentations
    based on the provided prompts and the existing state.

    Args:
        obj_id (int): The ID of the object to which the prompts are associated.
        points (torch.Tensor, optional): The coordinates of the points of interest. Defaults to None.
        labels (torch.Tensor, optional): The labels corresponding to the points. Defaults to None.
        masks (torch.Tensor, optional): Binary masks for the object. Defaults to None.
        frame_idx (int, optional): The index of the frame to which the prompts are applied. Defaults to 0.

    Returns:
        (tuple): A tuple containing the flattened predicted masks and a tensor of ones indicating the number of objects.

    Raises:
        AssertionError: If both `masks` and `points` are provided, or neither is provided.

    Note:
        - Only one type of prompt (either points or masks) can be added per call.
        - If the frame is being tracked for the first time, it is treated as an initial conditioning frame.
        - The method handles the consolidation of outputs and resizing of masks to the original video resolution.
    """
    assert (masks is None) ^ (points is None), "'masks' and 'points' prompts are not compatible with each other."
    obj_idx = self._obj_id_to_idx(obj_id)

    point_inputs = None
    pop_key = "point_inputs_per_obj"
    if points is not None:
        point_inputs = {"point_coords": points, "point_labels": labels}
        self.inference_state["point_inputs_per_obj"][obj_idx][frame_idx] = point_inputs
        pop_key = "mask_inputs_per_obj"
    self.inference_state["mask_inputs_per_obj"][obj_idx][frame_idx] = masks
    self.inference_state[pop_key][obj_idx].pop(frame_idx, None)
    # If this frame hasn't been tracked before, we treat it as an initial conditioning
    # frame, meaning that the input points are used to generate segments on this frame without
    # using any memory from other frames, like in SAM. Otherwise (if it has been tracked),
    # the input points will be used to correct the already tracked masks.
    is_init_cond_frame = frame_idx not in self.inference_state["frames_already_tracked"]
    obj_output_dict = self.inference_state["output_dict_per_obj"][obj_idx]
    obj_temp_output_dict = self.inference_state["temp_output_dict_per_obj"][obj_idx]
    # Add a frame to conditioning output if it's an initial conditioning frame or
    # if the model sees all frames receiving clicks/mask as conditioning frames.
    is_cond = is_init_cond_frame or self.model.add_all_frames_to_correct_as_cond
    storage_key = "cond_frame_outputs" if is_cond else "non_cond_frame_outputs"

    # Get any previously predicted mask logits on this object and feed it along with
    # the new clicks into the SAM mask decoder.
    prev_sam_mask_logits = None
    # lookup temporary output dict first, which contains the most recent output
    # (if not found, then lookup conditioning and non-conditioning frame output)
    if point_inputs is not None:
        prev_out = (
            obj_temp_output_dict[storage_key].get(frame_idx)
            or obj_output_dict["cond_frame_outputs"].get(frame_idx)
            or obj_output_dict["non_cond_frame_outputs"].get(frame_idx)
        )

        if prev_out is not None and prev_out.get("pred_masks") is not None:
            prev_sam_mask_logits = prev_out["pred_masks"].to(device=self.device, non_blocking=True)
            # Clamp the scale of prev_sam_mask_logits to avoid rare numerical issues.
            prev_sam_mask_logits.clamp_(-32.0, 32.0)
    current_out = self._run_single_frame_inference(
        output_dict=obj_output_dict,  # run on the slice of a single object
        frame_idx=frame_idx,
        batch_size=1,  # run on the slice of a single object
        is_init_cond_frame=is_init_cond_frame,
        point_inputs=point_inputs,
        mask_inputs=masks,
        reverse=False,
        # Skip the memory encoder when adding clicks or mask. We execute the memory encoder
    # at the beginning of `propagate_in_video` (after the user finalizes their clicks). This
        # allows us to enforce non-overlapping constraints on all objects before encoding
        # them into memory.
        run_mem_encoder=False,
        prev_sam_mask_logits=prev_sam_mask_logits,
    )
    # Add the output to the output dict (to be used as future memory)
    obj_temp_output_dict[storage_key][frame_idx] = current_out

    # Resize the output mask to the original video resolution
    consolidated_out = self._consolidate_temp_output_across_obj(
        frame_idx,
        is_cond=is_cond,
        run_mem_encoder=False,
    )
    pred_masks = consolidated_out["pred_masks"].flatten(0, 1)
    return pred_masks.flatten(0, 1), torch.ones(1, dtype=pred_masks.dtype, device=pred_masks.device)
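
A hedged usage sketch (not part of the source docstring): adding a single foreground point for one object and a binary mask for a second object on frame 0, assuming `predictor` is a SAM2VideoPredictor whose model and video source are already set up. The tensor shapes mirror the per-object slicing used by `inference` below and are assumptions:

>>> import torch
>>> pts = torch.tensor([[[320.0, 240.0]]])  # assumed (1, 1, 2) point in pixel coordinates
>>> lbs = torch.tensor([[1]])  # 1 = foreground
>>> masks0, counts0 = predictor.add_new_prompts(obj_id=0, points=pts, labels=lbs, frame_idx=0)
>>> m = torch.zeros(1, 1024, 1024)  # assumed (1, H, W) binary mask at the model's input resolution
>>> masks1, counts1 = predictor.add_new_prompts(obj_id=1, masks=m, frame_idx=0)
>>> # Supplying both points and masks in one call raises an AssertionError.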

get_im_features

get_im_features(im, batch=1)

Extracts and processes image features using SAM2's image encoder for subsequent segmentation tasks.

Parameters:

Name Type Description Default
im Tensor

The input image tensor.

required
batch int

The batch size for expanding features if there are multiple prompts. Defaults to 1.

1

Returns:

Name Type Description
vis_feats Tensor

The visual features extracted from the image.

vis_pos_embed Tensor

The positional embeddings for the visual features.

feat_sizes List[Tuple[int]]

A list containing the sizes of the extracted features.

Note
  • If batch is greater than 1, the features are expanded to fit the batch size.
  • The method leverages the model's _prepare_backbone_features method to prepare the backbone features.
Source code in ultralytics/models/sam/predict.py
def get_im_features(self, im, batch=1):
    """
    Extracts and processes image features using SAM2's image encoder for subsequent segmentation tasks.

    Args:
        im (torch.Tensor): The input image tensor.
        batch (int, optional): The batch size for expanding features if there are multiple prompts. Defaults to 1.

    Returns:
        vis_feats (torch.Tensor): The visual features extracted from the image.
        vis_pos_embed (torch.Tensor): The positional embeddings for the visual features.
        feat_sizes (List[Tuple[int]]): A list containing the sizes of the extracted features.

    Note:
        - If `batch` is greater than 1, the features are expanded to fit the batch size.
        - The method leverages the model's `_prepare_backbone_features` method to prepare the backbone features.
    """
    backbone_out = self.model.forward_image(im)
    if batch > 1:  # expand features if there's more than one prompt
        for i, feat in enumerate(backbone_out["backbone_fpn"]):
            backbone_out["backbone_fpn"][i] = feat.expand(batch, -1, -1, -1)
        for i, pos in enumerate(backbone_out["vision_pos_enc"]):
            pos = pos.expand(batch, -1, -1, -1)
            backbone_out["vision_pos_enc"][i] = pos
    _, vis_feats, vis_pos_embed, feat_sizes = self.model._prepare_backbone_features(backbone_out)
    return vis_feats, vis_pos_embed, feat_sizes
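
An illustrative call (a sketch, not from the source), assuming the model has already been loaded and the input size is 1024:

>>> import torch
>>> im = torch.zeros(1, 3, 1024, 1024, device=predictor.device)  # dummy preprocessed image tensor
>>> vis_feats, vis_pos_embed, feat_sizes = predictor.get_im_features(im, batch=2)  # expand features for two prompts
>>> # feat_sizes holds one (H, W) entry per feature level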

get_model

get_model()

Retrieves and configures the model with binarization enabled.

Note

This method overrides the base class implementation to set the binarize flag to True.

Source code in ultralytics/models/sam/predict.py
def get_model(self):
    """
    Retrieves and configures the model with binarization enabled.

    Note:
        This method overrides the base class implementation to set the binarize flag to True.
    """
    model = super().get_model()
    model.set_binarize(True)
    return model
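
A one-line sketch of the override's effect:

>>> model = predictor.get_model()  # same model as the parent class, but with set_binarize(True) applied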

inference

inference(im, bboxes=None, points=None, labels=None, masks=None)

Perform image segmentation inference based on the given input cues, using the currently loaded image. This method leverages SAM's (Segment Anything Model) architecture consisting of image encoder, prompt encoder, and mask decoder for real-time and promptable segmentation tasks.

Parameters:

Name Type Description Default
im Tensor

The preprocessed input image in tensor format, with shape (N, C, H, W).

required
bboxes ndarray | List

Bounding boxes with shape (N, 4), in XYXY format.

None
points ndarray | List

Points indicating object locations with shape (N, 2), in pixels.

None
labels ndarray | List

Labels for point prompts, shape (N, ). 1 = foreground, 0 = background.

None
masks ndarray

Low-resolution masks from previous predictions, with shape (N, H, W). For SAM, H=W=256.

None

Returns:

Type Description
ndarray

The output masks in shape CxHxW, where C is the number of generated masks.

ndarray

An array of length C containing quality scores predicted by the model for each mask.

Source code in ultralytics/models/sam/predict.py
def inference(self, im, bboxes=None, points=None, labels=None, masks=None):
    """
    Perform image segmentation inference based on the given input cues, using the currently loaded image. This
    method leverages SAM's (Segment Anything Model) architecture consisting of image encoder, prompt encoder, and
    mask decoder for real-time and promptable segmentation tasks.

    Args:
        im (torch.Tensor): The preprocessed input image in tensor format, with shape (N, C, H, W).
        bboxes (np.ndarray | List, optional): Bounding boxes with shape (N, 4), in XYXY format.
        points (np.ndarray | List, optional): Points indicating object locations with shape (N, 2), in pixels.
        labels (np.ndarray | List, optional): Labels for point prompts, shape (N, ). 1 = foreground, 0 = background.
        masks (np.ndarray, optional): Low-resolution masks from previous predictions, with shape (N, H, W). For SAM, H=W=256.

    Returns:
        (np.ndarray): The output masks in shape CxHxW, where C is the number of generated masks.
        (np.ndarray): An array of length C containing quality scores predicted by the model for each mask.
    """
    # Override prompts if any stored in self.prompts
    bboxes = self.prompts.pop("bboxes", bboxes)
    points = self.prompts.pop("points", points)
    masks = self.prompts.pop("masks", masks)

    frame = self.dataset.frame
    self.inference_state["im"] = im
    output_dict = self.inference_state["output_dict"]
    if len(output_dict["cond_frame_outputs"]) == 0:  # initialize prompts
        points, labels, masks = self._prepare_prompts(im.shape[2:], bboxes, points, labels, masks)
        if points is not None:
            for i in range(len(points)):
                self.add_new_prompts(obj_id=i, points=points[[i]], labels=labels[[i]], frame_idx=frame)
        elif masks is not None:
            for i in range(len(masks)):
                self.add_new_prompts(obj_id=i, masks=masks[[i]], frame_idx=frame)
    self.propagate_in_video_preflight()

    consolidated_frame_inds = self.inference_state["consolidated_frame_inds"]
    batch_size = len(self.inference_state["obj_idx_to_id"])
    if len(output_dict["cond_frame_outputs"]) == 0:
        raise RuntimeError("No points are provided; please add points first")

    if frame in consolidated_frame_inds["cond_frame_outputs"]:
        storage_key = "cond_frame_outputs"
        current_out = output_dict[storage_key][frame]
        if self.clear_non_cond_mem_around_input and (self.clear_non_cond_mem_for_multi_obj or batch_size <= 1):
            # clear non-conditioning memory of the surrounding frames
            self._clear_non_cond_mem_around_input(frame)
    elif frame in consolidated_frame_inds["non_cond_frame_outputs"]:
        storage_key = "non_cond_frame_outputs"
        current_out = output_dict[storage_key][frame]
    else:
        storage_key = "non_cond_frame_outputs"
        current_out = self._run_single_frame_inference(
            output_dict=output_dict,
            frame_idx=frame,
            batch_size=batch_size,
            is_init_cond_frame=False,
            point_inputs=None,
            mask_inputs=None,
            reverse=False,
            run_mem_encoder=True,
        )
        output_dict[storage_key][frame] = current_out
    # Create slices of per-object outputs for subsequent interaction with each
    # individual object after tracking.
    self._add_output_per_object(frame, current_out, storage_key)
    self.inference_state["frames_already_tracked"].append(frame)
    pred_masks = current_out["pred_masks"].flatten(0, 1)
    pred_masks = pred_masks[(pred_masks > self.model.mask_threshold).sum((1, 2)) > 0]  # filter blank masks

    return pred_masks, torch.ones(len(pred_masks), dtype=pred_masks.dtype, device=pred_masks.device)
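
In practice this method is driven by the streaming predict loop rather than called directly. A hedged sketch via the callable interface inherited from BasePredictor, assuming the SAM2 model has already been set up and the source is a video (the path and box coordinates are illustrative):

>>> predictor.set_prompts({"bboxes": [[200, 150, 400, 350]]})  # one XYXY box to track
>>> for result in predictor(source="path/to/video.mp4", stream=True):
...     masks = result.masks  # tracked masks for the current frame, or None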

init_state staticmethod

init_state(predictor)

Initialize an inference state for the predictor.

This function sets up the initial state required for performing inference on video data. It includes initializing various dictionaries and ordered dictionaries that will store inputs, outputs, and other metadata relevant to the tracking process.

Parameters:

Name Type Description Default
predictor SAM2VideoPredictor

The predictor object for which to initialize the state.

required
Source code in ultralytics/models/sam/predict.py
@staticmethod
def init_state(predictor):
    """
    Initialize an inference state for the predictor.

    This function sets up the initial state required for performing inference on video data.
    It includes initializing various dictionaries and ordered dictionaries that will store
    inputs, outputs, and other metadata relevant to the tracking process.

    Args:
        predictor (SAM2VideoPredictor): The predictor object for which to initialize the state.
    """
    if len(predictor.inference_state) > 0:  # means initialized
        return
    assert predictor.dataset is not None
    assert predictor.dataset.mode == "video"

    inference_state = {}
    inference_state["num_frames"] = predictor.dataset.frames
    # inputs on each frame
    inference_state["point_inputs_per_obj"] = {}
    inference_state["mask_inputs_per_obj"] = {}
    # values that don't change across frames (so we only need to hold one copy of them)
    inference_state["constants"] = {}
    # mapping between client-side object id and model-side object index
    inference_state["obj_id_to_idx"] = OrderedDict()
    inference_state["obj_idx_to_id"] = OrderedDict()
    inference_state["obj_ids"] = []
    # A storage to hold the model's tracking results and states on each frame
    inference_state["output_dict"] = {
        "cond_frame_outputs": {},  # dict containing {frame_idx: <out>}
        "non_cond_frame_outputs": {},  # dict containing {frame_idx: <out>}
    }
    # Slice (view) of each object tracking results, sharing the same memory with "output_dict"
    inference_state["output_dict_per_obj"] = {}
    # A temporary storage to hold new outputs when the user interacts with a frame
    # to add clicks or mask (it's merged into "output_dict" before propagation starts)
    inference_state["temp_output_dict_per_obj"] = {}
    # Frames that already hold consolidated outputs from click or mask inputs
    # (we directly use their consolidated outputs during tracking)
    inference_state["consolidated_frame_inds"] = {
        "cond_frame_outputs": set(),  # set containing frame indices
        "non_cond_frame_outputs": set(),  # set containing frame indices
    }
    # metadata for each tracking frame (e.g. which direction it's tracked)
    inference_state["tracking_has_started"] = False
    inference_state["frames_already_tracked"] = []
    predictor.inference_state = inference_state
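
Because `init_state` is registered as an `on_predict_start` callback in the constructor, it runs automatically when prediction begins; an illustrative peek at the freshly initialized state:

>>> state = predictor.inference_state
>>> state["tracking_has_started"]  # False until propagate_in_video_preflight runs
>>> sorted(state["output_dict"])  # ['cond_frame_outputs', 'non_cond_frame_outputs']
>>> state["obj_ids"]  # empty until prompts are added via add_new_prompts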

postprocess

postprocess(preds, img, orig_imgs)

Post-processes the predictions to apply non-overlapping constraints if required.

This method extends the post-processing functionality by applying non-overlapping constraints to the predicted masks if the non_overlap_masks flag is set to True. This ensures that the masks do not overlap, which can be useful for certain applications.

Parameters:

Name Type Description Default
preds Tuple[Tensor]

The predictions from the model.

required
img Tensor

The processed image tensor.

required
orig_imgs List[ndarray]

The original images before processing.

required

Returns:

Name Type Description
results list

The post-processed predictions.

Note

If non_overlap_masks is True, the method applies constraints to ensure non-overlapping masks.

Source code in ultralytics/models/sam/predict.py
def postprocess(self, preds, img, orig_imgs):
    """
    Post-processes the predictions to apply non-overlapping constraints if required.

    This method extends the post-processing functionality by applying non-overlapping constraints
    to the predicted masks if the `non_overlap_masks` flag is set to True. This ensures that
    the masks do not overlap, which can be useful for certain applications.

    Args:
        preds (Tuple[torch.Tensor]): The predictions from the model.
        img (torch.Tensor): The processed image tensor.
        orig_imgs (List[np.ndarray]): The original images before processing.

    Returns:
        results (list): The post-processed predictions.

    Note:
        If `non_overlap_masks` is True, the method applies constraints to ensure non-overlapping masks.
    """
    results = super().postprocess(preds, img, orig_imgs)
    if self.non_overlap_masks:
        for result in results:
            if result.masks is None or len(result.masks) == 0:
                continue
            result.masks.data = self.model._apply_non_overlapping_constraints(result.masks.data.unsqueeze(0))[0]
    return results
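
A short sketch of the flag this override respects; `non_overlap_masks` is set to True in the constructor, and turning it off skips the constraint:

>>> predictor.non_overlap_masks = True  # keep only the highest-scoring object at each overlapping pixel
>>> results = predictor.postprocess(preds, img, orig_imgs)  # preds, img, orig_imgs come from the predict loop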

propagate_in_video_preflight

propagate_in_video_preflight()

Prepare inference_state and consolidate temporary outputs before tracking.

This method marks the start of tracking, disallowing the addition of new objects until the session is reset. It consolidates temporary outputs from temp_output_dict_per_obj and merges them into output_dict. Additionally, it clears non-conditioning memory around input frames and ensures that the state is consistent with the provided inputs.

Source code in ultralytics/models/sam/predict.py
@smart_inference_mode()
def propagate_in_video_preflight(self):
    """
    Prepare inference_state and consolidate temporary outputs before tracking.

    This method marks the start of tracking, disallowing the addition of new objects until the session is reset.
    It consolidates temporary outputs from `temp_output_dict_per_obj` and merges them into `output_dict`.
    Additionally, it clears non-conditioning memory around input frames and ensures that the state is consistent
    with the provided inputs.
    """
    # Tracking has started and we don't allow adding new objects until session is reset.
    self.inference_state["tracking_has_started"] = True
    batch_size = len(self.inference_state["obj_idx_to_id"])

    # Consolidate per-object temporary outputs in "temp_output_dict_per_obj" and
    # add them into "output_dict".
    temp_output_dict_per_obj = self.inference_state["temp_output_dict_per_obj"]
    output_dict = self.inference_state["output_dict"]
    # "consolidated_frame_inds" contains indices of those frames where consolidated
    # temporary outputs have been added (either in this call or any previous calls
    # to `propagate_in_video_preflight`).
    consolidated_frame_inds = self.inference_state["consolidated_frame_inds"]
    for is_cond in {False, True}:
        # Separately consolidate conditioning and non-conditioning temp outputs
        storage_key = "cond_frame_outputs" if is_cond else "non_cond_frame_outputs"
        # Find all the frames that contain temporary outputs for any objects
        # (these should be the frames that have just received clicks or mask inputs
        # via `add_new_points` or `add_new_mask`)
        temp_frame_inds = set()
        for obj_temp_output_dict in temp_output_dict_per_obj.values():
            temp_frame_inds.update(obj_temp_output_dict[storage_key].keys())
        consolidated_frame_inds[storage_key].update(temp_frame_inds)
        # consolidate the temporary output across all objects on this frame
        for frame_idx in temp_frame_inds:
            consolidated_out = self._consolidate_temp_output_across_obj(
                frame_idx, is_cond=is_cond, run_mem_encoder=True
            )
            # merge them into "output_dict" and also create per-object slices
            output_dict[storage_key][frame_idx] = consolidated_out
            self._add_output_per_object(frame_idx, consolidated_out, storage_key)
            if self.clear_non_cond_mem_around_input and (self.clear_non_cond_mem_for_multi_obj or batch_size <= 1):
                # clear non-conditioning memory of the surrounding frames
                self._clear_non_cond_mem_around_input(frame_idx)

        # clear temporary outputs in `temp_output_dict_per_obj`
        for obj_temp_output_dict in temp_output_dict_per_obj.values():
            obj_temp_output_dict[storage_key].clear()

    # edge case: if an output is added to "cond_frame_outputs", we remove any prior
    # output on the same frame in "non_cond_frame_outputs"
    for frame_idx in output_dict["cond_frame_outputs"]:
        output_dict["non_cond_frame_outputs"].pop(frame_idx, None)
    for obj_output_dict in self.inference_state["output_dict_per_obj"].values():
        for frame_idx in obj_output_dict["cond_frame_outputs"]:
            obj_output_dict["non_cond_frame_outputs"].pop(frame_idx, None)
    for frame_idx in consolidated_frame_inds["cond_frame_outputs"]:
        assert frame_idx in output_dict["cond_frame_outputs"]
        consolidated_frame_inds["non_cond_frame_outputs"].discard(frame_idx)

    # Make sure that the frame indices in "consolidated_frame_inds" are exactly those frames
    # with either points or mask inputs (which should be true under a correct workflow).
    all_consolidated_frame_inds = (
        consolidated_frame_inds["cond_frame_outputs"] | consolidated_frame_inds["non_cond_frame_outputs"]
    )
    input_frames_inds = set()
    for point_inputs_per_frame in self.inference_state["point_inputs_per_obj"].values():
        input_frames_inds.update(point_inputs_per_frame.keys())
    for mask_inputs_per_frame in self.inference_state["mask_inputs_per_obj"].values():
        input_frames_inds.update(mask_inputs_per_frame.keys())
    assert all_consolidated_frame_inds == input_frames_inds
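
This method is called internally by `inference` once prompts exist; a hedged sketch of inspecting the consolidated bookkeeping afterwards:

>>> predictor.propagate_in_video_preflight()
>>> inds = predictor.inference_state["consolidated_frame_inds"]
>>> inds["cond_frame_outputs"]  # frame indices whose prompt outputs now serve as conditioning memory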


