μ½˜ν…μΈ λ‘œ κ±΄λ„ˆλ›°κΈ°

Reference for ultralytics/models/sam/predict.py

Note

This file is available at https://github.com/ultralytics/ultralytics/blob/main/ultralytics/models/sam/predict.py. If you spot a problem, please help fix it by contributing a Pull Request πŸ› οΈ. Thank you πŸ™!



ultralytics.models.sam.predict.Predictor

Bases: BasePredictor

Predictor class for the Segment Anything Model (SAM), extending BasePredictor.

The class provides an interface for model inference tailored to image segmentation tasks. With advanced architecture and promptable segmentation capabilities, it facilitates flexible and real-time mask generation. The class can work with various types of prompts, such as bounding boxes, points, and low-resolution masks.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| cfg | dict | Configuration dictionary specifying model and task-related parameters. |
| overrides | dict | Dictionary containing values that override the default configuration. |
| _callbacks | dict | Dictionary of user-defined callback functions to augment behavior. |
| args | namespace | Namespace holding command-line arguments or other operational variables. |
| im | Tensor | Preprocessed input image tensor. |
| features | Tensor | Extracted image features used for inference. |
| prompts | dict | Collection of various prompt types, such as bounding boxes and points. |
| segment_all | bool | Flag controlling whether to segment all objects in the image or only specified ones. |

Source code in ultralytics/models/sam/predict.py
class Predictor(BasePredictor):
    """
    Predictor class for the Segment Anything Model (SAM), extending BasePredictor.

    The class provides an interface for model inference tailored to image segmentation tasks.
    With advanced architecture and promptable segmentation capabilities, it facilitates flexible and real-time
    mask generation. The class is capable of working with various types of prompts such as bounding boxes,
    points, and low-resolution masks.

    Attributes:
        cfg (dict): Configuration dictionary specifying model and task-related parameters.
        overrides (dict): Dictionary containing values that override the default configuration.
        _callbacks (dict): Dictionary of user-defined callback functions to augment behavior.
        args (namespace): Namespace to hold command-line arguments or other operational variables.
        im (torch.Tensor): Preprocessed input image tensor.
        features (torch.Tensor): Extracted image features used for inference.
        prompts (dict): Collection of various prompt types, such as bounding boxes and points.
        segment_all (bool): Flag to control whether to segment all objects in the image or only specified ones.
    """

    def __init__(self, cfg=DEFAULT_CFG, overrides=None, _callbacks=None):
        """
        Initialize the Predictor with configuration, overrides, and callbacks.

        The method sets up the Predictor object and applies any configuration overrides or callbacks provided. It
        initializes task-specific settings for SAM, such as retina_masks being set to True for optimal results.

        Args:
            cfg (dict): Configuration dictionary.
            overrides (dict, optional): Dictionary of values to override default configuration.
            _callbacks (dict, optional): Dictionary of callback functions to customize behavior.
        """
        if overrides is None:
            overrides = {}
        overrides.update(dict(task="segment", mode="predict", imgsz=1024))
        super().__init__(cfg, overrides, _callbacks)
        self.args.retina_masks = True
        self.im = None
        self.features = None
        self.prompts = {}
        self.segment_all = False

    def preprocess(self, im):
        """
        Preprocess the input image for model inference.

        The method prepares the input image by applying transformations and normalization.
        It supports both torch.Tensor and list of np.ndarray as input formats.

        Args:
            im (torch.Tensor | List[np.ndarray]): BCHW tensor format or list of HWC numpy arrays.

        Returns:
            (torch.Tensor): The preprocessed image tensor.
        """
        if self.im is not None:
            return self.im
        not_tensor = not isinstance(im, torch.Tensor)
        if not_tensor:
            im = np.stack(self.pre_transform(im))
            im = im[..., ::-1].transpose((0, 3, 1, 2))
            im = np.ascontiguousarray(im)
            im = torch.from_numpy(im)

        im = im.to(self.device)
        im = im.half() if self.model.fp16 else im.float()
        if not_tensor:
            im = (im - self.mean) / self.std
        return im

    def pre_transform(self, im):
        """
        Perform initial transformations on the input image for preprocessing.

        The method applies transformations such as resizing to prepare the image for further preprocessing.
        Currently, batched inference is not supported; hence the list length should be 1.

        Args:
            im (List[np.ndarray]): List containing images in HWC numpy array format.

        Returns:
            (List[np.ndarray]): List of transformed images.
        """
        assert len(im) == 1, "SAM model does not currently support batched inference"
        letterbox = LetterBox(self.args.imgsz, auto=False, center=False)
        return [letterbox(image=x) for x in im]

    def inference(self, im, bboxes=None, points=None, labels=None, masks=None, multimask_output=False, *args, **kwargs):
        """
        Perform image segmentation inference based on the given input cues, using the currently loaded image. This
        method leverages SAM's (Segment Anything Model) architecture consisting of image encoder, prompt encoder, and
        mask decoder for real-time and promptable segmentation tasks.

        Args:
            im (torch.Tensor): The preprocessed input image in tensor format, with shape (N, C, H, W).
            bboxes (np.ndarray | List, optional): Bounding boxes with shape (N, 4), in XYXY format.
            points (np.ndarray | List, optional): Points indicating object locations with shape (N, 2), in pixels.
            labels (np.ndarray | List, optional): Labels for point prompts, shape (N, ). 1 = foreground, 0 = background.
            masks (np.ndarray, optional): Low-resolution masks from previous predictions shape (N,H,W). For SAM H=W=256.
            multimask_output (bool, optional): Flag to return multiple masks. Helpful for ambiguous prompts.

        Returns:
            (tuple): Contains the following three elements.
                - np.ndarray: The output masks in shape CxHxW, where C is the number of generated masks.
                - np.ndarray: An array of length C containing quality scores predicted by the model for each mask.
                - np.ndarray: Low-resolution logits of shape CxHxW for subsequent inference, where H=W=256.
        """
        # Override prompts if any stored in self.prompts
        bboxes = self.prompts.pop("bboxes", bboxes)
        points = self.prompts.pop("points", points)
        masks = self.prompts.pop("masks", masks)

        if all(i is None for i in [bboxes, points, masks]):
            return self.generate(im, *args, **kwargs)

        return self.prompt_inference(im, bboxes, points, labels, masks, multimask_output)

    def prompt_inference(self, im, bboxes=None, points=None, labels=None, masks=None, multimask_output=False):
        """
        Internal function for image segmentation inference based on cues like bounding boxes, points, and masks.
        Leverages SAM's specialized architecture for prompt-based, real-time segmentation.

        Args:
            im (torch.Tensor): The preprocessed input image in tensor format, with shape (N, C, H, W).
            bboxes (np.ndarray | List, optional): Bounding boxes with shape (N, 4), in XYXY format.
            points (np.ndarray | List, optional): Points indicating object locations with shape (N, 2), in pixels.
            labels (np.ndarray | List, optional): Labels for point prompts, shape (N, ). 1 = foreground, 0 = background.
            masks (np.ndarray, optional): Low-resolution masks from previous predictions shape (N,H,W). For SAM H=W=256.
            multimask_output (bool, optional): Flag to return multiple masks. Helpful for ambiguous prompts.

        Returns:
            (tuple): Contains the following three elements.
                - np.ndarray: The output masks in shape CxHxW, where C is the number of generated masks.
                - np.ndarray: An array of length C containing quality scores predicted by the model for each mask.
                - np.ndarray: Low-resolution logits of shape CxHxW for subsequent inference, where H=W=256.
        """
        features = self.model.image_encoder(im) if self.features is None else self.features

        src_shape, dst_shape = self.batch[1][0].shape[:2], im.shape[2:]
        r = 1.0 if self.segment_all else min(dst_shape[0] / src_shape[0], dst_shape[1] / src_shape[1])
        # Transform input prompts
        if points is not None:
            points = torch.as_tensor(points, dtype=torch.float32, device=self.device)
            points = points[None] if points.ndim == 1 else points
            # Assuming labels are all positive if users don't pass labels.
            if labels is None:
                labels = np.ones(points.shape[0])
            labels = torch.as_tensor(labels, dtype=torch.int32, device=self.device)
            points *= r
            # (N, 2) --> (N, 1, 2), (N, ) --> (N, 1)
            points, labels = points[:, None, :], labels[:, None]
        if bboxes is not None:
            bboxes = torch.as_tensor(bboxes, dtype=torch.float32, device=self.device)
            bboxes = bboxes[None] if bboxes.ndim == 1 else bboxes
            bboxes *= r
        if masks is not None:
            masks = torch.as_tensor(masks, dtype=torch.float32, device=self.device).unsqueeze(1)

        points = (points, labels) if points is not None else None
        # Embed prompts
        sparse_embeddings, dense_embeddings = self.model.prompt_encoder(points=points, boxes=bboxes, masks=masks)

        # Predict masks
        pred_masks, pred_scores = self.model.mask_decoder(
            image_embeddings=features,
            image_pe=self.model.prompt_encoder.get_dense_pe(),
            sparse_prompt_embeddings=sparse_embeddings,
            dense_prompt_embeddings=dense_embeddings,
            multimask_output=multimask_output,
        )

        # (N, d, H, W) --> (N*d, H, W), (N, d) --> (N*d, )
        # `d` could be 1 or 3 depends on `multimask_output`.
        return pred_masks.flatten(0, 1), pred_scores.flatten(0, 1)

    def generate(
        self,
        im,
        crop_n_layers=0,
        crop_overlap_ratio=512 / 1500,
        crop_downscale_factor=1,
        point_grids=None,
        points_stride=32,
        points_batch_size=64,
        conf_thres=0.88,
        stability_score_thresh=0.95,
        stability_score_offset=0.95,
        crop_nms_thresh=0.7,
    ):
        """
        Perform image segmentation using the Segment Anything Model (SAM).

        This function segments an entire image into constituent parts by leveraging SAM's advanced architecture
        and real-time performance capabilities. It can optionally work on image crops for finer segmentation.

        Args:
            im (torch.Tensor): Input tensor representing the preprocessed image with dimensions (N, C, H, W).
            crop_n_layers (int): Specifies the number of layers for additional mask predictions on image crops.
                                 Each layer produces 2**i_layer number of image crops.
            crop_overlap_ratio (float): Determines the overlap between crops. Scaled down in subsequent layers.
            crop_downscale_factor (int): Scaling factor for the number of sampled points-per-side in each layer.
            point_grids (list[np.ndarray], optional): Custom grids for point sampling normalized to [0,1].
                                                      Used in the nth crop layer.
            points_stride (int, optional): Number of points to sample along each side of the image.
                                           Exclusive with 'point_grids'.
            points_batch_size (int): Batch size for the number of points processed simultaneously.
            conf_thres (float): Confidence threshold [0,1] for filtering based on the model's mask quality prediction.
            stability_score_thresh (float): Stability threshold [0,1] for mask filtering based on mask stability.
            stability_score_offset (float): Offset value for calculating stability score.
            crop_nms_thresh (float): IoU cutoff for NMS to remove duplicate masks between crops.

        Returns:
            (tuple): A tuple containing segmented masks, confidence scores, and bounding boxes.
        """
        import torchvision  # scope for faster 'import ultralytics'

        self.segment_all = True
        ih, iw = im.shape[2:]
        crop_regions, layer_idxs = generate_crop_boxes((ih, iw), crop_n_layers, crop_overlap_ratio)
        if point_grids is None:
            point_grids = build_all_layer_point_grids(points_stride, crop_n_layers, crop_downscale_factor)
        pred_masks, pred_scores, pred_bboxes, region_areas = [], [], [], []
        for crop_region, layer_idx in zip(crop_regions, layer_idxs):
            x1, y1, x2, y2 = crop_region
            w, h = x2 - x1, y2 - y1
            area = torch.tensor(w * h, device=im.device)
            points_scale = np.array([[w, h]])  # w, h
            # Crop image and interpolate to input size
            crop_im = F.interpolate(im[..., y1:y2, x1:x2], (ih, iw), mode="bilinear", align_corners=False)
            # (num_points, 2)
            points_for_image = point_grids[layer_idx] * points_scale
            crop_masks, crop_scores, crop_bboxes = [], [], []
            for (points,) in batch_iterator(points_batch_size, points_for_image):
                pred_mask, pred_score = self.prompt_inference(crop_im, points=points, multimask_output=True)
                # Interpolate predicted masks to input size
                pred_mask = F.interpolate(pred_mask[None], (h, w), mode="bilinear", align_corners=False)[0]
                idx = pred_score > conf_thres
                pred_mask, pred_score = pred_mask[idx], pred_score[idx]

                stability_score = calculate_stability_score(
                    pred_mask, self.model.mask_threshold, stability_score_offset
                )
                idx = stability_score > stability_score_thresh
                pred_mask, pred_score = pred_mask[idx], pred_score[idx]
                # Bool type is much more memory-efficient.
                pred_mask = pred_mask > self.model.mask_threshold
                # (N, 4)
                pred_bbox = batched_mask_to_box(pred_mask).float()
                keep_mask = ~is_box_near_crop_edge(pred_bbox, crop_region, [0, 0, iw, ih])
                if not torch.all(keep_mask):
                    pred_bbox, pred_mask, pred_score = pred_bbox[keep_mask], pred_mask[keep_mask], pred_score[keep_mask]

                crop_masks.append(pred_mask)
                crop_bboxes.append(pred_bbox)
                crop_scores.append(pred_score)

            # Do nms within this crop
            crop_masks = torch.cat(crop_masks)
            crop_bboxes = torch.cat(crop_bboxes)
            crop_scores = torch.cat(crop_scores)
            keep = torchvision.ops.nms(crop_bboxes, crop_scores, self.args.iou)  # NMS
            crop_bboxes = uncrop_boxes_xyxy(crop_bboxes[keep], crop_region)
            crop_masks = uncrop_masks(crop_masks[keep], crop_region, ih, iw)
            crop_scores = crop_scores[keep]

            pred_masks.append(crop_masks)
            pred_bboxes.append(crop_bboxes)
            pred_scores.append(crop_scores)
            region_areas.append(area.expand(len(crop_masks)))

        pred_masks = torch.cat(pred_masks)
        pred_bboxes = torch.cat(pred_bboxes)
        pred_scores = torch.cat(pred_scores)
        region_areas = torch.cat(region_areas)

        # Remove duplicate masks between crops
        if len(crop_regions) > 1:
            scores = 1 / region_areas
            keep = torchvision.ops.nms(pred_bboxes, scores, crop_nms_thresh)
            pred_masks, pred_bboxes, pred_scores = pred_masks[keep], pred_bboxes[keep], pred_scores[keep]

        return pred_masks, pred_scores, pred_bboxes

    def setup_model(self, model, verbose=True):
        """
        Initializes the Segment Anything Model (SAM) for inference.

        This method sets up the SAM model by allocating it to the appropriate device and initializing the necessary
        parameters for image normalization and other Ultralytics compatibility settings.

        Args:
            model (torch.nn.Module): A pre-trained SAM model. If None, a model will be built based on configuration.
            verbose (bool): If True, prints selected device information.

        Attributes:
            model (torch.nn.Module): The SAM model allocated to the chosen device for inference.
            device (torch.device): The device to which the model and tensors are allocated.
            mean (torch.Tensor): The mean values for image normalization.
            std (torch.Tensor): The standard deviation values for image normalization.
        """
        device = select_device(self.args.device, verbose=verbose)
        if model is None:
            model = build_sam(self.args.model)
        model.eval()
        self.model = model.to(device)
        self.device = device
        self.mean = torch.tensor([123.675, 116.28, 103.53]).view(-1, 1, 1).to(device)
        self.std = torch.tensor([58.395, 57.12, 57.375]).view(-1, 1, 1).to(device)

        # Ultralytics compatibility settings
        self.model.pt = False
        self.model.triton = False
        self.model.stride = 32
        self.model.fp16 = False
        self.done_warmup = True

    def postprocess(self, preds, img, orig_imgs):
        """
        Post-processes SAM's inference outputs to generate object detection masks and bounding boxes.

        The method scales masks and boxes to the original image size and applies a threshold to the mask predictions.
        The SAM model uses advanced architecture and promptable segmentation tasks to achieve real-time performance.

        Args:
            preds (tuple): The output from SAM model inference, containing masks, scores, and optional bounding boxes.
            img (torch.Tensor): The processed input image tensor.
            orig_imgs (list | torch.Tensor): The original, unprocessed images.

        Returns:
            (list): List of Results objects containing detection masks, bounding boxes, and other metadata.
        """
        # (N, 1, H, W), (N, 1)
        pred_masks, pred_scores = preds[:2]
        pred_bboxes = preds[2] if self.segment_all else None
        names = dict(enumerate(str(i) for i in range(len(pred_masks))))

        if not isinstance(orig_imgs, list):  # input images are a torch.Tensor, not a list
            orig_imgs = ops.convert_torch2numpy_batch(orig_imgs)

        results = []
        for i, masks in enumerate([pred_masks]):
            orig_img = orig_imgs[i]
            if pred_bboxes is not None:
                pred_bboxes = ops.scale_boxes(img.shape[2:], pred_bboxes.float(), orig_img.shape, padding=False)
                cls = torch.arange(len(pred_masks), dtype=torch.int32, device=pred_masks.device)
                pred_bboxes = torch.cat([pred_bboxes, pred_scores[:, None], cls[:, None]], dim=-1)

            masks = ops.scale_masks(masks[None].float(), orig_img.shape[:2], padding=False)[0]
            masks = masks > self.model.mask_threshold  # to bool
            img_path = self.batch[0][i]
            results.append(Results(orig_img, path=img_path, names=names, masks=masks, boxes=pred_bboxes))
        # Reset segment-all mode.
        self.segment_all = False
        return results

    def setup_source(self, source):
        """
        Sets up the data source for inference.

        This method configures the data source from which images will be fetched for inference. The source could be a
        directory, a video file, or other types of image data sources.

        Args:
            source (str | Path): The path to the image data source for inference.
        """
        if source is not None:
            super().setup_source(source)

    def set_image(self, image):
        """
        Preprocesses and sets a single image for inference.

        This function sets up the model if not already initialized, configures the data source to the specified image,
        and preprocesses the image for feature extraction. Only one image can be set at a time.

        Args:
            image (str | np.ndarray): Image file path as a string, or a np.ndarray image read by cv2.

        Raises:
            AssertionError: If more than one image is set.
        """
        if self.model is None:
            model = build_sam(self.args.model)
            self.setup_model(model)
        self.setup_source(image)
        assert len(self.dataset) == 1, "`set_image` only supports setting one image!"
        for batch in self.dataset:
            im = self.preprocess(batch[1])
            self.features = self.model.image_encoder(im)
            self.im = im
            break

    def set_prompts(self, prompts):
        """Set prompts in advance."""
        self.prompts = prompts

    def reset_image(self):
        """Resets the image and its features to None."""
        self.im = None
        self.features = None

    @staticmethod
    def remove_small_regions(masks, min_area=0, nms_thresh=0.7):
        """
        Perform post-processing on segmentation masks generated by the Segment Anything Model (SAM). Specifically, this
        function removes small disconnected regions and holes from the input masks, and then performs Non-Maximum
        Suppression (NMS) to eliminate any newly created duplicate boxes.

        Args:
            masks (torch.Tensor): A tensor containing the masks to be processed. Shape should be (N, H, W), where N is
                                  the number of masks, H is height, and W is width.
            min_area (int): The minimum area below which disconnected regions and holes will be removed. Defaults to 0.
            nms_thresh (float): The IoU threshold for the NMS algorithm. Defaults to 0.7.

        Returns:
            (tuple([torch.Tensor, List[int]])):
                - new_masks (torch.Tensor): The processed masks with small regions removed. Shape is (N, H, W).
                - keep (List[int]): The indices of the remaining masks post-NMS, which can be used to filter the boxes.
        """
        import torchvision  # scope for faster 'import ultralytics'

        if len(masks) == 0:
            return masks

        # Filter small disconnected regions and holes
        new_masks = []
        scores = []
        for mask in masks:
            mask = mask.cpu().numpy().astype(np.uint8)
            mask, changed = remove_small_regions(mask, min_area, mode="holes")
            unchanged = not changed
            mask, changed = remove_small_regions(mask, min_area, mode="islands")
            unchanged = unchanged and not changed

            new_masks.append(torch.as_tensor(mask).unsqueeze(0))
            # Give score=0 to changed masks and 1 to unchanged masks so NMS prefers masks not needing postprocessing
            scores.append(float(unchanged))

        # Recalculate boxes and remove any new duplicates
        new_masks = torch.cat(new_masks, dim=0)
        boxes = batched_mask_to_box(new_masks)
        keep = torchvision.ops.nms(boxes.float(), torch.as_tensor(scores), nms_thresh)

        return new_masks[keep].to(device=masks.device, dtype=masks.dtype), keep

__init__(cfg=DEFAULT_CFG, overrides=None, _callbacks=None)

Initialize the Predictor with configuration, overrides, and callbacks.

The method sets up the Predictor object and applies any configuration overrides or callbacks provided. It initializes task-specific settings for SAM, such as retina_masks being set to True for optimal results.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| cfg | dict | Configuration dictionary. | DEFAULT_CFG |
| overrides | dict | Dictionary of values to override the default configuration. | None |
| _callbacks | dict | Dictionary of callback functions to customize behavior. | None |
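A minimal instantiation sketch; note that __init__ forces task="segment", mode="predict", and imgsz=1024 regardless of what is passed in overrides (the mobile_sam.pt checkpoint name is an assumption):

from ultralytics.models.sam import Predictor as SAMPredictor

predictor = SAMPredictor(overrides=dict(conf=0.25, model="mobile_sam.pt"))
print(predictor.args.task, predictor.args.mode, predictor.args.imgsz)  # segment predict 1024
print(predictor.args.retina_masks)  # True, set for optimal SAM results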
Source code in ultralytics/models/sam/predict.py
def __init__(self, cfg=DEFAULT_CFG, overrides=None, _callbacks=None):
    """
    Initialize the Predictor with configuration, overrides, and callbacks.

    The method sets up the Predictor object and applies any configuration overrides or callbacks provided. It
    initializes task-specific settings for SAM, such as retina_masks being set to True for optimal results.

    Args:
        cfg (dict): Configuration dictionary.
        overrides (dict, optional): Dictionary of values to override default configuration.
        _callbacks (dict, optional): Dictionary of callback functions to customize behavior.
    """
    if overrides is None:
        overrides = {}
    overrides.update(dict(task="segment", mode="predict", imgsz=1024))
    super().__init__(cfg, overrides, _callbacks)
    self.args.retina_masks = True
    self.im = None
    self.features = None
    self.prompts = {}
    self.segment_all = False

generate(im, crop_n_layers=0, crop_overlap_ratio=512 / 1500, crop_downscale_factor=1, point_grids=None, points_stride=32, points_batch_size=64, conf_thres=0.88, stability_score_thresh=0.95, stability_score_offset=0.95, crop_nms_thresh=0.7)

Perform image segmentation using the Segment Anything Model (SAM).

This function segments an entire image into its constituent parts by leveraging SAM's advanced architecture and real-time performance capabilities. It can optionally work on image crops for finer segmentation.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| im | Tensor | Input tensor representing the preprocessed image, with dimensions (N, C, H, W). | required |
| crop_n_layers | int | Number of layers for additional mask predictions on image crops. Each layer produces 2**i_layer image crops. | 0 |
| crop_overlap_ratio | float | Overlap between crops. Scaled down in subsequent layers. | 512 / 1500 |
| crop_downscale_factor | int | Scaling factor for the number of sampled points-per-side in each layer. | 1 |
| point_grids | list[ndarray] | Custom grids for point sampling, normalized to [0,1]. Used in the nth crop layer. | None |
| points_stride | int | Number of points to sample along each side of the image. Mutually exclusive with point_grids. | 32 |
| points_batch_size | int | Batch size for the number of points processed simultaneously. | 64 |
| conf_thres | float | Confidence threshold in [0,1] for filtering based on the model's mask quality prediction. | 0.88 |
| stability_score_thresh | float | Stability threshold in [0,1] for mask filtering based on mask stability. | 0.95 |
| stability_score_offset | float | Offset value for calculating the stability score. | 0.95 |
| crop_nms_thresh | float | IoU cutoff used by NMS to remove duplicate masks between crops. | 0.7 |

Returns:

| Type | Description |
| --- | --- |
| tuple | A tuple containing segmented masks, confidence scores, and bounding boxes. |

Source code in ultralytics/models/sam/predict.py
def generate(
    self,
    im,
    crop_n_layers=0,
    crop_overlap_ratio=512 / 1500,
    crop_downscale_factor=1,
    point_grids=None,
    points_stride=32,
    points_batch_size=64,
    conf_thres=0.88,
    stability_score_thresh=0.95,
    stability_score_offset=0.95,
    crop_nms_thresh=0.7,
):
    """
    Perform image segmentation using the Segment Anything Model (SAM).

    This function segments an entire image into constituent parts by leveraging SAM's advanced architecture
    and real-time performance capabilities. It can optionally work on image crops for finer segmentation.

    Args:
        im (torch.Tensor): Input tensor representing the preprocessed image with dimensions (N, C, H, W).
        crop_n_layers (int): Specifies the number of layers for additional mask predictions on image crops.
                             Each layer produces 2**i_layer number of image crops.
        crop_overlap_ratio (float): Determines the overlap between crops. Scaled down in subsequent layers.
        crop_downscale_factor (int): Scaling factor for the number of sampled points-per-side in each layer.
        point_grids (list[np.ndarray], optional): Custom grids for point sampling normalized to [0,1].
                                                  Used in the nth crop layer.
        points_stride (int, optional): Number of points to sample along each side of the image.
                                       Exclusive with 'point_grids'.
        points_batch_size (int): Batch size for the number of points processed simultaneously.
        conf_thres (float): Confidence threshold [0,1] for filtering based on the model's mask quality prediction.
        stability_score_thresh (float): Stability threshold [0,1] for mask filtering based on mask stability.
        stability_score_offset (float): Offset value for calculating stability score.
        crop_nms_thresh (float): IoU cutoff for NMS to remove duplicate masks between crops.

    Returns:
        (tuple): A tuple containing segmented masks, confidence scores, and bounding boxes.
    """
    import torchvision  # scope for faster 'import ultralytics'

    self.segment_all = True
    ih, iw = im.shape[2:]
    crop_regions, layer_idxs = generate_crop_boxes((ih, iw), crop_n_layers, crop_overlap_ratio)
    if point_grids is None:
        point_grids = build_all_layer_point_grids(points_stride, crop_n_layers, crop_downscale_factor)
    pred_masks, pred_scores, pred_bboxes, region_areas = [], [], [], []
    for crop_region, layer_idx in zip(crop_regions, layer_idxs):
        x1, y1, x2, y2 = crop_region
        w, h = x2 - x1, y2 - y1
        area = torch.tensor(w * h, device=im.device)
        points_scale = np.array([[w, h]])  # w, h
        # Crop image and interpolate to input size
        crop_im = F.interpolate(im[..., y1:y2, x1:x2], (ih, iw), mode="bilinear", align_corners=False)
        # (num_points, 2)
        points_for_image = point_grids[layer_idx] * points_scale
        crop_masks, crop_scores, crop_bboxes = [], [], []
        for (points,) in batch_iterator(points_batch_size, points_for_image):
            pred_mask, pred_score = self.prompt_inference(crop_im, points=points, multimask_output=True)
            # Interpolate predicted masks to input size
            pred_mask = F.interpolate(pred_mask[None], (h, w), mode="bilinear", align_corners=False)[0]
            idx = pred_score > conf_thres
            pred_mask, pred_score = pred_mask[idx], pred_score[idx]

            stability_score = calculate_stability_score(
                pred_mask, self.model.mask_threshold, stability_score_offset
            )
            idx = stability_score > stability_score_thresh
            pred_mask, pred_score = pred_mask[idx], pred_score[idx]
            # Bool type is much more memory-efficient.
            pred_mask = pred_mask > self.model.mask_threshold
            # (N, 4)
            pred_bbox = batched_mask_to_box(pred_mask).float()
            keep_mask = ~is_box_near_crop_edge(pred_bbox, crop_region, [0, 0, iw, ih])
            if not torch.all(keep_mask):
                pred_bbox, pred_mask, pred_score = pred_bbox[keep_mask], pred_mask[keep_mask], pred_score[keep_mask]

            crop_masks.append(pred_mask)
            crop_bboxes.append(pred_bbox)
            crop_scores.append(pred_score)

        # Do nms within this crop
        crop_masks = torch.cat(crop_masks)
        crop_bboxes = torch.cat(crop_bboxes)
        crop_scores = torch.cat(crop_scores)
        keep = torchvision.ops.nms(crop_bboxes, crop_scores, self.args.iou)  # NMS
        crop_bboxes = uncrop_boxes_xyxy(crop_bboxes[keep], crop_region)
        crop_masks = uncrop_masks(crop_masks[keep], crop_region, ih, iw)
        crop_scores = crop_scores[keep]

        pred_masks.append(crop_masks)
        pred_bboxes.append(crop_bboxes)
        pred_scores.append(crop_scores)
        region_areas.append(area.expand(len(crop_masks)))

    pred_masks = torch.cat(pred_masks)
    pred_bboxes = torch.cat(pred_bboxes)
    pred_scores = torch.cat(pred_scores)
    region_areas = torch.cat(region_areas)

    # Remove duplicate masks between crops
    if len(crop_regions) > 1:
        scores = 1 / region_areas
        keep = torchvision.ops.nms(pred_bboxes, scores, crop_nms_thresh)
        pred_masks, pred_bboxes, pred_scores = pred_masks[keep], pred_bboxes[keep], pred_scores[keep]

    return pred_masks, pred_scores, pred_bboxes

inference(im, bboxes=None, points=None, labels=None, masks=None, multimask_output=False, *args, **kwargs)

Perform image segmentation inference based on the given input cues, using the currently loaded image. This method leverages SAM's (Segment Anything Model) architecture, consisting of an image encoder, prompt encoder, and mask decoder, for real-time and promptable segmentation tasks.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| im | Tensor | The preprocessed input image in tensor format, with shape (N, C, H, W). | required |
| bboxes | ndarray \| List | Bounding boxes with shape (N, 4), in XYXY format. | None |
| points | ndarray \| List | Points indicating object locations, with shape (N, 2), in pixels. | None |
| labels | ndarray \| List | Labels for point prompts, shape (N,). 1 = foreground, 0 = background. | None |
| masks | ndarray | Low-resolution masks from previous predictions, shape (N, H, W). For SAM, H=W=256. | None |
| multimask_output | bool | Flag to return multiple masks. Helpful for ambiguous prompts. | False |

Returns:

| Type | Description |
| --- | --- |
| tuple | Contains three elements: the output masks (np.ndarray, shape CxHxW, where C is the number of generated masks); an array of length C with the quality score the model predicts for each mask (np.ndarray); and low-resolution logits (np.ndarray, shape CxHxW, where H=W=256) for subsequent inference. |

Source code in ultralytics/models/sam/predict.py
def inference(self, im, bboxes=None, points=None, labels=None, masks=None, multimask_output=False, *args, **kwargs):
    """
    Perform image segmentation inference based on the given input cues, using the currently loaded image. This
    method leverages SAM's (Segment Anything Model) architecture consisting of image encoder, prompt encoder, and
    mask decoder for real-time and promptable segmentation tasks.

    Args:
        im (torch.Tensor): The preprocessed input image in tensor format, with shape (N, C, H, W).
        bboxes (np.ndarray | List, optional): Bounding boxes with shape (N, 4), in XYXY format.
        points (np.ndarray | List, optional): Points indicating object locations with shape (N, 2), in pixels.
        labels (np.ndarray | List, optional): Labels for point prompts, shape (N, ). 1 = foreground, 0 = background.
        masks (np.ndarray, optional): Low-resolution masks from previous predictions shape (N,H,W). For SAM H=W=256.
        multimask_output (bool, optional): Flag to return multiple masks. Helpful for ambiguous prompts.

    Returns:
        (tuple): Contains the following three elements.
            - np.ndarray: The output masks in shape CxHxW, where C is the number of generated masks.
            - np.ndarray: An array of length C containing quality scores predicted by the model for each mask.
            - np.ndarray: Low-resolution logits of shape CxHxW for subsequent inference, where H=W=256.
    """
    # Override prompts if any stored in self.prompts
    bboxes = self.prompts.pop("bboxes", bboxes)
    points = self.prompts.pop("points", points)
    masks = self.prompts.pop("masks", masks)

    if all(i is None for i in [bboxes, points, masks]):
        return self.generate(im, *args, **kwargs)

    return self.prompt_inference(im, bboxes, points, labels, masks, multimask_output)

postprocess(preds, img, orig_imgs)

Post-processes SAM's inference outputs to generate object detection masks and bounding boxes.

The method scales masks and boxes to the original image size and applies a threshold to the mask predictions. The SAM model uses advanced architecture and promptable segmentation to achieve real-time performance.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| preds | tuple | The output from SAM model inference, containing masks, scores, and optional bounding boxes. | required |
| img | Tensor | The processed input image tensor. | required |
| orig_imgs | list \| Tensor | The original, unprocessed images. | required |

Returns:

| Type | Description |
| --- | --- |
| list | List of Results objects containing detection masks, bounding boxes, and other metadata. |

Source code in ultralytics/models/sam/predict.py
def postprocess(self, preds, img, orig_imgs):
    """
    Post-processes SAM's inference outputs to generate object detection masks and bounding boxes.

    The method scales masks and boxes to the original image size and applies a threshold to the mask predictions.
    The SAM model uses advanced architecture and promptable segmentation tasks to achieve real-time performance.

    Args:
        preds (tuple): The output from SAM model inference, containing masks, scores, and optional bounding boxes.
        img (torch.Tensor): The processed input image tensor.
        orig_imgs (list | torch.Tensor): The original, unprocessed images.

    Returns:
        (list): List of Results objects containing detection masks, bounding boxes, and other metadata.
    """
    # (N, 1, H, W), (N, 1)
    pred_masks, pred_scores = preds[:2]
    pred_bboxes = preds[2] if self.segment_all else None
    names = dict(enumerate(str(i) for i in range(len(pred_masks))))

    if not isinstance(orig_imgs, list):  # input images are a torch.Tensor, not a list
        orig_imgs = ops.convert_torch2numpy_batch(orig_imgs)

    results = []
    for i, masks in enumerate([pred_masks]):
        orig_img = orig_imgs[i]
        if pred_bboxes is not None:
            pred_bboxes = ops.scale_boxes(img.shape[2:], pred_bboxes.float(), orig_img.shape, padding=False)
            cls = torch.arange(len(pred_masks), dtype=torch.int32, device=pred_masks.device)
            pred_bboxes = torch.cat([pred_bboxes, pred_scores[:, None], cls[:, None]], dim=-1)

        masks = ops.scale_masks(masks[None].float(), orig_img.shape[:2], padding=False)[0]
        masks = masks > self.model.mask_threshold  # to bool
        img_path = self.batch[0][i]
        results.append(Results(orig_img, path=img_path, names=names, masks=masks, boxes=pred_bboxes))
    # Reset segment-all mode.
    self.segment_all = False
    return results

pre_transform(im)

Perform initial transformations on the input image for preprocessing.

The method applies transformations such as resizing to prepare the image for further preprocessing. Batched inference is currently not supported, so the list length must be 1.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| im | List[ndarray] | List containing images in HWC numpy array format. | required |

Returns:

| Type | Description |
| --- | --- |
| List[ndarray] | List of transformed images. |

Source code in ultralytics/models/sam/predict.py
def pre_transform(self, im):
    """
    Perform initial transformations on the input image for preprocessing.

    The method applies transformations such as resizing to prepare the image for further preprocessing.
    Currently, batched inference is not supported; hence the list length should be 1.

    Args:
        im (List[np.ndarray]): List containing images in HWC numpy array format.

    Returns:
        (List[np.ndarray]): List of transformed images.
    """
    assert len(im) == 1, "SAM model does not currently support batched inference"
    letterbox = LetterBox(self.args.imgsz, auto=False, center=False)
    return [letterbox(image=x) for x in im]

preprocess(im)

λͺ¨λΈ 좔둠을 μœ„ν•΄ μž…λ ₯ 이미지λ₯Ό 사전 μ²˜λ¦¬ν•©λ‹ˆλ‹€.

이 λ©”μ„œλ“œλŠ” λ³€ν™˜κ³Ό μ •κ·œν™”λ₯Ό μ μš©ν•˜μ—¬ μž…λ ₯ 이미지λ₯Ό μ€€λΉ„ν•©λ‹ˆλ‹€. μž…λ ₯ ν˜•μ‹μœΌλ‘œ torch.Tensor 및 λͺ©λ‘ np.ndarrayλ₯Ό λͺ¨λ‘ μ§€μ›ν•©λ‹ˆλ‹€.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| im | Tensor \| List[ndarray] | Images in BCHW tensor format or as a list of HWC numpy arrays. | required |

Returns:

| Type | Description |
| --- | --- |
| Tensor | The preprocessed image tensor. |

Source code in ultralytics/models/sam/predict.py
def preprocess(self, im):
    """
    Preprocess the input image for model inference.

    The method prepares the input image by applying transformations and normalization.
    It supports both torch.Tensor and list of np.ndarray as input formats.

    Args:
        im (torch.Tensor | List[np.ndarray]): BCHW tensor format or list of HWC numpy arrays.

    Returns:
        (torch.Tensor): The preprocessed image tensor.
    """
    if self.im is not None:
        return self.im
    not_tensor = not isinstance(im, torch.Tensor)
    if not_tensor:
        im = np.stack(self.pre_transform(im))
        im = im[..., ::-1].transpose((0, 3, 1, 2))
        im = np.ascontiguousarray(im)
        im = torch.from_numpy(im)

    im = im.to(self.device)
    im = im.half() if self.model.fp16 else im.float()
    if not_tensor:
        im = (im - self.mean) / self.std
    return im

prompt_inference(im, bboxes=None, points=None, labels=None, masks=None, multimask_output=False)

Internal function for image segmentation inference based on cues like bounding boxes, points, and masks. Leverages SAM's specialized architecture for prompt-based, real-time segmentation.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| im | Tensor | The preprocessed input image in tensor format, with shape (N, C, H, W). | required |
| bboxes | ndarray \| List | Bounding boxes with shape (N, 4), in XYXY format. | None |
| points | ndarray \| List | Points indicating object locations, with shape (N, 2), in pixels. | None |
| labels | ndarray \| List | Labels for point prompts, shape (N,). 1 = foreground, 0 = background. | None |
| masks | ndarray | Low-resolution masks from previous predictions, shape (N, H, W). For SAM, H=W=256. | None |
| multimask_output | bool | Flag to return multiple masks. Helpful for ambiguous prompts. | False |

Returns:

| Type | Description |
| --- | --- |
| tuple | Contains three elements: the output masks (np.ndarray, shape CxHxW, where C is the number of generated masks); an array of length C with the quality score the model predicts for each mask (np.ndarray); and low-resolution logits (np.ndarray, shape CxHxW, where H=W=256) for subsequent inference. |

Source code in ultralytics/models/sam/predict.py
def prompt_inference(self, im, bboxes=None, points=None, labels=None, masks=None, multimask_output=False):
    """
    Internal function for image segmentation inference based on cues like bounding boxes, points, and masks.
    Leverages SAM's specialized architecture for prompt-based, real-time segmentation.

    Args:
        im (torch.Tensor): The preprocessed input image in tensor format, with shape (N, C, H, W).
        bboxes (np.ndarray | List, optional): Bounding boxes with shape (N, 4), in XYXY format.
        points (np.ndarray | List, optional): Points indicating object locations with shape (N, 2), in pixels.
        labels (np.ndarray | List, optional): Labels for point prompts, shape (N, ). 1 = foreground, 0 = background.
        masks (np.ndarray, optional): Low-resolution masks from previous predictions shape (N,H,W). For SAM H=W=256.
        multimask_output (bool, optional): Flag to return multiple masks. Helpful for ambiguous prompts.

    Returns:
        (tuple): Contains the following three elements.
            - np.ndarray: The output masks in shape CxHxW, where C is the number of generated masks.
            - np.ndarray: An array of length C containing quality scores predicted by the model for each mask.
            - np.ndarray: Low-resolution logits of shape CxHxW for subsequent inference, where H=W=256.
    """
    features = self.model.image_encoder(im) if self.features is None else self.features

    src_shape, dst_shape = self.batch[1][0].shape[:2], im.shape[2:]
    r = 1.0 if self.segment_all else min(dst_shape[0] / src_shape[0], dst_shape[1] / src_shape[1])
    # Transform input prompts
    if points is not None:
        points = torch.as_tensor(points, dtype=torch.float32, device=self.device)
        points = points[None] if points.ndim == 1 else points
        # Assuming labels are all positive if users don't pass labels.
        if labels is None:
            labels = np.ones(points.shape[0])
        labels = torch.as_tensor(labels, dtype=torch.int32, device=self.device)
        points *= r
        # (N, 2) --> (N, 1, 2), (N, ) --> (N, 1)
        points, labels = points[:, None, :], labels[:, None]
    if bboxes is not None:
        bboxes = torch.as_tensor(bboxes, dtype=torch.float32, device=self.device)
        bboxes = bboxes[None] if bboxes.ndim == 1 else bboxes
        bboxes *= r
    if masks is not None:
        masks = torch.as_tensor(masks, dtype=torch.float32, device=self.device).unsqueeze(1)

    points = (points, labels) if points is not None else None
    # Embed prompts
    sparse_embeddings, dense_embeddings = self.model.prompt_encoder(points=points, boxes=bboxes, masks=masks)

    # Predict masks
    pred_masks, pred_scores = self.model.mask_decoder(
        image_embeddings=features,
        image_pe=self.model.prompt_encoder.get_dense_pe(),
        sparse_prompt_embeddings=sparse_embeddings,
        dense_prompt_embeddings=dense_embeddings,
        multimask_output=multimask_output,
    )

    # (N, d, H, W) --> (N*d, H, W), (N, d) --> (N*d, )
    # `d` could be 1 or 3 depends on `multimask_output`.
    return pred_masks.flatten(0, 1), pred_scores.flatten(0, 1)

remove_small_regions(masks, min_area=0, nms_thresh=0.7) staticmethod

Perform post-processing on segmentation masks generated by the Segment Anything Model (SAM). Specifically, this function removes small disconnected regions and holes from the input masks, then performs Non-Maximum Suppression (NMS) to eliminate any newly created duplicate boxes.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| masks | Tensor | A tensor containing the masks to be processed, with shape (N, H, W), where N is the number of masks, H is height, and W is width. | required |
| min_area | int | The minimum area below which disconnected regions and holes are removed. | 0 |
| nms_thresh | float | The IoU threshold for the NMS algorithm. | 0.7 |

Returns:

| Type | Description |
| --- | --- |
| tuple([Tensor, List[int]]) | new_masks (torch.Tensor): the processed masks with small regions removed, shape (N, H, W); keep (List[int]): the indices of the remaining masks post-NMS, which can be used to filter the boxes. |
Source code in ultralytics/models/sam/predict.py
@staticmethod
def remove_small_regions(masks, min_area=0, nms_thresh=0.7):
    """
    Perform post-processing on segmentation masks generated by the Segment Anything Model (SAM). Specifically, this
    function removes small disconnected regions and holes from the input masks, and then performs Non-Maximum
    Suppression (NMS) to eliminate any newly created duplicate boxes.

    Args:
        masks (torch.Tensor): A tensor containing the masks to be processed. Shape should be (N, H, W), where N is
                              the number of masks, H is height, and W is width.
        min_area (int): The minimum area below which disconnected regions and holes will be removed. Defaults to 0.
        nms_thresh (float): The IoU threshold for the NMS algorithm. Defaults to 0.7.

    Returns:
        (tuple([torch.Tensor, List[int]])):
            - new_masks (torch.Tensor): The processed masks with small regions removed. Shape is (N, H, W).
            - keep (List[int]): The indices of the remaining masks post-NMS, which can be used to filter the boxes.
    """
    import torchvision  # scope for faster 'import ultralytics'

    if len(masks) == 0:
        return masks

    # Filter small disconnected regions and holes
    new_masks = []
    scores = []
    for mask in masks:
        mask = mask.cpu().numpy().astype(np.uint8)
        mask, changed = remove_small_regions(mask, min_area, mode="holes")
        unchanged = not changed
        mask, changed = remove_small_regions(mask, min_area, mode="islands")
        unchanged = unchanged and not changed

        new_masks.append(torch.as_tensor(mask).unsqueeze(0))
        # Give score=0 to changed masks and 1 to unchanged masks so NMS prefers masks not needing postprocessing
        scores.append(float(unchanged))

    # Recalculate boxes and remove any new duplicates
    new_masks = torch.cat(new_masks, dim=0)
    boxes = batched_mask_to_box(new_masks)
    keep = torchvision.ops.nms(boxes.float(), torch.as_tensor(scores), nms_thresh)

    return new_masks[keep].to(device=masks.device, dtype=masks.dtype), keep

reset_image()

Resets the image and its features to None.

Source code in ultralytics/models/sam/predict.py
def reset_image(self):
    """Resets the image and its features to None."""
    self.im = None
    self.features = None

set_image(image)

Preprocesses and sets a single image for inference.

This function sets up the model if it is not already initialized, configures the data source to the specified image, and preprocesses the image for feature extraction. Only one image can be set at a time.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| image | str \| ndarray | Image file path as a string, or an np.ndarray image read by cv2. | required |

Raises:

| Type | Description |
| --- | --- |
| AssertionError | If more than one image is set. |
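A sketch of the single-image workflow: encode the image once, then reuse the cached features across several prompted calls (checkpoint and image path are assumptions):

from ultralytics.models.sam import Predictor as SAMPredictor

predictor = SAMPredictor(overrides=dict(conf=0.25, model="mobile_sam.pt"))
predictor.set_image("path/to/image.jpg")  # runs the image encoder once
results = predictor(bboxes=[439, 437, 524, 709])  # box prompt against cached features
results = predictor(points=[900, 370], labels=[1])  # point prompt, same features
predictor.reset_image()  # clear the cached image and features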

Source code in ultralytics/models/sam/predict.py
def set_image(self, image):
    """
    Preprocesses and sets a single image for inference.

    This function sets up the model if not already initialized, configures the data source to the specified image,
    and preprocesses the image for feature extraction. Only one image can be set at a time.

    Args:
        image (str | np.ndarray): Image file path as a string, or a np.ndarray image read by cv2.

    Raises:
        AssertionError: If more than one image is set.
    """
    if self.model is None:
        model = build_sam(self.args.model)
        self.setup_model(model)
    self.setup_source(image)
    assert len(self.dataset) == 1, "`set_image` only supports setting one image!"
    for batch in self.dataset:
        im = self.preprocess(batch[1])
        self.features = self.model.image_encoder(im)
        self.im = im
        break

set_prompts(prompts)

Set prompts in advance.

Source code in ultralytics/models/sam/predict.py
def set_prompts(self, prompts):
    """Set prompts in advance."""
    self.prompts = prompts

setup_model(model, verbose=True)

Initializes the Segment Anything Model (SAM) for inference.

This method sets up the SAM model by allocating it to the appropriate device and initializing the parameters needed for image normalization and other Ultralytics compatibility settings.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| model | Module | A pre-trained SAM model. If None, a model is built based on the configuration. | required |
| verbose | bool | If True, prints selected device information. | True |

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| model | Module | The SAM model allocated to the chosen device for inference. |
| device | device | The device to which the model and tensors are allocated. |
| mean | Tensor | The mean values for image normalization. |
| std | Tensor | The standard deviation values for image normalization. |
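A minimal sketch of explicit model setup (normally triggered automatically on first use; the mobile_sam.pt checkpoint name is an assumption):

from ultralytics.models.sam import Predictor as SAMPredictor

predictor = SAMPredictor(overrides=dict(model="mobile_sam.pt"))
predictor.setup_model(model=None)  # None builds SAM from args.model via build_sam
print(predictor.device)  # selected device, e.g. cpu or cuda:0
print(predictor.mean.shape, predictor.std.shape)  # torch.Size([3, 1, 1]) each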

Source code in ultralytics/models/sam/predict.py
def setup_model(self, model, verbose=True):
    """
    Initializes the Segment Anything Model (SAM) for inference.

    This method sets up the SAM model by allocating it to the appropriate device and initializing the necessary
    parameters for image normalization and other Ultralytics compatibility settings.

    Args:
        model (torch.nn.Module): A pre-trained SAM model. If None, a model will be built based on configuration.
        verbose (bool): If True, prints selected device information.

    Attributes:
        model (torch.nn.Module): The SAM model allocated to the chosen device for inference.
        device (torch.device): The device to which the model and tensors are allocated.
        mean (torch.Tensor): The mean values for image normalization.
        std (torch.Tensor): The standard deviation values for image normalization.
    """
    device = select_device(self.args.device, verbose=verbose)
    if model is None:
        model = build_sam(self.args.model)
    model.eval()
    self.model = model.to(device)
    self.device = device
    self.mean = torch.tensor([123.675, 116.28, 103.53]).view(-1, 1, 1).to(device)
    self.std = torch.tensor([58.395, 57.12, 57.375]).view(-1, 1, 1).to(device)

    # Ultralytics compatibility settings
    self.model.pt = False
    self.model.triton = False
    self.model.stride = 32
    self.model.fp16 = False
    self.done_warmup = True

setup_source(source)

Sets up the data source for inference.

This method configures the data source from which images will be fetched for inference. The source could be a directory, a video file, or another type of image data source.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| source | str \| Path | The path to the image data source for inference. | required |
Source code in ultralytics/models/sam/predict.py
def setup_source(self, source):
    """
    Sets up the data source for inference.

    This method configures the data source from which images will be fetched for inference. The source could be a
    directory, a video file, or other types of image data sources.

    Args:
        source (str | Path): The path to the image data source for inference.
    """
    if source is not None:
        super().setup_source(source)




