Reference for ultralytics/models/sam/predict.py

Note

This file is available at https://github.com/ultralytics/ultralytics/blob/main/ultralytics/models/sam/predict.py. If you spot a problem, please help fix it by contributing a Pull Request 🛠️. Thank you!



ultralytics.models.sam.predict.Predictor

Bases: BasePredictor

Predictor class for the Segment Anything Model (SAM), extending BasePredictor.

This class provides an interface for model inference tailored to image segmentation tasks. With its advanced architecture and promptable segmentation capabilities, it facilitates flexible and real-time mask generation. The class can handle various types of prompts, such as bounding boxes, points, and low-resolution masks.

Attributes:

cfg (dict): Configuration dictionary specifying model and task-related parameters.
overrides (dict): Dictionary containing values that override the default configuration.
_callbacks (dict): Dictionary of user-defined callback functions to augment behavior.
args (namespace): Namespace holding command-line arguments and other operational variables.
im (Tensor): Preprocessed input image tensor.
features (Tensor): Extracted image features used for inference.
prompts (dict): Collection of various prompt types, such as bounding boxes and points.
segment_all (bool): Flag controlling whether to segment all objects in the image or only the specified ones.
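
For orientation, here is a minimal usage sketch of this Predictor, mirroring the documented SAM workflow (the checkpoint "sam_b.pt" and the image path are illustrative):

from ultralytics.models.sam import Predictor as SAMPredictor

# Build the predictor; __init__ forces task="segment", mode="predict", imgsz=1024
overrides = dict(conf=0.25, model="sam_b.pt")
predictor = SAMPredictor(overrides=overrides)

# Prompted segmentation: encode the image once, then query it with prompts
predictor.set_image("ultralytics/assets/zidane.jpg")
results = predictor(bboxes=[439, 437, 524, 709])    # box prompt, XYXY pixels
results = predictor(points=[900, 370], labels=[1])  # foreground point prompt
predictor.reset_image()

# With no prompts at all, inference() falls back to generate() ("segment everything")
results = predictor("ultralytics/assets/zidane.jpg")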

Source code in ultralytics/models/sam/predict.py
class Predictor(BasePredictor):
    """
    Predictor class for the Segment Anything Model (SAM), extending BasePredictor.

    The class provides an interface for model inference tailored to image segmentation tasks.
    With advanced architecture and promptable segmentation capabilities, it facilitates flexible and real-time
    mask generation. The class is capable of working with various types of prompts such as bounding boxes,
    points, and low-resolution masks.

    Attributes:
        cfg (dict): Configuration dictionary specifying model and task-related parameters.
        overrides (dict): Dictionary containing values that override the default configuration.
        _callbacks (dict): Dictionary of user-defined callback functions to augment behavior.
        args (namespace): Namespace to hold command-line arguments or other operational variables.
        im (torch.Tensor): Preprocessed input image tensor.
        features (torch.Tensor): Extracted image features used for inference.
        prompts (dict): Collection of various prompt types, such as bounding boxes and points.
        segment_all (bool): Flag to control whether to segment all objects in the image or only specified ones.
    """

    def __init__(self, cfg=DEFAULT_CFG, overrides=None, _callbacks=None):
        """
        Initialize the Predictor with configuration, overrides, and callbacks.

        The method sets up the Predictor object and applies any configuration overrides or callbacks provided. It
        initializes task-specific settings for SAM, such as retina_masks being set to True for optimal results.

        Args:
            cfg (dict): Configuration dictionary.
            overrides (dict, optional): Dictionary of values to override default configuration.
            _callbacks (dict, optional): Dictionary of callback functions to customize behavior.
        """
        if overrides is None:
            overrides = {}
        overrides.update(dict(task="segment", mode="predict", imgsz=1024))
        super().__init__(cfg, overrides, _callbacks)
        self.args.retina_masks = True
        self.im = None
        self.features = None
        self.prompts = {}
        self.segment_all = False

    def preprocess(self, im):
        """
        Preprocess the input image for model inference.

        The method prepares the input image by applying transformations and normalization.
        It supports both torch.Tensor and list of np.ndarray as input formats.

        Args:
            im (torch.Tensor | List[np.ndarray]): BCHW tensor format or list of HWC numpy arrays.

        Returns:
            (torch.Tensor): The preprocessed image tensor.
        """
        if self.im is not None:
            return self.im
        not_tensor = not isinstance(im, torch.Tensor)
        if not_tensor:
            im = np.stack(self.pre_transform(im))
            im = im[..., ::-1].transpose((0, 3, 1, 2))
            im = np.ascontiguousarray(im)
            im = torch.from_numpy(im)

        im = im.to(self.device)
        im = im.half() if self.model.fp16 else im.float()
        if not_tensor:
            im = (im - self.mean) / self.std
        return im

    def pre_transform(self, im):
        """
        Perform initial transformations on the input image for preprocessing.

        The method applies transformations such as resizing to prepare the image for further preprocessing.
        Currently, batched inference is not supported; hence the list length should be 1.

        Args:
            im (List[np.ndarray]): List containing images in HWC numpy array format.

        Returns:
            (List[np.ndarray]): List of transformed images.
        """
        assert len(im) == 1, "SAM model does not currently support batched inference"
        letterbox = LetterBox(self.args.imgsz, auto=False, center=False)
        return [letterbox(image=x) for x in im]

    def inference(self, im, bboxes=None, points=None, labels=None, masks=None, multimask_output=False, *args, **kwargs):
        """
        Perform image segmentation inference based on the given input cues, using the currently loaded image. This
        method leverages SAM's (Segment Anything Model) architecture consisting of image encoder, prompt encoder, and
        mask decoder for real-time and promptable segmentation tasks.

        Args:
            im (torch.Tensor): The preprocessed input image in tensor format, with shape (N, C, H, W).
            bboxes (np.ndarray | List, optional): Bounding boxes with shape (N, 4), in XYXY format.
            points (np.ndarray | List, optional): Points indicating object locations with shape (N, 2), in pixel coordinates.
            labels (np.ndarray | List, optional): Labels for point prompts, shape (N, ). 1 for foreground and 0 for background.
            masks (np.ndarray, optional): Low-resolution masks from previous predictions. Shape should be (N, H, W). For SAM, H=W=256.
            multimask_output (bool, optional): Flag to return multiple masks. Helpful for ambiguous prompts. Defaults to False.

        Returns:
            (tuple): Contains the following three elements.
                - np.ndarray: The output masks in shape CxHxW, where C is the number of generated masks.
                - np.ndarray: An array of length C containing quality scores predicted by the model for each mask.
                - np.ndarray: Low-resolution logits of shape CxHxW for subsequent inference, where H=W=256.
        """
        # Override prompts if any stored in self.prompts
        bboxes = self.prompts.pop("bboxes", bboxes)
        points = self.prompts.pop("points", points)
        masks = self.prompts.pop("masks", masks)

        if all(i is None for i in [bboxes, points, masks]):
            return self.generate(im, *args, **kwargs)

        return self.prompt_inference(im, bboxes, points, labels, masks, multimask_output)

    def prompt_inference(self, im, bboxes=None, points=None, labels=None, masks=None, multimask_output=False):
        """
        Internal function for image segmentation inference based on cues like bounding boxes, points, and masks.
        Leverages SAM's specialized architecture for prompt-based, real-time segmentation.

        Args:
            im (torch.Tensor): The preprocessed input image in tensor format, with shape (N, C, H, W).
            bboxes (np.ndarray | List, optional): Bounding boxes with shape (N, 4), in XYXY format.
            points (np.ndarray | List, optional): Points indicating object locations with shape (N, 2), in pixel coordinates.
            labels (np.ndarray | List, optional): Labels for point prompts, shape (N, ). 1 for foreground and 0 for background.
            masks (np.ndarray, optional): Low-resolution masks from previous predictions. Shape should be (N, H, W). For SAM, H=W=256.
            multimask_output (bool, optional): Flag to return multiple masks. Helpful for ambiguous prompts. Defaults to False.

        Returns:
            (tuple): Contains the following three elements.
                - np.ndarray: The output masks in shape CxHxW, where C is the number of generated masks.
                - np.ndarray: An array of length C containing quality scores predicted by the model for each mask.
                - np.ndarray: Low-resolution logits of shape CxHxW for subsequent inference, where H=W=256.
        """
        features = self.model.image_encoder(im) if self.features is None else self.features

        src_shape, dst_shape = self.batch[1][0].shape[:2], im.shape[2:]
        r = 1.0 if self.segment_all else min(dst_shape[0] / src_shape[0], dst_shape[1] / src_shape[1])
        # Transform input prompts
        if points is not None:
            points = torch.as_tensor(points, dtype=torch.float32, device=self.device)
            points = points[None] if points.ndim == 1 else points
            # Assuming labels are all positive if users don't pass labels.
            if labels is None:
                labels = np.ones(points.shape[0])
            labels = torch.as_tensor(labels, dtype=torch.int32, device=self.device)
            points *= r
            # (N, 2) --> (N, 1, 2), (N, ) --> (N, 1)
            points, labels = points[:, None, :], labels[:, None]
        if bboxes is not None:
            bboxes = torch.as_tensor(bboxes, dtype=torch.float32, device=self.device)
            bboxes = bboxes[None] if bboxes.ndim == 1 else bboxes
            bboxes *= r
        if masks is not None:
            masks = torch.as_tensor(masks, dtype=torch.float32, device=self.device).unsqueeze(1)

        points = (points, labels) if points is not None else None
        # Embed prompts
        sparse_embeddings, dense_embeddings = self.model.prompt_encoder(points=points, boxes=bboxes, masks=masks)

        # Predict masks
        pred_masks, pred_scores = self.model.mask_decoder(
            image_embeddings=features,
            image_pe=self.model.prompt_encoder.get_dense_pe(),
            sparse_prompt_embeddings=sparse_embeddings,
            dense_prompt_embeddings=dense_embeddings,
            multimask_output=multimask_output,
        )

        # (N, d, H, W) --> (N*d, H, W), (N, d) --> (N*d, )
        # `d` could be 1 or 3 depends on `multimask_output`.
        return pred_masks.flatten(0, 1), pred_scores.flatten(0, 1)

    def generate(
        self,
        im,
        crop_n_layers=0,
        crop_overlap_ratio=512 / 1500,
        crop_downscale_factor=1,
        point_grids=None,
        points_stride=32,
        points_batch_size=64,
        conf_thres=0.88,
        stability_score_thresh=0.95,
        stability_score_offset=0.95,
        crop_nms_thresh=0.7,
    ):
        """
        Perform image segmentation using the Segment Anything Model (SAM).

        This function segments an entire image into constituent parts by leveraging SAM's advanced architecture
        and real-time performance capabilities. It can optionally work on image crops for finer segmentation.

        Args:
            im (torch.Tensor): Input tensor representing the preprocessed image with dimensions (N, C, H, W).
            crop_n_layers (int): Specifies the number of layers for additional mask predictions on image crops.
                                 Each layer produces 2**i_layer number of image crops.
            crop_overlap_ratio (float): Determines the extent of overlap between crops. Scaled down in subsequent layers.
            crop_downscale_factor (int): Scaling factor for the number of sampled points-per-side in each layer.
            point_grids (list[np.ndarray], optional): Custom grids for point sampling normalized to [0,1].
                                                      Used in the nth crop layer.
            points_stride (int, optional): Number of points to sample along each side of the image.
                                           Exclusive with 'point_grids'.
            points_batch_size (int): Batch size for the number of points processed simultaneously.
            conf_thres (float): Confidence threshold [0,1] for filtering based on the model's mask quality prediction.
            stability_score_thresh (float): Stability threshold [0,1] for mask filtering based on mask stability.
            stability_score_offset (float): Offset value for calculating stability score.
            crop_nms_thresh (float): IoU cutoff for Non-Maximum Suppression (NMS) to remove duplicate masks between crops.

        Returns:
            (tuple): A tuple containing segmented masks, confidence scores, and bounding boxes.
        """
        self.segment_all = True
        ih, iw = im.shape[2:]
        crop_regions, layer_idxs = generate_crop_boxes((ih, iw), crop_n_layers, crop_overlap_ratio)
        if point_grids is None:
            point_grids = build_all_layer_point_grids(points_stride, crop_n_layers, crop_downscale_factor)
        pred_masks, pred_scores, pred_bboxes, region_areas = [], [], [], []
        for crop_region, layer_idx in zip(crop_regions, layer_idxs):
            x1, y1, x2, y2 = crop_region
            w, h = x2 - x1, y2 - y1
            area = torch.tensor(w * h, device=im.device)
            points_scale = np.array([[w, h]])  # w, h
            # Crop image and interpolate to input size
            crop_im = F.interpolate(im[..., y1:y2, x1:x2], (ih, iw), mode="bilinear", align_corners=False)
            # (num_points, 2)
            points_for_image = point_grids[layer_idx] * points_scale
            crop_masks, crop_scores, crop_bboxes = [], [], []
            for (points,) in batch_iterator(points_batch_size, points_for_image):
                pred_mask, pred_score = self.prompt_inference(crop_im, points=points, multimask_output=True)
                # Interpolate predicted masks to input size
                pred_mask = F.interpolate(pred_mask[None], (h, w), mode="bilinear", align_corners=False)[0]
                idx = pred_score > conf_thres
                pred_mask, pred_score = pred_mask[idx], pred_score[idx]

                stability_score = calculate_stability_score(
                    pred_mask, self.model.mask_threshold, stability_score_offset
                )
                idx = stability_score > stability_score_thresh
                pred_mask, pred_score = pred_mask[idx], pred_score[idx]
                # Bool type is much more memory-efficient.
                pred_mask = pred_mask > self.model.mask_threshold
                # (N, 4)
                pred_bbox = batched_mask_to_box(pred_mask).float()
                keep_mask = ~is_box_near_crop_edge(pred_bbox, crop_region, [0, 0, iw, ih])
                if not torch.all(keep_mask):
                    pred_bbox, pred_mask, pred_score = pred_bbox[keep_mask], pred_mask[keep_mask], pred_score[keep_mask]

                crop_masks.append(pred_mask)
                crop_bboxes.append(pred_bbox)
                crop_scores.append(pred_score)

            # Do nms within this crop
            crop_masks = torch.cat(crop_masks)
            crop_bboxes = torch.cat(crop_bboxes)
            crop_scores = torch.cat(crop_scores)
            keep = torchvision.ops.nms(crop_bboxes, crop_scores, self.args.iou)  # NMS
            crop_bboxes = uncrop_boxes_xyxy(crop_bboxes[keep], crop_region)
            crop_masks = uncrop_masks(crop_masks[keep], crop_region, ih, iw)
            crop_scores = crop_scores[keep]

            pred_masks.append(crop_masks)
            pred_bboxes.append(crop_bboxes)
            pred_scores.append(crop_scores)
            region_areas.append(area.expand(len(crop_masks)))

        pred_masks = torch.cat(pred_masks)
        pred_bboxes = torch.cat(pred_bboxes)
        pred_scores = torch.cat(pred_scores)
        region_areas = torch.cat(region_areas)

        # Remove duplicate masks between crops
        if len(crop_regions) > 1:
            scores = 1 / region_areas
            keep = torchvision.ops.nms(pred_bboxes, scores, crop_nms_thresh)
            pred_masks, pred_bboxes, pred_scores = pred_masks[keep], pred_bboxes[keep], pred_scores[keep]

        return pred_masks, pred_scores, pred_bboxes

    def setup_model(self, model, verbose=True):
        """
        Initializes the Segment Anything Model (SAM) for inference.

        This method sets up the SAM model by allocating it to the appropriate device and initializing the necessary
        parameters for image normalization and other Ultralytics compatibility settings.

        Args:
            model (torch.nn.Module): A pre-trained SAM model. If None, a model will be built based on configuration.
            verbose (bool): If True, prints selected device information.

        Attributes:
            model (torch.nn.Module): The SAM model allocated to the chosen device for inference.
            device (torch.device): The device to which the model and tensors are allocated.
            mean (torch.Tensor): The mean values for image normalization.
            std (torch.Tensor): The standard deviation values for image normalization.
        """
        device = select_device(self.args.device, verbose=verbose)
        if model is None:
            model = build_sam(self.args.model)
        model.eval()
        self.model = model.to(device)
        self.device = device
        self.mean = torch.tensor([123.675, 116.28, 103.53]).view(-1, 1, 1).to(device)
        self.std = torch.tensor([58.395, 57.12, 57.375]).view(-1, 1, 1).to(device)

        # Ultralytics compatibility settings
        self.model.pt = False
        self.model.triton = False
        self.model.stride = 32
        self.model.fp16 = False
        self.done_warmup = True

    def postprocess(self, preds, img, orig_imgs):
        """
        Post-processes SAM's inference outputs to generate object detection masks and bounding boxes.

        The method scales masks and boxes to the original image size and applies a threshold to the mask predictions. The
        SAM model uses advanced architecture and promptable segmentation tasks to achieve real-time performance.

        Args:
            preds (tuple): The output from SAM model inference, containing masks, scores, and optional bounding boxes.
            img (torch.Tensor): The processed input image tensor.
            orig_imgs (list | torch.Tensor): The original, unprocessed images.

        Returns:
            (list): List of Results objects containing detection masks, bounding boxes, and other metadata.
        """
        # (N, 1, H, W), (N, 1)
        pred_masks, pred_scores = preds[:2]
        pred_bboxes = preds[2] if self.segment_all else None
        names = dict(enumerate(str(i) for i in range(len(pred_masks))))

        if not isinstance(orig_imgs, list):  # input images are a torch.Tensor, not a list
            orig_imgs = ops.convert_torch2numpy_batch(orig_imgs)

        results = []
        for i, masks in enumerate([pred_masks]):
            orig_img = orig_imgs[i]
            if pred_bboxes is not None:
                pred_bboxes = ops.scale_boxes(img.shape[2:], pred_bboxes.float(), orig_img.shape, padding=False)
                cls = torch.arange(len(pred_masks), dtype=torch.int32, device=pred_masks.device)
                pred_bboxes = torch.cat([pred_bboxes, pred_scores[:, None], cls[:, None]], dim=-1)

            masks = ops.scale_masks(masks[None].float(), orig_img.shape[:2], padding=False)[0]
            masks = masks > self.model.mask_threshold  # to bool
            img_path = self.batch[0][i]
            results.append(Results(orig_img, path=img_path, names=names, masks=masks, boxes=pred_bboxes))
        # Reset segment-all mode.
        self.segment_all = False
        return results

    def setup_source(self, source):
        """
        Sets up the data source for inference.

        This method configures the data source from which images will be fetched for inference. The source could be a
        directory, a video file, or other types of image data sources.

        Args:
            source (str | Path): The path to the image data source for inference.
        """
        if source is not None:
            super().setup_source(source)

    def set_image(self, image):
        """
        Preprocesses and sets a single image for inference.

        This function sets up the model if not already initialized, configures the data source to the specified image,
        and preprocesses the image for feature extraction. Only one image can be set at a time.

        Args:
            image (str | np.ndarray): Image file path as a string, or a np.ndarray image read by cv2.

        Raises:
            AssertionError: If more than one image is set.
        """
        if self.model is None:
            model = build_sam(self.args.model)
            self.setup_model(model)
        self.setup_source(image)
        assert len(self.dataset) == 1, "`set_image` only supports setting one image!"
        for batch in self.dataset:
            im = self.preprocess(batch[1])
            self.features = self.model.image_encoder(im)
            self.im = im
            break

    def set_prompts(self, prompts):
        """Set prompts in advance."""
        self.prompts = prompts

    def reset_image(self):
        """Resets the image and its features to None."""
        self.im = None
        self.features = None

    @staticmethod
    def remove_small_regions(masks, min_area=0, nms_thresh=0.7):
        """
        Perform post-processing on segmentation masks generated by the Segment Anything Model (SAM). Specifically, this
        function removes small disconnected regions and holes from the input masks, and then performs Non-Maximum
        Suppression (NMS) to eliminate any newly created duplicate boxes.

        Args:
            masks (torch.Tensor): A tensor containing the masks to be processed. Shape should be (N, H, W), where N is
                                  the number of masks, H is height, and W is width.
            min_area (int): The minimum area below which disconnected regions and holes will be removed. Defaults to 0.
            nms_thresh (float): The IoU threshold for the NMS algorithm. Defaults to 0.7.

        Returns:
            (tuple([torch.Tensor, List[int]])):
                - new_masks (torch.Tensor): The processed masks with small regions removed. Shape is (N, H, W).
                - keep (List[int]): The indices of the remaining masks post-NMS, which can be used to filter the boxes.
        """
        if len(masks) == 0:
            return masks

        # Filter small disconnected regions and holes
        new_masks = []
        scores = []
        for mask in masks:
            mask = mask.cpu().numpy().astype(np.uint8)
            mask, changed = remove_small_regions(mask, min_area, mode="holes")
            unchanged = not changed
            mask, changed = remove_small_regions(mask, min_area, mode="islands")
            unchanged = unchanged and not changed

            new_masks.append(torch.as_tensor(mask).unsqueeze(0))
            # Give score=0 to changed masks and 1 to unchanged masks so NMS prefers masks not needing postprocessing
            scores.append(float(unchanged))

        # Recalculate boxes and remove any new duplicates
        new_masks = torch.cat(new_masks, dim=0)
        boxes = batched_mask_to_box(new_masks)
        keep = torchvision.ops.nms(boxes.float(), torch.as_tensor(scores), nms_thresh)

        return new_masks[keep].to(device=masks.device, dtype=masks.dtype), keep

__init__(cfg=DEFAULT_CFG, overrides=None, _callbacks=None)

Initialize the Predictor with configuration, overrides, and callbacks.

The method sets up the Predictor object and applies any configuration overrides or callbacks provided. It initializes task-specific settings for SAM, such as setting retina_masks to True for optimal results.

Parameters:

cfg (dict): Configuration dictionary. Default: DEFAULT_CFG
overrides (dict, optional): Dictionary of values to override the default configuration. Default: None
_callbacks (dict, optional): Dictionary of callback functions to customize behavior. Default: None
Source code in ultralytics/models/sam/predict.py
def __init__(self, cfg=DEFAULT_CFG, overrides=None, _callbacks=None):
    """
    Initialize the Predictor with configuration, overrides, and callbacks.

    The method sets up the Predictor object and applies any configuration overrides or callbacks provided. It
    initializes task-specific settings for SAM, such as retina_masks being set to True for optimal results.

    Args:
        cfg (dict): Configuration dictionary.
        overrides (dict, optional): Dictionary of values to override default configuration.
        _callbacks (dict, optional): Dictionary of callback functions to customize behavior.
    """
    if overrides is None:
        overrides = {}
    overrides.update(dict(task="segment", mode="predict", imgsz=1024))
    super().__init__(cfg, overrides, _callbacks)
    self.args.retina_masks = True
    self.im = None
    self.features = None
    self.prompts = {}
    self.segment_all = False
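
A small illustrative check of the override behavior implemented above (values follow directly from __init__; constructing a Predictor does not load any model weights):

from ultralytics.models.sam import Predictor

p = Predictor(overrides={"imgsz": 640, "conf": 0.3})
assert p.args.imgsz == 1024  # user value is overwritten by overrides.update(...)
assert p.args.retina_masks   # set unconditionally after super().__init__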

generate(im, crop_n_layers=0, crop_overlap_ratio=512 / 1500, crop_downscale_factor=1, point_grids=None, points_stride=32, points_batch_size=64, conf_thres=0.88, stability_score_thresh=0.95, stability_score_offset=0.95, crop_nms_thresh=0.7)

Perform image segmentation using the Segment Anything Model (SAM).

This function segments an entire image into its constituent parts by leveraging SAM's advanced architecture and real-time performance capabilities. It can optionally work on image crops for finer segmentation.

Parameters:

im (Tensor): Input tensor representing the preprocessed image with dimensions (N, C, H, W). Required.
crop_n_layers (int): Number of layers for additional mask predictions on image crops. Each layer produces 2**i_layer image crops. Default: 0
crop_overlap_ratio (float): Extent of overlap between crops, scaled down in subsequent layers. Default: 512 / 1500
crop_downscale_factor (int): Scaling factor for the number of sampled points-per-side in each layer. Default: 1
point_grids (list[ndarray], optional): Custom grids for point sampling, normalized to [0, 1]. Used in the nth crop layer. Default: None
points_stride (int, optional): Number of points to sample along each side of the image. Mutually exclusive with point_grids. Default: 32
points_batch_size (int): Batch size for the number of points processed simultaneously. Default: 64
conf_thres (float): Confidence threshold in [0, 1] for filtering based on the model's mask quality prediction. Default: 0.88
stability_score_thresh (float): Stability threshold in [0, 1] for mask filtering based on mask stability. Default: 0.95
stability_score_offset (float): Offset value for calculating the stability score. Default: 0.95
crop_nms_thresh (float): IoU cutoff for Non-Maximum Suppression (NMS) to remove duplicate masks between crops. Default: 0.7

Returns:

(tuple): A tuple containing segmented masks, confidence scores, and bounding boxes.

Source code in ultralytics/models/sam/predict.py
def generate(
    self,
    im,
    crop_n_layers=0,
    crop_overlap_ratio=512 / 1500,
    crop_downscale_factor=1,
    point_grids=None,
    points_stride=32,
    points_batch_size=64,
    conf_thres=0.88,
    stability_score_thresh=0.95,
    stability_score_offset=0.95,
    crop_nms_thresh=0.7,
):
    """
    Perform image segmentation using the Segment Anything Model (SAM).

    This function segments an entire image into constituent parts by leveraging SAM's advanced architecture
    and real-time performance capabilities. It can optionally work on image crops for finer segmentation.

    Args:
        im (torch.Tensor): Input tensor representing the preprocessed image with dimensions (N, C, H, W).
        crop_n_layers (int): Specifies the number of layers for additional mask predictions on image crops.
                             Each layer produces 2**i_layer number of image crops.
        crop_overlap_ratio (float): Determines the extent of overlap between crops. Scaled down in subsequent layers.
        crop_downscale_factor (int): Scaling factor for the number of sampled points-per-side in each layer.
        point_grids (list[np.ndarray], optional): Custom grids for point sampling normalized to [0,1].
                                                  Used in the nth crop layer.
        points_stride (int, optional): Number of points to sample along each side of the image.
                                       Exclusive with 'point_grids'.
        points_batch_size (int): Batch size for the number of points processed simultaneously.
        conf_thres (float): Confidence threshold [0,1] for filtering based on the model's mask quality prediction.
        stability_score_thresh (float): Stability threshold [0,1] for mask filtering based on mask stability.
        stability_score_offset (float): Offset value for calculating stability score.
        crop_nms_thresh (float): IoU cutoff for Non-Maximum Suppression (NMS) to remove duplicate masks between crops.

    Returns:
        (tuple): A tuple containing segmented masks, confidence scores, and bounding boxes.
    """
    self.segment_all = True
    ih, iw = im.shape[2:]
    crop_regions, layer_idxs = generate_crop_boxes((ih, iw), crop_n_layers, crop_overlap_ratio)
    if point_grids is None:
        point_grids = build_all_layer_point_grids(points_stride, crop_n_layers, crop_downscale_factor)
    pred_masks, pred_scores, pred_bboxes, region_areas = [], [], [], []
    for crop_region, layer_idx in zip(crop_regions, layer_idxs):
        x1, y1, x2, y2 = crop_region
        w, h = x2 - x1, y2 - y1
        area = torch.tensor(w * h, device=im.device)
        points_scale = np.array([[w, h]])  # w, h
        # Crop image and interpolate to input size
        crop_im = F.interpolate(im[..., y1:y2, x1:x2], (ih, iw), mode="bilinear", align_corners=False)
        # (num_points, 2)
        points_for_image = point_grids[layer_idx] * points_scale
        crop_masks, crop_scores, crop_bboxes = [], [], []
        for (points,) in batch_iterator(points_batch_size, points_for_image):
            pred_mask, pred_score = self.prompt_inference(crop_im, points=points, multimask_output=True)
            # Interpolate predicted masks to input size
            pred_mask = F.interpolate(pred_mask[None], (h, w), mode="bilinear", align_corners=False)[0]
            idx = pred_score > conf_thres
            pred_mask, pred_score = pred_mask[idx], pred_score[idx]

            stability_score = calculate_stability_score(
                pred_mask, self.model.mask_threshold, stability_score_offset
            )
            idx = stability_score > stability_score_thresh
            pred_mask, pred_score = pred_mask[idx], pred_score[idx]
            # Bool type is much more memory-efficient.
            pred_mask = pred_mask > self.model.mask_threshold
            # (N, 4)
            pred_bbox = batched_mask_to_box(pred_mask).float()
            keep_mask = ~is_box_near_crop_edge(pred_bbox, crop_region, [0, 0, iw, ih])
            if not torch.all(keep_mask):
                pred_bbox, pred_mask, pred_score = pred_bbox[keep_mask], pred_mask[keep_mask], pred_score[keep_mask]

            crop_masks.append(pred_mask)
            crop_bboxes.append(pred_bbox)
            crop_scores.append(pred_score)

        # Do nms within this crop
        crop_masks = torch.cat(crop_masks)
        crop_bboxes = torch.cat(crop_bboxes)
        crop_scores = torch.cat(crop_scores)
        keep = torchvision.ops.nms(crop_bboxes, crop_scores, self.args.iou)  # NMS
        crop_bboxes = uncrop_boxes_xyxy(crop_bboxes[keep], crop_region)
        crop_masks = uncrop_masks(crop_masks[keep], crop_region, ih, iw)
        crop_scores = crop_scores[keep]

        pred_masks.append(crop_masks)
        pred_bboxes.append(crop_bboxes)
        pred_scores.append(crop_scores)
        region_areas.append(area.expand(len(crop_masks)))

    pred_masks = torch.cat(pred_masks)
    pred_bboxes = torch.cat(pred_bboxes)
    pred_scores = torch.cat(pred_scores)
    region_areas = torch.cat(region_areas)

    # Remove duplicate masks between crops
    if len(crop_regions) > 1:
        scores = 1 / region_areas
        keep = torchvision.ops.nms(pred_bboxes, scores, crop_nms_thresh)
        pred_masks, pred_bboxes, pred_scores = pred_masks[keep], pred_bboxes[keep], pred_scores[keep]

    return pred_masks, pred_scores, pred_bboxes
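
Because inference() routes promptless calls here, generate() can be tuned directly from the predictor call; a hedged sketch (path and values are illustrative):

# One extra crop layer (the full image plus 4 overlapping crops) and a sparser
# 16x16 point grid; keyword arguments are forwarded via inference() to generate()
results = predictor("ultralytics/assets/zidane.jpg", crop_n_layers=1, points_stride=16)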

inference(im, bboxes=None, points=None, labels=None, masks=None, multimask_output=False, *args, **kwargs)

Perform image segmentation inference based on the given input cues, using the currently loaded image. This method leverages SAM's architecture, consisting of an image encoder, prompt encoder, and mask decoder, for real-time and promptable segmentation tasks.

Parameters:

im (Tensor): The preprocessed input image in tensor format, with shape (N, C, H, W). Required.
bboxes (ndarray | List, optional): Bounding boxes with shape (N, 4), in XYXY format. Default: None
points (ndarray | List, optional): Points indicating object locations, with shape (N, 2), in pixel coordinates. Default: None
labels (ndarray | List, optional): Labels for point prompts, shape (N,). 1 for foreground and 0 for background. Default: None
masks (ndarray, optional): Low-resolution masks from previous predictions. Shape should be (N, H, W); for SAM, H=W=256. Default: None
multimask_output (bool, optional): Flag to return multiple masks, helpful for ambiguous prompts. Default: False

Returns:

(tuple): Contains the following three elements.
  - np.ndarray: The output masks in shape CxHxW, where C is the number of generated masks.
  - np.ndarray: An array of length C containing the quality scores predicted by the model for each mask.
  - np.ndarray: Low-resolution logits of shape CxHxW for subsequent inference, where H=W=256.

Source code in ultralytics/models/sam/predict.py
def inference(self, im, bboxes=None, points=None, labels=None, masks=None, multimask_output=False, *args, **kwargs):
    """
    Perform image segmentation inference based on the given input cues, using the currently loaded image. This
    method leverages SAM's (Segment Anything Model) architecture consisting of image encoder, prompt encoder, and
    mask decoder for real-time and promptable segmentation tasks.

    Args:
        im (torch.Tensor): The preprocessed input image in tensor format, with shape (N, C, H, W).
        bboxes (np.ndarray | List, optional): Bounding boxes with shape (N, 4), in XYXY format.
        points (np.ndarray | List, optional): Points indicating object locations with shape (N, 2), in pixel coordinates.
        labels (np.ndarray | List, optional): Labels for point prompts, shape (N, ). 1 for foreground and 0 for background.
        masks (np.ndarray, optional): Low-resolution masks from previous predictions. Shape should be (N, H, W). For SAM, H=W=256.
        multimask_output (bool, optional): Flag to return multiple masks. Helpful for ambiguous prompts. Defaults to False.

    Returns:
        (tuple): Contains the following three elements.
            - np.ndarray: The output masks in shape CxHxW, where C is the number of generated masks.
            - np.ndarray: An array of length C containing quality scores predicted by the model for each mask.
            - np.ndarray: Low-resolution logits of shape CxHxW for subsequent inference, where H=W=256.
    """
    # Override prompts if any stored in self.prompts
    bboxes = self.prompts.pop("bboxes", bboxes)
    points = self.prompts.pop("points", points)
    masks = self.prompts.pop("masks", masks)

    if all(i is None for i in [bboxes, points, masks]):
        return self.generate(im, *args, **kwargs)

    return self.prompt_inference(im, bboxes, points, labels, masks, multimask_output)
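
The dispatch above reduces to two cases; a sketch, assuming im is a preprocessed BCHW tensor from preprocess():

out = predictor.inference(im)                       # no prompts -> generate(im)
out = predictor.inference(im, points=[[900, 370]])  # any prompt -> prompt_inference(...)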

postprocess(preds, img, orig_imgs)

Post-processes SAM's inference outputs to generate object detection masks and bounding boxes.

The method scales masks and boxes to the original image size and applies a threshold to the mask predictions. The SAM model uses an advanced architecture and promptable segmentation tasks to achieve real-time performance.

Parameters:

preds (tuple): The output from SAM model inference, containing masks, scores, and optional bounding boxes. Required.
img (Tensor): The processed input image tensor. Required.
orig_imgs (list | Tensor): The original, unprocessed images. Required.

Returns:

(list): List of Results objects containing detection masks, bounding boxes, and other metadata.

Source code in ultralytics/models/sam/predict.py
def postprocess(self, preds, img, orig_imgs):
    """
    Post-processes SAM's inference outputs to generate object detection masks and bounding boxes.

    The method scales masks and boxes to the original image size and applies a threshold to the mask predictions. The
    SAM model uses advanced architecture and promptable segmentation tasks to achieve real-time performance.

    Args:
        preds (tuple): The output from SAM model inference, containing masks, scores, and optional bounding boxes.
        img (torch.Tensor): The processed input image tensor.
        orig_imgs (list | torch.Tensor): The original, unprocessed images.

    Returns:
        (list): List of Results objects containing detection masks, bounding boxes, and other metadata.
    """
    # (N, 1, H, W), (N, 1)
    pred_masks, pred_scores = preds[:2]
    pred_bboxes = preds[2] if self.segment_all else None
    names = dict(enumerate(str(i) for i in range(len(pred_masks))))

    if not isinstance(orig_imgs, list):  # input images are a torch.Tensor, not a list
        orig_imgs = ops.convert_torch2numpy_batch(orig_imgs)

    results = []
    for i, masks in enumerate([pred_masks]):
        orig_img = orig_imgs[i]
        if pred_bboxes is not None:
            pred_bboxes = ops.scale_boxes(img.shape[2:], pred_bboxes.float(), orig_img.shape, padding=False)
            cls = torch.arange(len(pred_masks), dtype=torch.int32, device=pred_masks.device)
            pred_bboxes = torch.cat([pred_bboxes, pred_scores[:, None], cls[:, None]], dim=-1)

        masks = ops.scale_masks(masks[None].float(), orig_img.shape[:2], padding=False)[0]
        masks = masks > self.model.mask_threshold  # to bool
        img_path = self.batch[0][i]
        results.append(Results(orig_img, path=img_path, names=names, masks=masks, boxes=pred_bboxes))
    # Reset segment-all mode.
    self.segment_all = False
    return results
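
A short sketch of consuming the returned Results objects (standard Ultralytics Results API, not specific to this file):

for r in results:
    print(r.masks.data.shape)  # boolean masks, (N, H, W) at original image size
    if r.boxes is not None:    # boxes are only attached in segment-all mode
        print(r.boxes.xyxy)    # boxes rescaled to the original image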

pre_transform(im)

Perform initial transformations on the input image for preprocessing.

The method applies transformations such as resizing to prepare the image for further preprocessing. Currently, batched inference is not supported, so the list length must be 1.

Parameters:

im (List[ndarray]): List containing images in HWC numpy array format. Required.

Returns:

(List[ndarray]): List of transformed images.

Source code in ultralytics/models/sam/predict.py
def pre_transform(self, im):
    """
    Perform initial transformations on the input image for preprocessing.

    The method applies transformations such as resizing to prepare the image for further preprocessing.
    Currently, batched inference is not supported; hence the list length should be 1.

    Args:
        im (List[np.ndarray]): List containing images in HWC numpy array format.

    Returns:
        (List[np.ndarray]): List of transformed images.
    """
    assert len(im) == 1, "SAM model does not currently support batched inference"
    letterbox = LetterBox(self.args.imgsz, auto=False, center=False)
    return [letterbox(image=x) for x in im]
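
A quick shape check of the letterboxing above (a sketch; assumes a configured predictor with imgsz=1024):

import numpy as np

out = predictor.pre_transform([np.zeros((720, 1280, 3), dtype=np.uint8)])
print(out[0].shape)  # (1024, 1024, 3): scaled to 1024x576, then padded bottom-right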

preprocess(im)

Preprocess the input image for model inference.

The method prepares the input image by applying transformations and normalization. It supports both torch.Tensor and lists of np.ndarray as input formats.

Parameters:

im (Tensor | List[ndarray]): Image in BCHW tensor format, or a list of HWC numpy arrays. Required.

Returns:

(Tensor): The preprocessed image tensor.

Source code in ultralytics/models/sam/predict.py
def preprocess(self, im):
    """
    Preprocess the input image for model inference.

    The method prepares the input image by applying transformations and normalization.
    It supports both torch.Tensor and list of np.ndarray as input formats.

    Args:
        im (torch.Tensor | List[np.ndarray]): BCHW tensor format or list of HWC numpy arrays.

    Returns:
        (torch.Tensor): The preprocessed image tensor.
    """
    if self.im is not None:
        return self.im
    not_tensor = not isinstance(im, torch.Tensor)
    if not_tensor:
        im = np.stack(self.pre_transform(im))
        im = im[..., ::-1].transpose((0, 3, 1, 2))
        im = np.ascontiguousarray(im)
        im = torch.from_numpy(im)

    im = im.to(self.device)
    im = im.half() if self.model.fp16 else im.float()
    if not_tensor:
        im = (im - self.mean) / self.std
    return im
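
A minimal sketch of the numpy path (assumes setup_model() has already populated device, mean, and std; the image path is illustrative):

import cv2

frame = cv2.imread("ultralytics/assets/zidane.jpg")  # HWC, BGR, uint8
im = predictor.preprocess([frame])                   # BCHW, RGB, normalized float
print(im.shape)                                      # torch.Size([1, 3, 1024, 1024])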

prompt_inference(im, bboxes=None, points=None, labels=None, masks=None, multimask_output=False)

Internal function for image segmentation inference based on cues like bounding boxes, points, and masks. It leverages SAM's specialized architecture for prompt-based, real-time segmentation.

Parameters:

im (Tensor): The preprocessed input image in tensor format, with shape (N, C, H, W). Required.
bboxes (ndarray | List, optional): Bounding boxes with shape (N, 4), in XYXY format. Default: None
points (ndarray | List, optional): Points indicating object locations, with shape (N, 2), in pixel coordinates. Default: None
labels (ndarray | List, optional): Labels for point prompts, shape (N,). 1 for foreground and 0 for background. Default: None
masks (ndarray, optional): Low-resolution masks from previous predictions. Shape should be (N, H, W); for SAM, H=W=256. Default: None
multimask_output (bool, optional): Flag to return multiple masks, helpful for ambiguous prompts. Default: False

Returns:

(tuple): Contains the following three elements.
  - np.ndarray: The output masks in shape CxHxW, where C is the number of generated masks.
  - np.ndarray: An array of length C containing the quality scores predicted by the model for each mask.
  - np.ndarray: Low-resolution logits of shape CxHxW for subsequent inference, where H=W=256.

Source code in ultralytics/models/sam/predict.py
def prompt_inference(self, im, bboxes=None, points=None, labels=None, masks=None, multimask_output=False):
    """
    Internal function for image segmentation inference based on cues like bounding boxes, points, and masks.
    Leverages SAM's specialized architecture for prompt-based, real-time segmentation.

    Args:
        im (torch.Tensor): The preprocessed input image in tensor format, with shape (N, C, H, W).
        bboxes (np.ndarray | List, optional): Bounding boxes with shape (N, 4), in XYXY format.
        points (np.ndarray | List, optional): Points indicating object locations with shape (N, 2), in pixel coordinates.
        labels (np.ndarray | List, optional): Labels for point prompts, shape (N, ). 1 for foreground and 0 for background.
        masks (np.ndarray, optional): Low-resolution masks from previous predictions. Shape should be (N, H, W). For SAM, H=W=256.
        multimask_output (bool, optional): Flag to return multiple masks. Helpful for ambiguous prompts. Defaults to False.

    Returns:
        (tuple): Contains the following three elements.
            - np.ndarray: The output masks in shape CxHxW, where C is the number of generated masks.
            - np.ndarray: An array of length C containing quality scores predicted by the model for each mask.
            - np.ndarray: Low-resolution logits of shape CxHxW for subsequent inference, where H=W=256.
    """
    features = self.model.image_encoder(im) if self.features is None else self.features

    src_shape, dst_shape = self.batch[1][0].shape[:2], im.shape[2:]
    r = 1.0 if self.segment_all else min(dst_shape[0] / src_shape[0], dst_shape[1] / src_shape[1])
    # Transform input prompts
    if points is not None:
        points = torch.as_tensor(points, dtype=torch.float32, device=self.device)
        points = points[None] if points.ndim == 1 else points
        # Assuming labels are all positive if users don't pass labels.
        if labels is None:
            labels = np.ones(points.shape[0])
        labels = torch.as_tensor(labels, dtype=torch.int32, device=self.device)
        points *= r
        # (N, 2) --> (N, 1, 2), (N, ) --> (N, 1)
        points, labels = points[:, None, :], labels[:, None]
    if bboxes is not None:
        bboxes = torch.as_tensor(bboxes, dtype=torch.float32, device=self.device)
        bboxes = bboxes[None] if bboxes.ndim == 1 else bboxes
        bboxes *= r
    if masks is not None:
        masks = torch.as_tensor(masks, dtype=torch.float32, device=self.device).unsqueeze(1)

    points = (points, labels) if points is not None else None
    # Embed prompts
    sparse_embeddings, dense_embeddings = self.model.prompt_encoder(points=points, boxes=bboxes, masks=masks)

    # Predict masks
    pred_masks, pred_scores = self.model.mask_decoder(
        image_embeddings=features,
        image_pe=self.model.prompt_encoder.get_dense_pe(),
        sparse_prompt_embeddings=sparse_embeddings,
        dense_prompt_embeddings=dense_embeddings,
        multimask_output=multimask_output,
    )

    # (N, d, H, W) --> (N*d, H, W), (N, d) --> (N*d, )
    # `d` could be 1 or 3 depends on `multimask_output`.
    return pred_masks.flatten(0, 1), pred_scores.flatten(0, 1)
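
A worked example of the prompt rescaling above: prompts arrive in original-image pixels and are scaled by r = min(dst / src) into model space (numbers are illustrative):

src_h, src_w, dst = 720, 1280, 1024
r = min(dst / src_h, dst / src_w)  # min(1.4222..., 0.8) = 0.8
print(900 * r, 370 * r)            # point (900, 370) maps to (720.0, 296.0)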

remove_small_regions(masks, min_area=0, nms_thresh=0.7) staticmethod

Perform post-processing on segmentation masks generated by the Segment Anything Model (SAM). Specifically, this function removes small disconnected regions and holes from the input masks, and then performs Non-Maximum Suppression (NMS) to eliminate any newly created duplicate boxes.

Parameters:

masks (Tensor): A tensor containing the masks to be processed. Shape should be (N, H, W), where N is the number of masks, H is the height, and W is the width. Required.
min_area (int): The minimum area below which disconnected regions and holes will be removed. Default: 0
nms_thresh (float): The IoU threshold for the NMS algorithm. Default: 0.7

Returns:

(tuple(Tensor, List[int])):
  - new_masks (Tensor): The processed masks with small regions removed. Shape is (N, H, W).
  - keep (List[int]): Indices of the masks remaining after NMS, which can be used to filter the boxes.
Source code in ultralytics/models/sam/predict.py
@staticmethod
def remove_small_regions(masks, min_area=0, nms_thresh=0.7):
    """
    Perform post-processing on segmentation masks generated by the Segment Anything Model (SAM). Specifically, this
    function removes small disconnected regions and holes from the input masks, and then performs Non-Maximum
    Suppression (NMS) to eliminate any newly created duplicate boxes.

    Args:
        masks (torch.Tensor): A tensor containing the masks to be processed. Shape should be (N, H, W), where N is
                              the number of masks, H is height, and W is width.
        min_area (int): The minimum area below which disconnected regions and holes will be removed. Defaults to 0.
        nms_thresh (float): The IoU threshold for the NMS algorithm. Defaults to 0.7.

    Returns:
        (tuple([torch.Tensor, List[int]])):
            - new_masks (torch.Tensor): The processed masks with small regions removed. Shape is (N, H, W).
            - keep (List[int]): The indices of the remaining masks post-NMS, which can be used to filter the boxes.
    """
    if len(masks) == 0:
        return masks

    # Filter small disconnected regions and holes
    new_masks = []
    scores = []
    for mask in masks:
        mask = mask.cpu().numpy().astype(np.uint8)
        mask, changed = remove_small_regions(mask, min_area, mode="holes")
        unchanged = not changed
        mask, changed = remove_small_regions(mask, min_area, mode="islands")
        unchanged = unchanged and not changed

        new_masks.append(torch.as_tensor(mask).unsqueeze(0))
        # Give score=0 to changed masks and 1 to unchanged masks so NMS prefers masks not needing postprocessing
        scores.append(float(unchanged))

    # Recalculate boxes and remove any new duplicates
    new_masks = torch.cat(new_masks, dim=0)
    boxes = batched_mask_to_box(new_masks)
    keep = torchvision.ops.nms(boxes.float(), torch.as_tensor(scores), nms_thresh)

    return new_masks[keep].to(device=masks.device, dtype=masks.dtype), keep
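
A hedged usage sketch: cleaning up a segment-everything result and keeping boxes and scores in sync via the returned indices (variable names assume the generate() outputs above):

masks, scores, boxes = predictor.generate(im)  # boolean masks from segment-all mode
new_masks, keep = Predictor.remove_small_regions(masks, min_area=100, nms_thresh=0.7)
boxes, scores = boxes[keep], scores[keep]      # filter boxes/scores with the same indices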

reset_image()

Resets the image and its features to None.

Source code in ultralytics/models/sam/predict.py
def reset_image(self):
    """Resets the image and its features to None."""
    self.im = None
    self.features = None

set_image(image)

Preprocesses and sets a single image for inference.

This function sets up the model if it is not already initialized, configures the data source to the specified image, and preprocesses the image for feature extraction. Only one image can be set at a time.

Parameters:

image (str | ndarray): Image file path as a string, or an np.ndarray image read by cv2. Required.

Raises:

AssertionError: If more than one image is set.

Source code in ultralytics/models/sam/predict.py
def set_image(self, image):
    """
    Preprocesses and sets a single image for inference.

    This function sets up the model if not already initialized, configures the data source to the specified image,
    and preprocesses the image for feature extraction. Only one image can be set at a time.

    Args:
        image (str | np.ndarray): Image file path as a string, or a np.ndarray image read by cv2.

    Raises:
        AssertionError: If more than one image is set.
    """
    if self.model is None:
        model = build_sam(self.args.model)
        self.setup_model(model)
    self.setup_source(image)
    assert len(self.dataset) == 1, "`set_image` only supports setting one image!"
    for batch in self.dataset:
        im = self.preprocess(batch[1])
        self.features = self.model.image_encoder(im)
        self.im = im
        break
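
The point of set_image() is caching: the image encoder runs once, and subsequent prompted calls reuse self.features. A sketch (path and prompts are illustrative):

predictor.set_image("ultralytics/assets/zidane.jpg")  # encodes the image once
r1 = predictor(bboxes=[439, 437, 524, 709])           # reuses cached features
r2 = predictor(points=[900, 370], labels=[1])         # no re-encoding needed
predictor.reset_image()                               # clear the cache before a new image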

set_prompts(prompts)

Set prompts in advance.

Source code in ultralytics/models/sam/predict.py
def set_prompts(self, prompts):
    """Set prompts in advance."""
    self.prompts = prompts
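
Stored prompts take precedence: inference() pops "bboxes", "points", and "masks" from self.prompts before consulting its own call arguments. A sketch:

predictor.set_prompts({"points": [[900, 370]]})       # stored for the next call
results = predictor("ultralytics/assets/zidane.jpg")  # uses the stored point prompt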

setup_model(model, verbose=True)

Initializes the Segment Anything Model (SAM) for inference.

This method sets up the SAM model by allocating it to the appropriate device and initializing the parameters needed for image normalization and other Ultralytics compatibility settings.

Parameters:

model (Module): A pre-trained SAM model. If None, a model will be built based on the configuration. Required.
verbose (bool): If True, prints selected device information. Default: True

Attributes:

model (Module): The SAM model allocated to the chosen device for inference.
device (device): The device to which the model and tensors are allocated.
mean (Tensor): The mean values for image normalization.
std (Tensor): The standard deviation values for image normalization.

Source code in ultralytics/models/sam/predict.py
def setup_model(self, model, verbose=True):
    """
    Initializes the Segment Anything Model (SAM) for inference.

    This method sets up the SAM model by allocating it to the appropriate device and initializing the necessary
    parameters for image normalization and other Ultralytics compatibility settings.

    Args:
        model (torch.nn.Module): A pre-trained SAM model. If None, a model will be built based on configuration.
        verbose (bool): If True, prints selected device information.

    Attributes:
        model (torch.nn.Module): The SAM model allocated to the chosen device for inference.
        device (torch.device): The device to which the model and tensors are allocated.
        mean (torch.Tensor): The mean values for image normalization.
        std (torch.Tensor): The standard deviation values for image normalization.
    """
    device = select_device(self.args.device, verbose=verbose)
    if model is None:
        model = build_sam(self.args.model)
    model.eval()
    self.model = model.to(device)
    self.device = device
    self.mean = torch.tensor([123.675, 116.28, 103.53]).view(-1, 1, 1).to(device)
    self.std = torch.tensor([58.395, 57.12, 57.375]).view(-1, 1, 1).to(device)

    # Ultralytics compatibility settings
    self.model.pt = False
    self.model.triton = False
    self.model.stride = 32
    self.model.fp16 = False
    self.done_warmup = True
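
A sketch of attaching a backbone manually (build_sam is imported from .build in this module; the checkpoint name is illustrative, and passing model=None builds from self.args.model instead):

from ultralytics.models.sam.build import build_sam

predictor.setup_model(build_sam("sam_b.pt"), verbose=False)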

setup_source(source)

Sets up the data source for inference.

This method configures the data source from which images will be fetched for inference. The source could be a directory, a video file, or another type of image data source.

Parameters:

source (str | Path): The path to the image data source for inference. Required.
Source code in ultralytics/models/sam/predict.py
def setup_source(self, source):
    """
    Sets up the data source for inference.

    This method configures the data source from which images will be fetched for inference. The source could be a
    directory, a video file, or other types of image data sources.

    Args:
        source (str | Path): The path to the image data source for inference.
    """
    if source is not None:
        super().setup_source(source)





Created: 2023-11-12, Updated: 2023-11-25
Authors: glenn-jocher (3)