
Reference for `ultralytics/models/sam/modules/decoders.py`

Note

This file is available at https://github.com/ultralytics/ultralytics/blob/main/ultralytics/models/sam/modules/decoders.py. If you spot a problem, please help fix it by contributing a Pull Request 🛠️. Thank you 🙏!



ultralytics.models.sam.modules.decoders.MaskDecoder

Bases: `Module`

Decoder module for generating masks and their associated quality scores, using a transformer architecture to predict masks given image and prompt embeddings.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `transformer_dim` | `int` | Channel dimension for the transformer module. |
| `transformer` | `Module` | The transformer module used for mask prediction. |
| `num_multimask_outputs` | `int` | Number of masks to predict for disambiguating masks. |
| `iou_token` | `Embedding` | Embedding for the IoU token. |
| `num_mask_tokens` | `int` | Number of mask tokens. |
| `mask_tokens` | `Embedding` | Embedding for the mask tokens. |
| `output_upscaling` | `Sequential` | Neural network sequence for upscaling the output. |
| `output_hypernetworks_mlps` | `ModuleList` | Hypernetwork MLPs for generating masks. |
| `iou_prediction_head` | `Module` | MLP for predicting mask quality. |

Source code in `ultralytics/models/sam/modules/decoders.py`

```python
class MaskDecoder(nn.Module):
    """
    Decoder module for generating masks and their associated quality scores, using a transformer architecture to predict
    masks given image and prompt embeddings.

    Attributes:
        transformer_dim (int): Channel dimension for the transformer module.
        transformer (nn.Module): The transformer module used for mask prediction.
        num_multimask_outputs (int): Number of masks to predict for disambiguating masks.
        iou_token (nn.Embedding): Embedding for the IoU token.
        num_mask_tokens (int): Number of mask tokens.
        mask_tokens (nn.Embedding): Embedding for the mask tokens.
        output_upscaling (nn.Sequential): Neural network sequence for upscaling the output.
        output_hypernetworks_mlps (nn.ModuleList): Hypernetwork MLPs for generating masks.
        iou_prediction_head (nn.Module): MLP for predicting mask quality.
    """

    def __init__(
        self,
        *,
        transformer_dim: int,
        transformer: nn.Module,
        num_multimask_outputs: int = 3,
        activation: Type[nn.Module] = nn.GELU,
        iou_head_depth: int = 3,
        iou_head_hidden_dim: int = 256,
    ) -> None:
        """
        Predicts masks given an image and prompt embeddings, using a transformer architecture.

        Args:
            transformer_dim (int): the channel dimension of the transformer module
            transformer (nn.Module): the transformer used to predict masks
            num_multimask_outputs (int): the number of masks to predict when disambiguating masks
            activation (nn.Module): the type of activation to use when upscaling masks
            iou_head_depth (int): the depth of the MLP used to predict mask quality
            iou_head_hidden_dim (int): the hidden dimension of the MLP used to predict mask quality
        """
        super().__init__()
        self.transformer_dim = transformer_dim
        self.transformer = transformer

        self.num_multimask_outputs = num_multimask_outputs

        self.iou_token = nn.Embedding(1, transformer_dim)
        self.num_mask_tokens = num_multimask_outputs + 1
        self.mask_tokens = nn.Embedding(self.num_mask_tokens, transformer_dim)

        self.output_upscaling = nn.Sequential(
            nn.ConvTranspose2d(transformer_dim, transformer_dim // 4, kernel_size=2, stride=2),
            LayerNorm2d(transformer_dim // 4),
            activation(),
            nn.ConvTranspose2d(transformer_dim // 4, transformer_dim // 8, kernel_size=2, stride=2),
            activation(),
        )
        self.output_hypernetworks_mlps = nn.ModuleList(
            [MLP(transformer_dim, transformer_dim, transformer_dim // 8, 3) for _ in range(self.num_mask_tokens)]
        )

        self.iou_prediction_head = MLP(transformer_dim, iou_head_hidden_dim, self.num_mask_tokens, iou_head_depth)

    def forward(
        self,
        image_embeddings: torch.Tensor,
        image_pe: torch.Tensor,
        sparse_prompt_embeddings: torch.Tensor,
        dense_prompt_embeddings: torch.Tensor,
        multimask_output: bool,
    ) -> Tuple[torch.Tensor, torch.Tensor]:
        """
        Predict masks given image and prompt embeddings.

        Args:
            image_embeddings (torch.Tensor): the embeddings from the image encoder
            image_pe (torch.Tensor): positional encoding with the shape of image_embeddings
            sparse_prompt_embeddings (torch.Tensor): the embeddings of the points and boxes
            dense_prompt_embeddings (torch.Tensor): the embeddings of the mask inputs
            multimask_output (bool): Whether to return multiple masks or a single mask.

        Returns:
            torch.Tensor: batched predicted masks
            torch.Tensor: batched predictions of mask quality
        """
        masks, iou_pred = self.predict_masks(
            image_embeddings=image_embeddings,
            image_pe=image_pe,
            sparse_prompt_embeddings=sparse_prompt_embeddings,
            dense_prompt_embeddings=dense_prompt_embeddings,
        )

        # Select the correct mask or masks for output
        mask_slice = slice(1, None) if multimask_output else slice(0, 1)
        masks = masks[:, mask_slice, :, :]
        iou_pred = iou_pred[:, mask_slice]

        # Prepare output
        return masks, iou_pred

    def predict_masks(
        self,
        image_embeddings: torch.Tensor,
        image_pe: torch.Tensor,
        sparse_prompt_embeddings: torch.Tensor,
        dense_prompt_embeddings: torch.Tensor,
    ) -> Tuple[torch.Tensor, torch.Tensor]:
        """
        Predicts masks.

        See 'forward' for more details.
        """
        # Concatenate output tokens
        output_tokens = torch.cat([self.iou_token.weight, self.mask_tokens.weight], dim=0)
        output_tokens = output_tokens.unsqueeze(0).expand(sparse_prompt_embeddings.shape[0], -1, -1)
        tokens = torch.cat((output_tokens, sparse_prompt_embeddings), dim=1)

        # Expand per-image data in batch direction to be per-mask
        src = torch.repeat_interleave(image_embeddings, tokens.shape[0], dim=0)
        src = src + dense_prompt_embeddings
        pos_src = torch.repeat_interleave(image_pe, tokens.shape[0], dim=0)
        b, c, h, w = src.shape

        # Run the transformer
        hs, src = self.transformer(src, pos_src, tokens)
        iou_token_out = hs[:, 0, :]
        mask_tokens_out = hs[:, 1 : (1 + self.num_mask_tokens), :]

        # Upscale mask embeddings and predict masks using the mask tokens
        src = src.transpose(1, 2).view(b, c, h, w)
        upscaled_embedding = self.output_upscaling(src)
        hyper_in_list: List[torch.Tensor] = [
            self.output_hypernetworks_mlps[i](mask_tokens_out[:, i, :]) for i in range(self.num_mask_tokens)
        ]
        hyper_in = torch.stack(hyper_in_list, dim=1)
        b, c, h, w = upscaled_embedding.shape
        masks = (hyper_in @ upscaled_embedding.view(b, c, h * w)).view(b, -1, h, w)

        # Generate mask quality predictions
        iou_pred = self.iou_prediction_head(iou_token_out)

        return masks, iou_pred
```
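
For orientation, here is a minimal usage sketch (not part of the reference itself): it pairs the decoder with the `TwoWayTransformer` from the sibling `ultralytics.models.sam.modules.transformer` module and runs a dummy forward pass. The tensor shapes assume SAM's usual defaults (256-channel embeddings on a 64×64 grid), and the prompt counts are illustrative.

```python
import torch
from ultralytics.models.sam.modules.decoders import MaskDecoder
from ultralytics.models.sam.modules.transformer import TwoWayTransformer

# Assumed SAM-style defaults: 256-dim embeddings on a 64x64 grid
decoder = MaskDecoder(
    transformer_dim=256,
    transformer=TwoWayTransformer(depth=2, embedding_dim=256, mlp_dim=2048, num_heads=8),
)

image_embeddings = torch.randn(1, 256, 64, 64)  # from the image encoder
image_pe = torch.randn(1, 256, 64, 64)          # positional encoding, same shape
sparse_prompts = torch.randn(1, 2, 256)         # e.g. two point prompts
dense_prompts = torch.randn(1, 256, 64, 64)     # mask-input embeddings

masks, iou_pred = decoder(
    image_embeddings=image_embeddings,
    image_pe=image_pe,
    sparse_prompt_embeddings=sparse_prompts,
    dense_prompt_embeddings=dense_prompts,
    multimask_output=True,
)
print(masks.shape, iou_pred.shape)  # (1, 3, 256, 256) and (1, 3) with the defaults above
```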

`__init__(*, transformer_dim, transformer, num_multimask_outputs=3, activation=nn.GELU, iou_head_depth=3, iou_head_hidden_dim=256)`

Predicts masks given an image and prompt embeddings, using a transformer architecture.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `transformer_dim` | `int` | The channel dimension of the transformer module. | required |
| `transformer` | `Module` | The transformer used to predict masks. | required |
| `num_multimask_outputs` | `int` | The number of masks to predict when disambiguating masks. | `3` |
| `activation` | `Module` | The type of activation to use when upscaling masks. | `GELU` |
| `iou_head_depth` | `int` | The depth of the MLP used to predict mask quality. | `3` |
| `iou_head_hidden_dim` | `int` | The hidden dimension of the MLP used to predict mask quality. | `256` |
Source code in `ultralytics/models/sam/modules/decoders.py`

```python
def __init__(
    self,
    *,
    transformer_dim: int,
    transformer: nn.Module,
    num_multimask_outputs: int = 3,
    activation: Type[nn.Module] = nn.GELU,
    iou_head_depth: int = 3,
    iou_head_hidden_dim: int = 256,
) -> None:
    """
    Predicts masks given an image and prompt embeddings, using a transformer architecture.

    Args:
        transformer_dim (int): the channel dimension of the transformer module
        transformer (nn.Module): the transformer used to predict masks
        num_multimask_outputs (int): the number of masks to predict when disambiguating masks
        activation (nn.Module): the type of activation to use when upscaling masks
        iou_head_depth (int): the depth of the MLP used to predict mask quality
        iou_head_hidden_dim (int): the hidden dimension of the MLP used to predict mask quality
    """
    super().__init__()
    self.transformer_dim = transformer_dim
    self.transformer = transformer

    self.num_multimask_outputs = num_multimask_outputs

    self.iou_token = nn.Embedding(1, transformer_dim)
    self.num_mask_tokens = num_multimask_outputs + 1
    self.mask_tokens = nn.Embedding(self.num_mask_tokens, transformer_dim)

    self.output_upscaling = nn.Sequential(
        nn.ConvTranspose2d(transformer_dim, transformer_dim // 4, kernel_size=2, stride=2),
        LayerNorm2d(transformer_dim // 4),
        activation(),
        nn.ConvTranspose2d(transformer_dim // 4, transformer_dim // 8, kernel_size=2, stride=2),
        activation(),
    )
    self.output_hypernetworks_mlps = nn.ModuleList(
        [MLP(transformer_dim, transformer_dim, transformer_dim // 8, 3) for _ in range(self.num_mask_tokens)]
    )

    self.iou_prediction_head = MLP(transformer_dim, iou_head_hidden_dim, self.num_mask_tokens, iou_head_depth)
```
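
As a quick sanity check on the upscaling path built here: two stride-2 transposed convolutions take the 64×64 embedding grid to 256×256 while reducing channels 256 → 64 → 32 (with `transformer_dim=256`). The sketch below is shape-only and omits the `LayerNorm2d` between the convolutions, since it is shape-preserving.

```python
import torch
import torch.nn as nn

# Shape-only sketch of output_upscaling with transformer_dim=256
up = nn.Sequential(
    nn.ConvTranspose2d(256, 64, kernel_size=2, stride=2),  # 256 -> 64 channels, 2x spatial
    nn.GELU(),
    nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2),   # 64 -> 32 channels, 2x spatial
    nn.GELU(),
)
print(up(torch.randn(1, 256, 64, 64)).shape)  # torch.Size([1, 32, 256, 256])
```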

`forward(image_embeddings, image_pe, sparse_prompt_embeddings, dense_prompt_embeddings, multimask_output)`

Predict masks given image and prompt embeddings.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `image_embeddings` | `Tensor` | The embeddings from the image encoder. | required |
| `image_pe` | `Tensor` | Positional encoding with the shape of `image_embeddings`. | required |
| `sparse_prompt_embeddings` | `Tensor` | The embeddings of the points and boxes. | required |
| `dense_prompt_embeddings` | `Tensor` | The embeddings of the mask inputs. | required |
| `multimask_output` | `bool` | Whether to return multiple masks or a single mask. | required |

Returns:

| Type | Description |
| --- | --- |
| `Tensor` | Batched predicted masks. |
| `Tensor` | Batched predictions of mask quality. |

Source code in `ultralytics/models/sam/modules/decoders.py`

```python
def forward(
    self,
    image_embeddings: torch.Tensor,
    image_pe: torch.Tensor,
    sparse_prompt_embeddings: torch.Tensor,
    dense_prompt_embeddings: torch.Tensor,
    multimask_output: bool,
) -> Tuple[torch.Tensor, torch.Tensor]:
    """
    Predict masks given image and prompt embeddings.

    Args:
        image_embeddings (torch.Tensor): the embeddings from the image encoder
        image_pe (torch.Tensor): positional encoding with the shape of image_embeddings
        sparse_prompt_embeddings (torch.Tensor): the embeddings of the points and boxes
        dense_prompt_embeddings (torch.Tensor): the embeddings of the mask inputs
        multimask_output (bool): Whether to return multiple masks or a single mask.

    Returns:
        torch.Tensor: batched predicted masks
        torch.Tensor: batched predictions of mask quality
    """
    masks, iou_pred = self.predict_masks(
        image_embeddings=image_embeddings,
        image_pe=image_pe,
        sparse_prompt_embeddings=sparse_prompt_embeddings,
        dense_prompt_embeddings=dense_prompt_embeddings,
    )

    # Select the correct mask or masks for output
    mask_slice = slice(1, None) if multimask_output else slice(0, 1)
    masks = masks[:, mask_slice, :, :]
    iou_pred = iou_pred[:, mask_slice]

    # Prepare output
    return masks, iou_pred
```
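
The selection step is worth spelling out: the decoder always predicts `num_mask_tokens = num_multimask_outputs + 1` masks internally, with index 0 reserved for the single-mask path and indices 1 onward for the disambiguation candidates. A toy illustration of the slicing (shapes assumed, using the default of 4 mask tokens):

```python
import torch

# num_mask_tokens = num_multimask_outputs + 1 = 4 with the defaults
masks = torch.randn(2, 4, 256, 256)  # slot 0: single-mask path; slots 1-3: candidates
iou_pred = torch.randn(2, 4)

for multimask_output in (True, False):
    mask_slice = slice(1, None) if multimask_output else slice(0, 1)
    print(masks[:, mask_slice].shape, iou_pred[:, mask_slice].shape)
# torch.Size([2, 3, 256, 256]) torch.Size([2, 3])
# torch.Size([2, 1, 256, 256]) torch.Size([2, 1])
```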

`predict_masks(image_embeddings, image_pe, sparse_prompt_embeddings, dense_prompt_embeddings)`

Predicts masks.

See `forward` for more details.

Source code in `ultralytics/models/sam/modules/decoders.py`

```python
def predict_masks(
    self,
    image_embeddings: torch.Tensor,
    image_pe: torch.Tensor,
    sparse_prompt_embeddings: torch.Tensor,
    dense_prompt_embeddings: torch.Tensor,
) -> Tuple[torch.Tensor, torch.Tensor]:
    """
    Predicts masks.

    See 'forward' for more details.
    """
    # Concatenate output tokens
    output_tokens = torch.cat([self.iou_token.weight, self.mask_tokens.weight], dim=0)
    output_tokens = output_tokens.unsqueeze(0).expand(sparse_prompt_embeddings.shape[0], -1, -1)
    tokens = torch.cat((output_tokens, sparse_prompt_embeddings), dim=1)

    # Expand per-image data in batch direction to be per-mask
    src = torch.repeat_interleave(image_embeddings, tokens.shape[0], dim=0)
    src = src + dense_prompt_embeddings
    pos_src = torch.repeat_interleave(image_pe, tokens.shape[0], dim=0)
    b, c, h, w = src.shape

    # Run the transformer
    hs, src = self.transformer(src, pos_src, tokens)
    iou_token_out = hs[:, 0, :]
    mask_tokens_out = hs[:, 1 : (1 + self.num_mask_tokens), :]

    # Upscale mask embeddings and predict masks using the mask tokens
    src = src.transpose(1, 2).view(b, c, h, w)
    upscaled_embedding = self.output_upscaling(src)
    hyper_in_list: List[torch.Tensor] = [
        self.output_hypernetworks_mlps[i](mask_tokens_out[:, i, :]) for i in range(self.num_mask_tokens)
    ]
    hyper_in = torch.stack(hyper_in_list, dim=1)
    b, c, h, w = upscaled_embedding.shape
    masks = (hyper_in @ upscaled_embedding.view(b, c, h * w)).view(b, -1, h, w)

    # Generate mask quality predictions
    iou_pred = self.iou_prediction_head(iou_token_out)

    return masks, iou_pred
```
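
To make the tensor bookkeeping easier to follow, here is an annotated shape trace through `predict_masks`. The concrete numbers assume the SAM defaults used elsewhere on this page: `C = transformer_dim = 256`, a 64×64 embedding grid, `B` prompt sets with `N` sparse tokens, and `M = num_mask_tokens = 4`.

```python
# output_tokens (iou + mask tokens)       -> (1 + M, C)           = (5, 256)
# tokens (output tokens + sparse prompts) -> (B, 1 + M + N, C)
# src = image embeddings + dense prompts  -> (B, C, H, W)         = (B, 256, 64, 64)
# hs (transformer token outputs)          -> (B, 1 + M + N, C)
# iou_token_out = hs[:, 0, :]             -> (B, C)               = (B, 256)
# mask_tokens_out                         -> (B, M, C)            = (B, 4, 256)
# upscaled_embedding (two 2x deconvs)     -> (B, C // 8, 4H, 4W)  = (B, 32, 256, 256)
# hyper_in (stacked hypernetwork MLPs)    -> (B, M, C // 8)       = (B, 4, 32)
# masks = hyper_in @ flattened embedding  -> (B, M, 4H, 4W)       = (B, 4, 256, 256)
# iou_pred                                -> (B, M)               = (B, 4)
```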



ultralytics.models.sam.modules.decoders.MLP

Bases: `Module`

MLP (Multi-Layer Perceptron) model lightly adapted from https://github.com/facebookresearch/MaskFormer/blob/main/mask_former/modeling/transformer/transformer_predictor.py

Source code in `ultralytics/models/sam/modules/decoders.py`

```python
class MLP(nn.Module):
    """
    MLP (Multi-Layer Perceptron) model lightly adapted from
    https://github.com/facebookresearch/MaskFormer/blob/main/mask_former/modeling/transformer/transformer_predictor.py
    """

    def __init__(
        self,
        input_dim: int,
        hidden_dim: int,
        output_dim: int,
        num_layers: int,
        sigmoid_output: bool = False,
    ) -> None:
        """
        Initializes the MLP (Multi-Layer Perceptron) model.

        Args:
            input_dim (int): The dimensionality of the input features.
            hidden_dim (int): The dimensionality of the hidden layers.
            output_dim (int): The dimensionality of the output layer.
            num_layers (int): The number of hidden layers.
            sigmoid_output (bool, optional): Apply a sigmoid activation to the output layer. Defaults to False.
        """
        super().__init__()
        self.num_layers = num_layers
        h = [hidden_dim] * (num_layers - 1)
        self.layers = nn.ModuleList(nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim]))
        self.sigmoid_output = sigmoid_output

    def forward(self, x):
        """Executes feedforward within the neural network module and applies activation."""
        for i, layer in enumerate(self.layers):
            x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x)
        if self.sigmoid_output:
            x = torch.sigmoid(x)
        return x
```
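
A standalone sketch of the `MLP`, configured like the hypernetwork heads above (256 → 256 → 256 → 32). Note that, as the `zip` construction shows, `num_layers` counts all linear layers including the output projection, not only the hidden ones.

```python
import torch
from ultralytics.models.sam.modules.decoders import MLP

# Same configuration as the hypernetwork MLPs: MLP(256, 256, 32, 3)
mlp = MLP(input_dim=256, hidden_dim=256, output_dim=32, num_layers=3)
x = torch.randn(4, 256)  # e.g. four mask tokens
print(mlp(x).shape)      # torch.Size([4, 32])
print(len(mlp.layers))   # 3 linear layers: two hidden (with ReLU) + one output
```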

`__init__(input_dim, hidden_dim, output_dim, num_layers, sigmoid_output=False)`

Initializes the MLP (Multi-Layer Perceptron) model.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `input_dim` | `int` | The dimensionality of the input features. | required |
| `hidden_dim` | `int` | The dimensionality of the hidden layers. | required |
| `output_dim` | `int` | The dimensionality of the output layer. | required |
| `num_layers` | `int` | The number of hidden layers. | required |
| `sigmoid_output` | `bool` | Apply a sigmoid activation to the output layer. Defaults to `False`. | `False` |
Source code in `ultralytics/models/sam/modules/decoders.py`

```python
def __init__(
    self,
    input_dim: int,
    hidden_dim: int,
    output_dim: int,
    num_layers: int,
    sigmoid_output: bool = False,
) -> None:
    """
    Initializes the MLP (Multi-Layer Perceptron) model.

    Args:
        input_dim (int): The dimensionality of the input features.
        hidden_dim (int): The dimensionality of the hidden layers.
        output_dim (int): The dimensionality of the output layer.
        num_layers (int): The number of hidden layers.
        sigmoid_output (bool, optional): Apply a sigmoid activation to the output layer. Defaults to False.
    """
    super().__init__()
    self.num_layers = num_layers
    h = [hidden_dim] * (num_layers - 1)
    self.layers = nn.ModuleList(nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim]))
    self.sigmoid_output = sigmoid_output
```

`forward(x)`

Executes the feedforward pass within the neural network module and applies activation.

Source code in `ultralytics/models/sam/modules/decoders.py`

```python
def forward(self, x):
    """Executes feedforward within the neural network module and applies activation."""
    for i, layer in enumerate(self.layers):
        x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x)
    if self.sigmoid_output:
        x = torch.sigmoid(x)
    return x
```





Created 2023-11-12, Updated 2024-06-02
Authors: glenn-jocher (5), Burhan-Q (1), Laughing-q (1)