Reference for ultralytics/models/sam/modules/decoders.py
Note
This file is available at https://github.com/ultralytics/ultralytics/blob/main/ultralytics/models/sam/modules/decoders.py. If you spot a problem, please help fix it by contributing a Pull Request 🛠️. Thank you 🙏!
ultralytics.models.sam.modules.decoders.MaskDecoder
Bases: Module
Decoder module for generating masks and their associated quality scores, using a transformer architecture to predict masks given image and prompt embeddings.
Attributes:

| Name | Type | Description |
|---|---|---|
| `transformer_dim` | `int` | Channel dimension for the transformer module. |
| `transformer` | `Module` | The transformer module used for mask prediction. |
| `num_multimask_outputs` | `int` | Number of masks to predict for disambiguating masks. |
| `iou_token` | `Embedding` | Embedding for the IoU token. |
| `num_mask_tokens` | `int` | Number of mask tokens. |
| `mask_tokens` | `Embedding` | Embedding for the mask tokens. |
| `output_upscaling` | `Sequential` | Neural network sequence for upscaling the output. |
| `output_hypernetworks_mlps` | `ModuleList` | Hypernetwork MLPs for generating masks. |
| `iou_prediction_head` | `Module` | MLP for predicting mask quality. |
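The `output_upscaling` attribute raises the spatial resolution of the transformer output before masks are produced. A minimal sketch of such a stack, assuming the common SAM design of two stride-2 transposed convolutions (a 4x total spatial upscale with channel reduction); the exact layers and normalization here are illustrative, not the library's verbatim code:

```python
import torch
import torch.nn as nn

transformer_dim = 256  # assumed channel dimension of the transformer

# Hypothetical upscaling stack: each ConvTranspose2d with kernel=2, stride=2
# doubles the spatial size, so the pair gives a 4x upscale overall.
output_upscaling = nn.Sequential(
    nn.ConvTranspose2d(transformer_dim, transformer_dim // 4, kernel_size=2, stride=2),
    nn.GELU(),
    nn.ConvTranspose2d(transformer_dim // 4, transformer_dim // 8, kernel_size=2, stride=2),
    nn.GELU(),
)

x = torch.randn(1, transformer_dim, 64, 64)  # e.g. a 64x64 image embedding
up = output_upscaling(x)
print(up.shape)  # torch.Size([1, 32, 256, 256])
```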
Source code in ultralytics/models/sam/modules/decoders.py
__init__(*, transformer_dim, transformer, num_multimask_outputs=3, activation=nn.GELU, iou_head_depth=3, iou_head_hidden_dim=256)
Predicts masks given an image and prompt embeddings, using a transformer architecture.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `transformer_dim` | `int` | The channel dimension of the transformer module. | required |
| `transformer` | `Module` | The transformer used to predict masks. | required |
| `num_multimask_outputs` | `int` | The number of masks to predict when disambiguating masks. | `3` |
| `activation` | `Module` | The type of activation to use when upscaling masks. | `GELU` |
| `iou_head_depth` | `int` | The depth of the MLP used to predict mask quality. | `3` |
| `iou_head_hidden_dim` | `int` | The hidden dimension of the MLP used to predict mask quality. | `256` |
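As a sketch of how these constructor arguments interact, assuming the SAM design in which one extra mask token accompanies the `num_multimask_outputs` disambiguation masks (the attribute names mirror the table above, but this is an illustrative fragment, not the library's exact constructor):

```python
import torch.nn as nn

transformer_dim = 256
num_multimask_outputs = 3

# One IoU token, plus num_multimask_outputs + 1 mask tokens: the extra
# token serves the single-mask (multimask_output=False) path.
num_mask_tokens = num_multimask_outputs + 1
iou_token = nn.Embedding(1, transformer_dim)
mask_tokens = nn.Embedding(num_mask_tokens, transformer_dim)

print(mask_tokens.weight.shape)  # torch.Size([4, 256])
```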
Source code in ultralytics/models/sam/modules/decoders.py
forward(image_embeddings, image_pe, sparse_prompt_embeddings, dense_prompt_embeddings, multimask_output)
Predict masks given image and prompt embeddings.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `image_embeddings` | `Tensor` | The embeddings from the image encoder. | required |
| `image_pe` | `Tensor` | Positional encoding with the shape of `image_embeddings`. | required |
| `sparse_prompt_embeddings` | `Tensor` | The embeddings of the points and boxes. | required |
| `dense_prompt_embeddings` | `Tensor` | The embeddings of the mask inputs. | required |
| `multimask_output` | `bool` | Whether to return multiple masks or a single mask. | required |
Returns:

| Type | Description |
|---|---|
| `Tensor` | Batched predicted masks. |
| `Tensor` | Batched predictions of mask quality. |
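The `multimask_output` flag selects which of the predicted masks (and their quality scores) are returned. A hedged sketch of that selection step, following the SAM design where channel 0 is the single-mask output and the remaining channels are the disambiguation masks (tensor shapes here are illustrative):

```python
import torch

batch, num_mask_tokens, h, w = 2, 4, 256, 256
masks = torch.randn(batch, num_mask_tokens, h, w)  # all predicted masks
iou_pred = torch.randn(batch, num_mask_tokens)     # one quality score per mask

multimask_output = True
if multimask_output:
    mask_slice = slice(1, None)  # keep the three disambiguation masks
else:
    mask_slice = slice(0, 1)     # keep only the single-mask output
masks, iou_pred = masks[:, mask_slice, :, :], iou_pred[:, mask_slice]

print(masks.shape, iou_pred.shape)  # torch.Size([2, 3, 256, 256]) torch.Size([2, 3])
```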
Source code in ultralytics/models/sam/modules/decoders.py
predict_masks(image_embeddings, image_pe, sparse_prompt_embeddings, dense_prompt_embeddings)
Predicts masks.
See 'forward' for more details.
Source code in ultralytics/models/sam/modules/decoders.py
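Internally, `predict_masks` concatenates the learned output tokens with the sparse prompt embeddings before running them through the transformer. A sketch of that token assembly, with names assumed from the SAM reference design (this is an illustrative fragment, not the method's verbatim body):

```python
import torch
import torch.nn as nn

transformer_dim = 256
iou_token = nn.Embedding(1, transformer_dim)    # 1 IoU token
mask_tokens = nn.Embedding(4, transformer_dim)  # num_multimask_outputs + 1 mask tokens

sparse_prompt_embeddings = torch.randn(2, 5, transformer_dim)  # e.g. 5 point/box tokens

# Prepend the IoU and mask tokens to every set of prompt tokens in the batch.
output_tokens = torch.cat([iou_token.weight, mask_tokens.weight], dim=0)
output_tokens = output_tokens.unsqueeze(0).expand(sparse_prompt_embeddings.size(0), -1, -1)
tokens = torch.cat((output_tokens, sparse_prompt_embeddings), dim=1)

print(tokens.shape)  # torch.Size([2, 10, 256])
```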
ultralytics.models.sam.modules.decoders.MLP
Bases: Module
MLP (Multi-Layer Perceptron) model lightly adapted from https://github.com/facebookresearch/MaskFormer/blob/main/mask_former/modeling/transformer/transformer_predictor.py
Source code in ultralytics/models/sam/modules/decoders.py
__init__(input_dim, hidden_dim, output_dim, num_layers, sigmoid_output=False)
Initializes the MLP (Multi-Layer Perceptron) model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_dim` | `int` | The dimensionality of the input features. | required |
| `hidden_dim` | `int` | The dimensionality of the hidden layers. | required |
| `output_dim` | `int` | The dimensionality of the output layer. | required |
| `num_layers` | `int` | The number of hidden layers. | required |
| `sigmoid_output` | `bool` | Whether to apply a sigmoid activation to the output layer. | `False` |
Source code in ultralytics/models/sam/modules/decoders.py
forward(x)
Executes feedforward within the neural network module and applies activation.
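Putting the pieces together, a minimal PyTorch re-implementation of this MLP based on the description above (a sketch, not necessarily the library's exact code): hidden layers are chained with ReLU, and a sigmoid is optionally applied to the output.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MLP(nn.Module):
    """Simple MLP: ReLU between layers, optional sigmoid on the output."""

    def __init__(self, input_dim, hidden_dim, output_dim, num_layers, sigmoid_output=False):
        super().__init__()
        self.num_layers = num_layers
        h = [hidden_dim] * (num_layers - 1)
        # Chain dimensions: input -> hidden ... hidden -> output.
        self.layers = nn.ModuleList(
            nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim])
        )
        self.sigmoid_output = sigmoid_output

    def forward(self, x):
        for i, layer in enumerate(self.layers):
            x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x)
        if self.sigmoid_output:
            x = torch.sigmoid(x)
        return x


mlp = MLP(input_dim=256, hidden_dim=256, output_dim=32, num_layers=3)
out = mlp(torch.randn(4, 256))
print(out.shape)  # torch.Size([4, 32])
```

This is the pattern the hypernetwork MLPs (`output_hypernetworks_mlps`) and the `iou_prediction_head` follow, differing only in their dimensions and whether `sigmoid_output` is set.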