Reference for ultralytics/models/sam/modules/transformer.py
Note
This file is available at https://github.com/ultralytics/ultralytics/blob/main/ultralytics/models/sam/modules/transformer.py. If you spot a problem please help fix it by contributing a Pull Request 🛠️. Thank you 🙏!
ultralytics.models.sam.modules.transformer.TwoWayTransformer
TwoWayTransformer(depth: int, embedding_dim: int, num_heads: int, mlp_dim: int, activation: Type[nn.Module] = nn.ReLU, attention_downsample_rate: int = 2)
Bases: Module
A Two-Way Transformer module that enables simultaneous attention to both image and query points. This class serves as a specialized transformer decoder that attends to an input image using queries whose positional embedding is supplied. It is particularly useful for tasks such as object detection, image segmentation, and point cloud processing.
Attributes:

| Name | Type | Description |
| --- | --- | --- |
| depth | int | The number of layers in the transformer. |
| embedding_dim | int | The channel dimension for the input embeddings. |
| num_heads | int | The number of heads for multihead attention. |
| mlp_dim | int | The internal channel dimension for the MLP block. |
| layers | ModuleList | The list of TwoWayAttentionBlock layers that make up the transformer. |
| final_attn_token_to_image | Attention | The final attention layer applied from the queries to the image. |
| norm_final_attn | LayerNorm | The layer normalization applied to the final queries. |
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| depth | int | The number of layers in the transformer. | required |
| embedding_dim | int | The channel dimension for the input embeddings. | required |
| num_heads | int | The number of heads for multihead attention. Must evenly divide embedding_dim. | required |
| mlp_dim | int | The channel dimension internal to the MLP block. | required |
| activation | Module | The activation to use in the MLP block. | ReLU |
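As an illustration, the sketch below constructs the module with SAM-style hyperparameters; the specific values (depth 2, 256-dimensional embeddings, 8 heads, 2048-dim MLP) are example choices, not requirements.

```python
from torch import nn

from ultralytics.models.sam.modules.transformer import TwoWayTransformer

# Example hyperparameters only; num_heads must evenly divide embedding_dim.
transformer = TwoWayTransformer(
    depth=2,                      # number of TwoWayAttentionBlock layers
    embedding_dim=256,            # channel dimension of the embeddings
    num_heads=8,                  # attention heads per layer
    mlp_dim=2048,                 # hidden dimension of the MLP blocks
    activation=nn.ReLU,           # activation used inside the MLP blocks
    attention_downsample_rate=2,  # downsampling factor for the attention layers
)
```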
forward
forward(image_embedding: Tensor, image_pe: Tensor, point_embedding: Tensor) -> Tuple[Tensor, Tensor]
Process the image embedding and point embedding through the two-way transformer and return the refined embeddings.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| image_embedding | Tensor | The image to attend to. Should have shape B x embedding_dim x h x w for any h and w. | required |
| image_pe | Tensor | The positional encoding to add to the image. Must have the same shape as image_embedding. | required |
| point_embedding | Tensor | The embedding to add to the query points. Must have shape B x N_points x embedding_dim for any N_points. | required |

Returns:

| Type | Description |
| --- | --- |
| Tensor | The processed point_embedding. |
| Tensor | The processed image_embedding. |
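A minimal, hedged sketch of a forward pass; the batch size, feature-map size, and number of points below are arbitrary example values, and the printed shapes assume the behavior described in the Returns table (queries keep the point layout, keys are the spatially flattened image embedding).

```python
import torch

from ultralytics.models.sam.modules.transformer import TwoWayTransformer

transformer = TwoWayTransformer(depth=2, embedding_dim=256, num_heads=8, mlp_dim=2048)

B, C, H, W, N = 1, 256, 64, 64, 5          # example sizes; C must equal embedding_dim
image_embedding = torch.randn(B, C, H, W)  # image features to attend to
image_pe = torch.randn(B, C, H, W)         # positional encoding, same shape as image_embedding
point_embedding = torch.randn(B, N, C)     # query point embeddings

queries, keys = transformer(image_embedding, image_pe, point_embedding)
print(queries.shape)  # processed point_embedding, e.g. torch.Size([1, 5, 256])
print(keys.shape)     # processed image_embedding, e.g. torch.Size([1, 4096, 256])
```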
ultralytics.models.sam.modules.transformer.TwoWayAttentionBlock
TwoWayAttentionBlock(embedding_dim: int, num_heads: int, mlp_dim: int = 2048, activation: Type[nn.Module] = nn.ReLU, attention_downsample_rate: int = 2, skip_first_layer_pe: bool = False)
Bases: Module
An attention block that performs both self-attention and cross-attention in two directions: queries to keys and keys to queries. This block consists of four main layers: (1) self-attention on sparse inputs, (2) cross-attention of sparse inputs to dense inputs, (3) an MLP block on sparse inputs, and (4) cross-attention of dense inputs to sparse inputs.
Attributes:

| Name | Type | Description |
| --- | --- | --- |
| self_attn | Attention | The self-attention layer for the queries. |
| norm1 | LayerNorm | Layer normalization following the first attention block. |
| cross_attn_token_to_image | Attention | Cross-attention layer from queries to keys. |
| norm2 | LayerNorm | Layer normalization following the second attention block. |
| mlp | MLPBlock | MLP block that transforms the query embeddings. |
| norm3 | LayerNorm | Layer normalization following the MLP block. |
| norm4 | LayerNorm | Layer normalization following the third attention block. |
| cross_attn_image_to_token | Attention | Cross-attention layer from keys to queries. |
| skip_first_layer_pe | bool | Whether to skip the positional encoding in the first layer. |
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| embedding_dim | int | The channel dimension of the embeddings. | required |
| num_heads | int | The number of heads in the attention layers. | required |
| mlp_dim | int | The hidden dimension of the MLP block. | 2048 |
| activation | Module | The activation to use in the MLP block. | ReLU |
| skip_first_layer_pe | bool | Whether to skip the positional encoding on the first layer. | False |
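For illustration, a minimal sketch of constructing a single block; the hyperparameters are example values, and setting skip_first_layer_pe=True mirrors how the first block in a stack is typically configured.

```python
from torch import nn

from ultralytics.models.sam.modules.transformer import TwoWayAttentionBlock

block = TwoWayAttentionBlock(
    embedding_dim=256,            # channel dimension of the embeddings
    num_heads=8,                  # heads in the self- and cross-attention layers
    mlp_dim=2048,                 # hidden dimension of the MLP block
    activation=nn.ReLU,           # activation used in the MLP block
    attention_downsample_rate=2,  # downsampling factor for the cross-attention layers
    skip_first_layer_pe=True,     # skip positional encoding in the first self-attention layer
)
```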
forward
Apply self-attention and cross-attention to queries and keys and return the processed embeddings.
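A hedged usage sketch, assuming the forward signature from the reference implementation, i.e. forward(queries, keys, query_pe, key_pe) returning the updated (queries, keys) pair; all shapes below are example values.

```python
import torch

from ultralytics.models.sam.modules.transformer import TwoWayAttentionBlock

block = TwoWayAttentionBlock(embedding_dim=256, num_heads=8)

B, N_q, N_k, C = 1, 5, 4096, 256
queries = torch.randn(B, N_q, C)   # sparse inputs, e.g. point/token embeddings
keys = torch.randn(B, N_k, C)      # dense inputs, e.g. flattened image embedding
query_pe = torch.randn(B, N_q, C)  # positional encoding for the queries
key_pe = torch.randn(B, N_k, C)    # positional encoding for the keys

queries, keys = block(queries, keys, query_pe, key_pe)  # both come back with their input shapes
```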
ultralytics.models.sam.modules.transformer.Attention
Bases: Module
An attention layer that allows for downscaling the size of the embedding after projection to queries, keys, and values.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| embedding_dim | int | The dimensionality of the input embeddings. | required |
| num_heads | int | The number of attention heads. | required |
| downsample_rate | int | The factor by which the internal dimensions are downsampled. | 1 |
Raises:

| Type | Description |
| --- | --- |
| AssertionError | If num_heads does not evenly divide the internal dimension (embedding_dim / downsample_rate). |
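As a quick illustration of the constraint in the Raises table: the internal dimension is embedding_dim / downsample_rate, and num_heads must divide it evenly. The values below are examples only.

```python
from ultralytics.models.sam.modules.transformer import Attention

# internal_dim = embedding_dim // downsample_rate = 256 // 2 = 128,
# and 128 is divisible by num_heads = 8, so this succeeds.
attn = Attention(embedding_dim=256, num_heads=8, downsample_rate=2)

# This would raise an AssertionError: 256 // 2 = 128 is not divisible by 3.
# Attention(embedding_dim=256, num_heads=3, downsample_rate=2)
```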
forward
Compute the attention output given the input query, key, and value tensors.
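A hedged sketch of a forward call, assuming q, k, and v are shaped B x N x embedding_dim as in the reference implementation; the sizes below are example values.

```python
import torch

from ultralytics.models.sam.modules.transformer import Attention

attn = Attention(embedding_dim=256, num_heads=8, downsample_rate=2)

B, N_q, N_k = 1, 5, 4096
q = torch.randn(B, N_q, 256)  # queries
k = torch.randn(B, N_k, 256)  # keys
v = torch.randn(B, N_k, 256)  # values

out = attn(q, k, v)
print(out.shape)  # torch.Size([1, 5, 256]); the output is projected back to embedding_dim
```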