Reference for ultralytics/models/sam/modules/transformer.py
Note
This file is available at https://github.com/ultralytics/ultralytics/blob/main/ultralytics/models/sam/modules/transformer.py. If you spot a problem please help fix it by contributing a Pull Request 🛠️. Thank you 🙏!
ultralytics.models.sam.modules.transformer.TwoWayTransformer
TwoWayTransformer(
depth: int,
embedding_dim: int,
num_heads: int,
mlp_dim: int,
activation: Type[nn.Module] = nn.ReLU,
attention_downsample_rate: int = 2,
)
Bases: Module
A Two-Way Transformer module for simultaneous attention to image and query points.
This class implements a specialized transformer decoder that attends to an input image using queries with supplied positional embeddings. It's useful for tasks like object detection, image segmentation, and point cloud processing.
Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `depth` | `int` | Number of layers in the transformer. |
| `embedding_dim` | `int` | Channel dimension for input embeddings. |
| `num_heads` | `int` | Number of heads for multihead attention. |
| `mlp_dim` | `int` | Internal channel dimension for the MLP block. |
| `layers` | `ModuleList` | List of TwoWayAttentionBlock layers composing the transformer. |
| `final_attn_token_to_image` | `Attention` | Final attention layer from queries to image. |
| `norm_final_attn` | `LayerNorm` | Layer normalization applied to final queries. |
Methods:

| Name | Description |
| --- | --- |
| `forward` | Processes image and point embeddings through the transformer. |
Examples:
>>> transformer = TwoWayTransformer(depth=6, embedding_dim=256, num_heads=8, mlp_dim=2048)
>>> image_embedding = torch.randn(1, 256, 32, 32)
>>> image_pe = torch.randn(1, 256, 32, 32)
>>> point_embedding = torch.randn(1, 100, 256)
>>> output_queries, output_image = transformer(image_embedding, image_pe, point_embedding)
>>> print(output_queries.shape, output_image.shape)
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `depth` | `int` | Number of layers in the transformer. | required |
| `embedding_dim` | `int` | Channel dimension for input embeddings. | required |
| `num_heads` | `int` | Number of heads for multihead attention. Must divide embedding_dim. | required |
| `mlp_dim` | `int` | Internal channel dimension for the MLP block. | required |
| `activation` | `Type[Module]` | Activation function to use in the MLP block. | `ReLU` |
| `attention_downsample_rate` | `int` | Downsampling rate for attention mechanism. | `2` |
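The `attention_downsample_rate` argument controls the width of the internal attention computation: the attention layers operate in a reduced dimension that `num_heads` must divide evenly (see the `AssertionError` documented for the `Attention` class below). A small illustrative sketch of that arithmetic, assuming the internal dimension is `embedding_dim // attention_downsample_rate`:

```python
# Illustrative values only; mirrors the divisibility constraint documented
# under the Attention class ("Raises" section below).
embedding_dim, num_heads, attention_downsample_rate = 256, 8, 2

internal_dim = embedding_dim // attention_downsample_rate  # 128
assert internal_dim % num_heads == 0, "num_heads must divide embedding_dim / downsample_rate"
head_dim = internal_dim // num_heads  # 16 channels per attention head
print(internal_dim, head_dim)  # 128 16
```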
Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `depth` | `int` | Number of layers in the transformer. |
| `embedding_dim` | `int` | Channel dimension for input embeddings. |
| `num_heads` | `int` | Number of heads for multihead attention. |
| `mlp_dim` | `int` | Internal channel dimension for the MLP block. |
| `layers` | `ModuleList` | List of TwoWayAttentionBlock layers. |
| `final_attn_token_to_image` | `Attention` | Final attention layer from queries to image. |
| `norm_final_attn` | `LayerNorm` | Layer normalization applied to final queries. |
Examples:
>>> transformer = TwoWayTransformer(depth=6, embedding_dim=256, num_heads=8, mlp_dim=2048)
>>> image_embedding = torch.randn(1, 256, 32, 32)
>>> image_pe = torch.randn(1, 256, 32, 32)
>>> point_embedding = torch.randn(1, 100, 256)
>>> output_queries, output_image = transformer(image_embedding, image_pe, point_embedding)
>>> print(output_queries.shape, output_image.shape)
Source code in ultralytics/models/sam/modules/transformer.py
forward
forward(
image_embedding: Tensor, image_pe: Tensor, point_embedding: Tensor
) -> Tuple[Tensor, Tensor]
Processes image and point embeddings through the Two-Way Transformer.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `image_embedding` | `Tensor` | Image to attend to, with shape (B, embedding_dim, H, W). | required |
| `image_pe` | `Tensor` | Positional encoding to add to the image, with same shape as image_embedding. | required |
| `point_embedding` | `Tensor` | Embedding to add to query points, with shape (B, N_points, embedding_dim). | required |
Returns:

| Type | Description |
| --- | --- |
| `Tuple[Tensor, Tensor]` | Processed point_embedding and image_embedding. |
Examples:
>>> transformer = TwoWayTransformer(depth=6, embedding_dim=256, num_heads=8, mlp_dim=2048)
>>> image_embedding = torch.randn(1, 256, 32, 32)
>>> image_pe = torch.randn(1, 256, 32, 32)
>>> point_embedding = torch.randn(1, 100, 256)
>>> output_queries, output_image = transformer(image_embedding, image_pe, point_embedding)
>>> print(output_queries.shape, output_image.shape)
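As a rough guide to the shapes involved, here is an illustrative sketch (not part of the reference itself), assuming the image embedding is flattened internally into a (B, H*W, C) sequence, as the parameter descriptions above suggest:

```python
import torch
from ultralytics.models.sam.modules.transformer import TwoWayTransformer

transformer = TwoWayTransformer(depth=2, embedding_dim=256, num_heads=8, mlp_dim=2048)
image_embedding = torch.randn(1, 256, 32, 32)  # (B, C, H, W)
image_pe = torch.randn(1, 256, 32, 32)         # same shape as the image embedding
point_embedding = torch.randn(1, 100, 256)     # (B, N_points, C)

queries, keys = transformer(image_embedding, image_pe, point_embedding)
# Expected under the flattening assumption above:
# queries: torch.Size([1, 100, 256]), keys: torch.Size([1, 1024, 256])
print(queries.shape, keys.shape)
```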
Source code in ultralytics/models/sam/modules/transformer.py
ultralytics.models.sam.modules.transformer.TwoWayAttentionBlock
TwoWayAttentionBlock(
embedding_dim: int,
num_heads: int,
mlp_dim: int = 2048,
activation: Type[nn.Module] = nn.ReLU,
attention_downsample_rate: int = 2,
skip_first_layer_pe: bool = False,
)
Bases: Module
A two-way attention block for simultaneous attention to image and query points.
This class implements a specialized transformer block with four main layers: self-attention on sparse inputs, cross-attention of sparse inputs to dense inputs, MLP block on sparse inputs, and cross-attention of dense inputs to sparse inputs.
Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `self_attn` | `Attention` | Self-attention layer for queries. |
| `norm1` | `LayerNorm` | Layer normalization after self-attention. |
| `cross_attn_token_to_image` | `Attention` | Cross-attention layer from queries to keys. |
| `norm2` | `LayerNorm` | Layer normalization after token-to-image attention. |
| `mlp` | `MLPBlock` | MLP block for transforming query embeddings. |
| `norm3` | `LayerNorm` | Layer normalization after MLP block. |
| `norm4` | `LayerNorm` | Layer normalization after image-to-token attention. |
| `cross_attn_image_to_token` | `Attention` | Cross-attention layer from keys to queries. |
| `skip_first_layer_pe` | `bool` | Whether to skip positional encoding in the first layer. |
Methods:

| Name | Description |
| --- | --- |
| `forward` | Applies self-attention and cross-attention to queries and keys. |
Examples:
>>> embedding_dim, num_heads = 256, 8
>>> block = TwoWayAttentionBlock(embedding_dim, num_heads)
>>> queries = torch.randn(1, 100, embedding_dim)
>>> keys = torch.randn(1, 1000, embedding_dim)
>>> query_pe = torch.randn(1, 100, embedding_dim)
>>> key_pe = torch.randn(1, 1000, embedding_dim)
>>> processed_queries, processed_keys = block(queries, keys, query_pe, key_pe)
This block implements a specialized transformer layer with four main components: self-attention on sparse inputs, cross-attention of sparse inputs to dense inputs, MLP block on sparse inputs, and cross-attention of dense inputs to sparse inputs.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `embedding_dim` | `int` | Channel dimension of the embeddings. | required |
| `num_heads` | `int` | Number of attention heads in the attention layers. | required |
| `mlp_dim` | `int` | Hidden dimension of the MLP block. | `2048` |
| `activation` | `Type[Module]` | Activation function for the MLP block. | `ReLU` |
| `attention_downsample_rate` | `int` | Downsampling rate for the attention mechanism. | `2` |
| `skip_first_layer_pe` | `bool` | Whether to skip positional encoding in the first layer. | `False` |
Examples:
>>> embedding_dim, num_heads = 256, 8
>>> block = TwoWayAttentionBlock(embedding_dim, num_heads)
>>> queries = torch.randn(1, 100, embedding_dim)
>>> keys = torch.randn(1, 1000, embedding_dim)
>>> query_pe = torch.randn(1, 100, embedding_dim)
>>> key_pe = torch.randn(1, 1000, embedding_dim)
>>> processed_queries, processed_keys = block(queries, keys, query_pe, key_pe)
Source code in ultralytics/models/sam/modules/transformer.py
forward
Applies two-way attention to process query and key embeddings in a transformer block.
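For orientation, the four stages described above map onto the block's attributes roughly as follows. This is a hedged sketch of the data flow, where `block` is assumed to be a `TwoWayAttentionBlock` instance; it simplifies details such as the `skip_first_layer_pe` handling, so consult the source file for the exact implementation.

```python
# Illustrative sketch only; not the verbatim forward() implementation.
def two_way_block_sketch(block, queries, keys, query_pe, key_pe):
    # 1) Self-attention on the sparse queries (positional encoding added to q and k).
    q = queries + query_pe
    queries = block.norm1(queries + block.self_attn(q, q, queries))

    # 2) Cross-attention: tokens (queries) attend to the dense image embedding (keys).
    q, k = queries + query_pe, keys + key_pe
    queries = block.norm2(queries + block.cross_attn_token_to_image(q, k, keys))

    # 3) MLP block on the queries.
    queries = block.norm3(queries + block.mlp(queries))

    # 4) Cross-attention: image embedding attends back to the tokens.
    q, k = queries + query_pe, keys + key_pe
    keys = block.norm4(keys + block.cross_attn_image_to_token(k, q, queries))

    return queries, keys
```

Here `queries`/`keys` play the same roles as in the Examples above: (B, 100, C) sparse point tokens and (B, 1000, C) flattened image features.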
Source code in ultralytics/models/sam/modules/transformer.py
ultralytics.models.sam.modules.transformer.Attention
Bases: Module
An attention layer that allows downscaling the embedding size after projection to queries, keys, and values.
This class implements a multi-head attention mechanism with the option to downsample the internal dimension of queries, keys, and values.
Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `embedding_dim` | `int` | Dimensionality of input embeddings. |
| `kv_in_dim` | `int` | Dimensionality of key and value inputs. |
| `internal_dim` | `int` | Internal dimension after downsampling. |
| `num_heads` | `int` | Number of attention heads. |
| `q_proj` | `Linear` | Linear projection for queries. |
| `k_proj` | `Linear` | Linear projection for keys. |
| `v_proj` | `Linear` | Linear projection for values. |
| `out_proj` | `Linear` | Linear projection for output. |
Methods:

| Name | Description |
| --- | --- |
| `_separate_heads` | Separates input tensor into attention heads. |
| `_recombine_heads` | Recombines separated attention heads. |
| `forward` | Computes attention output for given query, key, and value tensors. |
Examples:
>>> attn = Attention(embedding_dim=256, num_heads=8, downsample_rate=2)
>>> q = torch.randn(1, 100, 256)
>>> k = v = torch.randn(1, 50, 256)
>>> output = attn(q, k, v)
>>> print(output.shape)
torch.Size([1, 100, 256])
This class implements a multi-head attention mechanism with optional downsampling of the internal dimension for queries, keys, and values.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `embedding_dim` | `int` | Dimensionality of input embeddings. | required |
| `num_heads` | `int` | Number of attention heads. | required |
| `downsample_rate` | `int` | Factor by which internal dimensions are downsampled. | `1` |
| `kv_in_dim` | `int \| None` | Dimensionality of key and value inputs. If None, uses embedding_dim. | `None` |
Raises:

| Type | Description |
| --- | --- |
| `AssertionError` | If num_heads does not evenly divide the internal dim (embedding_dim / downsample_rate). |
Examples:
>>> attn = Attention(embedding_dim=256, num_heads=8, downsample_rate=2)
>>> q = torch.randn(1, 100, 256)
>>> k = v = torch.randn(1, 50, 256)
>>> output = attn(q, k, v)
>>> print(output.shape)
torch.Size([1, 100, 256])
Source code in ultralytics/models/sam/modules/transformer.py
forward
Applies multi-head attention to query, key, and value tensors with optional downsampling.
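Conceptually, the forward pass projects q, k, and v into the (possibly downsampled) internal dimension, splits them into heads, applies scaled dot-product attention, then recombines the heads and projects back to `embedding_dim`. The following is a minimal, self-contained sketch of that per-head computation (illustrative only; it omits the learned q/k/v and output projections that the class's attributes provide):

```python
import math
import torch

def attention_sketch(q, k, v, num_heads: int):
    """Scaled dot-product attention over explicit heads, mirroring the documented behavior."""
    b, n_q, c = q.shape
    head_dim = c // num_heads  # assumes num_heads divides the channel dimension

    def split_heads(x):  # (B, N, C) -> (B, num_heads, N, C // num_heads)
        return x.reshape(b, x.shape[1], num_heads, head_dim).transpose(1, 2)

    q, k, v = split_heads(q), split_heads(k), split_heads(v)
    attn = torch.softmax(q @ k.transpose(-2, -1) / math.sqrt(head_dim), dim=-1)
    out = attn @ v                                 # (B, num_heads, N_q, head_dim)
    return out.transpose(1, 2).reshape(b, n_q, c)  # recombine heads -> (B, N_q, C)

# Example: shapes match the Attention example above (minus the learned projections).
q = torch.randn(1, 100, 256)
k = v = torch.randn(1, 50, 256)
print(attention_sketch(q, k, v, num_heads=8).shape)  # torch.Size([1, 100, 256])
```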