Reference for ultralytics/models/sam/modules/tiny_encoder.py
Note
This file is available at https://github.com/ultralytics/ultralytics/blob/main/ultralytics/models/sam/modules/tiny_encoder.py. If you spot a problem, please help fix it by contributing a Pull Request 🛠️. Thank you 🙏!
ultralytics.models.sam.modules.tiny_encoder.Conv2d_BN
Conv2d_BN(a, b, ks=1, stride=1, pad=0, dilation=1, groups=1, bn_weight_init=1)
Bases: Sequential
A sequential container that performs 2D convolution followed by batch normalization.
Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `c` | `Conv2d` | 2D convolution layer. |
| `bn` | `BatchNorm2d` | Batch normalization layer. |
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `a` | `int` | Number of input channels. | required |
| `b` | `int` | Number of output channels. | required |
| `ks` | `int` | Kernel size for the convolution. | `1` |
| `stride` | `int` | Stride for the convolution. | `1` |
| `pad` | `int` | Padding for the convolution. | `0` |
| `dilation` | `int` | Dilation factor for the convolution. | `1` |
| `groups` | `int` | Number of groups for the convolution. | `1` |
| `bn_weight_init` | `float` | Initial value for batch normalization weight. | `1` |
Examples:
>>> conv_bn = Conv2d_BN(3, 64, ks=3, stride=1, pad=1)
>>> input_tensor = torch.randn(1, 3, 224, 224)
>>> output = conv_bn(input_tensor)
>>> print(output.shape)
Source code in `ultralytics/models/sam/modules/tiny_encoder.py`, lines 52-59.
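For orientation, the pattern this container wraps can be sketched in plain PyTorch. This is a minimal sketch, not the library implementation; the bias-free convolution is an assumption consistent with standard conv-BN stacking, where the following `BatchNorm2d` supplies the affine shift.

```python
import torch
from torch import nn

# Minimal sketch of a Conv2d + BatchNorm2d sequential container.
# The convolution omits its bias (assumption): BN's learnable shift
# makes a separate convolution bias redundant.
conv_bn = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False),
    nn.BatchNorm2d(64),
)
x = torch.randn(1, 3, 224, 224)
print(conv_bn(x).shape)  # torch.Size([1, 64, 224, 224])
```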
ultralytics.models.sam.modules.tiny_encoder.PatchEmbed
PatchEmbed(in_chans, embed_dim, resolution, activation)
Bases: Module
Embeds images into patches and projects them into a specified embedding dimension.
Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `patches_resolution` | `Tuple[int, int]` | Resolution of the patches after embedding. |
| `num_patches` | `int` | Total number of patches. |
| `in_chans` | `int` | Number of input channels. |
| `embed_dim` | `int` | Dimension of the embedding. |
| `seq` | `Sequential` | Sequence of convolutional and activation layers for patch embedding. |
Methods:

| Name | Description |
| --- | --- |
| `forward` | Processes the input tensor through the patch embedding sequence. |
Examples:
>>> import torch
>>> patch_embed = PatchEmbed(in_chans=3, embed_dim=96, resolution=224, activation=nn.GELU)
>>> x = torch.randn(1, 3, 224, 224)
>>> output = patch_embed(x)
>>> print(output.shape)
Source code in `ultralytics/models/sam/modules/tiny_encoder.py`, lines 84-97.
forward
forward(x)
Processes input tensor through patch embedding sequence, converting images to patch embeddings.
Source code in `ultralytics/models/sam/modules/tiny_encoder.py`, lines 99-101.
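A hedged usage sketch follows. The 4x spatial downsampling is an assumption inferred from the `patches_resolution` attribute (224 -> 56), so the printed shape is an expectation rather than a verified output.

```python
import torch
from torch import nn
from ultralytics.models.sam.modules.tiny_encoder import PatchEmbed

# Assuming the embedding sequence downsamples by 4x, a 224x224 image
# becomes a 56x56 grid of embed_dim-channel patch features.
patch_embed = PatchEmbed(in_chans=3, embed_dim=96, resolution=224, activation=nn.GELU)
x = torch.randn(1, 3, 224, 224)
print(patch_embed(x).shape)  # expected: torch.Size([1, 96, 56, 56])
```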
ultralytics.models.sam.modules.tiny_encoder.MBConv
MBConv(in_chans, out_chans, expand_ratio, activation, drop_path)
Bases: Module
Mobile Inverted Bottleneck Conv (MBConv) layer, part of the EfficientNet architecture.
Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `in_chans` | `int` | Number of input channels. |
| `hidden_chans` | `int` | Number of hidden channels. |
| `out_chans` | `int` | Number of output channels. |
| `conv1` | `Conv2d_BN` | First convolutional layer. |
| `act1` | `Module` | First activation function. |
| `conv2` | `Conv2d_BN` | Depthwise convolutional layer. |
| `act2` | `Module` | Second activation function. |
| `conv3` | `Conv2d_BN` | Final convolutional layer. |
| `act3` | `Module` | Third activation function. |
| `drop_path` | `Module` | Drop path layer (Identity for inference). |
Methods:

| Name | Description |
| --- | --- |
| `forward` | Performs the forward pass through the MBConv layer. |
Examples:
>>> in_chans, out_chans = 32, 64
>>> mbconv = MBConv(in_chans, out_chans, expand_ratio=4, activation=nn.ReLU, drop_path=0.1)
>>> x = torch.randn(1, in_chans, 56, 56)
>>> output = mbconv(x)
>>> print(output.shape)
torch.Size([1, 64, 56, 56])
Source code in `ultralytics/models/sam/modules/tiny_encoder.py`, lines 132-150.
forward
forward(x)
Implements the forward pass of MBConv, applying convolutions and skip connection.
Source code in `ultralytics/models/sam/modules/tiny_encoder.py`, lines 152-162.
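The inverted-bottleneck computation described above (1x1 expansion, depthwise convolution, 1x1 projection, residual skip) can be sketched as follows. `MBConvSketch` and its layer names are illustrative, not the library implementation, and the residual skip assumes matching input and output channels.

```python
import torch
from torch import nn

class MBConvSketch(nn.Module):
    """Illustrative inverted bottleneck; not the library implementation."""

    def __init__(self, chans: int, expand_ratio: float = 4.0):
        super().__init__()
        hidden = int(chans * expand_ratio)
        self.expand = nn.Conv2d(chans, hidden, 1, bias=False)  # 1x1 expansion (conv1)
        self.dw = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden, bias=False)  # depthwise (conv2)
        self.project = nn.Conv2d(hidden, chans, 1, bias=False)  # 1x1 projection (conv3)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual skip assumes in_chans == out_chans.
        return self.act(x + self.project(self.act(self.dw(self.act(self.expand(x))))))

x = torch.randn(1, 32, 56, 56)
print(MBConvSketch(32)(x).shape)  # torch.Size([1, 32, 56, 56])
```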
ultralytics.models.sam.modules.tiny_encoder.PatchMerging
PatchMerging(input_resolution, dim, out_dim, activation)
Bases: Module
Merges neighboring patches in the feature map and projects to a new dimension.
This class implements a patch merging operation that combines spatial information and adjusts the feature dimension. It uses a series of convolutional layers with batch normalization to achieve this.
Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `input_resolution` | `Tuple[int, int]` | The input resolution (height, width) of the feature map. |
| `dim` | `int` | The input dimension of the feature map. |
| `out_dim` | `int` | The output dimension after merging and projection. |
| `act` | `Module` | The activation function used between convolutions. |
| `conv1` | `Conv2d_BN` | The first convolutional layer for dimension projection. |
| `conv2` | `Conv2d_BN` | The second convolutional layer for spatial merging. |
| `conv3` | `Conv2d_BN` | The third convolutional layer for final projection. |
Methods:

| Name | Description |
| --- | --- |
| `forward` | Applies the patch merging operation to the input tensor. |
Examples:
>>> input_resolution = (56, 56)
>>> patch_merging = PatchMerging(input_resolution, dim=64, out_dim=128, activation=nn.ReLU)
>>> x = torch.randn(4, 64, 56, 56)
>>> output = patch_merging(x)
>>> print(output.shape)
Source code in `ultralytics/models/sam/modules/tiny_encoder.py`, lines 192-203.
forward
forward(x)
Applies patch merging and dimension projection to the input feature map.
Source code in `ultralytics/models/sam/modules/tiny_encoder.py`, lines 205-219.
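A hedged usage sketch: assuming the merge halves each spatial dimension and returns a flattened token sequence, the example input above would map as shown. The printed shape is an expectation, not a verified output.

```python
import torch
from torch import nn
from ultralytics.models.sam.modules.tiny_encoder import PatchMerging

# Assuming a stride-2 spatial merge followed by flattening,
# (4, 64, 56, 56) should become (4, 28 * 28, 128).
merge = PatchMerging(input_resolution=(56, 56), dim=64, out_dim=128, activation=nn.GELU)
x = torch.randn(4, 64, 56, 56)
print(merge(x).shape)  # expected: torch.Size([4, 784, 128])
```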
ultralytics.models.sam.modules.tiny_encoder.ConvLayer
ConvLayer(
dim,
input_resolution,
depth,
activation,
drop_path=0.0,
downsample=None,
use_checkpoint=False,
out_dim=None,
conv_expand_ratio=4.0,
)
Bases: Module
Convolutional Layer featuring multiple MobileNetV3-style inverted bottleneck convolutions (MBConv).
This layer optionally applies downsample operations to the output and supports gradient checkpointing.
Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `dim` | `int` | Dimensionality of the input and output. |
| `input_resolution` | `Tuple[int, int]` | Resolution of the input image. |
| `depth` | `int` | Number of MBConv layers in the block. |
| `use_checkpoint` | `bool` | Whether to use gradient checkpointing to save memory. |
| `blocks` | `ModuleList` | List of MBConv layers. |
| `downsample` | `Optional[Callable]` | Function for downsampling the output. |
Methods:

| Name | Description |
| --- | --- |
| `forward` | Processes the input through the convolutional layers. |
Examples:
>>> input_tensor = torch.randn(1, 64, 56, 56)
>>> conv_layer = ConvLayer(64, (56, 56), depth=3, activation=nn.ReLU)
>>> output = conv_layer(input_tensor)
>>> print(output.shape)
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `dim` | `int` | The dimensionality of the input and output. | required |
| `input_resolution` | `Tuple[int, int]` | The resolution of the input image. | required |
| `depth` | `int` | The number of MBConv layers in the block. | required |
| `activation` | `Module` | Activation function applied after each convolution. | required |
| `drop_path` | `float \| List[float]` | Drop path rate. Single float or a list of floats for each MBConv. | `0.0` |
| `downsample` | `Optional[Module]` | Function for downsampling the output. None to skip downsampling. | `None` |
| `use_checkpoint` | `bool` | Whether to use gradient checkpointing to save memory. | `False` |
| `out_dim` | `Optional[int]` | The dimensionality of the output. None means it will be the same as `dim`. | `None` |
| `conv_expand_ratio` | `float` | Expansion ratio for the MBConv layers. | `4.0` |
Source code in `ultralytics/models/sam/modules/tiny_encoder.py`, lines 246-306.
forward
forward(x)
Processes input through convolutional layers, applying MBConv blocks and optional downsampling.
Source code in `ultralytics/models/sam/modules/tiny_encoder.py`, lines 308-312.
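The per-block loop with optional gradient checkpointing can be sketched as below. `run_blocks` is a hypothetical helper, not the library API; it illustrates how checkpointing trades recomputation for activation memory.

```python
import torch
import torch.utils.checkpoint as cp
from torch import nn

def run_blocks(blocks: nn.ModuleList, x: torch.Tensor, use_checkpoint: bool = False) -> torch.Tensor:
    """Run a stack of blocks, optionally checkpointing each one (sketch)."""
    for blk in blocks:
        # checkpoint() recomputes blk's activations during the backward
        # pass instead of storing them, reducing peak memory.
        x = cp.checkpoint(blk, x) if use_checkpoint else blk(x)
    return x
```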
ultralytics.models.sam.modules.tiny_encoder.Mlp
Mlp(
in_features,
hidden_features=None,
out_features=None,
act_layer=nn.GELU,
drop=0.0,
)
Bases: Module
Multi-layer Perceptron (MLP) module for transformer architectures.
This module applies layer normalization, two fully-connected layers with an activation function in between, and dropout. It is commonly used in transformer-based architectures.
Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `norm` | `LayerNorm` | Layer normalization applied to the input. |
| `fc1` | `Linear` | First fully-connected layer. |
| `fc2` | `Linear` | Second fully-connected layer. |
| `act` | `Module` | Activation function applied after the first fully-connected layer. |
| `drop` | `Dropout` | Dropout layer applied after the activation function. |
Methods:

| Name | Description |
| --- | --- |
| `forward` | Applies the MLP operations on the input tensor. |
Examples:
>>> import torch
>>> from torch import nn
>>> mlp = Mlp(in_features=256, hidden_features=512, out_features=256, act_layer=nn.GELU, drop=0.1)
>>> x = torch.randn(32, 100, 256)
>>> output = mlp(x)
>>> print(output.shape)
torch.Size([32, 100, 256])
Source code in `ultralytics/models/sam/modules/tiny_encoder.py`, lines 342-351.
forward
forward(x)
Applies MLP operations: layer norm, FC layers, activation, and dropout to the input tensor.
Source code in `ultralytics/models/sam/modules/tiny_encoder.py`, lines 353-360.
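The attribute list above implies the following data flow. This is a minimal sketch; the placement of dropout after both fully-connected layers is an assumption consistent with common transformer MLPs.

```python
import torch
from torch import nn

# Sketch of the pre-norm MLP: LayerNorm -> fc1 -> activation -> dropout
# -> fc2 -> dropout (dropout placement is an assumption).
norm = nn.LayerNorm(256)
fc1, act, fc2, drop = nn.Linear(256, 512), nn.GELU(), nn.Linear(512, 256), nn.Dropout(0.1)

x = torch.randn(32, 100, 256)
out = drop(fc2(drop(act(fc1(norm(x))))))
print(out.shape)  # torch.Size([32, 100, 256])
```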
ultralytics.models.sam.modules.tiny_encoder.Attention
Attention(dim, key_dim, num_heads=8, attn_ratio=4, resolution=(14, 14))
Bases: Module
Multi-head attention module with spatial awareness and trainable attention biases.
This module implements a multi-head attention mechanism with support for spatial awareness, applying attention biases based on spatial resolution. It includes trainable attention biases for each unique offset between spatial positions in the resolution grid.
Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `num_heads` | `int` | Number of attention heads. |
| `scale` | `float` | Scaling factor for attention scores. |
| `key_dim` | `int` | Dimensionality of the keys and queries. |
| `nh_kd` | `int` | Product of `num_heads` and `key_dim`. |
| `d` | `int` | Dimensionality of the value vectors. |
| `dh` | `int` | Product of `d` and `num_heads`. |
| `attn_ratio` | `float` | Attention ratio affecting the dimensions of the value vectors. |
| `norm` | `LayerNorm` | Layer normalization applied to the input. |
| `qkv` | `Linear` | Linear layer for computing query, key, and value projections. |
| `proj` | `Linear` | Linear layer for final projection. |
| `attention_biases` | `Parameter` | Learnable attention biases. |
| `attention_bias_idxs` | `Tensor` | Indices for attention biases. |
| `ab` | `Tensor` | Cached attention biases for inference, deleted during training. |
Methods:

| Name | Description |
| --- | --- |
| `train` | Sets the module in training mode and handles the 'ab' attribute. |
| `forward` | Performs the forward pass of the attention mechanism. |
Examples:
>>> attn = Attention(dim=256, key_dim=64, num_heads=8, resolution=(14, 14))
>>> x = torch.randn(1, 196, 256)
>>> output = attn(x)
>>> print(output.shape)
torch.Size([1, 196, 256])
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `dim` | `int` | The dimensionality of the input and output. | required |
| `key_dim` | `int` | The dimensionality of the keys and queries. | required |
| `num_heads` | `int` | Number of attention heads. | `8` |
| `attn_ratio` | `float` | Attention ratio, affecting the dimensions of the value vectors. | `4` |
| `resolution` | `Tuple[int, int]` | Spatial resolution of the input feature map. | `(14, 14)` |
Source code in `ultralytics/models/sam/modules/tiny_encoder.py`, lines 398-454.
forward
forward(x)
Applies multi-head attention with spatial awareness and trainable attention biases.
Source code in `ultralytics/models/sam/modules/tiny_encoder.py`, lines 465-486.
train
train(mode=True)
Sets the module in training mode and manages the cached attention biases ('ab') accordingly.
Source code in `ultralytics/models/sam/modules/tiny_encoder.py`, lines 456-463.
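The "trainable attention biases for each unique offset" idea can be sketched as follows: every pair of positions in the resolution grid is mapped to the index of its absolute offset, so one learnable bias is shared per unique offset. This is an illustrative reconstruction, not the library code.

```python
import itertools
import torch

# Build the offset-index table for a 14x14 grid.
points = list(itertools.product(range(14), range(14)))
offsets: dict = {}  # unique |offset| -> bias index
idxs = []
for p1 in points:
    row = []
    for p2 in points:
        off = (abs(p1[0] - p2[0]), abs(p1[1] - p2[1]))
        row.append(offsets.setdefault(off, len(offsets)))
    idxs.append(row)

attention_bias_idxs = torch.LongTensor(idxs)
print(len(offsets), attention_bias_idxs.shape)  # 196 unique offsets, torch.Size([196, 196])
```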
ultralytics.models.sam.modules.tiny_encoder.TinyViTBlock
TinyViTBlock(
dim,
input_resolution,
num_heads,
window_size=7,
mlp_ratio=4.0,
drop=0.0,
drop_path=0.0,
local_conv_size=3,
activation=nn.GELU,
)
Bases: Module
TinyViT Block that applies self-attention and a local convolution to the input.
This block is a key component of the TinyViT architecture, combining self-attention mechanisms with local convolutions to process input features efficiently.
Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `dim` | `int` | The dimensionality of the input and output. |
| `input_resolution` | `Tuple[int, int]` | Spatial resolution of the input feature map. |
| `num_heads` | `int` | Number of attention heads. |
| `window_size` | `int` | Size of the attention window. |
| `mlp_ratio` | `float` | Ratio of MLP hidden dimension to embedding dimension. |
| `drop_path` | `Module` | Stochastic depth layer, identity function during inference. |
| `attn` | `Attention` | Self-attention module. |
| `mlp` | `Mlp` | Multi-layer perceptron module. |
| `local_conv` | `Conv2d_BN` | Depthwise local convolution layer. |
Methods:

| Name | Description |
| --- | --- |
| `forward` | Processes the input through the TinyViT block. |
| `extra_repr` | Returns a string with extra information about the block's parameters. |
Examples:
>>> input_tensor = torch.randn(1, 196, 192)
>>> block = TinyViTBlock(dim=192, input_resolution=(14, 14), num_heads=3)
>>> output = block(input_tensor)
>>> print(output.shape)
torch.Size([1, 196, 192])
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `dim` | `int` | Dimensionality of the input and output features. | required |
| `input_resolution` | `Tuple[int, int]` | Spatial resolution of the input feature map (height, width). | required |
| `num_heads` | `int` | Number of attention heads. | required |
| `window_size` | `int` | Size of the attention window. Must be greater than 0. | `7` |
| `mlp_ratio` | `float` | Ratio of MLP hidden dimension to embedding dimension. | `4.0` |
| `drop` | `float` | Dropout rate. | `0.0` |
| `drop_path` | `float` | Stochastic depth rate. | `0.0` |
| `local_conv_size` | `int` | Kernel size of the local convolution. | `3` |
| `activation` | `Module` | Activation function for MLP. | `GELU` |
Raises:

| Type | Description |
| --- | --- |
| `AssertionError` | If `window_size` is not greater than 0. |
| `AssertionError` | If `dim` is not divisible by `num_heads`. |
Source code in `ultralytics/models/sam/modules/tiny_encoder.py`, lines 519-582.
extra_repr
extra_repr() -> str
Returns a string representation of the TinyViTBlock's parameters.
This method provides a formatted string containing key information about the TinyViTBlock, including its dimension, input resolution, number of attention heads, window size, and MLP ratio.
Returns:

| Type | Description |
| --- | --- |
| `str` | A formatted string containing the block's parameters. |
Examples:
>>> block = TinyViTBlock(dim=192, input_resolution=(14, 14), num_heads=3, window_size=7, mlp_ratio=4.0)
>>> print(block.extra_repr())
dim=192, input_resolution=(14, 14), num_heads=3, window_size=7, mlp_ratio=4.0
Source code in `ultralytics/models/sam/modules/tiny_encoder.py`, lines 626-644.
forward
forward(x)
Applies self-attention, local convolution, and MLP operations to the input tensor.
Source code in `ultralytics/models/sam/modules/tiny_encoder.py`, lines 584-624.
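The window partitioning behind the block's attention can be sketched with a plain reshape. This is illustrative only; padding for resolutions not divisible by the window size is omitted.

```python
import torch

# A 14x14 feature map with window_size=7 splits into four independent
# 7x7 windows, each attended to separately.
B, H, W, C, ws = 1, 14, 14, 192, 7
x = torch.randn(B, H, W, C)
windows = (
    x.view(B, H // ws, ws, W // ws, ws, C)
    .permute(0, 1, 3, 2, 4, 5)  # group window rows and columns together
    .reshape(-1, ws * ws, C)
)
print(windows.shape)  # torch.Size([4, 49, 192])
```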
ultralytics.models.sam.modules.tiny_encoder.BasicLayer
BasicLayer(
dim,
input_resolution,
depth,
num_heads,
window_size,
mlp_ratio=4.0,
drop=0.0,
drop_path=0.0,
downsample=None,
use_checkpoint=False,
local_conv_size=3,
activation=nn.GELU,
out_dim=None,
)
Bases: Module
A basic TinyViT layer for one stage in a TinyViT architecture.
This class represents a single layer in the TinyViT model, consisting of multiple TinyViT blocks and an optional downsampling operation.
Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `dim` | `int` | The dimensionality of the input and output features. |
| `input_resolution` | `Tuple[int, int]` | Spatial resolution of the input feature map. |
| `depth` | `int` | Number of TinyViT blocks in this layer. |
| `use_checkpoint` | `bool` | Whether to use gradient checkpointing to save memory. |
| `blocks` | `ModuleList` | List of TinyViT blocks that make up this layer. |
| `downsample` | `Module \| None` | Downsample layer at the end of the layer, if specified. |
Methods:

| Name | Description |
| --- | --- |
| `forward` | Processes the input through the layer's blocks and optional downsampling. |
| `extra_repr` | Returns a string with the layer's parameters for printing. |
Examples:
>>> input_tensor = torch.randn(1, 3136, 192)
>>> layer = BasicLayer(dim=192, input_resolution=(56, 56), depth=2, num_heads=3, window_size=7)
>>> output = layer(input_tensor)
>>> print(output.shape)
torch.Size([1, 784, 384])
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `dim` | `int` | Dimensionality of the input and output features. | required |
| `input_resolution` | `Tuple[int, int]` | Spatial resolution of the input feature map (height, width). | required |
| `depth` | `int` | Number of TinyViT blocks in this layer. | required |
| `num_heads` | `int` | Number of attention heads in each TinyViT block. | required |
| `window_size` | `int` | Size of the local window for attention computation. | required |
| `mlp_ratio` | `float` | Ratio of MLP hidden dimension to embedding dimension. | `4.0` |
| `drop` | `float` | Dropout rate. | `0.0` |
| `drop_path` | `float \| List[float]` | Stochastic depth rate. Can be a float or a list of floats for each block. | `0.0` |
| `downsample` | `Module \| None` | Downsampling layer at the end of the layer. None to skip downsampling. | `None` |
| `use_checkpoint` | `bool` | Whether to use gradient checkpointing to save memory. | `False` |
| `local_conv_size` | `int` | Kernel size for the local convolution in each TinyViT block. | `3` |
| `activation` | `Module` | Activation function used in the MLP. | `GELU` |
| `out_dim` | `int \| None` | Output dimension after downsampling. None means it will be the same as `dim`. | `None` |
Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If … |
Examples:
>>> layer = BasicLayer(dim=96, input_resolution=(56, 56), depth=2, num_heads=3, window_size=7)
>>> x = torch.randn(1, 56 * 56, 96)
>>> output = layer(x)
>>> print(output.shape)
Source code in `ultralytics/models/sam/modules/tiny_encoder.py`, lines 674-749.
extra_repr
extra_repr() -> str
Returns a string with the layer's parameters for printing.
Source code in `ultralytics/models/sam/modules/tiny_encoder.py`, lines 757-759.
forward
forward(x)
Processes input through TinyViT blocks and optional downsampling.
Source code in `ultralytics/models/sam/modules/tiny_encoder.py`, lines 751-755.
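A hedged sketch of how the per-block `drop_path` list this layer accepts is commonly produced: a stochastic-depth rate that increases linearly with block index across all stages. The schedule shown is an assumption, not pulled from the library source.

```python
import torch

# Linearly increasing stochastic-depth schedule over all blocks.
depths = (2, 2, 6, 2)
drop_path_rate = 0.1
dpr = torch.linspace(0, drop_path_rate, sum(depths)).tolist()

# Slice out the rates belonging to the second stage's blocks.
stage2_rates = dpr[sum(depths[:1]) : sum(depths[:2])]
print(stage2_rates)
```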
ultralytics.models.sam.modules.tiny_encoder.TinyViT
TinyViT(
img_size=224,
in_chans=3,
num_classes=1000,
embed_dims=(96, 192, 384, 768),
depths=(2, 2, 6, 2),
num_heads=(3, 6, 12, 24),
window_sizes=(7, 7, 14, 7),
mlp_ratio=4.0,
drop_rate=0.0,
drop_path_rate=0.1,
use_checkpoint=False,
mbconv_expand_ratio=4.0,
local_conv_size=3,
layer_lr_decay=1.0,
)
Bases: Module
TinyViT: A compact vision transformer architecture for efficient image classification and feature extraction.
This class implements the TinyViT model, which combines elements of vision transformers and convolutional neural networks for improved efficiency and performance on vision tasks.
Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `img_size` | `int` | Input image size. |
| `num_classes` | `int` | Number of classification classes. |
| `depths` | `List[int]` | Number of blocks in each stage. |
| `num_layers` | `int` | Total number of layers in the network. |
| `mlp_ratio` | `float` | Ratio of MLP hidden dimension to embedding dimension. |
| `patch_embed` | `PatchEmbed` | Module for patch embedding. |
| `patches_resolution` | `Tuple[int, int]` | Resolution of embedded patches. |
| `layers` | `ModuleList` | List of network layers. |
| `norm_head` | `LayerNorm` | Layer normalization for the classifier head. |
| `head` | `Linear` | Linear layer for final classification. |
| `neck` | `Sequential` | Neck module for feature refinement. |
Methods:

| Name | Description |
| --- | --- |
| `set_layer_lr_decay` | Sets layer-wise learning rate decay. |
| `_init_weights` | Initializes weights for linear and normalization layers. |
| `no_weight_decay_keywords` | Returns keywords for parameters that should not use weight decay. |
| `forward_features` | Processes input through the feature extraction layers. |
| `forward` | Performs a forward pass through the entire network. |
Examples:
>>> model = TinyViT(img_size=224, num_classes=1000)
>>> x = torch.randn(1, 3, 224, 224)
>>> features = model.forward_features(x)
>>> print(features.shape)
torch.Size([1, 256, 64, 64])
This constructor sets up the TinyViT architecture, including patch embedding, multiple layers of attention and convolution blocks, and a classification head.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `img_size` | `int` | Size of the input image. | `224` |
| `in_chans` | `int` | Number of input channels. | `3` |
| `num_classes` | `int` | Number of classes for classification. | `1000` |
| `embed_dims` | `Tuple[int, int, int, int]` | Embedding dimensions for each stage. | `(96, 192, 384, 768)` |
| `depths` | `Tuple[int, int, int, int]` | Number of blocks in each stage. | `(2, 2, 6, 2)` |
| `num_heads` | `Tuple[int, int, int, int]` | Number of attention heads in each stage. | `(3, 6, 12, 24)` |
| `window_sizes` | `Tuple[int, int, int, int]` | Window sizes for each stage. | `(7, 7, 14, 7)` |
| `mlp_ratio` | `float` | Ratio of MLP hidden dim to embedding dim. | `4.0` |
| `drop_rate` | `float` | Dropout rate. | `0.0` |
| `drop_path_rate` | `float` | Stochastic depth rate. | `0.1` |
| `use_checkpoint` | `bool` | Whether to use checkpointing to save memory. | `False` |
| `mbconv_expand_ratio` | `float` | Expansion ratio for MBConv layer. | `4.0` |
| `local_conv_size` | `int` | Kernel size for local convolutions. | `3` |
| `layer_lr_decay` | `float` | Layer-wise learning rate decay factor. | `1.0` |
Examples:
>>> model = TinyViT(img_size=224, num_classes=1000)
>>> x = torch.randn(1, 3, 224, 224)
>>> output = model(x)
>>> print(output.shape)
torch.Size([1, 1000])
Source code in `ultralytics/models/sam/modules/tiny_encoder.py`, lines 797-916.
forward
forward(x)
Performs the forward pass through the TinyViT model, extracting features from the input image.
Source code in `ultralytics/models/sam/modules/tiny_encoder.py`, lines 985-987.
forward_features
forward_features(x)
Processes input through feature extraction layers, returning spatial features.
Source code in `ultralytics/models/sam/modules/tiny_encoder.py`, lines 970-983.
no_weight_decay_keywords
no_weight_decay_keywords()
Returns a set of keywords for parameters that should not use weight decay.
Source code in `ultralytics/models/sam/modules/tiny_encoder.py`, lines 965-968.
set_imgsz
set_imgsz(imgsz=[1024, 1024])
Set image size to make model compatible with different image sizes.
Source code in `ultralytics/models/sam/modules/tiny_encoder.py`, lines 989-1003.
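A hedged usage sketch of `set_imgsz`. The embedding configuration shown is the TinyViT-5M layout used by MobileSAM (an assumption here, not taken from this page), and the printed shape is an expectation based on a total stride of 16, not a verified output.

```python
import torch
from ultralytics.models.sam.modules.tiny_encoder import TinyViT

# TinyViT-5M-style configuration (assumption), retargeted to 512x512.
model = TinyViT(
    img_size=1024,
    embed_dims=(64, 128, 160, 320),
    depths=(2, 2, 6, 2),
    num_heads=(2, 4, 5, 10),
    window_sizes=(7, 7, 14, 7),
)
model.set_imgsz([512, 512])
x = torch.randn(1, 3, 512, 512)
print(model.forward_features(x).shape)  # expected: torch.Size([1, 256, 32, 32])
```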
set_layer_lr_decay
set_layer_lr_decay(layer_lr_decay)
Sets layer-wise learning rate decay for the TinyViT model based on depth.
Source code in `ultralytics/models/sam/modules/tiny_encoder.py`, lines 918-951.
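A hedged sketch of depth-wise learning-rate scaling: block `i` of `num_blocks` receives `lr * decay ** (num_blocks - i - 1)`, so earlier blocks learn more slowly and the final block keeps the full rate. The exact formula is an assumption consistent with common layer-wise decay schemes, not pulled from the source.

```python
# Per-block learning-rate multipliers under depth-based decay (sketch).
layer_lr_decay, num_blocks = 0.8, 12
lr_scales = [layer_lr_decay ** (num_blocks - i - 1) for i in range(num_blocks)]
print(round(lr_scales[0], 4), lr_scales[-1])  # 0.0859 for the first block, 1.0 for the last
```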