Reference for ultralytics/nn/modules/activation.py
Note
This file is available at https://github.com/ultralytics/ultralytics/blob/main/ultralytics/nn/modules/activation.py. If you spot a problem, please help fix it by contributing a Pull Request 🛠️. Thank you 🙏!
ultralytics.nn.modules.activation.AGLU
Bases: Module
Unified activation function module from AGLU.
This class implements a parameterized activation function with learnable parameters lambda and kappa, based on the AGLU (Adaptive Gated Linear Unit) approach (https://github.com/kostas1515/AGLU).
Attributes:

Name | Type | Description
---|---|---
`act` | `Softplus` | Softplus activation function with negative beta.
`lambd` | `Parameter` | Learnable lambda parameter initialized with uniform distribution.
`kappa` | `Parameter` | Learnable kappa parameter initialized with uniform distribution.
Methods:

Name | Description
---|---
`forward` | Compute the forward pass of the Unified activation function.
Examples:
>>> import torch
>>> m = AGLU()
>>> input = torch.randn(2)
>>> output = m(input)
>>> print(output.shape)
torch.Size([2])
Source code in ultralytics/nn/modules/activation.py
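The collapsed source block from the original page is not reproduced here. As a stand-in, here is a minimal sketch of the module reconstructed from the documented attributes (Softplus with negative beta, learnable `lambd` and `kappa` initialized from a uniform distribution); the exact initialization and the clamping of lambda are assumptions, not verbatim library code.

```python
import torch
import torch.nn as nn


class AGLU(nn.Module):
    """Sketch of the Unified (AGLU) activation with learnable lambda and kappa."""

    def __init__(self, device=None, dtype=None) -> None:
        super().__init__()
        # Softplus with a negative beta, per the attribute table above.
        self.act = nn.Softplus(beta=-1.0)
        # Learnable lambda and kappa, initialized from a uniform distribution (assumed init).
        self.lambd = nn.Parameter(nn.init.uniform_(torch.empty(1, device=device, dtype=dtype)))
        self.kappa = nn.Parameter(nn.init.uniform_(torch.empty(1, device=device, dtype=dtype)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Clamp lambda away from zero so the 1/lambda exponent stays finite (assumed safeguard).
        lam = torch.clamp(self.lambd, min=0.0001)
        return torch.exp((1 / lam) * self.act((self.kappa * x) - torch.log(lam)))
```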
forward
Apply the Adaptive Gated Linear Unit (AGLU) activation function.
This forward method implements the AGLU activation function with learnable parameters lambda and kappa. The function applies a transformation that adaptively combines linear and non-linear components.
Parameters:

Name | Type | Description | Default
---|---|---|---
`x` | `Tensor` | Input tensor to apply the activation function to. | required
Returns:

Type | Description
---|---
`Tensor` | Output tensor after applying the AGLU activation function, with the same shape as the input.
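As a hedged usage note building on the sketch above: with Softplus at beta = -1, this forward pass is algebraically the generalized sigmoid `(1 + lambda * exp(-kappa * x)) ** (-1 / lambda)`, which can be checked numerically (assuming the sketched AGLU class, not the verbatim library implementation):

```python
import torch

m = AGLU()  # the sketched class above
x = torch.randn(8)
lam = torch.clamp(m.lambd, min=0.0001)
closed_form = (1 + lam * torch.exp(-m.kappa * x)) ** (-1.0 / lam)
# Maximum absolute difference should be ~0, up to float32 rounding.
print((m(x) - closed_form).abs().max())
```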