Reference for ultralytics/nn/modules/activation.py

Note

This file is available at https://github.com/ultralytics/ultralytics/blob/main/ultralytics/nn/modules/activation.py. If you spot a problem, please help fix it by contributing a Pull Request 🛠️. Thank you 🙏!


ultralytics.nn.modules.activation.AGLU

AGLU(device=None, dtype=None)

Bases: Module

Unified activation function module from https://github.com/kostas1515/AGLU.

This class implements a parameterized activation function with learnable parameters lambda and kappa.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| act | Softplus | Softplus activation function with negative beta. |
| lambd | Parameter | Learnable lambda parameter initialized with a uniform distribution. |
| kappa | Parameter | Learnable kappa parameter initialized with a uniform distribution. |

Source code in ultralytics/nn/modules/activation.py
def __init__(self, device=None, dtype=None) -> None:
    """Initialize the Unified activation function with learnable parameters."""
    super().__init__()
    self.act = nn.Softplus(beta=-1.0)
    self.lambd = nn.Parameter(nn.init.uniform_(torch.empty(1, device=device, dtype=dtype)))  # lambda parameter
    self.kappa = nn.Parameter(nn.init.uniform_(torch.empty(1, device=device, dtype=dtype)))  # kappa parameter
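
A minimal usage sketch (not part of the source file; it assumes the import path implied by the file location above and treats AGLU as a drop-in replacement for any elementwise activation):

import torch
from ultralytics.nn.modules.activation import AGLU

act = AGLU()            # learnable lambda and kappa, one scalar each
x = torch.randn(2, 16)  # any shape works; the activation is applied elementwise
y = act(x)              # same shape as x, values in (0, 1) like a generalized sigmoid
assert y.shape == x.shape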

forward

forward(x: torch.Tensor) -> torch.Tensor

Compute the forward pass of the Unified activation function.

Source code in ultralytics/nn/modules/activation.py
def forward(self, x: torch.Tensor) -> torch.Tensor:
    """Compute the forward pass of the Unified activation function."""
    lam = torch.clamp(self.lambd, min=0.0001)  # Clamp lambda to avoid division by zero
    return torch.exp((1 / lam) * self.act((self.kappa * x) - torch.log(lam)))
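
For reference (a derivation from the code above, not part of the source file): nn.Softplus(beta=-1.0) computes $\mathrm{softplus}_{-1}(z) = -\ln(1 + e^{-z})$, so the forward pass evaluates to

$$
\exp\!\left(\frac{1}{\lambda}\,\mathrm{softplus}_{-1}(\kappa x - \ln\lambda)\right)
= \left(1 + \lambda e^{-\kappa x}\right)^{-1/\lambda},
$$

which reduces to the standard sigmoid $1/(1 + e^{-\kappa x})$ when $\lambda = 1$. Clamping $\lambda$ at $10^{-4}$ keeps the $1/\lambda$ exponent finite.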


