
Reference for ultralytics/nn/backends/executorch.py

Improvements

This page is sourced from https://github.com/ultralytics/ultralytics/blob/main/ultralytics/nn/backends/executorch.py. Have an improvement or example to add? Open a Pull Request — thank you! 🙏


class ultralytics.nn.backends.executorch.ExecuTorchBackend

ExecuTorchBackend()

Bases: BaseBackend

Meta ExecuTorch inference backend for on-device deployment.

Loads and runs inference with Meta ExecuTorch models (.pte files) using the ExecuTorch runtime. Supports both standalone .pte files and directory-based model packages with metadata.

Methods

| Name | Description |
| --- | --- |
| `forward` | Run inference using the ExecuTorch runtime. |
| `load_model` | Load an ExecuTorch model from a `.pte` file or directory. |
Source code in `ultralytics/nn/backends/executorch.py`

```python
class ExecuTorchBackend(BaseBackend):
```


method ultralytics.nn.backends.executorch.ExecuTorchBackend.forward

def forward(self, im: torch.Tensor) -> list

Run inference using the ExecuTorch runtime.

Args

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `im` | `torch.Tensor` | Input image tensor in BCHW format, normalized to [0, 1]. | required |

Returns

| Type | Description |
| --- | --- |
| `list` | Model predictions as a list of ExecuTorch output values. |
Source code in `ultralytics/nn/backends/executorch.py`

```python
def forward(self, im: torch.Tensor) -> list:
    """Run inference using the ExecuTorch runtime.

    Args:
        im (torch.Tensor): Input image tensor in BCHW format, normalized to [0, 1].

    Returns:
        (list): Model predictions as a list of ExecuTorch output values.
    """
    return self.model.execute([im])
```
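The `forward` docstring requires input in BCHW layout, normalized to [0, 1]. The shape and scaling logic can be sketched in isolation with NumPy (`to_bchw` is a hypothetical helper, not part of the backend; the real pipeline produces `torch.Tensor` inputs, but the layout transformation is the same):

```python
import numpy as np


def to_bchw(image_hwc: np.ndarray) -> np.ndarray:
    """Convert an HWC uint8 image to a BCHW float32 array normalized to [0, 1].

    Hypothetical helper illustrating the input layout that forward() expects.
    """
    chw = image_hwc.transpose(2, 0, 1).astype(np.float32) / 255.0  # HWC -> CHW, scale to [0, 1]
    return chw[None, ...]  # prepend batch dimension -> BCHW


img = np.random.randint(0, 256, (640, 640, 3), dtype=np.uint8)  # dummy 640x640 RGB image
batch = to_bchw(img)
print(batch.shape)  # (1, 3, 640, 640)
```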


method ultralytics.nn.backends.executorch.ExecuTorchBackend.load_model

def load_model(self, weight: str | Path) -> None

Load an ExecuTorch model from a .pte file or directory.

Args

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `weight` | `str \| Path` | Path to the `.pte` model file or directory containing the model. | required |
Source code in `ultralytics/nn/backends/executorch.py`

```python
def load_model(self, weight: str | Path) -> None:
    """Load an ExecuTorch model from a .pte file or directory.

    Args:
        weight (str | Path): Path to the .pte model file or directory containing the model.
    """
    LOGGER.info(f"Loading {weight} for ExecuTorch inference...")
    check_executorch_requirements()

    from executorch.runtime import Runtime

    w = Path(weight)
    if w.is_dir():
        model_file = next(w.rglob("*.pte"))
        metadata_file = w / "metadata.yaml"
    else:
        model_file = w
        metadata_file = w.parent / "metadata.yaml"

    program = Runtime.get().load_program(str(model_file))
    self.model = program.load_method("forward")

    # Load metadata
    if metadata_file.exists():
        from ultralytics.utils import YAML

        self.apply_metadata(YAML.load(metadata_file))
```




