Reference for ultralytics/nn/backends/executorch.py
Improvements
This page is sourced from https://github.com/ultralytics/ultralytics/blob/main/ultralytics/nn/backends/executorch.py. Have an improvement or example to add? Open a Pull Request — thank you! 🙏
class ultralytics.nn.backends.executorch.ExecuTorchBackend
ExecuTorchBackend()
Bases: BaseBackend
Meta ExecuTorch inference backend for on-device deployment.
Loads and runs inference with Meta ExecuTorch models (.pte files) using the ExecuTorch runtime. Supports both standalone .pte files and directory-based model packages with metadata.
Methods
| Name | Description |
|---|---|
forward | Run inference using the ExecuTorch runtime. |
load_model | Load an ExecuTorch model from a .pte file or directory. |
Source code in ultralytics/nn/backends/executorch.py
```python
class ExecuTorchBackend(BaseBackend):
```
method ultralytics.nn.backends.executorch.ExecuTorchBackend.forward
def forward(self, im: torch.Tensor) -> list
Run inference using the ExecuTorch runtime.
Args
| Name | Type | Description | Default |
|---|---|---|---|
im | torch.Tensor | Input image tensor in BCHW format, normalized to [0, 1]. | required |
Returns
| Type | Description |
|---|---|
list | Model predictions as a list of ExecuTorch output values. |
Source code in ultralytics/nn/backends/executorch.py
```python
def forward(self, im: torch.Tensor) -> list:
    """Run inference using the ExecuTorch runtime.

    Args:
        im (torch.Tensor): Input image tensor in BCHW format, normalized to [0, 1].

    Returns:
        (list): Model predictions as a list of ExecuTorch output values.
    """
    return self.model.execute([im])
```
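The `forward` method expects a batched, channels-first tensor with pixel values scaled to [0, 1]. As a minimal illustration of that layout requirement, the pure-Python sketch below converts an HWC uint8 image (nested lists) into BCHW floats; the helper name `hwc_uint8_to_bchw_float` is hypothetical and stands in for the torch preprocessing that normally produces the input tensor.

```python
def hwc_uint8_to_bchw_float(image):
    """Convert an HWC uint8 image (nested lists) to BCHW floats in [0, 1].

    Illustrative pure-Python stand-in for the preprocessing that produces
    the tensor passed to ExecuTorchBackend.forward().
    """
    h, w = len(image), len(image[0])
    c = len(image[0][0])
    # Transpose HWC -> CHW while scaling 0-255 -> 0.0-1.0
    chw = [[[image[y][x][ch] / 255.0 for x in range(w)] for y in range(h)] for ch in range(c)]
    return [chw]  # prepend the batch dimension -> BCHW


# A 1x2 RGB "image": one white pixel, one black pixel
img = [[[255, 255, 255], [0, 0, 0]]]
batch = hwc_uint8_to_bchw_float(img)
```

In practice this conversion is done with `torch` ops on a real image array; the sketch only shows the shape and value-range contract.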
method ultralytics.nn.backends.executorch.ExecuTorchBackend.load_model
def load_model(self, weight: str | Path) -> None
Load an ExecuTorch model from a .pte file or directory.
Args
| Name | Type | Description | Default |
|---|---|---|---|
weight | str \| Path | Path to the .pte model file or directory containing the model. | required
Source code in ultralytics/nn/backends/executorch.py
```python
def load_model(self, weight: str | Path) -> None:
    """Load an ExecuTorch model from a .pte file or directory.

    Args:
        weight (str | Path): Path to the .pte model file or directory containing the model.
    """
    LOGGER.info(f"Loading {weight} for ExecuTorch inference...")
    check_executorch_requirements()
    from executorch.runtime import Runtime

    w = Path(weight)
    if w.is_dir():
        model_file = next(w.rglob("*.pte"))
        metadata_file = w / "metadata.yaml"
    else:
        model_file = w
        metadata_file = w.parent / "metadata.yaml"

    program = Runtime.get().load_program(str(model_file))
    self.model = program.load_method("forward")

    # Load metadata
    if metadata_file.exists():
        from ultralytics.utils import YAML

        self.apply_metadata(YAML.load(metadata_file))
```
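The method resolves two paths: when given a directory it searches recursively for the first `.pte` file and expects `metadata.yaml` at the directory root; when given a file it expects `metadata.yaml` alongside it. A self-contained sketch of that resolution logic, using only the standard library (the function name `resolve_executorch_paths` is hypothetical):

```python
import tempfile
from pathlib import Path


def resolve_executorch_paths(weight):
    """Mirror load_model's path resolution: locate the .pte file and its metadata.yaml."""
    w = Path(weight)
    if w.is_dir():
        # Directory package: first .pte found anywhere under the directory
        model_file = next(w.rglob("*.pte"))
        metadata_file = w / "metadata.yaml"
    else:
        # Standalone file: metadata.yaml is expected next to the .pte
        model_file = w
        metadata_file = w.parent / "metadata.yaml"
    return model_file, metadata_file


# Build a throwaway directory-based "model package" to exercise both branches
tmp = Path(tempfile.mkdtemp())
(tmp / "model.pte").write_bytes(b"")
(tmp / "metadata.yaml").write_text("stride: 32\n")

m_dir, meta_dir = resolve_executorch_paths(tmp)                   # directory input
m_file, meta_file = resolve_executorch_paths(tmp / "model.pte")   # file input
```

Note that a missing `metadata.yaml` is tolerated (the metadata step is skipped), but a directory with no `.pte` file would raise `StopIteration` from `next()`.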