Reference for ultralytics/utils/export/imx.py
Note
This file is available at https://github.com/ultralytics/ultralytics/blob/main/ultralytics/utils/export/imx.py. If you spot a problem please help fix it by contributing a Pull Request 🛠️. Thank you 🙏!
ultralytics.utils.export.imx.FXModel
FXModel(model, imgsz=(640, 640))
Bases: Module
A custom model class for torch.fx compatibility.
This class extends torch.nn.Module and is designed to ensure compatibility with torch.fx for tracing and graph manipulation. It copies attributes from an existing model and explicitly sets the model attribute to ensure proper copying.
Attributes:
Name | Type | Description
---|---|---
model | Module | The original model's layers.
Parameters:
Name | Type | Description | Default
---|---|---|---
model | Module | The original model to wrap for torch.fx compatibility. | required
imgsz | tuple[int, int] | The input image size (height, width). | (640, 640)
Source code in ultralytics/utils/export/imx.py
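The attribute-copying pattern described above can be sketched in plain Python. This is a hypothetical illustration of the idea, not the library class: `FXFriendlyWrapper` and `Dummy` are invented names, and the real FXModel operates on torch.nn.Module instances and re-wraps layer forwards for fx tracing.

```python
class FXFriendlyWrapper:
    """Hypothetical sketch of the FXModel pattern (not the library class)."""

    def __init__(self, wrapped):
        # Copy the wrapped model's attributes (stride, names, ...) so code
        # that inspects the wrapper still finds them.
        self.__dict__.update(wrapped.__dict__)
        # Explicitly set .model so the layer sequence is carried over too.
        self.model = wrapped.model


class Dummy:
    """Stand-in for a model with a couple of attributes."""

    def __init__(self):
        self.stride = 32
        self.model = ["layer0", "layer1"]


wrapper = FXFriendlyWrapper(Dummy())
print(wrapper.stride, wrapper.model)
```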
forward
forward(x)
Forward pass through the model.
This method performs the forward pass through the model, handling the dependencies between layers and saving intermediate outputs.
Parameters:
Name | Type | Description | Default
---|---|---|---
x | Tensor | The input tensor to the model. | required
Returns:
Type | Description
---|---
Tensor | The output tensor from the model.
Source code in ultralytics/utils/export/imx.py
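The dependency handling described above (saving intermediate outputs and routing them to later layers) can be sketched without torch. Here `f` and `fn` are illustrative field names, with `f == -1` meaning "take the previous layer's output", mirroring the convention YOLO layer configs use:

```python
def forward_with_deps(layers, x):
    # Minimal sketch of a dependency-aware forward pass. Each layer is a
    # dict with "f" (index of the earlier output it consumes, -1 for the
    # previous layer, or a list of indices) and "fn" (the callable to run).
    y = []  # saved intermediate outputs
    for layer in layers:
        f = layer["f"]
        if f != -1:
            x = y[f] if isinstance(f, int) else [x if j == -1 else y[j] for j in f]
        x = layer["fn"](x)
        y.append(x)
    return x


layers = [
    {"f": -1, "fn": lambda v: v + 1},          # consumes the input
    {"f": -1, "fn": lambda v: v * 2},          # consumes layer 0's output
    {"f": [0, -1], "fn": lambda vs: sum(vs)},  # consumes layers 0 and 1
]
print(forward_with_deps(layers, 3))  # (3+1)=4, then 4*2=8, then 4+8=12
```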
ultralytics.utils.export.imx.NMSWrapper
NMSWrapper(
model: Module,
score_threshold: float = 0.001,
iou_threshold: float = 0.7,
max_detections: int = 300,
task: str = "detect",
)
Bases: Module
Wrap PyTorch Module with multiclass_nms layer from sony_custom_layers.
Parameters:
Name | Type | Description | Default
---|---|---|---
model | Module | Model instance. | required
score_threshold | float | Score threshold for non-maximum suppression. | 0.001
iou_threshold | float | Intersection over union threshold for non-maximum suppression. | 0.7
max_detections | int | The number of detections to return. | 300
task | str | Task type, either 'detect' or 'pose'. | 'detect'
Source code in ultralytics/utils/export/imx.py
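The semantics of the three thresholds can be illustrated with a minimal single-class NMS in plain Python. This is only a sketch of what the parameters mean, not the sony_custom_layers multiclass_nms implementation that NMSWrapper actually delegates to:

```python
def simple_nms(boxes, scores, score_threshold=0.001, iou_threshold=0.7, max_detections=300):
    # Boxes are [x1, y1, x2, y2]; returns indices of kept boxes.
    def iou(a, b):
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter + 1e-9)

    keep = []
    # Visit candidates in descending score order.
    for i in sorted(range(len(scores)), key=lambda i: -scores[i]):
        if scores[i] < score_threshold:
            continue  # below the score threshold: discard
        # Suppress boxes overlapping an already-kept box too strongly.
        if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in keep):
            keep.append(i)
        if len(keep) == max_detections:
            break  # cap the number of returned detections
    return keep


boxes = [[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]]
scores = [0.9, 0.8, 0.7]
print(simple_nms(boxes, scores, iou_threshold=0.5))  # box 1 overlaps box 0 too much
```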
forward
forward(images)
Forward pass with model inference and NMS post-processing.
Source code in ultralytics/utils/export/imx.py
ultralytics.utils.export.imx._inference
_inference(self, x: list[Tensor]) -> tuple[torch.Tensor]
Decode boxes and cls scores for imx object detection.
Source code in ultralytics/utils/export/imx.py
ultralytics.utils.export.imx.pose_forward
pose_forward(
self, x: list[Tensor]
) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor]
Forward pass for imx pose estimation, including keypoint decoding.
Source code in ultralytics/utils/export/imx.py
ultralytics.utils.export.imx.torch2imx
torch2imx(
model: Module,
file: Path | str,
conf: float,
iou: float,
max_det: int,
metadata: dict | None = None,
gptq: bool = False,
dataset=None,
prefix: str = "",
)
Export YOLO model to IMX format for deployment on Sony IMX500 devices.
This function quantizes a YOLO model using Model Compression Toolkit (MCT) and exports it to IMX format compatible with Sony IMX500 edge devices. It supports both YOLOv8n and YOLO11n models for detection and pose estimation tasks.
Parameters:
Name | Type | Description | Default
---|---|---|---
model | Module | The YOLO model to export. Must be YOLOv8n or YOLO11n. | required
file | Path \| str | Output file path for the exported model. | required
conf | float | Confidence threshold for NMS post-processing. | required
iou | float | IoU threshold for NMS post-processing. | required
max_det | int | Maximum number of detections to return. | required
metadata | dict \| None | Metadata to embed in the ONNX model. | None
gptq | bool | Whether to use Gradient-Based Post Training Quantization. If False, uses standard Post Training Quantization. | False
dataset | optional | Representative dataset for quantization calibration. | None
prefix | str | Logging prefix string. | ''
Returns:
Name | Type | Description
---|---|---
f | Path | Path to the exported IMX model directory.
Raises:
Type | Description
---|---
ValueError | If the model is not a supported YOLOv8n or YOLO11n variant.
Example

```python
from ultralytics import YOLO
from ultralytics.utils.export.imx import torch2imx

model = YOLO("yolo11n.pt")
path = torch2imx(model, "model.imx", conf=0.25, iou=0.45, max_det=300)
```
Note
- Requires model_compression_toolkit, onnx, edgemdt_tpc, and sony_custom_layers packages
- Only supports YOLOv8n and YOLO11n models (detection and pose tasks)
- Output includes quantized ONNX model, IMX binary, and labels.txt file
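The dataset argument above supplies the calibration data for quantization. Representative datasets for post-training quantization are typically callables that yield small batches of preprocessed inputs; the helper below is a hypothetical sketch of that shape (`make_representative_dataset` is not part of the library, and MCT's exact expected signature should be checked against its documentation):

```python
def make_representative_dataset(samples, n_batches=20):
    # Hypothetical helper: wraps preprocessed input samples in a generator
    # function, the callable shape PTQ toolkits commonly expect.
    def gen():
        for sample in samples[:n_batches]:
            yield [sample]  # one-element batch list per calibration step
    return gen


rep = make_representative_dataset([[0.1, 0.2], [0.3, 0.4]], n_batches=2)
batches = list(rep())
print(len(batches))  # two calibration batches
```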
Source code in ultralytics/utils/export/imx.py