
Quick Start Guide: NVIDIA Jetson with Ultralytics YOLO11

This comprehensive guide provides a detailed walkthrough for deploying Ultralytics YOLO11 on NVIDIA Jetson devices. Additionally, it showcases performance benchmarks to demonstrate the capabilities of YOLO11 on these small and powerful devices.

New product support

We have updated this guide to include the NVIDIA Jetson AGX Thor Developer Kit, which delivers up to 2070 FP4 TFLOPS of AI compute and 128 GB of memory, with power configurable between 40 W and 130 W. It delivers over 7.5x higher AI compute than NVIDIA Jetson AGX Orin and 3.5x better energy efficiency, to seamlessly run the most popular AI models.



Watch: How to use Ultralytics YOLO11 on NVIDIA JETSON Devices

NVIDIA Jetson Ecosystem

Note

This guide has been tested with the NVIDIA Jetson AGX Thor Developer Kit running the latest stable JetPack release JP7.0, the NVIDIA Jetson AGX Orin Developer Kit (64GB) running JetPack release JP6.2, the NVIDIA Jetson Orin Nano Super Developer Kit running JetPack release JP6.1, the Seeed Studio reComputer J4012 based on NVIDIA Jetson Orin NX 16GB running JetPack releases JP6.0/JP5.1.3, and the Seeed Studio reComputer J1020 v2 based on NVIDIA Jetson Nano 4GB running JetPack release JP4.6.1. It is expected to work across the entire NVIDIA Jetson hardware lineup, including the latest and legacy devices.

What is NVIDIA Jetson?

NVIDIA Jetson is a series of embedded computing boards designed to bring accelerated AI (artificial intelligence) computing to edge devices. These compact and powerful devices are built around NVIDIA's GPU architecture and are capable of running complex AI algorithms and deep learning models directly on the device, without needing to rely on cloud computing resources. Jetson boards are often used in robotics, autonomous vehicles, industrial automation, and other applications where AI inference needs to be performed locally with low latency and high efficiency. Additionally, these boards are based on the ARM64 architecture and run at lower power compared to traditional GPU computing devices.

NVIDIA Jetson Series Comparison

NVIDIA Jetson AGX Thor is the latest iteration of the NVIDIA Jetson family. It is based on the NVIDIA Blackwell architecture, which brings drastically improved AI performance compared to previous generations. The table below compares a few of the Jetson devices in the ecosystem.

| | Jetson AGX Thor | Jetson AGX Orin 64GB | Jetson Orin NX 16GB | Jetson Orin Nano Super | Jetson AGX Xavier | Jetson Xavier NX | Jetson Nano |
|---|---|---|---|---|---|---|---|
| AI Performance | 2070 TFLOPS | 275 TOPS | 100 TOPS | 67 TOPS | 32 TOPS | 21 TOPS | 472 GFLOPS |
| GPU | 2560-core NVIDIA Blackwell architecture GPU with 96 Tensor Cores | 2048-core NVIDIA Ampere architecture GPU with 64 Tensor Cores | 1024-core NVIDIA Ampere architecture GPU with 32 Tensor Cores | 1024-core NVIDIA Ampere architecture GPU with 32 Tensor Cores | 512-core NVIDIA Volta architecture GPU with 64 Tensor Cores | 384-core NVIDIA Volta™ architecture GPU with 48 Tensor Cores | 128-core NVIDIA Maxwell™ architecture GPU |
| GPU Max Frequency | 1.57 GHz | 1.3 GHz | 918 MHz | 1020 MHz | 1377 MHz | 1100 MHz | 921 MHz |
| CPU | 14-core Arm® Neoverse®-V3AE 64-bit CPU, 1MB L2 + 16MB L3 | 12-core NVIDIA Arm® Cortex A78AE v8.2 64-bit CPU, 3MB L2 + 6MB L3 | 8-core NVIDIA Arm® Cortex A78AE v8.2 64-bit CPU, 2MB L2 + 4MB L3 | 6-core Arm® Cortex®-A78AE v8.2 64-bit CPU, 1.5MB L2 + 4MB L3 | 8-core NVIDIA Carmel Arm®v8.2 64-bit CPU, 8MB L2 + 4MB L3 | 6-core NVIDIA Carmel Arm®v8.2 64-bit CPU, 6MB L2 + 4MB L3 | Quad-Core Arm® Cortex®-A57 MPCore processor |
| CPU Max Frequency | 2.6 GHz | 2.2 GHz | 2.0 GHz | 1.7 GHz | 2.2 GHz | 1.9 GHz | 1.43 GHz |
| Memory | 128GB 256-bit LPDDR5X, 273GB/s | 64GB 256-bit LPDDR5, 204.8GB/s | 16GB 128-bit LPDDR5, 102.4GB/s | 8GB 128-bit LPDDR5, 102 GB/s | 32GB 256-bit LPDDR4x, 136.5GB/s | 8GB 128-bit LPDDR4x, 59.7GB/s | 4GB 64-bit LPDDR4, 25.6GB/s |

For a more detailed comparison table, please visit the Compare Specifications section of the official NVIDIA Jetson page.

What is NVIDIA JetPack?

NVIDIA JetPack SDK, which powers the Jetson modules, is the most comprehensive solution and provides a full development environment for building end-to-end accelerated AI applications, shortening time to market. JetPack includes Jetson Linux with bootloader, Linux kernel, Ubuntu desktop environment, and a complete set of libraries for acceleration of GPU computing, multimedia, graphics, and computer vision. It also includes samples, documentation, and developer tools for both the host computer and the developer kit, and supports higher-level SDKs such as DeepStream for streaming video analytics, Isaac for robotics, and Riva for conversational AI.

Flash JetPack to NVIDIA Jetson

The first step after getting your hands on an NVIDIA Jetson device is to flash NVIDIA JetPack to the device. There are several different ways of flashing NVIDIA Jetson devices.

  1. If you own an official NVIDIA Development Kit such as the Jetson AGX Thor Developer Kit, you can download an image and prepare a bootable USB stick to flash JetPack to the included SSD.
  2. If you own an official NVIDIA Development Kit such as the Jetson Orin Nano Developer Kit, you can download an image and prepare an SD card with JetPack for booting the device.
  3. If you own any other NVIDIA Development Kit, you can flash JetPack to the device using SDK Manager.
  4. If you own a Seeed Studio reComputer J4012 device, you can flash JetPack to the included SSD; if you own a Seeed Studio reComputer J1020 v2 device, you can flash JetPack to the eMMC/SSD.
  5. If you own any other third party device powered by the NVIDIA Jetson module, it is recommended to follow command-line flashing.

Note

For methods 1, 4 and 5 above, after flashing the system and booting the device, please enter "sudo apt update && sudo apt install nvidia-jetpack -y" on the device terminal to install all the remaining JetPack components needed.

JetPack Support Based on Jetson Device

The table below highlights the NVIDIA JetPack versions supported by different NVIDIA Jetson devices.

| | JetPack 4 | JetPack 5 | JetPack 6 | JetPack 7 |
|---|---|---|---|---|
| Jetson Nano | ✅ | ❌ | ❌ | ❌ |
| Jetson TX2 | ✅ | ❌ | ❌ | ❌ |
| Jetson Xavier NX | ✅ | ✅ | ❌ | ❌ |
| Jetson AGX Xavier | ✅ | ✅ | ❌ | ❌ |
| Jetson AGX Orin | ❌ | ✅ | ✅ | ❌ |
| Jetson Orin NX | ❌ | ✅ | ✅ | ❌ |
| Jetson Orin Nano | ❌ | ✅ | ✅ | ❌ |
| Jetson AGX Thor | ❌ | ❌ | ❌ | ✅ |

Quick Start with Docker

The fastest way to get started with Ultralytics YOLO11 on NVIDIA Jetson is to run with pre-built Docker images for Jetson. Refer to the table above and choose the JetPack version according to the Jetson device you own.

JetPack 4

t=ultralytics/ultralytics:latest-jetson-jetpack4
sudo docker pull $t && sudo docker run -it --ipc=host --runtime=nvidia $t

JetPack 5

t=ultralytics/ultralytics:latest-jetson-jetpack5
sudo docker pull $t && sudo docker run -it --ipc=host --runtime=nvidia $t

JetPack 6

t=ultralytics/ultralytics:latest-jetson-jetpack6
sudo docker pull $t && sudo docker run -it --ipc=host --runtime=nvidia $t

JetPack 7

Coming soon! Stay tuned!
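Once you are inside a running container, you can optionally run a quick prediction to confirm that the package and GPU are working (a minimal check; the first run downloads the model and a sample image):

yolo predict model=yolo11n.pt source='https://ultralytics.com/images/bus.jpg' device=0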

After this is done, skip to the Use TensorRT on NVIDIA Jetson section.

Start with Native Installation

For a native installation without Docker, please refer to the steps below.

Run on JetPack 7.0

Install Ultralytics Package

Here we will install the Ultralytics package on the Jetson with optional dependencies so that we can export the PyTorch models to other formats. We will mainly focus on NVIDIA TensorRT exports because TensorRT delivers the maximum performance on Jetson devices.

  1. Update the packages list, install pip and upgrade it to the latest version

    sudo apt update
    sudo apt install python3-pip -y
    pip install -U pip
    
  2. Install the ultralytics pip package with optional dependencies

    pip install ultralytics[export]
    
  3. Reboot the device

    sudo reboot
    

Install PyTorch and Torchvision

The above ultralytics installation will install Torch and Torchvision. However, these two packages installed via pip are not compatible with the Jetson AGX Thor, which ships with JetPack 7.0 and CUDA 13. Therefore, we need to install them manually.

Install torch and torchvision according to JP7.0

pip install torch torchvision --index-url https://download.pytorch.org/whl/cu130
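To confirm that the CUDA-enabled wheels were picked up correctly, an optional quick check:

python3 -c "import torch, torchvision; print(torch.__version__, torchvision.__version__, torch.cuda.is_available())"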

Install onnxruntime-gpu

The onnxruntime-gpu package hosted on PyPI does not provide aarch64 binaries for the Jetson, so we need to install this package manually. It is required for some of the exports.

Here we will download and install onnxruntime-gpu 1.24.0 with Python 3.12 support.

pip install https://github.com/ultralytics/assets/releases/download/v0.0.0/onnxruntime_gpu-1.24.0-cp312-cp312-linux_aarch64.whl
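With onnxruntime-gpu installed, you can optionally verify the setup by exporting a model to ONNX and running inference with the exported model. A minimal sketch using the standard Ultralytics export/predict API:

from ultralytics import YOLO

# Export the model to ONNX (creates 'yolo11n.onnx')
model = YOLO("yolo11n.pt")
model.export(format="onnx")

# Run inference with the exported ONNX model (uses ONNX Runtime under the hood)
onnx_model = YOLO("yolo11n.onnx")
results = onnx_model("https://ultralytics.com/images/bus.jpg")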

Run on JetPack 6.1

Install Ultralytics Package

Here we will install the Ultralytics package on the Jetson with optional dependencies so that we can export the PyTorch models to other formats. We will mainly focus on NVIDIA TensorRT exports because TensorRT delivers the maximum performance on Jetson devices.

  1. Update the packages list, install pip and upgrade it to the latest version

    sudo apt update
    sudo apt install python3-pip -y
    pip install -U pip
    
  2. Install the ultralytics pip package with optional dependencies

    pip install ultralytics[export]
    
  3. Reboot the device

    sudo reboot
    

Install PyTorch and Torchvision

The above ultralytics installation will install Torch and Torchvision. However, these two packages installed via pip are not compatible with the Jetson platform, which is based on the ARM64 architecture. Therefore, we need to manually install a pre-built PyTorch pip wheel and compile or install Torchvision from source.

Install torch 2.5.0 and torchvision 0.20 according to JP6.1

pip install https://github.com/ultralytics/assets/releases/download/v0.0.0/torch-2.5.0a0+872d972e41.nv24.08-cp310-cp310-linux_aarch64.whl
pip install https://github.com/ultralytics/assets/releases/download/v0.0.0/torchvision-0.20.0a0+afc54f7-cp310-cp310-linux_aarch64.whl

Note

Visit the PyTorch for Jetson page to access all the different versions of PyTorch for different JetPack versions. For a more detailed PyTorch and Torchvision compatibility list, visit the PyTorch and Torchvision compatibility page.

Install cuSPARSELt to fix a dependency issue with torch 2.5.0

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/arm64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt-get update
sudo apt-get -y install libcusparselt0 libcusparselt-dev
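You can optionally confirm that the cuSPARSELt library is now visible to the dynamic linker:

ldconfig -p | grep -i cusparselt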

Install onnxruntime-gpu

The onnxruntime-gpu package hosted on PyPI does not provide aarch64 binaries for the Jetson, so we need to install this package manually. It is required for some of the exports.

You can find all available onnxruntime-gpu packages, organized by JetPack version, Python version, and other compatibility details, in the Jetson Zoo ONNX Runtime compatibility matrix. Here we will download and install onnxruntime-gpu 1.20.0 with Python 3.10 support.

pip install https://github.com/ultralytics/assets/releases/download/v0.0.0/onnxruntime_gpu-1.20.0-cp310-cp310-linux_aarch64.whl

Note

Installing onnxruntime-gpu automatically reverts the numpy version to the latest release, so we need to reinstall numpy 1.23.5 to fix an issue by executing:

pip install numpy==1.23.5
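After pinning numpy, you can optionally confirm that ONNX Runtime sees the CUDA execution provider:

python3 -c "import onnxruntime; print(onnxruntime.get_available_providers())"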

Run on JetPack 5.1.2

Install Ultralytics Package

Here we will install the Ultralytics package on the Jetson with optional dependencies so that we can export the PyTorch models to other formats. We will mainly focus on NVIDIA TensorRT exports because TensorRT delivers the maximum performance on Jetson devices.

  1. Update the packages list, install pip and upgrade it to the latest version

    sudo apt update
    sudo apt install python3-pip -y
    pip install -U pip
    
  2. Install the ultralytics pip package with optional dependencies

    pip install ultralytics[export]
    
  3. Reboot the device

    sudo reboot
    

Install PyTorch and Torchvision

The above ultralytics installation will install Torch and Torchvision. However, these two packages installed via pip are not compatible with the Jetson platform, which is based on the ARM64 architecture. Therefore, we need to manually install a pre-built PyTorch pip wheel and compile or install Torchvision from source.

  1. Uninstall the currently installed PyTorch and Torchvision

    pip uninstall torch torchvision
    
  2. Install torch 2.2.0 and torchvision 0.17.2 according to JP5.1.2

    pip install https://github.com/ultralytics/assets/releases/download/v0.0.0/torch-2.2.0-cp38-cp38-linux_aarch64.whl
    pip install https://github.com/ultralytics/assets/releases/download/v0.0.0/torchvision-0.17.2+c1d70fe-cp38-cp38-linux_aarch64.whl
    

Note

Visit the PyTorch for Jetson page to access all the different versions of PyTorch for different JetPack versions. For a more detailed PyTorch and Torchvision compatibility list, visit the PyTorch and Torchvision compatibility page.

Install onnxruntime-gpu

The onnxruntime-gpu package hosted on PyPI does not provide aarch64 binaries for the Jetson, so we need to install this package manually. It is required for some of the exports.

You can find all available onnxruntime-gpu packages, organized by JetPack version, Python version, and other compatibility details, in the Jetson Zoo ONNX Runtime compatibility matrix. Here we will download and install onnxruntime-gpu 1.17.0 with Python 3.8 support.

wget https://nvidia.box.com/shared/static/zostg6agm00fb6t5uisw51qi6kpcuwzd.whl -O onnxruntime_gpu-1.17.0-cp38-cp38-linux_aarch64.whl
pip install onnxruntime_gpu-1.17.0-cp38-cp38-linux_aarch64.whl

Note

Installing onnxruntime-gpu automatically reverts the numpy version to the latest release, so we need to reinstall numpy 1.23.5 to fix an issue by executing:

pip install numpy==1.23.5

Use TensorRT on NVIDIA Jetson

Among all the model export formats supported by Ultralytics, TensorRT offers the highest inference performance on NVIDIA Jetson devices, making it our top recommendation for Jetson deployments. For setup instructions and advanced usage, see our dedicated TensorRT integration guide.

Convert Model to TensorRT and Run Inference

Here the YOLO11n model in PyTorch format is converted to TensorRT, and inference is then run with the exported model.

Example

Python

from ultralytics import YOLO

# Load a YOLO11n PyTorch model
model = YOLO("yolo11n.pt")

# Export the model to TensorRT
model.export(format="engine")  # creates 'yolo11n.engine'

# Load the exported TensorRT model
trt_model = YOLO("yolo11n.engine")

# Run inference
results = trt_model("https://ultralytics.com/images/bus.jpg")

CLI

# Export a YOLO11n PyTorch model to TensorRT format
yolo export model=yolo11n.pt format=engine # creates 'yolo11n.engine'

# Run inference with the exported model
yolo predict model=yolo11n.engine source='https://ultralytics.com/images/bus.jpg'

Note

Visit the Export page to access additional arguments when exporting models to different model formats.
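As an illustration, a few commonly used arguments for TensorRT exports are shown below (a sketch; see the Export page for the authoritative list and defaults):

from ultralytics import YOLO

model = YOLO("yolo11n.pt")

# Export with FP16 precision, a fixed 640 input size and batch size 1
model.export(format="engine", half=True, imgsz=640, batch=1)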

Use NVIDIA Deep Learning Accelerator (DLA)

NVIDIA Deep Learning Accelerator (DLA) is a specialized hardware component built into NVIDIA Jetson devices that optimizes deep learning inference for energy efficiency and performance. By offloading tasks from the GPU (freeing it up for more intensive processes), DLA enables models to run with lower power consumption while maintaining high throughput, ideal for embedded systems and real-time AI applications.

The following Jetson devices are equipped with DLA hardware:

| Jetson Device | DLA Cores | DLA Max Frequency |
|---|---|---|
| Jetson AGX Orin Series | 2 | 1.6 GHz |
| Jetson Orin NX 16GB | 2 | 614 MHz |
| Jetson Orin NX 8GB | 1 | 614 MHz |
| Jetson AGX Xavier Series | 2 | 1.4 GHz |
| Jetson Xavier NX Series | 2 | 1.1 GHz |

Example

Python

from ultralytics import YOLO

# Load a YOLO11n PyTorch model
model = YOLO("yolo11n.pt")

# Export the model to TensorRT with DLA enabled (only works with FP16 or INT8)
model.export(format="engine", device="dla:0", half=True)  # dla:0 or dla:1 corresponds to the DLA cores

# Load the exported TensorRT model
trt_model = YOLO("yolo11n.engine")

# Run inference
results = trt_model("https://ultralytics.com/images/bus.jpg")

CLI

# Export a YOLO11n PyTorch model to TensorRT format with DLA enabled (only works with FP16 or INT8)
# Once the DLA core number is specified at export, the same core will be used at inference
yolo export model=yolo11n.pt format=engine device="dla:0" half=True # dla:0 or dla:1 corresponds to the DLA cores

# Run inference with the exported model on the DLA
yolo predict model=yolo11n.engine source='https://ultralytics.com/images/bus.jpg'

Note

When using DLA exports, some layers may not be supported to run on DLA and will fall back to the GPU for execution. This fallback can introduce additional latency and impact the overall inference performance. Therefore, DLA is not primarily designed to reduce inference latency compared to TensorRT running entirely on the GPU. Instead, its primary purpose is to increase throughput and improve energy efficiency.

NVIDIA Jetson YOLO11 Benchmarks

YOLO11 benchmarks were run by the Ultralytics team on 11 different model formats measuring speed and accuracy: PyTorch, TorchScript, ONNX, OpenVINO, TensorRT, TF SavedModel, TF GraphDef, TF Lite, MNN, NCNN, ExecuTorch. Benchmarks were run on the NVIDIA Jetson AGX Thor Developer Kit, NVIDIA Jetson AGX Orin Developer Kit (64GB), NVIDIA Jetson Orin Nano Super Developer Kit, and the Seeed Studio reComputer J4012 powered by the Jetson Orin NX 16GB, at FP32 precision with a default input image size of 640.

Comparison Charts

Even though all model exports work with NVIDIA Jetson, we have only included PyTorch, TorchScript, and TensorRT in the comparison chart below because they make use of the GPU on the Jetson and are guaranteed to produce the best results. All the other exports only utilize the CPU, and their performance is not as good as the above three. You can find benchmarks for all exports in the section after this chart.

NVIDIA Jetson AGX Thor Developer Kit

Jetson AGX Thor Benchmarks
Benchmarked with Ultralytics 8.3.226

NVIDIA Jetson AGX Orin Developer Kit (64GB)

Jetson AGX Orin Benchmarks
Benchmarked with Ultralytics 8.3.157

NVIDIA Jetson Orin Nano Super Developer Kit

Jetson Orin Nano Super Benchmarks
Benchmarked with Ultralytics 8.3.157

NVIDIA Jetson Orin NX 16GB

Jetson Orin NX 16GB Benchmarks
Benchmarked with Ultralytics 8.3.157

Detailed Comparison Tables

The table below shows the benchmark results for five different models (YOLO11n, YOLO11s, YOLO11m, YOLO11l, YOLO11x) across 11 different formats (PyTorch, TorchScript, ONNX, OpenVINO, TensorRT, TF SavedModel, TF GraphDef, TF Lite, MNN, NCNN, ExecuTorch), giving the status, size, mAP50-95(B) metric, and inference time for each combination.

NVIDIA Jetson AGX Thor Developer Kit

Performance

YOLO11n

| Format | Status | Size on disk (MB) | mAP50-95(B) | Inference time (ms/im) |
|---|---|---|---|---|
| PyTorch | ✅ | 5.4 | 0.5070 | 4.1 |
| TorchScript | ✅ | 10.5 | 0.5083 | 3.61 |
| ONNX | ✅ | 10.2 | 0.5076 | 4.8 |
| OpenVINO | ✅ | 10.4 | 0.5058 | 16.48 |
| TensorRT (FP32) | ✅ | 12.6 | 0.5077 | 1.70 |
| TensorRT (FP16) | ✅ | 7.7 | 0.5075 | 1.20 |
| TensorRT (INT8) | ✅ | 6.2 | 0.4858 | 1.29 |
| TF SavedModel | ✅ | 25.7 | 0.5076 | 40.35 |
| TF GraphDef | ✅ | 10.3 | 0.5076 | 40.55 |
| TF Lite | ✅ | 10.3 | 0.5075 | 206.74 |
| MNN | ✅ | 10.1 | 0.5075 | 23.47 |
| NCNN | ✅ | 10.2 | 0.5041 | 22.05 |
| ExecuTorch | ✅ | 10.2 | 0.5075 | 34.28 |

YOLO11s

| Format | Status | Size on disk (MB) | mAP50-95(B) | Inference time (ms/im) |
|---|---|---|---|---|
| PyTorch | ✅ | 18.4 | 0.5770 | 6.10 |
| TorchScript | ✅ | 36.6 | 0.5783 | 5.33 |
| ONNX | ✅ | 36.3 | 0.5783 | 7.01 |
| OpenVINO | ✅ | 36.4 | 0.5809 | 33.08 |
| TensorRT (FP32) | ✅ | 40.1 | 0.5784 | 2.57 |
| TensorRT (FP16) | ✅ | 20.8 | 0.5796 | 1.55 |
| TensorRT (INT8) | ✅ | 12.7 | 0.5514 | 1.50 |
| TF SavedModel | ✅ | 90.8 | 0.5782 | 80.55 |
| TF GraphDef | ✅ | 36.3 | 0.5782 | 80.82 |
| TF Lite | ✅ | 36.3 | 0.5782 | 615.29 |
| MNN | ✅ | 36.2 | 0.5790 | 54.12 |
| NCNN | ✅ | 36.3 | 0.5806 | 40.76 |
| ExecuTorch | ✅ | 36.2 | 0.5782 | 67.21 |

YOLO11m

| Format | Status | Size on disk (MB) | mAP50-95(B) | Inference time (ms/im) |
|---|---|---|---|---|
| PyTorch | ✅ | 38.8 | 0.6250 | 11.4 |
| TorchScript | ✅ | 77.3 | 0.6304 | 10.16 |
| ONNX | ✅ | 76.9 | 0.6304 | 12.35 |
| OpenVINO | ✅ | 77.1 | 0.6284 | 77.81 |
| TensorRT (FP32) | ✅ | 80.7 | 0.6305 | 5.29 |
| TensorRT (FP16) | ✅ | 41.3 | 0.6294 | 2.42 |
| TensorRT (INT8) | ✅ | 23.7 | 0.6133 | 2.20 |
| TF SavedModel | ✅ | 192.4 | 0.6306 | 184.66 |
| TF GraphDef | ✅ | 76.9 | 0.6306 | 187.91 |
| TF Lite | ✅ | 76.9 | 0.6306 | 1845.09 |
| MNN | ✅ | 76.8 | 0.6298 | 143.52 |
| NCNN | ✅ | 76.9 | 0.6308 | 95.86 |
| ExecuTorch | ✅ | 76.9 | 0.6306 | 167.94 |

YOLO11l

| Format | Status | Size on disk (MB) | mAP50-95(B) | Inference time (ms/im) |
|---|---|---|---|---|
| PyTorch | ✅ | 49.0 | 0.6370 | 14.0 |
| TorchScript | ✅ | 97.6 | 0.6409 | 13.77 |
| ONNX | ✅ | 97.0 | 0.6410 | 16.37 |
| OpenVINO | ✅ | 97.3 | 0.6377 | 98.86 |
| TensorRT (FP32) | ✅ | 101.0 | 0.6396 | 6.71 |
| TensorRT (FP16) | ✅ | 51.5 | 0.6358 | 3.26 |
| TensorRT (INT8) | ✅ | 29.7 | 0.6190 | 3.21 |
| TF SavedModel | ✅ | 242.7 | 0.6409 | 246.93 |
| TF GraphDef | ✅ | 97.0 | 0.6409 | 251.84 |
| TF Lite | ✅ | 97.0 | 0.6409 | 2383.45 |
| MNN | ✅ | 96.9 | 0.6361 | 176.53 |
| NCNN | ✅ | 97.0 | 0.6373 | 118.05 |
| ExecuTorch | ✅ | 97.0 | 0.6409 | 211.46 |

YOLO11x

| Format | Status | Size on disk (MB) | mAP50-95(B) | Inference time (ms/im) |
|---|---|---|---|---|
| PyTorch | ✅ | 109.3 | 0.6990 | 21.70 |
| TorchScript | ✅ | 218.1 | 0.6900 | 20.99 |
| ONNX | ✅ | 217.5 | 0.6900 | 24.07 |
| OpenVINO | ✅ | 217.8 | 0.6872 | 187.33 |
| TensorRT (FP32) | ✅ | 220.0 | 0.6902 | 11.70 |
| TensorRT (FP16) | ✅ | 114.6 | 0.6881 | 5.10 |
| TensorRT (INT8) | ✅ | 59.9 | 0.6857 | 4.53 |
| TF SavedModel | ✅ | 543.9 | 0.6900 | 489.91 |
| TF GraphDef | ✅ | 217.5 | 0.6900 | 503.21 |
| TF Lite | ✅ | 217.5 | 0.6900 | 5164.31 |
| MNN | ✅ | 217.3 | 0.6905 | 350.37 |
| NCNN | ✅ | 217.5 | 0.6901 | 230.63 |
| ExecuTorch | ✅ | 217.4 | 0.6900 | 419.9 |

Benchmarked with Ultralytics 8.3.226

Note

Inference time does not include pre/post-processing.

NVIDIA Jetson AGX Orin Developer Kit (64GB)

Performance

YOLO11n

| Format | Status | Size on disk (MB) | mAP50-95(B) | Inference time (ms/im) |
|---|---|---|---|---|
| PyTorch | ✅ | 5.4 | 0.5101 | 9.40 |
| TorchScript | ✅ | 10.5 | 0.5083 | 11.00 |
| ONNX | ✅ | 10.2 | 0.5077 | 48.32 |
| OpenVINO | ✅ | 10.4 | 0.5058 | 27.24 |
| TensorRT (FP32) | ✅ | 12.1 | 0.5085 | 3.93 |
| TensorRT (FP16) | ✅ | 8.3 | 0.5063 | 2.55 |
| TensorRT (INT8) | ✅ | 5.4 | 0.4719 | 2.18 |
| TF SavedModel | ✅ | 25.9 | 0.5077 | 66.87 |
| TF GraphDef | ✅ | 10.3 | 0.5077 | 65.68 |
| TF Lite | ✅ | 10.3 | 0.5077 | 272.92 |
| MNN | ✅ | 10.1 | 0.5059 | 36.33 |
| NCNN | ✅ | 10.2 | 0.5031 | 28.51 |

YOLO11s

| Format | Status | Size on disk (MB) | mAP50-95(B) | Inference time (ms/im) |
|---|---|---|---|---|
| PyTorch | ✅ | 18.4 | 0.5783 | 12.10 |
| TorchScript | ✅ | 36.5 | 0.5782 | 11.01 |
| ONNX | ✅ | 36.3 | 0.5782 | 107.54 |
| OpenVINO | ✅ | 36.4 | 0.5810 | 55.03 |
| TensorRT (FP32) | ✅ | 38.1 | 0.5781 | 6.52 |
| TensorRT (FP16) | ✅ | 21.4 | 0.5803 | 3.65 |
| TensorRT (INT8) | ✅ | 12.1 | 0.5735 | 2.81 |
| TF SavedModel | ✅ | 91.0 | 0.5782 | 132.73 |
| TF GraphDef | ✅ | 36.4 | 0.5782 | 134.96 |
| TF Lite | ✅ | 36.3 | 0.5782 | 798.21 |
| MNN | ✅ | 36.2 | 0.5777 | 82.35 |
| NCNN | ✅ | 36.2 | 0.5784 | 56.07 |

YOLO11m

| Format | Status | Size on disk (MB) | mAP50-95(B) | Inference time (ms/im) |
|---|---|---|---|---|
| PyTorch | ✅ | 38.8 | 0.6265 | 22.20 |
| TorchScript | ✅ | 77.3 | 0.6307 | 21.47 |
| ONNX | ✅ | 76.9 | 0.6307 | 270.89 |
| OpenVINO | ✅ | 77.1 | 0.6284 | 129.10 |
| TensorRT (FP32) | ✅ | 78.8 | 0.6306 | 12.53 |
| TensorRT (FP16) | ✅ | 41.9 | 0.6305 | 6.25 |
| TensorRT (INT8) | ✅ | 23.2 | 0.6291 | 4.69 |
| TF SavedModel | ✅ | 192.7 | 0.6307 | 299.95 |
| TF GraphDef | ✅ | 77.1 | 0.6307 | 310.58 |
| TF Lite | ✅ | 77.0 | 0.6307 | 2400.54 |
| MNN | ✅ | 76.8 | 0.6308 | 213.56 |
| NCNN | ✅ | 76.8 | 0.6284 | 141.18 |

YOLO11l

| Format | Status | Size on disk (MB) | mAP50-95(B) | Inference time (ms/im) |
|---|---|---|---|---|
| PyTorch | ✅ | 49.0 | 0.6364 | 27.70 |
| TorchScript | ✅ | 97.6 | 0.6399 | 27.94 |
| ONNX | ✅ | 97.0 | 0.6409 | 345.47 |
| OpenVINO | ✅ | 97.3 | 0.6378 | 161.93 |
| TensorRT (FP32) | ✅ | 99.1 | 0.6406 | 16.11 |
| TensorRT (FP16) | ✅ | 52.6 | 0.6376 | 8.08 |
| TensorRT (INT8) | ✅ | 30.8 | 0.6208 | 6.12 |
| TF SavedModel | ✅ | 243.1 | 0.6409 | 390.78 |
| TF GraphDef | ✅ | 97.2 | 0.6409 | 398.76 |
| TF Lite | ✅ | 97.1 | 0.6409 | 3037.05 |
| MNN | ✅ | 96.9 | 0.6372 | 265.46 |
| NCNN | ✅ | 96.9 | 0.6364 | 179.68 |

YOLO11x

| Format | Status | Size on disk (MB) | mAP50-95(B) | Inference time (ms/im) |
|---|---|---|---|---|
| PyTorch | ✅ | 109.3 | 0.7005 | 44.40 |
| TorchScript | ✅ | 218.1 | 0.6898 | 47.49 |
| ONNX | ✅ | 217.5 | 0.6900 | 682.98 |
| OpenVINO | ✅ | 217.8 | 0.6876 | 298.15 |
| TensorRT (FP32) | ✅ | 219.6 | 0.6904 | 28.50 |
| TensorRT (FP16) | ✅ | 112.2 | 0.6887 | 13.55 |
| TensorRT (INT8) | ✅ | 60.0 | 0.6574 | 9.40 |
| TF SavedModel | ✅ | 544.3 | 0.6900 | 749.85 |
| TF GraphDef | ✅ | 217.7 | 0.6900 | 753.86 |
| TF Lite | ✅ | 217.6 | 0.6900 | 6603.27 |
| MNN | ✅ | 217.3 | 0.6868 | 519.77 |
| NCNN | ✅ | 217.3 | 0.6849 | 298.58 |

Benchmarked with Ultralytics 8.3.157

Note

Inference time does not include pre/post-processing.

NVIDIA Jetson Orin Nano Super Developer Kit

Performance

YOLO11n

| Format | Status | Size on disk (MB) | mAP50-95(B) | Inference time (ms/im) |
|---|---|---|---|---|
| PyTorch | ✅ | 5.4 | 0.5101 | 13.70 |
| TorchScript | ✅ | 10.5 | 0.5082 | 13.69 |
| ONNX | ✅ | 10.2 | 0.5081 | 14.47 |
| OpenVINO | ✅ | 10.4 | 0.5058 | 56.66 |
| TensorRT (FP32) | ✅ | 12.0 | 0.5081 | 7.44 |
| TensorRT (FP16) | ✅ | 8.2 | 0.5061 | 4.53 |
| TensorRT (INT8) | ✅ | 5.4 | 0.4825 | 3.70 |
| TF SavedModel | ✅ | 25.9 | 0.5077 | 116.23 |
| TF GraphDef | ✅ | 10.3 | 0.5077 | 114.92 |
| TF Lite | ✅ | 10.3 | 0.5077 | 340.75 |
| MNN | ✅ | 10.1 | 0.5059 | 76.26 |
| NCNN | ✅ | 10.2 | 0.5031 | 45.03 |

YOLO11s

| Format | Status | Size on disk (MB) | mAP50-95(B) | Inference time (ms/im) |
|---|---|---|---|---|
| PyTorch | ✅ | 18.4 | 0.5790 | 20.90 |
| TorchScript | ✅ | 36.5 | 0.5781 | 21.22 |
| ONNX | ✅ | 36.3 | 0.5781 | 25.07 |
| OpenVINO | ✅ | 36.4 | 0.5810 | 122.98 |
| TensorRT (FP32) | ✅ | 37.9 | 0.5783 | 13.02 |
| TensorRT (FP16) | ✅ | 21.8 | 0.5779 | 6.93 |
| TensorRT (INT8) | ✅ | 12.2 | 0.5735 | 5.08 |
| TF SavedModel | ✅ | 91.0 | 0.5782 | 250.65 |
| TF GraphDef | ✅ | 36.4 | 0.5782 | 252.69 |
| TF Lite | ✅ | 36.3 | 0.5782 | 998.68 |
| MNN | ✅ | 36.2 | 0.5781 | 188.01 |
| NCNN | ✅ | 36.2 | 0.5784 | 101.37 |

YOLO11m

| Format | Status | Size on disk (MB) | mAP50-95(B) | Inference time (ms/im) |
|---|---|---|---|---|
| PyTorch | ✅ | 38.8 | 0.6266 | 46.50 |
| TorchScript | ✅ | 77.3 | 0.6307 | 47.95 |
| ONNX | ✅ | 76.9 | 0.6307 | 53.06 |
| OpenVINO | ✅ | 77.1 | 0.6284 | 301.63 |
| TensorRT (FP32) | ✅ | 78.8 | 0.6305 | 27.86 |
| TensorRT (FP16) | ✅ | 41.7 | 0.6309 | 13.50 |
| TensorRT (INT8) | ✅ | 23.2 | 0.6291 | 9.12 |
| TF SavedModel | ✅ | 192.7 | 0.6307 | 622.24 |
| TF GraphDef | ✅ | 77.1 | 0.6307 | 628.74 |
| TF Lite | ✅ | 77.0 | 0.6307 | 2997.93 |
| MNN | ✅ | 76.8 | 0.6299 | 509.96 |
| NCNN | ✅ | 76.8 | 0.6284 | 292.99 |

YOLO11l

| Format | Status | Size on disk (MB) | mAP50-95(B) | Inference time (ms/im) |
|---|---|---|---|---|
| PyTorch | ✅ | 49.0 | 0.6364 | 56.50 |
| TorchScript | ✅ | 97.6 | 0.6409 | 62.51 |
| ONNX | ✅ | 97.0 | 0.6399 | 68.35 |
| OpenVINO | ✅ | 97.3 | 0.6378 | 376.03 |
| TensorRT (FP32) | ✅ | 99.2 | 0.6396 | 35.59 |
| TensorRT (FP16) | ✅ | 52.1 | 0.6361 | 17.48 |
| TensorRT (INT8) | ✅ | 30.9 | 0.6207 | 11.87 |
| TF SavedModel | ✅ | 243.1 | 0.6409 | 807.47 |
| TF GraphDef | ✅ | 97.2 | 0.6409 | 822.88 |
| TF Lite | ✅ | 97.1 | 0.6409 | 3792.23 |
| MNN | ✅ | 96.9 | 0.6372 | 631.16 |
| NCNN | ✅ | 96.9 | 0.6364 | 350.46 |

YOLO11x

| Format | Status | Size on disk (MB) | mAP50-95(B) | Inference time (ms/im) |
|---|---|---|---|---|
| PyTorch | ✅ | 109.3 | 0.7005 | 90.00 |
| TorchScript | ✅ | 218.1 | 0.6901 | 113.40 |
| ONNX | ✅ | 217.5 | 0.6901 | 122.94 |
| OpenVINO | ✅ | 217.8 | 0.6876 | 713.1 |
| TensorRT (FP32) | ✅ | 219.5 | 0.6904 | 66.93 |
| TensorRT (FP16) | ✅ | 112.2 | 0.6892 | 32.58 |
| TensorRT (INT8) | ✅ | 61.5 | 0.6612 | 19.90 |
| TF SavedModel | ✅ | 544.3 | 0.6900 | 1605.4 |
| TF GraphDef | ✅ | 217.8 | 0.6900 | 2961.8 |
| TF Lite | ✅ | 217.6 | 0.6900 | 8234.86 |
| MNN | ✅ | 217.3 | 0.6893 | 1254.18 |
| NCNN | ✅ | 217.3 | 0.6849 | 725.50 |

Benchmarked with Ultralytics 8.3.157

Note

Inference time does not include pre/post-processing.

NVIDIA Jetson Orin NX 16GB

Performance

YOLO11n

| Format | Status | Size on disk (MB) | mAP50-95(B) | Inference time (ms/im) |
|---|---|---|---|---|
| PyTorch | ✅ | 5.4 | 0.5101 | 12.90 |
| TorchScript | ✅ | 10.5 | 0.5082 | 13.17 |
| ONNX | ✅ | 10.2 | 0.5081 | 15.43 |
| OpenVINO | ✅ | 10.4 | 0.5058 | 39.80 |
| TensorRT (FP32) | ✅ | 11.8 | 0.5081 | 7.94 |
| TensorRT (FP16) | ✅ | 8.1 | 0.5085 | 4.73 |
| TensorRT (INT8) | ✅ | 5.4 | 0.4786 | 3.90 |
| TF SavedModel | ✅ | 25.9 | 0.5077 | 88.48 |
| TF GraphDef | ✅ | 10.3 | 0.5077 | 86.67 |
| TF Lite | ✅ | 10.3 | 0.5077 | 302.55 |
| MNN | ✅ | 10.1 | 0.5059 | 52.73 |
| NCNN | ✅ | 10.2 | 0.5031 | 32.04 |

YOLO11s

| Format | Status | Size on disk (MB) | mAP50-95(B) | Inference time (ms/im) |
|---|---|---|---|---|
| PyTorch | ✅ | 18.4 | 0.5790 | 21.70 |
| TorchScript | ✅ | 36.5 | 0.5781 | 22.71 |
| ONNX | ✅ | 36.3 | 0.5781 | 26.49 |
| OpenVINO | ✅ | 36.4 | 0.5810 | 84.73 |
| TensorRT (FP32) | ✅ | 37.8 | 0.5783 | 13.77 |
| TensorRT (FP16) | ✅ | 21.2 | 0.5796 | 7.31 |
| TensorRT (INT8) | ✅ | 12.0 | 0.5735 | 5.33 |
| TF SavedModel | ✅ | 91.0 | 0.5782 | 185.06 |
| TF GraphDef | ✅ | 36.4 | 0.5782 | 186.45 |
| TF Lite | ✅ | 36.3 | 0.5782 | 882.58 |
| MNN | ✅ | 36.2 | 0.5775 | 126.36 |
| NCNN | ✅ | 36.2 | 0.5784 | 66.73 |

YOLO11m

| Format | Status | Size on disk (MB) | mAP50-95(B) | Inference time (ms/im) |
|---|---|---|---|---|
| PyTorch | ✅ | 38.8 | 0.6266 | 45.00 |
| TorchScript | ✅ | 77.3 | 0.6307 | 51.87 |
| ONNX | ✅ | 76.9 | 0.6307 | 56.00 |
| OpenVINO | ✅ | 77.1 | 0.6284 | 202.69 |
| TensorRT (FP32) | ✅ | 78.7 | 0.6305 | 30.38 |
| TensorRT (FP16) | ✅ | 41.8 | 0.6302 | 14.48 |
| TensorRT (INT8) | ✅ | 23.2 | 0.6291 | 9.74 |
| TF SavedModel | ✅ | 192.7 | 0.6307 | 445.58 |
| TF GraphDef | ✅ | 77.1 | 0.6307 | 460.94 |
| TF Lite | ✅ | 77.0 | 0.6307 | 2653.65 |
| MNN | ✅ | 76.8 | 0.6308 | 339.38 |
| NCNN | ✅ | 76.8 | 0.6284 | 187.64 |

YOLO11l

| Format | Status | Size on disk (MB) | mAP50-95(B) | Inference time (ms/im) |
|---|---|---|---|---|
| PyTorch | ✅ | 49.0 | 0.6364 | 56.60 |
| TorchScript | ✅ | 97.6 | 0.6409 | 66.72 |
| ONNX | ✅ | 97.0 | 0.6399 | 71.92 |
| OpenVINO | ✅ | 97.3 | 0.6378 | 254.17 |
| TensorRT (FP32) | ✅ | 99.2 | 0.6406 | 38.89 |
| TensorRT (FP16) | ✅ | 51.9 | 0.6363 | 18.59 |
| TensorRT (INT8) | ✅ | 30.9 | 0.6207 | 12.60 |
| TF SavedModel | ✅ | 243.1 | 0.6409 | 575.98 |
| TF GraphDef | ✅ | 97.2 | 0.6409 | 583.79 |
| TF Lite | ✅ | 97.1 | 0.6409 | 3353.41 |
| MNN | ✅ | 96.9 | 0.6367 | 421.33 |
| NCNN | ✅ | 96.9 | 0.6364 | 228.26 |

YOLO11x

| Format | Status | Size on disk (MB) | mAP50-95(B) | Inference time (ms/im) |
|---|---|---|---|---|
| PyTorch | ✅ | 109.3 | 0.7005 | 98.50 |
| TorchScript | ✅ | 218.1 | 0.6901 | 123.03 |
| ONNX | ✅ | 217.5 | 0.6901 | 129.55 |
| OpenVINO | ✅ | 217.8 | 0.6876 | 483.44 |
| TensorRT (FP32) | ✅ | 219.6 | 0.6904 | 75.92 |
| TensorRT (FP16) | ✅ | 112.1 | 0.6885 | 35.78 |
| TensorRT (INT8) | ✅ | 61.6 | 0.6592 | 21.60 |
| TF SavedModel | ✅ | 544.3 | 0.6900 | 1120.43 |
| TF GraphDef | ✅ | 217.7 | 0.6900 | 1172.35 |
| TF Lite | ✅ | 217.6 | 0.6900 | 7283.63 |
| MNN | ✅ | 217.3 | 0.6877 | 840.16 |
| NCNN | ✅ | 217.3 | 0.6849 | 474.41 |

Benchmarked with Ultralytics 8.3.157

Note

Inference time does not include pre/post-processing.

Explore more benchmarking efforts by Seeed Studio running on different versions of NVIDIA Jetson hardware.

Reproduce Our Results

To reproduce the above Ultralytics benchmarks on all export formats, run this code:

Example

Python

from ultralytics import YOLO

# Load a YOLO11n PyTorch model
model = YOLO("yolo11n.pt")

# Benchmark YOLO11n speed and accuracy on the COCO128 dataset for all export formats
results = model.benchmark(data="coco128.yaml", imgsz=640)

CLI

# Benchmark YOLO11n speed and accuracy on the COCO128 dataset for all export formats
yolo benchmark model=yolo11n.pt data=coco128.yaml imgsz=640

Note that benchmarking results might vary based on the exact hardware and software configuration of a system, as well as the current workload of the system at the time the benchmarks are run. For the most reliable results, use a dataset with a large number of images, i.e. data='coco.yaml' (5000 val images).
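For example, a sketch of benchmarking against the full COCO validation set on the GPU (this downloads the dataset and takes considerably longer than COCO128):

from ultralytics import YOLO

model = YOLO("yolo11n.pt")

# Benchmark on the full COCO val set (5000 images) using the GPU
results = model.benchmark(data="coco.yaml", imgsz=640, device=0)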

Best Practices when using NVIDIA Jetson

When using NVIDIA Jetson, there are a few best practices to follow in order to enable maximum performance on the NVIDIA Jetson running YOLO11.

  1. Enable MAX Power Mode

    Enabling MAX Power Mode on the Jetson will make sure all CPU and GPU cores are turned on.

    sudo nvpmodel -m 0
    
  2. Enable Jetson Clocks

    Enabling Jetson Clocks will make sure all CPU and GPU cores are clocked at their maximum frequency.

    sudo jetson_clocks
    
  3. Install Jetson Stats Application

    We can use the Jetson Stats application to monitor the temperatures of the system components and check other system details, such as CPU, GPU, and RAM utilization, change power modes, set to max clocks, and check JetPack information.

    sudo apt update
    sudo pip install jetson-stats
    sudo reboot
    jtop
    

Jetson Stats
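Besides the interactive jtop terminal UI, jetson-stats also exposes a Python API that can be used to log system metrics programmatically. A minimal sketch, assuming the installed jetson-stats version provides the jtop context manager:

from jtop import jtop  # Python API shipped with the jetson-stats package

with jtop() as jetson:
    while jetson.ok():
        # jetson.stats is a dict with CPU/GPU/RAM utilization, temperatures and power readings
        print(jetson.stats)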

Next Steps

Congratulations on successfully setting up YOLO11 on your NVIDIA Jetson! For further learning and support, visit more guides at Ultralytics YOLO11 Docs!

FAQ

How do I deploy Ultralytics YOLO11 on NVIDIA Jetson devices?

Deploying Ultralytics YOLO11 on NVIDIA Jetson devices is a straightforward process. First, flash your Jetson device with the NVIDIA JetPack SDK. Then, either use a pre-built Docker image for quick setup or manually install the required packages. Detailed steps for each approach can be found in sections Quick Start with Docker and Start with Native Installation.

What performance benchmarks can I expect from YOLO11 models on NVIDIA Jetson devices?

YOLO11 models have been benchmarked on various NVIDIA Jetson devices showing significant performance improvements. For example, the TensorRT format delivers the best inference performance. The table in the Detailed Comparison Tables section provides a comprehensive view of performance metrics like mAP50-95 and inference time across different model formats.

Why should I use TensorRT for deploying YOLO11 on NVIDIA Jetson?

TensorRT is highly recommended for deploying YOLO11 models on NVIDIA Jetson due to its optimal performance. It accelerates inference by leveraging the Jetson's GPU capabilities, ensuring maximum efficiency and speed. Learn more about how to convert to TensorRT and run inference in the Use TensorRT on NVIDIA Jetson section.

How can I install PyTorch and Torchvision on NVIDIA Jetson?

To install PyTorch and Torchvision on NVIDIA Jetson, first uninstall any existing versions that may have been installed via pip. Then, manually install the compatible PyTorch and Torchvision versions for the Jetson's ARM64 architecture. Detailed instructions for this process are provided in the Install PyTorch and Torchvision section.

What are the best practices for maximizing performance on NVIDIA Jetson when using YOLO11?

To maximize performance on NVIDIA Jetson with YOLO11, follow these best practices:

  1. Enable MAX Power Mode to utilize all CPU and GPU cores.
  2. Enable Jetson Clocks to run all cores at their maximum frequency.
  3. Install the Jetson Stats application for monitoring system metrics.

For commands and additional details, refer to the Best Practices when using NVIDIA Jetson section.


