
Quick Start Guide: NVIDIA Jetson with Ultralytics YOLOv8

This comprehensive guide provides a detailed walkthrough for deploying Ultralytics YOLOv8 on NVIDIA Jetson devices. Additionally, it showcases performance benchmarks to demonstrate the capabilities of YOLOv8 on these small and powerful devices.

NVIDIA Jetson Ecosystem

Note

This guide has been tested with both the Seeed Studio reComputer J4012, which is based on NVIDIA Jetson Orin NX 16GB running the latest stable JetPack release JP5.1.3, and the Seeed Studio reComputer J1020 v2, which is based on NVIDIA Jetson Nano 4GB running JetPack release JP4.6.1. It is expected to work across all of the NVIDIA Jetson hardware lineup, including the latest and legacy devices.

What is NVIDIA Jetson?

NVIDIA Jetson is a series of embedded computing boards designed to bring accelerated AI (artificial intelligence) computing to edge devices. These compact and powerful devices are built around NVIDIA's GPU architecture and can run complex AI algorithms and deep learning models directly on the device, without needing cloud computing resources. Jetson boards are often used in robotics, autonomous vehicles, industrial automation, and other applications where AI inference needs to be performed locally with low latency and high efficiency. Additionally, these boards are based on the ARM64 architecture and run on lower power compared to traditional GPU computing devices.

NVIDIA Jetson Series Comparison

Jetson Orin is the latest iteration of the NVIDIA Jetson family, based on the NVIDIA Ampere architecture, and brings drastically improved AI performance compared to previous generations. The table below compares a few of the Jetson devices in the ecosystem.

|                   | Jetson AGX Orin 64GB | Jetson Orin NX 16GB | Jetson Orin Nano 8GB | Jetson AGX Xavier | Jetson Xavier NX | Jetson Nano |
|-------------------|----------------------|---------------------|----------------------|-------------------|------------------|-------------|
| AI Performance    | 275 TOPS | 100 TOPS | 40 TOPS | 32 TOPS | 21 TOPS | 472 GFLOPS |
| GPU               | 2048-core NVIDIA Ampere architecture GPU with 64 Tensor Cores | 1024-core NVIDIA Ampere architecture GPU with 32 Tensor Cores | 1024-core NVIDIA Ampere architecture GPU with 32 Tensor Cores | 512-core NVIDIA Volta architecture GPU with 64 Tensor Cores | 384-core NVIDIA Volta™ architecture GPU with 48 Tensor Cores | 128-core NVIDIA Maxwell™ architecture GPU |
| GPU Max Frequency | 1.3 GHz | 918 MHz | 625 MHz | 1377 MHz | 1100 MHz | 921 MHz |
| CPU               | 12-core NVIDIA Arm® Cortex A78AE v8.2 64-bit CPU, 3MB L2 + 6MB L3 | 8-core NVIDIA Arm® Cortex A78AE v8.2 64-bit CPU, 2MB L2 + 4MB L3 | 6-core Arm® Cortex®-A78AE v8.2 64-bit CPU, 1.5MB L2 + 4MB L3 | 8-core NVIDIA Carmel Arm®v8.2 64-bit CPU, 8MB L2 + 4MB L3 | 6-core NVIDIA Carmel Arm®v8.2 64-bit CPU, 6MB L2 + 4MB L3 | Quad-core Arm® Cortex®-A57 MPCore processor |
| CPU Max Frequency | 2.2 GHz | 2.0 GHz | 1.5 GHz | 2.2 GHz | 1.9 GHz | 1.43 GHz |
| Memory            | 64GB 256-bit LPDDR5, 204.8GB/s | 16GB 128-bit LPDDR5, 102.4GB/s | 8GB 128-bit LPDDR5, 68GB/s | 32GB 256-bit LPDDR4x, 136.5GB/s | 8GB 128-bit LPDDR4x, 59.7GB/s | 4GB 64-bit LPDDR4, 25.6GB/s |

For a more detailed comparison table, please visit the Technical Specifications section of the official NVIDIA Jetson page.

What is NVIDIA JetPack?

The NVIDIA JetPack SDK that powers the Jetson modules is the most comprehensive solution, providing a full development environment for building end-to-end accelerated AI applications and shortening time to market. JetPack includes Jetson Linux with bootloader, Linux kernel, Ubuntu desktop environment, and a complete set of libraries for acceleration of GPU computing, multimedia, graphics, and computer vision. It also includes samples, documentation, and developer tools for both the host computer and the developer kit, and supports higher-level SDKs such as DeepStream for streaming video analytics, Isaac for robotics, and Riva for conversational AI.

Flash JetPack to NVIDIA Jetson

The first step after getting your hands on an NVIDIA Jetson device is to flash NVIDIA JetPack to it. There are several different ways of flashing NVIDIA Jetson devices.

  1. If you own an official NVIDIA Development Kit such as the Jetson Orin Nano Developer Kit, you can download an image and prepare an SD card with JetPack for booting the device.
  2. If you own any other NVIDIA Development Kit, you can flash JetPack to the device using SDK Manager.
  3. If you own a Seeed Studio reComputer J4012 device, you can flash JetPack to the included SSD, and if you own a Seeed Studio reComputer J1020 v2 device, you can flash JetPack to the eMMC/SSD.
  4. If you own any other third party device powered by the NVIDIA Jetson module, it is recommended to follow command-line flashing.

Note

For methods 3 and 4 above, after flashing the system and booting the device, please enter `sudo apt update && sudo apt install nvidia-jetpack -y` on the device terminal to install all the remaining JetPack components needed.

Run on JetPack 5.x

If you own a Jetson Xavier NX, AGX Xavier, AGX Orin, Orin Nano or Orin NX which supports JetPack 5.x, you can continue to follow this guide. However, if you have a legacy device such as Jetson Nano, please skip to Run on JetPack 4.x.
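If you are unsure which JetPack release is flashed on a board, the L4T release string in `/etc/nv_tegra_release` can be parsed. The sketch below is illustrative: the sample line format and the small L4T-to-JetPack mapping are assumptions covering only the releases mentioned in this guide; consult NVIDIA's release notes for the authoritative list.

```python
import re

# Partial L4T-to-JetPack mapping (assumed subset; see NVIDIA release notes)
L4T_TO_JETPACK = {"35.5.0": "5.1.3", "35.4.1": "5.1.2", "32.7.1": "4.6.1"}

def l4t_release(line):
    """Extract the L4T version from an /etc/nv_tegra_release-style line, or None."""
    m = re.search(r"R(\d+).*?REVISION:\s*([\d.]+)", line)
    return f"{m.group(1)}.{m.group(2)}" if m else None

# Example line in the general format used on Jetson devices (other fields elided)
sample = "# R35 (release), REVISION: 5.0, GCID: 0, BOARD: t186ref"
release = l4t_release(sample)
print("L4T", release, "-> JetPack", L4T_TO_JETPACK.get(release, "unknown"))
```

An L4T major version of 35 or 36 indicates a JetPack 5.x/6.x board, while 32 indicates a legacy JetPack 4.x device such as the Jetson Nano.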

Install Ultralytics

There are two ways of installing the Ultralytics package on NVIDIA Jetson to build your next Computer Vision project. You can use either of them.

Start with Docker

The fastest way to get started with Ultralytics YOLOv8 on NVIDIA Jetson is to run it with a pre-built docker image for Jetson.

Execute the command below to pull the Docker container and run it on Jetson. This is based on the l4t-pytorch docker image, which contains PyTorch and Torchvision in a Python3 environment.

t=ultralytics/ultralytics:latest-jetson-jetpack5 && sudo docker pull $t && sudo docker run -it --ipc=host --runtime=nvidia $t

After this is done, skip to the Use TensorRT on NVIDIA Jetson section.

Start without Docker

Ultralytics Paketini Y├╝kleyin

Here we will install the Ultralytics package on the Jetson with optional dependencies so that we can export the PyTorch models to other formats. We will mainly focus on NVIDIA TensorRT exports, because TensorRT will make sure we can get the maximum performance out of the Jetson devices.

  1. Update the packages list, install pip, and upgrade it to the latest version

    sudo apt update
    sudo apt install python3-pip -y
    pip install -U pip
    
  2. Install the ultralytics pip package with optional dependencies

    pip install ultralytics[export]
    
  3. Reboot the device

    sudo reboot
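
Once the device is back up, a quick stdlib-only sanity check can confirm what pip actually installed. This is a minimal sketch; the package names checked are the ones this guide installs (torch and torchvision come in as ultralytics dependencies).

```python
from importlib import metadata

def installed_version(package):
    """Return the installed version string of a package, or None if absent."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None

for pkg in ("ultralytics", "torch", "torchvision"):
    print(pkg, installed_version(pkg) or "not installed")
```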
    
PyTorch ve Torchvision'─▒ y├╝kleyin

The above ultralytics installation will install the Torch and Torchvision packages. However, these 2 packages installed via pip are not compatible to run on the Jetson platform, which is based on the ARM64 architecture. Therefore, we need to manually install a pre-built PyTorch pip wheel and compile/install Torchvision from source.

  1. Uninstall the currently installed PyTorch and Torchvision

    pip uninstall torch torchvision
    
  2. Install PyTorch 2.1.0 according to JP5.1.3

    sudo apt-get install -y libopenblas-base libopenmpi-dev
    wget https://developer.download.nvidia.com/compute/redist/jp/v512/pytorch/torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl -O torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl
    pip install torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl
    
  3. Install Torchvision v0.16.2 according to PyTorch v2.1.0

    sudo apt install -y libjpeg-dev zlib1g-dev
    git clone https://github.com/pytorch/vision torchvision
    cd torchvision
    git checkout v0.16.2
    python3 setup.py install --user
    

Visit the PyTorch for Jetson page to access all the different versions of PyTorch for different JetPack versions. For a more detailed list of PyTorch and Torchvision compatibility, visit the PyTorch and Torchvision compatibility page.
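The pairing used above (PyTorch 2.1.0 with Torchvision 0.16.2) follows the upstream compatibility matrix. As a sanity check, the pairing rule can be sketched with a small helper; the table here is a partial subset hard-coded for illustration only, so consult the compatibility page for the full, authoritative list.

```python
# Partial PyTorch-to-Torchvision series compatibility (illustrative subset)
TORCH_TO_VISION = {"2.1": "0.16", "2.0": "0.15", "1.13": "0.14"}

def compatible(torch_version, vision_version):
    """Check a torch/torchvision pairing against the (partial) table above."""
    torch_series = ".".join(torch_version.split(".")[:2])
    vision_series = ".".join(vision_version.split(".")[:2])
    return TORCH_TO_VISION.get(torch_series) == vision_series

print(compatible("2.1.0", "0.16.2"))  # the pairing used in this guide
```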

Install onnxruntime-gpu

The onnxruntime-gpu package hosted on PyPI does not have aarch64 binaries for the Jetson, so we need to install this package manually. This package is needed for some of the exports.

All the different onnxruntime-gpu packages corresponding to different JetPack and Python versions are listed here. However, here we will download and install onnxruntime-gpu 1.17.0 with Python3.8 support for the JetPack we are using for this guide.

wget https://nvidia.box.com/shared/static/zostg6agm00fb6t5uisw51qi6kpcuwzd.whl -O onnxruntime_gpu-1.17.0-cp38-cp38-linux_aarch64.whl
pip install onnxruntime_gpu-1.17.0-cp38-cp38-linux_aarch64.whl

Note

onnxruntime-gpu will automatically revert the numpy version to the latest. So we need to reinstall numpy to 1.23.5 to fix an issue by running:

pip install numpy==1.23.5

Run on JetPack 4.x

Here we support running Ultralytics on legacy hardware such as the Jetson Nano. Currently we use Docker to achieve this.

Execute the command below to pull the Docker container and run it on Jetson. This is based on the l4t-cuda docker image, which contains CUDA in an L4T environment.

t=ultralytics/ultralytics:latest-jetson-jetpack4 && sudo docker pull $t && sudo docker run -it --ipc=host --runtime=nvidia $t
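
The two Docker commands in this guide differ only in the image tag. The relationship can be sketched as a small helper that maps the JetPack major version to the matching command line; the tags are the ones used above, and the helper name is purely illustrative.

```python
# Ultralytics Jetson image tags used in this guide, keyed by JetPack major version
JETSON_TAGS = {
    5: "ultralytics/ultralytics:latest-jetson-jetpack5",  # Orin / Xavier class
    4: "ultralytics/ultralytics:latest-jetson-jetpack4",  # legacy, e.g. Jetson Nano
}

def docker_pull_command(jetpack_major):
    """Build the pull-and-run command for the matching Jetson image tag."""
    tag = JETSON_TAGS[jetpack_major]
    return f"sudo docker pull {tag} && sudo docker run -it --ipc=host --runtime=nvidia {tag}"

print(docker_pull_command(4))
```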

Use TensorRT on NVIDIA Jetson

Out of all the model export formats supported by Ultralytics, TensorRT delivers the best inference performance when working with NVIDIA Jetson devices, and our recommendation is to use TensorRT with Jetson. We also have a detailed document on TensorRT here.

Convert the Model to TensorRT and Run Inference

The YOLOv8n model in PyTorch format is converted to TensorRT format to run inference with the exported model.

Example

Python

    from ultralytics import YOLO

    # Load a YOLOv8n PyTorch model
    model = YOLO("yolov8n.pt")

    # Export the model to TensorRT
    model.export(format="engine")  # creates 'yolov8n.engine'

    # Load the exported TensorRT model
    trt_model = YOLO("yolov8n.engine")

    # Run inference
    results = trt_model("https://ultralytics.com/images/bus.jpg")

CLI

    # Export a YOLOv8n PyTorch model to TensorRT format
    yolo export model=yolov8n.pt format=engine  # creates 'yolov8n.engine'

    # Run inference with the exported model
    yolo predict model=yolov8n.engine source='https://ultralytics.com/images/bus.jpg'

Note

Visit the Export page to access additional arguments when exporting models to different model formats.

NVIDIA Jetson Orin YOLOv8 Benchmarks

YOLOv8 benchmarks were run by the Ultralytics team on 10 different model formats measuring speed and accuracy: PyTorch, TorchScript, ONNX, OpenVINO, TensorRT, TF SavedModel, TF GraphDef, TF Lite, PaddlePaddle, NCNN. Benchmarks were run on Seeed Studio reComputer J4012 powered by Jetson Orin NX 16GB device at FP32 precision with default input image size of 640.

Comparison Chart

Even though all model exports work with NVIDIA Jetson, we have only included PyTorch, TorchScript, and TensorRT in the comparison chart below, because they make use of the GPU on the Jetson and are guaranteed to produce the best results. All the other exports only utilize the CPU, and their performance is not as good as the above three. You can find benchmarks for all exports in the section after this chart.


Detailed Comparison Table

The below table represents the benchmark results for five different models (YOLOv8n, YOLOv8s, YOLOv8m, YOLOv8l, YOLOv8x) across ten different formats (PyTorch, TorchScript, ONNX, OpenVINO, TensorRT, TF SavedModel, TF GraphDef, TF Lite, PaddlePaddle, NCNN), giving us the status, size, mAP50-95(B) metric, and inference time for each combination.

Performance

YOLOv8n

| Format | Status | Size on disk (MB) | mAP50-95(B) | Inference time (ms/im) |
|---|---|---|---|---|
| PyTorch | ✅ | 6.2 | 0.6381 | 14.3 |
| TorchScript | ✅ | 12.4 | 0.6117 | 13.3 |
| ONNX | ✅ | 12.2 | 0.6092 | 70.6 |
| OpenVINO | ✅ | 12.3 | 0.6092 | 104.2 |
| TensorRT | ✅ | 13.6 | 0.6117 | 8.9 |
| TF SavedModel | ✅ | 30.6 | 0.6092 | 141.74 |
| TF GraphDef | ✅ | 12.3 | 0.6092 | 199.93 |
| TF Lite | ✅ | 12.3 | 0.6092 | 349.18 |
| PaddlePaddle | ✅ | 24.4 | 0.6030 | 555 |
| NCNN | ✅ | 12.2 | 0.6092 | 32 |

YOLOv8s

| Format | Status | Size on disk (MB) | mAP50-95(B) | Inference time (ms/im) |
|---|---|---|---|---|
| PyTorch | ✅ | 21.5 | 0.6967 | 18 |
| TorchScript | ✅ | 43.0 | 0.7136 | 23.81 |
| ONNX | ✅ | 42.8 | 0.7136 | 185.55 |
| OpenVINO | ✅ | 42.9 | 0.7136 | 243.97 |
| TensorRT | ✅ | 44.0 | 0.7136 | 14.82 |
| TF SavedModel | ✅ | 107 | 0.7136 | 260.03 |
| TF GraphDef | ✅ | 42.8 | 0.7136 | 423.4 |
| TF Lite | ✅ | 42.8 | 0.7136 | 1046.64 |
| PaddlePaddle | ✅ | 85.5 | 0.7140 | 1464 |
| NCNN | ✅ | 42.7 | 0.7200 | 63 |

YOLOv8m

| Format | Status | Size on disk (MB) | mAP50-95(B) | Inference time (ms/im) |
|---|---|---|---|---|
| PyTorch | ✅ | 49.7 | 0.7370 | 36.4 |
| TorchScript | ✅ | 99.2 | 0.7285 | 53.58 |
| ONNX | ✅ | 99 | 0.7280 | 452.09 |
| OpenVINO | ✅ | 99.1 | 0.7280 | 544.36 |
| TensorRT | ✅ | 100.3 | 0.7285 | 33.21 |
| TF SavedModel | ✅ | 247.5 | 0.7280 | 543.65 |
| TF GraphDef | ✅ | 99 | 0.7280 | 906.63 |
| TF Lite | ✅ | 99 | 0.7280 | 2758.08 |
| PaddlePaddle | ✅ | 197.9 | 0.7280 | 3678 |
| NCNN | ✅ | 98.9 | 0.7260 | 135 |

YOLOv8l

| Format | Status | Size on disk (MB) | mAP50-95(B) | Inference time (ms/im) |
|---|---|---|---|---|
| PyTorch | ✅ | 83.7 | 0.7768 | 61.3 |
| TorchScript | ✅ | 167.2 | 0.7554 | 87.9 |
| ONNX | ✅ | 166.8 | 0.7551 | 852.29 |
| OpenVINO | ✅ | 167 | 0.7551 | 1012.6 |
| TensorRT | ✅ | 168.4 | 0.7554 | 51.23 |
| TF SavedModel | ✅ | 417.2 | 0.7551 | 990.45 |
| TF GraphDef | ✅ | 166.9 | 0.7551 | 1649.86 |
| TF Lite | ✅ | 166.9 | 0.7551 | 5652.37 |
| PaddlePaddle | ✅ | 333.6 | 0.7551 | 7114.67 |
| NCNN | ✅ | 166.8 | 0.7685 | 231.9 |

YOLOv8x

| Format | Status | Size on disk (MB) | mAP50-95(B) | Inference time (ms/im) |
|---|---|---|---|---|
| PyTorch | ✅ | 130.5 | 0.7759 | 93 |
| TorchScript | ✅ | 260.7 | 0.7472 | 135.1 |
| ONNX | ✅ | 260.4 | 0.7479 | 1296.13 |
| OpenVINO | ✅ | 260.6 | 0.7479 | 1502.15 |
| TensorRT | ✅ | 261.8 | 0.7469 | 84.53 |
| TF SavedModel | ✅ | 651.1 | 0.7479 | 1451.76 |
| TF GraphDef | ✅ | 260.5 | 0.7479 | 4029.36 |
| TF Lite | ✅ | 260.4 | 0.7479 | 8772.86 |
| PaddlePaddle | ✅ | 520.8 | 0.7479 | 10619.53 |
| NCNN | ✅ | 260.4 | 0.7646 | 376.38 |
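
To make the GPU-backed rows easier to compare, the relative speedup over native PyTorch can be computed directly from the YOLOv8n inference times reported above:

```python
# YOLOv8n inference times (ms/im), taken from the first benchmark table above
times_ms = {"PyTorch": 14.3, "TorchScript": 13.3, "TensorRT": 8.9}

# Speedup of each format relative to native PyTorch (higher is faster)
speedups = {fmt: round(times_ms["PyTorch"] / t, 2) for fmt, t in times_ms.items()}
print(speedups)  # TensorRT comes out roughly 1.6x faster than PyTorch
```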

Explore more benchmarking efforts by Seeed Studio running on different versions of NVIDIA Jetson hardware.

Reproduce Our Results

To reproduce the above Ultralytics benchmarks on all export formats, run this code:

Example

Python

    from ultralytics import YOLO

    # Load a YOLOv8n PyTorch model
    model = YOLO("yolov8n.pt")

    # Benchmark YOLOv8n speed and accuracy on the COCO8 dataset for all export formats
    results = model.benchmarks(data="coco8.yaml", imgsz=640)

CLI

    # Benchmark YOLOv8n speed and accuracy on the COCO8 dataset for all export formats
    yolo benchmark model=yolov8n.pt data=coco8.yaml imgsz=640

Note that benchmarking results might vary based on the exact hardware and software configuration of a system, as well as the current workload of the system when the benchmarks are run. For the most reliable results, use a dataset with a large number of images, i.e. `data='coco8.yaml'` (4 val images) or `data='coco.yaml'` (5000 val images).

Best Practices when using NVIDIA Jetson

When using NVIDIA Jetson, there are a couple of best practices to follow in order to get maximum performance out of the NVIDIA Jetson running YOLOv8.

  1. Enable MAX Power Mode

    Enabling MAX Power Mode on the Jetson will make sure all CPU and GPU cores are turned on.

    sudo nvpmodel -m 0
    
  2. Enable Jetson Clocks

    Enabling Jetson Clocks will make sure all CPU and GPU cores are clocked at their maximum frequency.

    sudo jetson_clocks
    
  3. Install the Jetson Stats Application

    We can use the jetson stats application to monitor the temperatures of the system components and check other system details, such as viewing CPU, GPU, and RAM utilization, changing power modes, setting to max clocks, and checking JetPack information.

    sudo apt update
    sudo pip install jetson-stats
    sudo reboot
    jtop
    

Jetson Stats
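
Besides the `jtop` terminal UI, the jetson-stats package also exposes a Python API that scripts can use for monitoring. A guarded sketch is below; the `jtop` class, `ok()` method, and `stats` dictionary come from jetson-stats, but the exact stats key (`GPU`) is an assumption that may vary by version, and off-device the import simply fails and the helper returns None.

```python
try:
    from jtop import jtop  # provided by the jetson-stats package
except ImportError:  # not on a Jetson, or jetson-stats not installed
    jtop = None

def gpu_load():
    """Return the current GPU load reading on a Jetson, or None if unavailable."""
    if jtop is None:
        return None
    with jtop() as jetson:
        if jetson.ok():
            return jetson.stats.get("GPU")  # key name is an assumption
    return None

print(gpu_load())
```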

Next Steps

Congratulations on successfully setting up YOLOv8 on your NVIDIA Jetson! For further learning and support, visit more guides at the Ultralytics YOLOv8 Docs!



Created 2024-04-02, Updated 2024-06-11
Authors: lakshanthad (3), Ahelsamahy (1), glenn-jocher (7), Burhan-Q (2)
