Docs spelling and grammar fixes (#13307)
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: RainRat <rainrat78@yahoo.ca>
This commit is contained in:
parent bddea17bf3
commit 064e2fd282
48 changed files with 179 additions and 172 deletions
@@ -73,7 +73,7 @@ After this is done, skip to [Use TensorRT on NVIDIA Jetson section](#use-tensorr
#### Install Ultralytics Package
-Here we will install ultralyics package on the Jetson with optional dependencies so that we can export the PyTorch models to other different formats. We will mainly focus on [NVIDIA TensorRT exports](../integrations/tensorrt.md) because TensoRT will make sure we can get the maximum performance out of the Jetson devices.
+Here we will install Ultralytics package on the Jetson with optional dependencies so that we can export the PyTorch models to other different formats. We will mainly focus on [NVIDIA TensorRT exports](../integrations/tensorrt.md) because TensorRT will make sure we can get the maximum performance out of the Jetson devices.
1. Update packages list, install pip and upgrade to latest
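As a minimal sketch, assuming the optional `export` extra provides the exporter dependencies, the setup described above might look roughly like this:

```bash
# Rough sketch of the steps outlined above
sudo apt update                   # update the packages list
sudo apt install -y python3-pip   # install pip
pip install -U pip                # upgrade pip to the latest version
# Assumption: the "export" extra pulls in the optional export dependencies
pip install "ultralytics[export]"
```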
@@ -129,7 +129,7 @@ Visit the [PyTorch for Jetson page](https://forums.developer.nvidia.com/t/pytorc
The [onnxruntime-gpu](https://pypi.org/project/onnxruntime-gpu/) package hosted in PyPI does not have `aarch64` binaries for the Jetson. So we need to manually install this package. This package is needed for some of the exports.
-All different `onnxruntime-gpu` packages corresponsing to different JetPack and Python versions are listed [here](https://elinux.org/Jetson_Zoo#ONNX_Runtime). However, here we will download and install `onnxruntime-gpu 1.17.0` with `Python3.8` support for the JetPack we are using for this guide.
+All different `onnxruntime-gpu` packages corresponding to different JetPack and Python versions are listed [here](https://elinux.org/Jetson_Zoo#ONNX_Runtime). However, here we will download and install `onnxruntime-gpu 1.17.0` with `Python3.8` support for the JetPack we are using for this guide.
```bash
wget https://nvidia.box.com/shared/static/zostg6agm00fb6t5uisw51qi6kpcuwzd.whl -O onnxruntime_gpu-1.17.0-cp38-cp38-linux_aarch64.whl
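# Rough sketch: once downloaded, the wheel would then typically be installed with pip
# (the filename is assumed to match the -O argument above)
pip install onnxruntime_gpu-1.17.0-cp38-cp38-linux_aarch64.whl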
@@ -185,11 +185,11 @@ The YOLOv8n model in PyTorch format is converted to TensorRT to run inference wi
## NVIDIA Jetson Orin YOLOv8 Benchmarks
-YOLOv8 benchmarks were run by the Ultralytics team on 10 different model formats measuring speed and accuracy: PyTorch, TorchScript, ONNX, OpenVINO, TensorRT, TF SavedModel, TF Graphdef, TF Lite, PaddlePaddle, NCNN. Benchmarks were run on Seeed Studio reComputer J4012 powered by Jetson Orin NX 16GB device at FP32 precision with default input image size of 640.
+YOLOv8 benchmarks were run by the Ultralytics team on 10 different model formats measuring speed and accuracy: PyTorch, TorchScript, ONNX, OpenVINO, TensorRT, TF SavedModel, TF GraphDef, TF Lite, PaddlePaddle, NCNN. Benchmarks were run on Seeed Studio reComputer J4012 powered by Jetson Orin NX 16GB device at FP32 precision with default input image size of 640.
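A comparable benchmark run might look like the following minimal sketch, assuming the `yolo benchmark` CLI mode is available and a COCO-style dataset such as `coco8.yaml` is used:

```bash
# Benchmark a YOLOv8n model across export formats at FP32 (half=False) with imgsz 640
yolo benchmark model=yolov8n.pt data=coco8.yaml imgsz=640 half=False device=0
```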
### Comparison Chart
-Eventhough all model exports are working with NVIDIA Jetson, we have only included **PyTorch, TorchScript, TensorRT** for the comparison chart below because, they make use of the GPU on the Jetson and are guaranteed to produce the best results. All the other exports only utilize the CPU and the performance is not as good as the above three. You can find benchmarks for all exports in the section after this chart.
+Even though all model exports are working with NVIDIA Jetson, we have only included **PyTorch, TorchScript, TensorRT** for the comparison chart below because they make use of the GPU on the Jetson and are guaranteed to produce the best results. All the other exports only utilize the CPU and the performance is not as good as the above three. You can find benchmarks for all exports in the section after this chart.
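For reference, a minimal sketch of how such a TensorRT (GPU) export might be produced and run, assuming the standard `yolo export` and `yolo predict` CLI:

```bash
# Export the PyTorch model to a TensorRT engine that runs on the Jetson GPU
yolo export model=yolov8n.pt format=engine device=0
# Run inference with the exported engine
yolo predict model=yolov8n.engine source='https://ultralytics.com/images/bus.jpg'
```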
<div style="text-align: center;">
<img width="800" src="https://github.com/ultralytics/ultralytics/assets/20147381/202950fa-c24a-43ec-90c8-4d7b6a6c406e" alt="NVIDIA Jetson Ecosystem">
@@ -197,7 +197,7 @@ Eventhough all model exports are working with NVIDIA Jetson, we have only includ
### Detailed Comparison Table
-The below table represents the benchmark results for five different models (YOLOv8n, YOLOv8s, YOLOv8m, YOLOv8l, YOLOv8x) across ten different formats (PyTorch, TorchScript, ONNX, OpenVINO, TensorRT, TF SavedModel, TF Graphdef, TF Lite, PaddlePaddle, NCNN), giving us the status, size, mAP50-95(B) metric, and inference time for each combination.
+The below table represents the benchmark results for five different models (YOLOv8n, YOLOv8s, YOLOv8m, YOLOv8l, YOLOv8x) across ten different formats (PyTorch, TorchScript, ONNX, OpenVINO, TensorRT, TF SavedModel, TF GraphDef, TF Lite, PaddlePaddle, NCNN), giving us the status, size, mAP50-95(B) metric, and inference time for each combination.
!!! Performance