From 316434cd30dc76770537d65844f1ec2e08e6314b Mon Sep 17 00:00:00 2001
From: Lakshantha Dissanayake
Date: Thu, 9 Jan 2025 01:19:57 -0800
Subject: [PATCH] Update Benchmarks for NVIDIA DeepStream running on NVIDIA Jetson (#18603)

---
 docs/en/guides/deepstream-nvidia-jetson.md | 56 +++++++++++++++++++---
 1 file changed, 49 insertions(+), 7 deletions(-)

diff --git a/docs/en/guides/deepstream-nvidia-jetson.md b/docs/en/guides/deepstream-nvidia-jetson.md
index 678d1b11..1170eddc 100644
--- a/docs/en/guides/deepstream-nvidia-jetson.md
+++ b/docs/en/guides/deepstream-nvidia-jetson.md
@@ -336,15 +336,57 @@ deepstream-app -c deepstream_app_config.txt
 
 ## Benchmark Results
 
-The following table summarizes how YOLO11s models perform at different TensorRT precision levels with an input size of 640x640 on NVIDIA Jetson Orin NX 16GB.
+The following benchmarks summarize how YOLO11 models perform at different TensorRT precision levels with an input size of 640x640 on NVIDIA Jetson Orin NX 16GB.
 
-| Model Name | Precision | Inference Time (ms/im) | FPS  |
-| ---------- | --------- | ---------------------- | ---- |
-| YOLO11s    | FP32      | 14.6                   | 68.5 |
-|            | FP16      | 7.94                   | 126  |
-|            | INT8      | 5.95                   | 168  |
+### Comparison Chart
 
-### Acknowledgements
+<!-- Jetson DeepStream Benchmarks Chart -->
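+
+For a rough cross-check of these numbers outside of DeepStream, a TensorRT engine can also be exported and timed with the Ultralytics Python API. The snippet below is a minimal sketch, assuming the `ultralytics` package is installed on the Jetson; the model name, test image, and export settings are illustrative, and it measures plain TensorRT inference rather than the full DeepStream pipeline.
+
+```python
+from ultralytics import YOLO
+
+# Export a YOLO11 model to a TensorRT engine on the Jetson (FP16 shown here;
+# for INT8, pass int8=True together with a calibration `data` YAML instead)
+model = YOLO("yolo11n.pt")
+model.export(format="engine", half=True, imgsz=640, device=0)  # creates yolo11n.engine
+
+# Load the exported engine and time a prediction
+trt_model = YOLO("yolo11n.engine")
+results = trt_model("https://ultralytics.com/images/bus.jpg", imgsz=640)
+
+print(results[0].speed["inference"])  # TensorRT inference time in ms per image
+print(1000 / results[0].speed["inference"])  # approximate FPS
+```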
+
+### Detailed Comparison Table
+
+!!! performance
+
+    === "YOLO11n"
+
+        | Format          | Status | Inference time (ms/im) |
+        |-----------------|--------|------------------------|
+        | TensorRT (FP32) | ✅     | 8.64                   |
+        | TensorRT (FP16) | ✅     | 5.27                   |
+        | TensorRT (INT8) | ✅     | 4.54                   |
+
+    === "YOLO11s"
+
+        | Format          | Status | Inference time (ms/im) |
+        |-----------------|--------|------------------------|
+        | TensorRT (FP32) | ✅     | 14.53                  |
+        | TensorRT (FP16) | ✅     | 7.91                   |
+        | TensorRT (INT8) | ✅     | 6.05                   |
+
+    === "YOLO11m"
+
+        | Format          | Status | Inference time (ms/im) |
+        |-----------------|--------|------------------------|
+        | TensorRT (FP32) | ✅     | 32.05                  |
+        | TensorRT (FP16) | ✅     | 15.55                  |
+        | TensorRT (INT8) | ✅     | 10.43                  |
+
+    === "YOLO11l"
+
+        | Format          | Status | Inference time (ms/im) |
+        |-----------------|--------|------------------------|
+        | TensorRT (FP32) | ✅     | 39.68                  |
+        | TensorRT (FP16) | ✅     | 19.88                  |
+        | TensorRT (INT8) | ✅     | 13.64                  |
+
+    === "YOLO11x"
+
+        | Format          | Status | Inference time (ms/im) |
+        |-----------------|--------|------------------------|
+        | TensorRT (FP32) | ✅     | 80.65                  |
+        | TensorRT (FP16) | ✅     | 39.06                  |
+        | TensorRT (INT8) | ✅     | 22.83                  |
+
+## Acknowledgements
 
 This guide was initially created by our friends at Seeed Studio, Lakshantha and Elaine.