Add Docs models JS charts (#18905)

Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
Glenn Jocher, 2025-01-26 20:01:56 +01:00, committed by GitHub
parent a9e832b7b1
commit 8a185f6ebe
10 changed files with 137 additions and 2 deletions

@@ -36,6 +36,11 @@ The Ultralytics Python API provides pre-trained PaddlePaddle RT-DETR models with
 - RT-DETR-L: 53.0% AP on COCO val2017, 114 FPS on T4 GPU
 - RT-DETR-X: 54.8% AP on COCO val2017, 74 FPS on T4 GPU
+<script async src="https://cdn.jsdelivr.net/npm/chart.js@3.9.1/dist/chart.min.js"></script>
+<script defer src="../../javascript/benchmark.js"></script>
+<canvas id="modelComparisonChart" width="1024" height="400" active-models='["RTDETRv2"]'></canvas>
 ## Usage Examples
 This example provides simple RT-DETR training and inference examples. For full documentation on these and other [modes](../modes/index.md), see the [Predict](../modes/predict.md), [Train](../modes/train.md), [Val](../modes/val.md) and [Export](../modes/export.md) docs pages.
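The shared `../../javascript/benchmark.js` helper referenced by these embeds is not shown in this excerpt. As a rough illustration of the mechanism only, the sketch below assumes a script that reads the `active-models` attribute from the canvas and renders a Chart.js latency-versus-mAP scatter chart, emphasizing the listed models; the window `load` hook, series names, and data points are illustrative placeholders, not the actual Ultralytics implementation.

```js
// Minimal sketch of a benchmark.js-style helper (illustrative only; the real
// file is not included in this diff excerpt). Assumes Chart.js 3.x has been
// loaded by the CDN <script> tag added above.
window.addEventListener("load", () => {
  // The window load event fires after pending scripts (including the async
  // Chart.js bundle) have finished, so Chart should be defined here; guard anyway.
  const canvas = document.getElementById("modelComparisonChart");
  if (!canvas || typeof Chart === "undefined") return;

  // Models to highlight, read from the canvas attribute, e.g. '["RTDETRv2"]'.
  const active = JSON.parse(canvas.getAttribute("active-models") || "[]");

  // Placeholder benchmark points (latency in ms vs. COCO mAP); in practice
  // these values would live in the shared benchmark.js data.
  const series = {
    RTDETRv2: [
      { x: 5.0, y: 48.1 },
      { x: 9.8, y: 53.1 },
    ],
    YOLO11: [
      { x: 1.5, y: 39.5 },
      { x: 4.7, y: 51.5 },
    ],
  };

  new Chart(canvas, {
    type: "scatter",
    data: {
      datasets: Object.entries(series).map(([name, points]) => ({
        label: name,
        data: points,
        showLine: true,
        // Emphasize the series listed in active-models, de-emphasize the rest.
        borderWidth: active.includes(name) ? 3 : 1,
      })),
    },
    options: {
      scales: {
        x: { title: { display: true, text: "Latency (ms/img)" } },
        y: { title: { display: true, text: "COCO mAP 50-95" } },
      },
    },
  });
});
```

The added tags pair an `async` load of the Chart.js CDN bundle with a `defer`red `benchmark.js`, so the chart script executes only after the page, including the `<canvas>` element, has been parsed; the sketch additionally waits for the window `load` event so the async Chart.js bundle has finished before `Chart` is used.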

@@ -55,6 +55,11 @@ This table provides an overview of the YOLO11 model variants, showcasing their a
 ## Performance Metrics
+<script async src="https://cdn.jsdelivr.net/npm/chart.js@3.9.1/dist/chart.min.js"></script>
+<script defer src="../../javascript/benchmark.js"></script>
+<canvas id="modelComparisonChart" width="1024" height="400" active-models='["YOLO11"]'></canvas>
 !!! performance
 === "Detection (COCO)"

@@ -53,6 +53,11 @@ YOLOv10 comes in various model scales to cater to different application needs:
 ## Performance
+<script async src="https://cdn.jsdelivr.net/npm/chart.js@3.9.1/dist/chart.min.js"></script>
+<script defer src="../../javascript/benchmark.js"></script>
+<canvas id="modelComparisonChart" width="1024" height="400" active-models='["YOLOv10"]'></canvas>
 YOLOv10 outperforms previous YOLO versions and other state-of-the-art models in terms of accuracy and efficiency. For example, YOLOv10-S is 1.8x faster than RT-DETR-R18 with similar AP on the COCO dataset, and YOLOv10-B has 46% less latency and 25% fewer parameters than YOLOv9-C with the same performance.
 | Model | Input Size | AP<sup>val</sup> | FLOPs (G) | Latency (ms) |

@@ -32,6 +32,11 @@ This table provides a detailed overview of the YOLOv5u model variants, highlighting their
 ## Performance Metrics
+<script async src="https://cdn.jsdelivr.net/npm/chart.js@3.9.1/dist/chart.min.js"></script>
+<script defer src="../../javascript/benchmark.js"></script>
+<canvas id="modelComparisonChart" width="1024" height="400" active-models='["YOLOv5"]'></canvas>
 !!! performance
 === "Detection"

@@ -22,6 +22,11 @@ keywords: Meituan YOLOv6, object detection, real-time applications, BiC module,
 ## Performance Metrics
+<script async src="https://cdn.jsdelivr.net/npm/chart.js@3.9.1/dist/chart.min.js"></script>
+<script defer src="../../javascript/benchmark.js"></script>
+<canvas id="modelComparisonChart" width="1024" height="400" active-models='["YOLOv6-3.0"]'></canvas>
 YOLOv6 provides various pre-trained models with different scales:
 - YOLOv6-N: 37.5% AP on COCO val2017 at 1187 FPS with NVIDIA T4 GPU.

@@ -12,7 +12,14 @@ YOLOv7 is a state-of-the-art real-time object detector that surpasses all known
 ## Comparison of SOTA object detectors
-From the results in the YOLO comparison table we know that the proposed method has the best speed-accuracy trade-off comprehensively. If we compare YOLOv7-tiny-SiLU with YOLOv5-N (r6.1), our method is 127 fps faster and 10.7% more accurate on AP. In addition, YOLOv7 has 51.4% AP at frame rate of 161 fps, while PPYOLOE-L with the same AP has only 78 fps frame rate. In terms of parameter usage, YOLOv7 is 41% less than PPYOLOE-L. If we compare YOLOv7-X with 114 fps inference speed to YOLOv5-L (r6.1) with 99 fps inference speed, YOLOv7-X can improve AP by 3.9%. If YOLOv7-X is compared with YOLOv5-X (r6.1) of similar scale, the inference speed of YOLOv7-X is 31 fps faster. In addition, in terms the amount of parameters and computation, YOLOv7-X reduces 22% of parameters and 8% of computation compared to YOLOv5-X (r6.1), but improves AP by 2.2% ([Source](https://arxiv.org/pdf/2207.02696)).
+The results in the YOLO comparison table show that the proposed method has the best overall speed-accuracy trade-off. Compared with YOLOv5-N (r6.1), YOLOv7-tiny-SiLU is 127 fps faster and 10.7% more accurate in AP. In addition, YOLOv7 reaches 51.4% AP at a frame rate of 161 fps, while PPYOLOE-L with the same AP runs at only 78 fps, and YOLOv7 uses 41% fewer parameters than PPYOLOE-L.
+<script async src="https://cdn.jsdelivr.net/npm/chart.js@3.9.1/dist/chart.min.js"></script>
+<script defer src="../../javascript/benchmark.js"></script>
+<canvas id="modelComparisonChart" width="1024" height="400" active-models='["YOLOv7"]'></canvas>
+Comparing YOLOv7-X at 114 fps inference speed with YOLOv5-L (r6.1) at 99 fps, YOLOv7-X improves AP by 3.9%. Against YOLOv5-X (r6.1) of similar scale, YOLOv7-X runs 31 fps faster, and in terms of the number of parameters and computation it has 22% fewer parameters and 8% less computation while improving AP by 2.2% ([Source](https://arxiv.org/pdf/2207.02696)).
 | Model | Params<br><sup>(M) | FLOPs<br><sup>(G) | Size<br><sup>(pixels) | FPS | AP<sup>test / val<br>50-95 | AP<sup>test<br>50 | AP<sup>test<br>75 | AP<sup>test<br>S | AP<sup>test<br>M | AP<sup>test<br>L |
 | --------------------- | ------------------ | ----------------- | --------------------- | ------- | -------------------------- | ----------------- | ----------------- | ---------------- | ---------------- | ---------------- |

@@ -48,6 +48,11 @@ This table provides an overview of the YOLOv8 model variants, highlighting their
 ## Performance Metrics
+<script async src="https://cdn.jsdelivr.net/npm/chart.js@3.9.1/dist/chart.min.js"></script>
+<script defer src="../../javascript/benchmark.js"></script>
+<canvas id="modelComparisonChart" width="1024" height="400" active-models='["YOLOv8"]'></canvas>
 !!! performance
 === "Detection (COCO)"

@@ -86,6 +86,11 @@ By benchmarking, you can ensure that your model not only performs well in contro
 ## Performance on MS COCO Dataset
+<script async src="https://cdn.jsdelivr.net/npm/chart.js@3.9.1/dist/chart.min.js"></script>
+<script defer src="../../javascript/benchmark.js"></script>
+<canvas id="modelComparisonChart" width="1024" height="400" active-models='["YOLOv9"]'></canvas>
 The performance of YOLOv9 on the [COCO dataset](../datasets/detect/coco.md) exemplifies its significant advancements in real-time object detection, setting new benchmarks across various model sizes. Table 1 presents a comprehensive comparison of state-of-the-art real-time object detectors, illustrating YOLOv9's superior efficiency and [accuracy](https://www.ultralytics.com/glossary/accuracy).
 **Table 1. Comparison of State-of-the-Art Real-Time Object Detectors**