Add Docs models JS charts (#18905)

Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
Glenn Jocher 2025-01-26 20:01:56 +01:00 committed by GitHub
parent a9e832b7b1
commit 8a185f6ebe
10 changed files with 137 additions and 2 deletions


@@ -36,6 +36,11 @@ The Ultralytics Python API provides pre-trained PaddlePaddle RT-DETR models with
- RT-DETR-L: 53.0% AP on COCO val2017, 114 FPS on T4 GPU
- RT-DETR-X: 54.8% AP on COCO val2017, 74 FPS on T4 GPU
<script async src="https://cdn.jsdelivr.net/npm/chart.js@3.9.1/dist/chart.min.js"></script>
<script defer src="../../javascript/benchmark.js"></script>
<canvas id="modelComparisonChart" width="1024" height="400" active-models='["RTDETRv2"]'></canvas>
## Usage Examples

This example provides simple RT-DETR training and inference examples. For full documentation on these and other [modes](../modes/index.md) see the [Predict](../modes/predict.md), [Train](../modes/train.md), [Val](../modes/val.md) and [Export](../modes/export.md) docs pages.
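Each docs page now loads Chart.js from the CDN, the shared benchmark.js, and a `<canvas>` whose `active-models` attribute selects which model family to highlight. A minimal sketch of how such a script could consume these tags follows; the `modelData` constant and every internal name here are illustrative assumptions, not the contents of benchmark.js:

```javascript
// Minimal sketch, not the actual benchmark.js: modelData and all internals
// below are illustrative assumptions based on the metrics in model_data.py.
const modelData = {
  RTDETRv2: {
    s: { map: 48.1, t4: 5.03 },
    x: { map: 54.3, t4: 15.03 },
  },
};

document.addEventListener("DOMContentLoaded", () => {
  const canvas = document.getElementById("modelComparisonChart");
  // active-models is a JSON array naming the model families to highlight.
  const activeModels = JSON.parse(canvas.getAttribute("active-models"));
  const datasets = activeModels.map((name) => ({
    label: name,
    data: Object.entries(modelData[name]).map(([version, m]) => ({
      x: m.t4, // T4 GPU latency (ms/img)
      y: m.map, // COCO mAP50-95
      version, // model scale, used by the tooltip callback below
    })),
    showLine: true,
  }));
  new Chart(canvas, { type: "scatter", data: { datasets } }); // Chart global from the CDN script
});
```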


@@ -55,6 +55,11 @@ This table provides an overview of the YOLO11 model variants, showcasing their a
## Performance Metrics
<script async src="https://cdn.jsdelivr.net/npm/chart.js@3.9.1/dist/chart.min.js"></script>
<script defer src="../../javascript/benchmark.js"></script>
<canvas id="modelComparisonChart" width="1024" height="400" active-models='["YOLO11"]'></canvas>
!!! performance

    === "Detection (COCO)"


@@ -53,6 +53,11 @@ YOLOv10 comes in various model scales to cater to different application needs:
## Performance
<script async src="https://cdn.jsdelivr.net/npm/chart.js@3.9.1/dist/chart.min.js"></script>
<script defer src="../../javascript/benchmark.js"></script>
<canvas id="modelComparisonChart" width="1024" height="400" active-models='["YOLOv10"]'></canvas>
YOLOv10 outperforms previous YOLO versions and other state-of-the-art models in terms of accuracy and efficiency. For example, YOLOv10-S is 1.8x faster than RT-DETR-R18 with similar AP on the COCO dataset, and YOLOv10-B has 46% less latency and 25% fewer parameters than YOLOv9-C with the same performance.

| Model | Input Size | AP<sup>val</sup> | FLOPs (G) | Latency (ms) |


@@ -32,6 +32,11 @@ This table provides a detailed overview of the YOLOv5u model variants, highlight
## Performance Metrics
<script async src="https://cdn.jsdelivr.net/npm/chart.js@3.9.1/dist/chart.min.js"></script>
<script defer src="../../javascript/benchmark.js"></script>
<canvas id="modelComparisonChart" width="1024" height="400" active-models='["YOLOv5"]'></canvas>
!!! performance

    === "Detection"


@@ -22,6 +22,11 @@ keywords: Meituan YOLOv6, object detection, real-time applications, BiC module,
## Performance Metrics
<script async src="https://cdn.jsdelivr.net/npm/chart.js@3.9.1/dist/chart.min.js"></script>
<script defer src="../../javascript/benchmark.js"></script>
<canvas id="modelComparisonChart" width="1024" height="400" active-models='["YOLOv6-3.0"]'></canvas>
YOLOv6 provides various pre-trained models with different scales:

- YOLOv6-N: 37.5% AP on COCO val2017 at 1187 FPS with NVIDIA T4 GPU.


@@ -12,7 +12,14 @@ YOLOv7 is a state-of-the-art real-time object detector that surpasses all known
## Comparison of SOTA object detectors

From the results in the YOLO comparison table we know that the proposed method has the best speed-accuracy trade-off comprehensively. If we compare YOLOv7-tiny-SiLU with YOLOv5-N (r6.1), our method is 127 fps faster and 10.7% more accurate on AP. In addition, YOLOv7 has 51.4% AP at a frame rate of 161 fps, while PPYOLOE-L with the same AP has only a 78 fps frame rate. In terms of parameters, YOLOv7 uses 41% fewer than PPYOLOE-L.
<script async src="https://cdn.jsdelivr.net/npm/chart.js@3.9.1/dist/chart.min.js"></script>
<script defer src="../../javascript/benchmark.js"></script>
<canvas id="modelComparisonChart" width="1024" height="400" active-models='["YOLOv7"]'></canvas>
If we compare YOLOv7-X with 114 fps inference speed to YOLOv5-L (r6.1) with 99 fps inference speed, YOLOv7-X improves AP by 3.9%. If YOLOv7-X is compared with YOLOv5-X (r6.1) of similar scale, the inference speed of YOLOv7-X is 31 fps faster. In addition, in terms of parameters and computation, YOLOv7-X has 22% fewer parameters and 8% less computation than YOLOv5-X (r6.1), while improving AP by 2.2% ([Source](https://arxiv.org/pdf/2207.02696)).
| Model                 | Params<br><sup>(M) | FLOPs<br><sup>(G) | Size<br><sup>(pixels) | FPS     | AP<sup>test / val<br>50-95 | AP<sup>test<br>50 | AP<sup>test<br>75 | AP<sup>test<br>S | AP<sup>test<br>M | AP<sup>test<br>L |
| --------------------- | ------------------ | ----------------- | --------------------- | ------- | -------------------------- | ----------------- | ----------------- | ---------------- | ---------------- | ---------------- |


@@ -48,6 +48,11 @@ This table provides an overview of the YOLOv8 model variants, highlighting their
## Performance Metrics
<script async src="https://cdn.jsdelivr.net/npm/chart.js@3.9.1/dist/chart.min.js"></script>
<script defer src="../../javascript/benchmark.js"></script>
<canvas id="modelComparisonChart" width="1024" height="400" active-models='["YOLOv8"]'></canvas>
!!! performance

    === "Detection (COCO)"


@@ -86,6 +86,11 @@ By benchmarking, you can ensure that your model not only performs well in contro
## Performance on MS COCO Dataset
<script async src="https://cdn.jsdelivr.net/npm/chart.js@3.9.1/dist/chart.min.js"></script>
<script defer src="../../javascript/benchmark.js"></script>
<canvas id="modelComparisonChart" width="1024" height="400" active-models='["YOLOv9"]'></canvas>
The performance of YOLOv9 on the [COCO dataset](../datasets/detect/coco.md) exemplifies its significant advancements in real-time object detection, setting new benchmarks across various model sizes. Table 1 presents a comprehensive comparison of state-of-the-art real-time object detectors, illustrating YOLOv9's superior efficiency and [accuracy](https://www.ultralytics.com/glossary/accuracy).

**Table 1. Comparison of State-of-the-Art Real-Time Object Detectors**

docs/model_data.py (new file, 93 lines)

@@ -0,0 +1,93 @@
# Ultralytics 🚀 AGPL-3.0 License - https://ultralytics.com/license

# Benchmark metrics per model scale: input size (px), COCO mAP50-95,
# CPU and T4 GPU inference latency (ms/img), params (M), and FLOPs (G)
data = {
"YOLO11": {
"n": {"size": 640, "map": 39.5, "cpu": 56.1, "t4": 1.5, "params": 2.6, "flops": 6.5},
"s": {"size": 640, "map": 47.0, "cpu": 90.0, "t4": 2.5, "params": 9.4, "flops": 21.5},
"m": {"size": 640, "map": 51.5, "cpu": 183.2, "t4": 4.7, "params": 20.1, "flops": 68.0},
"l": {"size": 640, "map": 53.4, "cpu": 238.6, "t4": 6.2, "params": 25.3, "flops": 86.9},
"x": {"size": 640, "map": 54.7, "cpu": 462.8, "t4": 11.3, "params": 56.9, "flops": 194.9},
},
"YOLOv10": {
"n": {"size": 640, "map": 39.5, "cpu": "", "t4": 1.56, "params": 2.3, "flops": 6.7},
"s": {"size": 640, "map": 46.7, "cpu": "", "t4": 2.66, "params": 7.2, "flops": 21.6},
"m": {"size": 640, "map": 51.3, "cpu": "", "t4": 5.48, "params": 15.4, "flops": 59.1},
"b": {"size": 640, "map": 52.7, "cpu": "", "t4": 6.54, "params": 24.4, "flops": 92.0},
"l": {"size": 640, "map": 53.3, "cpu": "", "t4": 8.33, "params": 29.5, "flops": 120.3},
"x": {"size": 640, "map": 54.4, "cpu": "", "t4": 12.2, "params": 56.9, "flops": 160.4},
},
"YOLOv9": {
"t": {"size": 640, "map": 38.3, "cpu": "", "t4": 2.3, "params": 2.0, "flops": 7.7},
"s": {"size": 640, "map": 46.8, "cpu": "", "t4": 3.54, "params": 7.1, "flops": 26.4},
"m": {"size": 640, "map": 51.4, "cpu": "", "t4": 6.43, "params": 20.0, "flops": 76.3},
"c": {"size": 640, "map": 53.0, "cpu": "", "t4": 7.16, "params": 25.3, "flops": 102.1},
"e": {"size": 640, "map": 55.6, "cpu": "", "t4": 16.77, "params": 57.3, "flops": 189.0},
},
"YOLOv8": {
"n": {"size": 640, "map": 37.3, "cpu": 80.4, "t4": 1.47, "params": 3.2, "flops": 8.7},
"s": {"size": 640, "map": 44.9, "cpu": 128.4, "t4": 2.66, "params": 11.2, "flops": 28.6},
"m": {"size": 640, "map": 50.2, "cpu": 234.7, "t4": 5.86, "params": 25.9, "flops": 78.9},
"l": {"size": 640, "map": 52.9, "cpu": 375.2, "t4": 9.06, "params": 43.7, "flops": 165.2},
"x": {"size": 640, "map": 53.9, "cpu": 479.1, "t4": 14.37, "params": 68.2, "flops": 257.8},
},
"YOLOv7": {
"l": {"size": 640, "map": 51.4, "cpu": "", "t4": 6.84, "params": 36.9, "flops": 104.7},
"x": {"size": 640, "map": 53.1, "cpu": "", "t4": 11.57, "params": 71.3, "flops": 189.9},
},
"YOLOv6-3.0": {
"n": {"size": 640, "map": 37.5, "cpu": "", "t4": 1.17, "params": 4.7, "flops": 11.4},
"s": {"size": 640, "map": 45.0, "cpu": "", "t4": 2.66, "params": 18.5, "flops": 45.3},
"m": {"size": 640, "map": 50.0, "cpu": "", "t4": 5.28, "params": 34.9, "flops": 85.8},
"l": {"size": 640, "map": 52.8, "cpu": "", "t4": 8.95, "params": 59.6, "flops": 150.7},
},
"YOLOv5": {
"n": {"size": 640, "map": 28.0, "cpu": 73.6, "t4": 1.12, "params": 2.6, "flops": 7.7},
"s": {"size": 640, "map": 37.4, "cpu": 120.7, "t4": 1.92, "params": 9.1, "flops": 24.0},
"m": {"size": 640, "map": 45.4, "cpu": 233.9, "t4": 4.03, "params": 25.1, "flops": 64.2},
"l": {"size": 640, "map": 49.0, "cpu": 408.4, "t4": 6.61, "params": 53.2, "flops": 135.0},
"x": {"size": 640, "map": 50.7, "cpu": 763.2, "t4": 11.89, "params": 97.2, "flops": 246.4},
},
"PP-YOLOE+": {
"t": {"size": 640, "map": 39.9, "cpu": "", "t4": 2.84, "params": "", "flops": ""},
"s": {"size": 640, "map": 43.7, "cpu": "", "t4": 2.62, "params": "", "flops": ""},
"m": {"size": 640, "map": 49.8, "cpu": "", "t4": 5.56, "params": "", "flops": ""},
"l": {"size": 640, "map": 52.9, "cpu": "", "t4": 8.36, "params": "", "flops": ""},
"x": {"size": 640, "map": 54.7, "cpu": "", "t4": 14.3, "params": "", "flops": ""},
},
"DAMO-YOLO": {
"t": {"size": 640, "map": 42.0, "cpu": "", "t4": 2.32, "params": 8.5, "flops": 18.1},
"s": {"size": 640, "map": 46.0, "cpu": "", "t4": 3.45, "params": 16.3, "flops": 37.8},
"m": {"size": 640, "map": 49.2, "cpu": "", "t4": 5.09, "params": 28.2, "flops": 61.8},
"l": {"size": 640, "map": 50.8, "cpu": "", "t4": 7.18, "params": 42.1, "flops": 97.3},
},
"YOLOX": {
"nano": {"size": 416, "map": 25.8, "cpu": "", "t4": "", "params": 0.91, "flops": 1.08},
"tiny": {"size": 416, "map": 32.8, "cpu": "", "t4": "", "params": 5.06, "flops": 6.45},
"s": {"size": 640, "map": 40.5, "cpu": "", "t4": 2.56, "params": 9.0, "flops": 26.8},
"m": {"size": 640, "map": 46.9, "cpu": "", "t4": 5.43, "params": 25.3, "flops": 73.8},
"l": {"size": 640, "map": 49.7, "cpu": "", "t4": 9.04, "params": 54.2, "flops": 155.6},
"x": {"size": 640, "map": 51.1, "cpu": "", "t4": 16.1, "params": 99.1, "flops": 281.9},
},
"RTDETRv2": {
"s": {"size": 640, "map": 48.1, "cpu": "", "t4": 5.03, "params": 20, "flops": 60},
"m": {"size": 640, "map": 51.9, "cpu": "", "t4": 7.51, "params": 36, "flops": 100},
"l": {"size": 640, "map": 53.4, "cpu": "", "t4": 9.76, "params": 42, "flops": 136},
"x": {"size": 640, "map": 54.3, "cpu": "", "t4": 15.03, "params": 76, "flops": 259},
},
"EfficientDet": {
"d0": {"size": 640, "map": 34.6, "cpu": 10.2, "t4": 3.92, "params": 3.9, "flops": 2.54},
"d1": {"size": 640, "map": 40.5, "cpu": 13.5, "t4": 7.31, "params": 6.6, "flops": 6.10},
"d2": {"size": 640, "map": 43.0, "cpu": 17.7, "t4": 10.92, "params": 8.1, "flops": 11.0},
"d3": {"size": 640, "map": 47.5, "cpu": 28.0, "t4": 19.59, "params": 12.0, "flops": 24.9},
"d4": {"size": 640, "map": 49.7, "cpu": 42.8, "t4": 33.55, "params": 20.7, "flops": 55.2},
"d5": {"size": 640, "map": 51.5, "cpu": 72.5, "t4": 67.86, "params": 33.7, "flops": 130.0},
"d6": {"size": 640, "map": 52.6, "cpu": 92.8, "t4": 89.29, "params": 51.9, "flops": 226.0},
"d7": {"size": 640, "map": 53.7, "cpu": 122.0, "t4": 128.07, "params": 51.9, "flops": 325.0},
},
}
if __name__ == "__main__":
    import json

    # Export the metrics to JSON, e.g. for consumption by the docs charts
    with open("model_data.json", "w") as f:
        json.dump(data, f)
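The `__main__` block serializes the dict to `model_data.json`. For illustration only, a docs-side consumer might look like the sketch below; the fetch path, function name, and loading strategy are assumptions, since the commit does not show how the JSON reaches the chart script:

```javascript
// Hypothetical loader for the generated model_data.json; the path and the
// function name are assumptions, not part of this commit.
async function loadModelData() {
  const response = await fetch("/model_data.json");
  if (!response.ok) throw new Error(`model_data.json fetch failed: ${response.status}`);
  const data = await response.json();
  console.log(data.YOLO11.n.map); // 39.5 (COCO mAP50-95 of YOLO11n)
  return data;
}
```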


@@ -173,7 +173,7 @@ function updateChart(initialDatasets = []) {
                label: (tooltipItem) => {
                    const { dataset, dataIndex } = tooltipItem;
                    const point = dataset.data[dataIndex];
                    return `${dataset.label}${point.version.toLowerCase()}: Speed = ${point.x}ms/img, mAP50-95 = ${point.y}`; // Custom tooltip label.
                },
            },
            mode: "nearest",