Fix Docs calls to model.benchmark() (#18391)

Signed-off-by: UltralyticsAssistant <web@ultralytics.com>
Signed-off-by: Kishan Pankajbhai Pipariya <39761387+KishanPipariya@users.noreply.github.com>
Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
Co-authored-by: Kishan Pankajbhai Pipariya <39761387+KishanPipariya@users.noreply.github.com>
Glenn Jocher 2024-12-25 14:11:42 +01:00 committed by GitHub
parent 93862d3640
commit d35860d4a1
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
4 changed files with 5 additions and 5 deletions

@@ -352,7 +352,7 @@ To reproduce the Ultralytics benchmarks above on all export [formats](../modes/e
model = YOLO("yolov8n.pt")
# Benchmark YOLOv8n speed and accuracy on the COCO8 dataset for all export formats
-        results = model.benchmarks(data="coco8.yaml")
+        results = model.benchmark(data="coco8.yaml")
```
=== "CLI"
@@ -466,7 +466,7 @@ Yes, you can benchmark YOLOv8 models in various formats including PyTorch, Torch
model = YOLO("yolov8n.pt")
# Benchmark YOLOv8n speed and [accuracy](https://www.ultralytics.com/glossary/accuracy) on the COCO8 dataset for all export formats
-        results = model.benchmarks(data="coco8.yaml")
+        results = model.benchmark(data="coco8.yaml")
```
=== "CLI"