Add FAQ sections to Modes and Tasks (#14181)
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
Co-authored-by: Abirami Vina <abirami.vina@gmail.com>
Co-authored-by: RizwanMunawar <chr043416@gmail.com>
Co-authored-by: Muhammad Rizwan Munawar <muhammadrizwanmunawar123@gmail.com>
parent e285d3d1b2
commit 6c13bea7b8

39 changed files with 2247 additions and 481 deletions
@@ -104,3 +104,70 @@ Benchmarks will attempt to run automatically on all possible export formats below

| [NCNN](../integrations/ncnn.md) | `ncnn` | `yolov8n_ncnn_model/` | ✅ | `imgsz`, `half`, `batch` |
See full `export` details in the [Export](../modes/export.md) page.

## FAQ

### How do I benchmark my YOLOv8 model's performance using Ultralytics?

Ultralytics YOLOv8 offers a Benchmark mode to assess your model's performance across different export formats. This mode provides insights into key metrics such as mean Average Precision (mAP50-95), accuracy, and inference time in milliseconds. To run benchmarks, you can use either Python or CLI commands. For example, to benchmark on a GPU:

!!! Example

    === "Python"

        ```python
        from ultralytics.utils.benchmarks import benchmark

        # Benchmark on GPU
        benchmark(model="yolov8n.pt", data="coco8.yaml", imgsz=640, half=False, device=0)
        ```

    === "CLI"

        ```bash
        yolo benchmark model=yolov8n.pt data='coco8.yaml' imgsz=640 half=False device=0
        ```
For more details on benchmark arguments, visit the [Arguments](#arguments) section.

### What are the benefits of exporting YOLOv8 models to different formats?

Exporting YOLOv8 models to different formats such as ONNX, TensorRT, and OpenVINO allows you to optimize performance based on your deployment environment. For instance:

- **ONNX:** Provides up to 3x CPU speedup.
- **TensorRT:** Offers up to 5x GPU speedup.
- **OpenVINO:** Specifically optimized for Intel hardware.

These formats can substantially improve inference speed while preserving accuracy, making your models more efficient for various real-world applications. Visit the [Export](../modes/export.md) page for complete details.
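As a minimal sketch of the workflow, assuming only that the `ultralytics` package is installed and the `yolov8n.pt` weights are available locally, a single `export()` call converts the PyTorch weights into one of these formats:

```python
from ultralytics import YOLO

# Load the pretrained PyTorch weights
model = YOLO("yolov8n.pt")

# Export to ONNX; export() returns the path of the exported artifact
onnx_path = model.export(format="onnx")
```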
### Why is benchmarking crucial in evaluating YOLOv8 models?

Benchmarking your YOLOv8 models is essential for several reasons:

- **Informed Decisions:** Understand the trade-offs between speed and accuracy.
- **Resource Allocation:** Gauge how performance varies across different hardware options.
- **Optimization:** Determine which export format offers the best performance for specific use cases.
- **Cost Efficiency:** Optimize hardware usage based on benchmark results.

Key metrics such as mAP50-95, Top-5 accuracy, and inference time help in making these evaluations. Refer to the [Key Metrics](#key-metrics-in-benchmark-mode) section for more information.
### Which export formats are supported by YOLOv8, and what are their advantages?

YOLOv8 supports a variety of export formats, each tailored for specific hardware and use cases:

- **ONNX:** Best for CPU performance.
- **TensorRT:** Ideal for GPU efficiency.
- **OpenVINO:** Optimized for Intel hardware.
- **CoreML & TensorFlow:** Useful for iOS and general ML applications.

For a complete list of supported formats and their respective advantages, check out the [Supported Export Formats](#supported-export-formats) section.
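As a rough sketch, assuming each target backend is available on the machine (TensorRT for `engine`, macOS for `coreml`, and so on), the same weights can be exported to several of these formats in turn; the format strings below follow the Export page table:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Format strings from the Export table: "engine" = TensorRT, "coreml" = CoreML
# Each export only succeeds if its backend is installed on this machine
for fmt in ("onnx", "engine", "openvino", "coreml"):
    model.export(format=fmt)
```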
### What arguments can I use to fine-tune my YOLOv8 benchmarks?

When running benchmarks, several arguments can be customized to suit specific needs:

- **model:** Path to the model file (e.g., "yolov8n.pt").
- **data:** Path to a YAML file defining the dataset (e.g., "coco8.yaml").
- **imgsz:** The input image size, either as a single integer or a tuple.
- **half:** Enable FP16 inference for better performance.
- **int8:** Activate INT8 quantization for edge devices.
- **device:** Specify the computation device (e.g., "cpu", "cuda:0").
- **verbose:** Control the level of logging detail.

For a full list of arguments, refer to the [Arguments](#arguments) section.
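For instance, a full-precision CPU benchmark combining several of these arguments might look like the sketch below; the values are illustrative rather than tuned recommendations:

```python
from ultralytics.utils.benchmarks import benchmark

# Illustrative settings: FP32 CPU benchmark on the small coco8 sample dataset
benchmark(
    model="yolov8n.pt",
    data="coco8.yaml",
    imgsz=640,
    half=False,  # keep FP16 off on CPU
    int8=False,  # no INT8 quantization
    device="cpu",
    verbose=True,  # detailed per-format logging
)
```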