Docs Prettier reformat (#13483)
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: UltralyticsAssistant <web@ultralytics.com>

parent 2f2e81614f · commit e5185ccf63

90 changed files with 763 additions and 742 deletions

@@ -63,7 +63,7 @@ To install the required package, run:

!!! Tip "Installation"

    === "CLI"

        ```bash
        # Install the required package for YOLOv8
        pip install ultralytics
        ```

@@ -53,7 +53,7 @@ To install the required package, run:

!!! Tip "Installation"

    === "CLI"

        ```bash
        # Install the required package for YOLOv8
        pip install ultralytics
        ```

@@ -23,9 +23,9 @@ This Gradio interface provides an easy and interactive way to perform object det

## Why Use Gradio for Object Detection?

- **User-Friendly Interface:** Gradio offers a straightforward platform for users to upload images and visualize detection results without any coding requirement.
- **Real-Time Adjustments:** Parameters such as confidence and IoU thresholds can be adjusted on the fly, allowing for immediate feedback and optimization of detection results.
- **Broad Accessibility:** The Gradio web interface can be accessed by anyone, making it an excellent tool for demonstrations, educational purposes, and quick experiments.

<p align="center">
    <img width="800" alt="Gradio example screenshot" src="https://github.com/RizwanMunawar/ultralytics/assets/26833433/52ee3cd2-ac59-4c27-9084-0fd05c6c33be">

@@ -41,14 +41,14 @@ pip install gradio

1. **Upload Image:** Click on 'Upload Image' to choose an image file for object detection.
2. **Adjust Parameters:**
    - **Confidence Threshold:** Slider to set the minimum confidence level for detecting objects.
    - **IoU Threshold:** Slider to set the IoU threshold for distinguishing different objects.
3. **View Results:** The processed image with detected objects and their labels will be displayed.
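
The two sliders act as post-processing filters on the model's raw detections. A minimal sketch of how a confidence threshold prunes candidate boxes (plain Python with hypothetical `(label, confidence)` tuples, not the actual Gradio/Ultralytics code):

```python
# Hypothetical raw detections as (label, confidence) pairs a model might emit.
detections = [
    ("bus", 0.91),
    ("person", 0.64),
    ("person", 0.22),  # low-confidence candidate
]


def filter_by_confidence(dets, conf_threshold):
    """Keep only detections at or above the confidence threshold."""
    return [d for d in dets if d[1] >= conf_threshold]


kept = filter_by_confidence(detections, 0.25)
```

Raising the slider toward 1.0 keeps only the most certain boxes; lowering it reveals weaker candidates.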

## Example Use Cases

- **Sample Image 1:** Bus detection with default thresholds.
- **Sample Image 2:** Detection on a sports image with default thresholds.

## Usage Example

@@ -104,7 +104,7 @@ if __name__ == "__main__":

## Parameters Explanation

| Parameter Name   | Type    | Description                                              |
| ---------------- | ------- | -------------------------------------------------------- |
| `img`            | `Image` | The image on which object detection will be performed.   |
| `conf_threshold` | `float` | Confidence threshold for detecting objects.              |
| `iou_threshold`  | `float` | Intersection-over-union threshold for object separation. |
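
For reference, the intersection-over-union behind `iou_threshold` compares two boxes by overlap area. A self-contained sketch with boxes as `(x1, y1, x2, y2)` tuples (an illustration, not the app's internals):

```python
def box_iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

Two identical boxes score 1.0 and disjoint boxes score 0.0; detections whose mutual IoU exceeds the threshold are treated as the same object.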
@@ -112,7 +112,7 @@ if __name__ == "__main__":

### Gradio Interface Components

| Component    | Description                              |
| ------------ | ---------------------------------------- |
| Image Input  | To upload the image for detection.       |
| Sliders      | To adjust confidence and IoU thresholds. |
| Image Output | To display the detection results.        |
@@ -86,20 +86,20 @@ Welcome to the Ultralytics Integrations page! This page provides an overview of

We also support a variety of model export formats for deployment in different environments. Here are the available formats:

| Format | `format` Argument | Model | Metadata | Arguments |
| ------------------------------------------------- | ----------------- | ------------------------- | -------- | -------------------------------------------------------------------- |
| [PyTorch](https://pytorch.org/) | - | `yolov8n.pt` | ✅ | - |
| [TorchScript](../integrations/torchscript.md) | `torchscript` | `yolov8n.torchscript` | ✅ | `imgsz`, `optimize`, `batch` |
| [ONNX](../integrations/onnx.md) | `onnx` | `yolov8n.onnx` | ✅ | `imgsz`, `half`, `dynamic`, `simplify`, `opset`, `batch` |
| [OpenVINO](../integrations/openvino.md) | `openvino` | `yolov8n_openvino_model/` | ✅ | `imgsz`, `half`, `int8`, `batch` |
| [TensorRT](../integrations/tensorrt.md) | `engine` | `yolov8n.engine` | ✅ | `imgsz`, `half`, `dynamic`, `simplify`, `workspace`, `int8`, `batch` |
| [CoreML](../integrations/coreml.md) | `coreml` | `yolov8n.mlpackage` | ✅ | `imgsz`, `half`, `int8`, `nms`, `batch` |
| [TF SavedModel](../integrations/tf-savedmodel.md) | `saved_model` | `yolov8n_saved_model/` | ✅ | `imgsz`, `keras`, `int8`, `batch` |
| [TF GraphDef](../integrations/tf-graphdef.md) | `pb` | `yolov8n.pb` | ❌ | `imgsz`, `batch` |
| [TF Lite](../integrations/tflite.md) | `tflite` | `yolov8n.tflite` | ✅ | `imgsz`, `half`, `int8`, `batch` |
| [TF Edge TPU](../integrations/edge-tpu.md) | `edgetpu` | `yolov8n_edgetpu.tflite` | ✅ | `imgsz` |
| [TF.js](../integrations/tfjs.md) | `tfjs` | `yolov8n_web_model/` | ✅ | `imgsz`, `half`, `int8`, `batch` |
| [PaddlePaddle](../integrations/paddlepaddle.md) | `paddle` | `yolov8n_paddle_model/` | ✅ | `imgsz`, `batch` |
| [NCNN](../integrations/ncnn.md) | `ncnn` | `yolov8n_ncnn_model/` | ✅ | `imgsz`, `half`, `batch` |

Explore the links to learn more about each integration and how to get the most out of them with Ultralytics. See full `export` details in the [Export](../modes/export.md) page.
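
As a quick reference, the `format` argument selects the output artifact named in the table. A small lookup sketch over a subset of rows (plain Python, illustrative only — the real mapping lives in the Ultralytics exporter):

```python
# Subset of the export table above: format argument -> output artifact for yolov8n.
EXPORT_ARTIFACTS = {
    "torchscript": "yolov8n.torchscript",
    "onnx": "yolov8n.onnx",
    "openvino": "yolov8n_openvino_model/",
    "engine": "yolov8n.engine",
    "tflite": "yolov8n.tflite",
}


def artifact_for(fmt):
    """Return the expected output name for a given export format argument."""
    try:
        return EXPORT_ARTIFACTS[fmt]
    except KeyError:
        raise ValueError(f"Unsupported export format: {fmt!r}")
```

Note that some formats (OpenVINO, SavedModel, TF.js, Paddle, NCNN) produce a directory rather than a single file.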
@@ -68,7 +68,7 @@ Make sure that MLflow logging is enabled in Ultralytics settings. Usually, this

    export MLFLOW_EXPERIMENT_NAME=<your_experiment_name>
    ```

    Or use the `project=<project>` argument when training a YOLO model, i.e. `yolo train project=my_project`.

2. **Set a Run Name**: Similar to setting a project name, you can set the run name via an environment variable:
@@ -76,7 +76,7 @@ Make sure that MLflow logging is enabled in Ultralytics settings. Usually, this

    export MLFLOW_RUN=<your_run_name>
    ```

    Or use the `name=<name>` argument when training a YOLO model, i.e. `yolo train project=my_project name=my_name`.

3. **Start Local MLflow Server**: To start tracking, use:
@@ -84,7 +84,7 @@ Make sure that MLflow logging is enabled in Ultralytics settings. Usually, this

    mlflow server --backend-store-uri runs/mlflow
    ```

    This will start a local server at http://127.0.0.1:5000 by default and save all MLflow logs to the 'runs/mlflow' directory. To specify a different URI, set the `MLFLOW_TRACKING_URI` environment variable.

4. **Kill MLflow Server Instances**: To stop all running MLflow instances, run:
@@ -92,7 +92,7 @@ Before diving into the usage instructions, it's important to note that while all

```bash
# Export a YOLOv8n PyTorch model to NCNN format
yolo export model=yolov8n.pt format=ncnn # creates '/yolov8n_ncnn_model'

# Run inference with the exported model
yolo predict model='./yolov8n_ncnn_model' source='https://ultralytics.com/images/bus.jpg'
```

@@ -71,7 +71,7 @@ To install the required package, run:

!!! Tip "Installation"

    === "CLI"

        ```bash
        # Install the required package for YOLOv8
        pip install ultralytics
        ```
@@ -59,7 +59,7 @@ Export a YOLOv8n model to OpenVINO format and run inference with the exported mo

## Arguments

| Key      | Value        | Description                                          |
| -------- | ------------ | ---------------------------------------------------- |
| `format` | `'openvino'` | format to export to                                  |
| `imgsz`  | `640`        | image size as scalar or (h, w) list, i.e. (640, 480) |
| `half`   | `False`      | FP16 quantization                                    |
@@ -118,27 +118,27 @@ Benchmarks below run on Intel® Data Center GPU Flex 170 at FP32 precision.

</div>

| Model | Format | Status | Size (MB) | mAP50-95(B) | Inference time (ms/im) |
| ------- | ----------- | ------ | --------- | ----------- | ---------------------- |
| YOLOv8n | PyTorch | ✅ | 6.2 | 0.3709 | 21.79 |
| YOLOv8n | TorchScript | ✅ | 12.4 | 0.3704 | 23.24 |
| YOLOv8n | ONNX | ✅ | 12.2 | 0.3704 | 37.22 |
| YOLOv8n | OpenVINO | ✅ | 12.3 | 0.3703 | 3.29 |
| YOLOv8s | PyTorch | ✅ | 21.5 | 0.4471 | 31.89 |
| YOLOv8s | TorchScript | ✅ | 42.9 | 0.4472 | 32.71 |
| YOLOv8s | ONNX | ✅ | 42.8 | 0.4472 | 43.42 |
| YOLOv8s | OpenVINO | ✅ | 42.9 | 0.4470 | 3.92 |
| YOLOv8m | PyTorch | ✅ | 49.7 | 0.5013 | 50.75 |
| YOLOv8m | TorchScript | ✅ | 99.2 | 0.4999 | 47.90 |
| YOLOv8m | ONNX | ✅ | 99.0 | 0.4999 | 63.16 |
| YOLOv8m | OpenVINO | ✅ | 49.8 | 0.4997 | 7.11 |
| YOLOv8l | PyTorch | ✅ | 83.7 | 0.5293 | 77.45 |
| YOLOv8l | TorchScript | ✅ | 167.2 | 0.5268 | 85.71 |
| YOLOv8l | ONNX | ✅ | 166.8 | 0.5268 | 88.94 |
| YOLOv8l | OpenVINO | ✅ | 167.0 | 0.5264 | 9.37 |
| YOLOv8x | PyTorch | ✅ | 130.5 | 0.5404 | 100.09 |
| YOLOv8x | TorchScript | ✅ | 260.7 | 0.5371 | 114.64 |
| YOLOv8x | ONNX | ✅ | 260.4 | 0.5371 | 110.32 |
| YOLOv8x | OpenVINO | ✅ | 260.6 | 0.5367 | 15.02 |

This table represents the benchmark results for five different models (YOLOv8n, YOLOv8s, YOLOv8m, YOLOv8l, YOLOv8x) across four different formats (PyTorch, TorchScript, ONNX, OpenVINO), giving us the status, size, mAP50-95(B) metric, and inference time for each combination.
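
A quick way to read the table is as relative speedups. For example, computing the OpenVINO speedup over PyTorch for YOLOv8n from the published numbers above (plain Python arithmetic, for illustration):

```python
# YOLOv8n inference times (ms/im) from the Flex 170 table above.
pytorch_ms = 21.79
openvino_ms = 3.29

# Speedup = baseline time / optimized time; mAP drops only 0.3709 -> 0.3703.
speedup = pytorch_ms / openvino_ms
print(f"OpenVINO is {speedup:.1f}x faster than PyTorch for YOLOv8n")
```

The same calculation applies to any row pair, since all runs share precision and dataset.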
@@ -157,27 +157,27 @@ Benchmarks below run on Intel® Arc 770 GPU at FP32 precision.

</div>

| Model | Format | Status | Size (MB) | metrics/mAP50-95(B) | Inference time (ms/im) |
| ------- | ----------- | ------ | --------- | ------------------- | ---------------------- |
| YOLOv8n | PyTorch | ✅ | 6.2 | 0.3709 | 88.79 |
| YOLOv8n | TorchScript | ✅ | 12.4 | 0.3704 | 102.66 |
| YOLOv8n | ONNX | ✅ | 12.2 | 0.3704 | 57.98 |
| YOLOv8n | OpenVINO | ✅ | 12.3 | 0.3703 | 8.52 |
| YOLOv8s | PyTorch | ✅ | 21.5 | 0.4471 | 189.83 |
| YOLOv8s | TorchScript | ✅ | 42.9 | 0.4472 | 227.58 |
| YOLOv8s | ONNX | ✅ | 42.7 | 0.4472 | 142.03 |
| YOLOv8s | OpenVINO | ✅ | 42.9 | 0.4469 | 9.19 |
| YOLOv8m | PyTorch | ✅ | 49.7 | 0.5013 | 411.64 |
| YOLOv8m | TorchScript | ✅ | 99.2 | 0.4999 | 517.12 |
| YOLOv8m | ONNX | ✅ | 98.9 | 0.4999 | 298.68 |
| YOLOv8m | OpenVINO | ✅ | 99.1 | 0.4996 | 12.55 |
| YOLOv8l | PyTorch | ✅ | 83.7 | 0.5293 | 725.73 |
| YOLOv8l | TorchScript | ✅ | 167.1 | 0.5268 | 892.83 |
| YOLOv8l | ONNX | ✅ | 166.8 | 0.5268 | 576.11 |
| YOLOv8l | OpenVINO | ✅ | 167.0 | 0.5262 | 17.62 |
| YOLOv8x | PyTorch | ✅ | 130.5 | 0.5404 | 988.92 |
| YOLOv8x | TorchScript | ✅ | 260.7 | 0.5371 | 1186.42 |
| YOLOv8x | ONNX | ✅ | 260.4 | 0.5371 | 768.90 |
| YOLOv8x | OpenVINO | ✅ | 260.6 | 0.5367 | 19 |

### Intel Xeon CPU
@@ -192,27 +192,27 @@ Benchmarks below run on 4th Gen Intel® Xeon® Scalable CPU at FP32 precision.

</div>

| Model | Format | Status | Size (MB) | metrics/mAP50-95(B) | Inference time (ms/im) |
| ------- | ----------- | ------ | --------- | ------------------- | ---------------------- |
| YOLOv8n | PyTorch | ✅ | 6.2 | 0.3709 | 24.36 |
| YOLOv8n | TorchScript | ✅ | 12.4 | 0.3704 | 23.93 |
| YOLOv8n | ONNX | ✅ | 12.2 | 0.3704 | 39.86 |
| YOLOv8n | OpenVINO | ✅ | 12.3 | 0.3704 | 11.34 |
| YOLOv8s | PyTorch | ✅ | 21.5 | 0.4471 | 33.77 |
| YOLOv8s | TorchScript | ✅ | 42.9 | 0.4472 | 34.84 |
| YOLOv8s | ONNX | ✅ | 42.8 | 0.4472 | 43.23 |
| YOLOv8s | OpenVINO | ✅ | 42.9 | 0.4471 | 13.86 |
| YOLOv8m | PyTorch | ✅ | 49.7 | 0.5013 | 53.91 |
| YOLOv8m | TorchScript | ✅ | 99.2 | 0.4999 | 53.51 |
| YOLOv8m | ONNX | ✅ | 99.0 | 0.4999 | 64.16 |
| YOLOv8m | OpenVINO | ✅ | 99.1 | 0.4996 | 28.79 |
| YOLOv8l | PyTorch | ✅ | 83.7 | 0.5293 | 75.78 |
| YOLOv8l | TorchScript | ✅ | 167.2 | 0.5268 | 79.13 |
| YOLOv8l | ONNX | ✅ | 166.8 | 0.5268 | 88.45 |
| YOLOv8l | OpenVINO | ✅ | 167.0 | 0.5263 | 56.23 |
| YOLOv8x | PyTorch | ✅ | 130.5 | 0.5404 | 96.60 |
| YOLOv8x | TorchScript | ✅ | 260.7 | 0.5371 | 114.28 |
| YOLOv8x | ONNX | ✅ | 260.4 | 0.5371 | 111.02 |
| YOLOv8x | OpenVINO | ✅ | 260.6 | 0.5371 | 83.28 |

### Intel Core CPU
@@ -225,27 +225,27 @@ Benchmarks below run on 13th Gen Intel® Core® i7-13700H CPU at FP32 precision.

</div>

| Model | Format | Status | Size (MB) | metrics/mAP50-95(B) | Inference time (ms/im) |
| ------- | ----------- | ------ | --------- | ------------------- | ---------------------- |
| YOLOv8n | PyTorch | ✅ | 6.2 | 0.4478 | 104.61 |
| YOLOv8n | TorchScript | ✅ | 12.4 | 0.4525 | 112.39 |
| YOLOv8n | ONNX | ✅ | 12.2 | 0.4525 | 28.02 |
| YOLOv8n | OpenVINO | ✅ | 12.3 | 0.4504 | 23.53 |
| YOLOv8s | PyTorch | ✅ | 21.5 | 0.5885 | 194.83 |
| YOLOv8s | TorchScript | ✅ | 43.0 | 0.5962 | 202.01 |
| YOLOv8s | ONNX | ✅ | 42.8 | 0.5962 | 65.74 |
| YOLOv8s | OpenVINO | ✅ | 42.9 | 0.5966 | 38.66 |
| YOLOv8m | PyTorch | ✅ | 49.7 | 0.6101 | 355.23 |
| YOLOv8m | TorchScript | ✅ | 99.2 | 0.6120 | 424.78 |
| YOLOv8m | ONNX | ✅ | 99.0 | 0.6120 | 173.39 |
| YOLOv8m | OpenVINO | ✅ | 99.1 | 0.6091 | 69.80 |
| YOLOv8l | PyTorch | ✅ | 83.7 | 0.6591 | 593.00 |
| YOLOv8l | TorchScript | ✅ | 167.2 | 0.6580 | 697.54 |
| YOLOv8l | ONNX | ✅ | 166.8 | 0.6580 | 342.15 |
| YOLOv8l | OpenVINO | ✅ | 167.0 | 0.0708 | 117.69 |
| YOLOv8x | PyTorch | ✅ | 130.5 | 0.6651 | 804.65 |
| YOLOv8x | TorchScript | ✅ | 260.8 | 0.6650 | 921.46 |
| YOLOv8x | ONNX | ✅ | 260.4 | 0.6650 | 526.66 |
| YOLOv8x | OpenVINO | ✅ | 260.6 | 0.6619 | 158.73 |

## Reproduce Our Results
@@ -16,7 +16,7 @@ The ability to export to PaddlePaddle model format allows you to optimize your [

    <img width="75%" src="https://github.com/PaddlePaddle/Paddle/blob/develop/doc/imgs/logo.png?raw=true" alt="PaddlePaddle Logo">
</p>

Developed by Baidu, [PaddlePaddle](https://www.paddlepaddle.org.cn/en) (**PA**rallel **D**istributed **D**eep **LE**arning) is China's first open-source deep learning platform. Unlike some frameworks built mainly for research, PaddlePaddle prioritizes ease of use and smooth integration across industries.

It offers tools and resources similar to popular frameworks like TensorFlow and PyTorch, making it accessible for developers of all experience levels. From farming and factories to service businesses, PaddlePaddle's large developer community of over 4.77 million is helping create and deploy AI applications.
@@ -44,7 +44,7 @@ PaddlePaddle provides a range of options, each offering a distinct balance of ea

- **Paddle Lite**: Paddle Lite is designed for deployment on mobile and embedded devices where resources are limited. It optimizes models for smaller sizes and faster inference on ARM CPUs, GPUs, and other specialized hardware.

- **Paddle.js**: Paddle.js enables you to deploy PaddlePaddle models directly within web browsers. Paddle.js can either load a pre-trained model or transform a model from [paddle-hub](https://github.com/PaddlePaddle/PaddleHub) with model transforming tools provided by Paddle.js. It can run in browsers that support WebGL/WebGPU/WebAssembly.

## Export to PaddlePaddle: Converting Your YOLOv8 Model
@@ -57,7 +57,7 @@ To install the required package, run:

!!! Tip "Installation"

    === "CLI"

        ```bash
        # Install the required package for YOLOv8
        pip install ultralytics
        ```
@@ -61,7 +61,7 @@ To install the required packages, run:

The `tune()` method in YOLOv8 provides an easy-to-use interface for hyperparameter tuning with Ray Tune. It accepts several arguments that allow you to customize the tuning process. Below is a detailed explanation of each parameter:

| Parameter | Type | Description | Default Value |
| --------------- | ---------------- | ----------- | ------------- |
| `data` | `str` | The dataset configuration file (in YAML format) to run the tuner on. This file should specify the training and validation data paths, as well as other dataset-specific settings. | |
| `space` | `dict, optional` | A dictionary defining the hyperparameter search space for Ray Tune. Each key corresponds to a hyperparameter name, and the value specifies the range of values to explore during tuning. If not provided, YOLOv8 uses a default search space with various hyperparameters. | |
| `grace_period` | `int, optional` | The grace period in epochs for the [ASHA scheduler](https://docs.ray.io/en/latest/tune/api/schedulers.html) in Ray Tune. The scheduler will not terminate any trial before this number of epochs, allowing the model to have some minimum training before making a decision on early stopping. | 10 |
@@ -76,7 +76,7 @@ By customizing these parameters, you can fine-tune the hyperparameter optimizati

The following table lists the default search space parameters for hyperparameter tuning in YOLOv8 with Ray Tune. Each parameter has a specific value range defined by `tune.uniform()`.

| Parameter  | Value Range                | Description                |
| ---------- | -------------------------- | -------------------------- |
| `lr0`      | `tune.uniform(1e-5, 1e-1)` | Initial learning rate      |
| `lrf`      | `tune.uniform(0.01, 1.0)`  | Final learning rate factor |
| `momentum` | `tune.uniform(0.6, 0.98)`  | Momentum                   |
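
Conceptually, each `tune.uniform(low, high)` entry is a sampler over that range, and every trial draws one value per hyperparameter. A pure-Python stand-in using `random.uniform` (not Ray Tune itself) sketches how a trial's configuration would be drawn:

```python
import random

# Stand-in for part of the default search space; random.uniform mimics tune.uniform.
SEARCH_SPACE = {
    "lr0": (1e-5, 1e-1),      # initial learning rate
    "lrf": (0.01, 1.0),       # final learning rate factor
    "momentum": (0.6, 0.98),  # momentum
}


def sample_trial(space, rng=random):
    """Draw one hyperparameter configuration from the search space."""
    return {name: rng.uniform(low, high) for name, (low, high) in space.items()}


trial = sample_trial(SEARCH_SPACE)
```

Ray Tune's schedulers then allocate training budget across many such sampled trials.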
@@ -65,7 +65,7 @@ To install the required package, run:

!!! Tip "Installation"

    === "CLI"

        ```bash
        # Install the required package for YOLOv8
        pip install ultralytics
        ```
@@ -139,7 +139,7 @@ The arguments provided when using [export](../modes/export.md) for an Ultralytic

!!! note

    During calibration, twice the `batch` size provided will be used. Using small batches can lead to inaccurate scaling during calibration. This is because the process adjusts based on the data it sees. Small batches might not capture the full range of values, leading to issues with the final calibration, so the `batch` size is doubled automatically. If no batch size is specified, `batch=1` is assumed and calibration runs at `batch=1 * 2` to reduce calibration scaling errors.
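
The doubling described in the note is simple to state explicitly. A sketch of the effective calibration batch size (illustrative, not the exporter's actual code):

```python
def calibration_batch(batch=1):
    """Effective batch size used during INT8 calibration: twice the export batch."""
    return 2 * batch


# batch=1 (the default) calibrates at 2; batch=8 calibrates at 16.
```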

Experimentation by NVIDIA led them to recommend using at least 500 calibration images that are representative of the data for your model, with INT8 quantization calibration. This is a guideline and not a _hard_ requirement, and <u>**you will need to experiment with what is required to perform well for your dataset**</u>. Since calibration data is required for INT8 calibration with TensorRT, make certain to use the `data` argument when `int8=True` for TensorRT and use `data="my_dataset.yaml"`, which will use the images from [validation](../modes/val.md) to calibrate with. When no value is passed for `data` with export to TensorRT with INT8 quantization, the default will be to use one of the ["small" example datasets based on the model task](../datasets/index.md) instead of throwing an error.
@@ -166,13 +166,13 @@ Experimentation by NVIDIA led them to recommend using at least 500 calibration i

# Run inference
result = model.predict("https://ultralytics.com/images/bus.jpg")
```

1. Exports with dynamic axes; this is enabled by default when exporting with `int8=True` even when not explicitly set. See [export arguments](../modes/export.md#arguments) for additional information.
2. Sets a max batch size of 8 for the exported model, which calibrates with `batch = 2 * 8` to avoid scaling errors during calibration.
3. Allocates 4 GiB of memory instead of allocating the entire device for the conversion process.
4. Uses the [COCO dataset](../datasets/detect/coco.md) for calibration, specifically the images used for [validation](../modes/val.md) (5,000 total).

=== "CLI"

    ```bash
@@ -219,7 +219,7 @@ Experimentation by NVIDIA led them to recommend using at least 500 calibration i

See [Detection Docs](../tasks/detect.md) for usage examples with these models trained on [COCO](../datasets/detect/coco.md), which include 80 pre-trained classes.

!!! note

    Inference times shown for `mean`, `min` (fastest), and `max` (slowest) for each test using pre-trained weights `yolov8n.engine`

| Precision | Eval test | mean<br>(ms) | min \| max<br>(ms) | mAP<sup>val<br>50(B) | mAP<sup>val<br>50-95(B) | `batch` | size<br><sup>(pixels) |
@@ -234,8 +234,8 @@ Experimentation by NVIDIA led them to recommend using at least 500 calibration i

=== "Segmentation (COCO)"

See [Segmentation Docs](../tasks/segment.md) for usage examples with these models trained on [COCO](../datasets/segment/coco.md), which include 80 pre-trained classes.

!!! note

    Inference times shown for `mean`, `min` (fastest), and `max` (slowest) for each test using pre-trained weights `yolov8n-seg.engine`

| Precision | Eval test | mean<br>(ms) | min \| max<br>(ms) | mAP<sup>val<br>50(B) | mAP<sup>val<br>50-95(B) | mAP<sup>val<br>50(M) | mAP<sup>val<br>50-95(M) | `batch` | size<br><sup>(pixels) |
@@ -251,7 +251,7 @@ Experimentation by NVIDIA led them to recommend using at least 500 calibration i

See [Classification Docs](../tasks/classify.md) for usage examples with these models trained on [ImageNet](../datasets/classify/imagenet.md), which include 1000 pre-trained classes.

!!! note

    Inference times shown for `mean`, `min` (fastest), and `max` (slowest) for each test using pre-trained weights `yolov8n-cls.engine`

| Precision | Eval test | mean<br>(ms) | min \| max<br>(ms) | top-1 | top-5 | `batch` | size<br><sup>(pixels) |
@@ -267,7 +267,7 @@ Experimentation by NVIDIA led them to recommend using at least 500 calibration i

See [Pose Estimation Docs](../tasks/pose.md) for usage examples with these models trained on [COCO](../datasets/pose/coco.md), which include 1 pre-trained class, "person".

!!! note

    Inference times shown for `mean`, `min` (fastest), and `max` (slowest) for each test using pre-trained weights `yolov8n-pose.engine`

| Precision | Eval test | mean<br>(ms) | min \| max<br>(ms) | mAP<sup>val<br>50(B) | mAP<sup>val<br>50-95(B) | mAP<sup>val<br>50(P) | mAP<sup>val<br>50-95(P) | `batch` | size<br><sup>(pixels) |
@@ -283,7 +283,7 @@ Experimentation by NVIDIA led them to recommend using at least 500 calibration i

See [Oriented Detection Docs](../tasks/obb.md) for usage examples with these models trained on [DOTAv1](../datasets/obb/dota-v2.md#dota-v10), which include 15 pre-trained classes.

!!! note

    Inference times shown for `mean`, `min` (fastest), and `max` (slowest) for each test using pre-trained weights `yolov8n-obb.engine`

| Precision | Eval test | mean<br>(ms) | min \| max<br>(ms) | mAP<sup>val<br>50(B) | mAP<sup>val<br>50-95(B) | `batch` | size<br><sup>(pixels) |
@@ -303,7 +303,7 @@ Experimentation by NVIDIA led them to recommend using at least 500 calibration i

Tested with Windows 10.0.19045, `python 3.10.9`, `ultralytics==8.2.4`, `tensorrt==10.0.0b6`

!!! note

    Inference times shown for `mean`, `min` (fastest), and `max` (slowest) for each test using pre-trained weights `yolov8n.engine`

| Precision | Eval test | mean<br>(ms) | min \| max<br>(ms) | mAP<sup>val<br>50(B) | mAP<sup>val<br>50-95(B) | `batch` | size<br><sup>(pixels) |
@@ -318,8 +318,8 @@ Experimentation by NVIDIA led them to recommend using at least 500 calibration i

=== "RTX 3060 12 GB"

Tested with Windows 10.0.22631, `python 3.11.9`, `ultralytics==8.2.4`, `tensorrt==10.0.1`

!!! note

    Inference times shown for `mean`, `min` (fastest), and `max` (slowest) for each test using pre-trained weights `yolov8n.engine`
@@ -336,7 +336,7 @@ Experimentation by NVIDIA led them to recommend using at least 500 calibration i

Tested with Pop!_OS 22.04 LTS, `python 3.10.12`, `ultralytics==8.2.4`, `tensorrt==8.6.1.post1`

!!! note

    Inference times shown for `mean`, `min` (fastest), and `max` (slowest) for each test using pre-trained weights `yolov8n.engine`

| Precision | Eval test | mean<br>(ms) | min \| max<br>(ms) | mAP<sup>val<br>50(B) | mAP<sup>val<br>50-95(B) | `batch` | size<br><sup>(pixels) |
@@ -356,7 +356,7 @@ Experimentation by NVIDIA led them to recommend using at least 500 calibration i

Tested with JetPack 6.0 (L4T 36.3) Ubuntu 22.04.4 LTS, `python 3.10.12`, `ultralytics==8.2.16`, `tensorrt==10.0.1`

!!! note

    Inference times shown for `mean`, `min` (fastest), and `max` (slowest) for each test using pre-trained weights `yolov8n.engine`

| Precision | Eval test | mean<br>(ms) | min \| max<br>(ms) | mAP<sup>val<br>50(B) | mAP<sup>val<br>50-95(B) | `batch` | size<br><sup>(pixels) |
@@ -61,7 +61,7 @@ To install the required package, run:

!!! Tip "Installation"

    === "CLI"

        ```bash
        # Install the required package for YOLOv8
        pip install ultralytics
@@ -55,7 +55,7 @@ To install the required package, run:

!!! Tip "Installation"

    === "CLI"

        ```bash
        # Install the required package for YOLOv8
        pip install ultralytics
@@ -53,7 +53,7 @@ To install the required package, run:

!!! Tip "Installation"

    === "CLI"

        ```bash
        # Install the required package for YOLOv8
        pip install ultralytics
@@ -96,7 +96,7 @@ Before diving into the usage instructions, it's important to note that while all

```bash
# Export a YOLOv8n PyTorch model to TFLite format
yolo export model=yolov8n.pt format=tflite # creates 'yolov8n_float32.tflite'

# Run inference with the exported model
yolo predict model='yolov8n_float32.tflite' source='https://ultralytics.com/images/bus.jpg'
```
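For readers following the docs in Python rather than the CLI, a rough equivalent of the two commands above can be sketched as follows. This is an illustration rather than part of the diff: it assumes `ultralytics` (and its TensorFlow export dependencies) are installed, and the calls are wrapped so the sketch degrades gracefully where the package or a network connection is unavailable.

```python
# Sketch of the CLI commands above via the Python API (assumes
# `pip install ultralytics`; guarded so it is safe to run anywhere).
try:
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")
    tflite_path = model.export(format="tflite")  # e.g. 'yolov8n_float32.tflite'
    results = YOLO(tflite_path)("https://ultralytics.com/images/bus.jpg")
except Exception:  # ultralytics/TensorFlow missing, no network, etc.
    tflite_path, results = None, None
```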
@@ -63,7 +63,7 @@ To install the required package, run:

!!! Tip "Installation"

    === "CLI"

        ```bash
        # Install the required package for YOLOv8
        pip install ultralytics
@@ -128,7 +128,7 @@ After running the usage code snippet, you can access the Weights & Biases (W&B)

- **Real-Time Metrics Tracking**: Observe metrics like loss, accuracy, and validation scores as they evolve during training, offering immediate insights for model tuning.

<div style="text-align:center;"><blockquote class="imgur-embed-pub" lang="en" data-id="a/TB76U9O"><a href="//imgur.com/D6NVnmN">Take a look at how the experiments are tracked using Weights & Biases.</a></blockquote></div><script async src="//s.imgur.com/min/embed.js" charset="utf-8"></script>

- **Hyperparameter Optimization**: Weights & Biases aids in fine-tuning critical parameters such as learning rate, batch size, and more, enhancing the performance of YOLOv8.
@@ -136,7 +136,7 @@ After running the usage code snippet, you can access the Weights & Biases (W&B)

- **Visualization of Training Progress**: Graphical representations of key metrics provide an intuitive understanding of the model's performance across epochs.

<div style="text-align:center;"><blockquote class="imgur-embed-pub" lang="en" data-id="a/kU5h7W4" data-context="false"><a href="//imgur.com/a/kU5h7W4">Take a look at how Weights & Biases helps you visualize validation results.</a></blockquote></div><script async src="//s.imgur.com/min/embed.js" charset="utf-8"></script>

- **Resource Monitoring**: Keep track of CPU, GPU, and memory usage to optimize the efficiency of the training process.
@@ -144,7 +144,7 @@ After running the usage code snippet, you can access the Weights & Biases (W&B)

- **Viewing Inference Results with Image Overlay**: Visualize prediction results on images using interactive overlays in Weights & Biases, providing a clear and detailed view of model performance on real-world data. For more detailed information on Weights & Biases' image overlay capabilities, check out this [link](https://docs.wandb.ai/guides/track/log/media#image-overlays).

<div style="text-align:center;"><blockquote class="imgur-embed-pub" lang="en" data-id="a/UTSiufs" data-context="false"><a href="//imgur.com/a/UTSiufs">Take a look at how Weights & Biases' image overlays help visualize model inferences.</a></blockquote></div><script async src="//s.imgur.com/min/embed.js" charset="utf-8"></script>

By using these features, you can effectively track, analyze, and optimize your YOLOv8 model's training, ensuring the best possible performance and efficiency.