Fixed OpenVINO Docs formatting (#14773)
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
parent 2ebd01faa8
commit 3b1517f589
8 changed files with 98 additions and 96 deletions
@@ -64,6 +64,8 @@ Export a YOLOv8n model to OpenVINO format and run inference with the exported mo
| Argument | Default      | Description                                           |
| -------- | ------------ | ----------------------------------------------------- |
| `format` | `'openvino'` | format to export to                                    |
| `imgsz`  | `640`        | image size as scalar or (h, w) list, i.e. (640, 480)   |
| `half`   | `False`      | FP16 quantization                                      |
| `int8`   | `False`      | INT8 quantization                                      |
| `batch`  | `1`          | batch size for inference                               |
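For orientation, here is a minimal sketch of passing these arguments together during export. The values mirror the defaults in the table above; treat the combination as illustrative rather than a recommended configuration.

```python
from ultralytics import YOLO

# Load a YOLOv8n PyTorch model
model = YOLO("yolov8n.pt")

# Export to OpenVINO with the arguments documented above
# (defaults shown; set half=True for FP16 or int8=True for INT8 quantization)
model.export(format="openvino", imgsz=640, half=False, int8=False, batch=1)
```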
## Benefits of OpenVINO
@@ -262,14 +264,14 @@ To reproduce the Ultralytics benchmarks above on all export [formats](../modes/e
    # Load a YOLOv8n PyTorch model
    model = YOLO("yolov8n.pt")

    # Benchmark YOLOv8n speed and accuracy on the COCO8 dataset for all export formats
    results = model.benchmarks(data="coco8.yaml")
    ```

=== "CLI"

    ```bash
    # Benchmark YOLOv8n speed and accuracy on the COCO8 dataset for all export formats
    yolo benchmark model=yolov8n.pt data=coco8.yaml
    ```
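Beyond the lines this commit touches, recent ultralytics releases also expose a standalone benchmarking helper; the import path and keyword arguments below are assumptions based on current versions, so verify them against your installed release.

```python
from ultralytics.utils.benchmarks import benchmark

# Benchmark a single weights file on COCO8 (arguments assumed; check your version)
benchmark(model="yolov8n.pt", data="coco8.yaml", imgsz=640, half=False, device="cpu")
```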
@@ -295,22 +297,22 @@ Exporting YOLOv8 models to the OpenVINO format can significantly enhance CPU spe
=== "Python"

    ```python
    from ultralytics import YOLO

    # Load a YOLOv8n PyTorch model
    model = YOLO("yolov8n.pt")

    # Export the model
    model.export(format="openvino")  # creates 'yolov8n_openvino_model/'
    ```

=== "CLI"

    ```bash
    # Export a YOLOv8n PyTorch model to OpenVINO format
    yolo export model=yolov8n.pt format=openvino # creates 'yolov8n_openvino_model/'
    ```

For more information, refer to the [export formats documentation](../modes/export.md).
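As an illustrative aside, a minimal sketch for confirming what the export produced. The directory name comes from the comments above; the individual files inside (OpenVINO IR `.xml`/`.bin` plus metadata) can vary by release, so the script simply lists whatever is there.

```python
from pathlib import Path

# List the contents of the exported OpenVINO package
export_dir = Path("yolov8n_openvino_model")
for item in sorted(export_dir.iterdir()):
    print(item.name)
```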
@@ -333,22 +335,22 @@ After exporting a YOLOv8 model to OpenVINO format, you can run inference using P
=== "Python"

    ```python
    from ultralytics import YOLO

    # Load the exported OpenVINO model
    ov_model = YOLO("yolov8n_openvino_model/")

    # Run inference
    results = ov_model("https://ultralytics.com/images/bus.jpg")
    ```

=== "CLI"

    ```bash
    # Run inference with the exported model
    yolo predict model=yolov8n_openvino_model source='https://ultralytics.com/images/bus.jpg'
    ```

Refer to our [predict mode documentation](../modes/predict.md) for more details.
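A short follow-on sketch, not part of this diff, showing how the predictions can be read back out. The `Results`/`Boxes` attributes used here (`xyxy`, `conf`, `cls`) reflect the current ultralytics API and should be checked against your installed version.

```python
from ultralytics import YOLO

# Reload the exported model and repeat the prediction from the example above
ov_model = YOLO("yolov8n_openvino_model/")
results = ov_model("https://ultralytics.com/images/bus.jpg")

# Print coordinates, confidence, and class id for each detected box
for box in results[0].boxes:
    print(box.xyxy.tolist(), float(box.conf), int(box.cls))
```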
@@ -370,21 +372,21 @@ Yes, you can benchmark YOLOv8 models in various formats including PyTorch, Torch
=== "Python"

    ```python
    from ultralytics import YOLO

    # Load a YOLOv8n PyTorch model
    model = YOLO("yolov8n.pt")

    # Benchmark YOLOv8n speed and accuracy on the COCO8 dataset for all export formats
    results = model.benchmarks(data="coco8.yaml")
    ```

=== "CLI"

    ```bash
    # Benchmark YOLOv8n speed and accuracy on the COCO8 dataset for all export formats
    yolo benchmark model=yolov8n.pt data=coco8.yaml
    ```

For detailed benchmark results, refer to our [benchmarks section](#openvino-yolov8-benchmarks) and [export formats](../modes/export.md) documentation.
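One more hedged sketch beyond what this commit changes: if only the OpenVINO export matters, it can be validated directly instead of benchmarking every format. `model.val()` on an exported directory is part of the ultralytics API, though the metrics object's attributes may differ between releases.

```python
from ultralytics import YOLO

# Load the exported OpenVINO model and evaluate it on COCO8
ov_model = YOLO("yolov8n_openvino_model/")
metrics = ov_model.val(data="coco8.yaml")

# mAP50-95 for detection; attribute name assumed from current releases
print(metrics.box.map)
```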