Docs improvements and redirect fixes (#16287)
Signed-off-by: UltralyticsAssistant <web@ultralytics.com>
Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
This commit is contained in:
parent 02e995383d
commit 887b46216c
38 changed files with 82 additions and 85 deletions
@@ -46,7 +46,7 @@ Here are some of the standout functionalities:

 ## Usage Examples

-Export a YOLOv8n model to a different format like ONNX or TensorRT. See Arguments section below for a full list of export arguments.
+Export a YOLOv8n model to a different format like ONNX or TensorRT. See the Arguments section below for a full list of export arguments.

 !!! example
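For reviewers who want to try the section this hunk edits, the docs' CLI call has a Python equivalent; a minimal sketch, assuming the `ultralytics` package is installed and that the `yolov8n.pt` weights download on first use:

```python
from ultralytics import YOLO

# Load an official YOLOv8n model (weights are fetched automatically)
model = YOLO("yolov8n.pt")

# Export to ONNX; export() returns the path of the exported file
onnx_path = model.export(format="onnx")
print(onnx_path)
```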
@@ -112,7 +112,7 @@ Exporting a YOLOv8 model to ONNX format is straightforward with Ultralytics. It
 yolo export model=path/to/best.pt format=onnx # export custom trained model
 ```

-For more details on the process, including advanced options like handling different input sizes, refer to the [ONNX](../integrations/onnx.md) section.
+For more details on the process, including advanced options like handling different input sizes, refer to the [ONNX section](../integrations/onnx.md).

 ### What are the benefits of using TensorRT for model export?
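The "handling different input sizes" note in this hunk corresponds to the `dynamic` export argument; a short sketch, again assuming a standard `ultralytics` install:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# dynamic=True marks the ONNX input/output axes as dynamic, so one
# exported model can accept different image sizes at inference time
model.export(format="onnx", dynamic=True)
```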
@@ -122,7 +122,7 @@ Using TensorRT for model export offers significant performance improvements. YOL
 - **Speed:** Achieve faster inference through advanced optimizations.
 - **Compatibility:** Integrate smoothly with NVIDIA hardware.

-To learn more about integrating TensorRT, see the [TensorRT](../integrations/tensorrt.md) integration guide.
+To learn more about integrating TensorRT, see the [TensorRT integration guide](../integrations/tensorrt.md).

 ### How do I enable INT8 quantization when exporting my YOLOv8 model?
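For context, the TensorRT path this hunk links to is driven by `format="engine"`; a minimal sketch, assuming an NVIDIA GPU with TensorRT installed:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# format="engine" builds a TensorRT engine; half=True requests FP16
# precision, which accounts for much of the inference speedup
model.export(format="engine", half=True)
```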
@@ -145,7 +145,7 @@ INT8 quantization is an excellent way to compress the model and speed up inferen
 yolo export model=yolov8n.pt format=onnx int8=True # export model with INT8 quantization
 ```

-INT8 quantization can be applied to various formats, such as TensorRT and CoreML. More details can be found in the [Export](../modes/export.md) section.
+INT8 quantization can be applied to various formats, such as TensorRT and CoreML. More details can be found in the [Export section](../modes/export.md).

 ### Why is dynamic input size important when exporting models?
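The CLI call in this hunk maps one-to-one onto the Python API; a sketch mirroring it, under the same `ultralytics` install assumption as above:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# int8=True enables INT8 quantization, the same flag as the CLI example above
model.export(format="onnx", int8=True)
```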
@@ -182,4 +182,4 @@ Understanding and configuring export arguments is crucial for optimizing model p
 - **`optimize:`** Applies specific optimizations for mobile or constrained environments.
 - **`int8:`** Enables INT8 quantization, highly beneficial for edge deployments.

-For a detailed list and explanations of all the export arguments, visit the [Export Arguments](#arguments) section.
+For a detailed list and explanations of all the export arguments, visit the [Export Arguments section](#arguments).
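The bullets in this hunk correspond to `export()` keyword arguments; as one example of the `optimize` flag, a sketch under the assumption that it targets TorchScript mobile optimization:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# optimize=True applies mobile-oriented TorchScript optimizations;
# it pairs with format="torchscript" for on-device deployment
model.export(format="torchscript", optimize=True)
```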