Docs improvements and redirect fixes (#16287)

Signed-off-by: UltralyticsAssistant <web@ultralytics.com>
Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
Glenn Jocher 2024-09-15 00:27:46 +02:00 committed by GitHub
parent 02e995383d
commit 887b46216c
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
38 changed files with 82 additions and 85 deletions


@@ -46,7 +46,7 @@ Here are some of the standout functionalities:
## Usage Examples
Export a YOLOv8n model to a different format like ONNX or TensorRT. See Arguments section below for a full list of export arguments.
Export a YOLOv8n model to a different format like ONNX or TensorRT. See the Arguments section below for a full list of export arguments.
!!! example
@@ -112,7 +112,7 @@ Exporting a YOLOv8 model to ONNX format is straightforward with Ultralytics. It
yolo export model=path/to/best.pt format=onnx # export custom trained model
```
For more details on the process, including advanced options like handling different input sizes, refer to the [ONNX](../integrations/onnx.md) section.
For more details on the process, including advanced options like handling different input sizes, refer to the [ONNX section](../integrations/onnx.md).
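For reference alongside the CLI call above, here is a minimal Python sketch of the same ONNX export (the model path is a placeholder, not taken from the original page):

```python
from ultralytics import YOLO

# Load a custom-trained model (placeholder path)
model = YOLO("path/to/best.pt")

# Export to ONNX; export() returns the path of the exported file
onnx_path = model.export(format="onnx")
```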
### What are the benefits of using TensorRT for model export?
@@ -122,7 +122,7 @@ Using TensorRT for model export offers significant performance improvements. YOL
- **Speed:** Achieve faster inference through advanced optimizations.
- **Compatibility:** Integrate smoothly with NVIDIA hardware.
To learn more about integrating TensorRT, see the [TensorRT](../integrations/tensorrt.md) integration guide.
To learn more about integrating TensorRT, see the [TensorRT integration guide](../integrations/tensorrt.md).
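As a rough sketch of what a TensorRT export looks like through the Python API (the FP16 flag and GPU device index are assumptions, and an NVIDIA GPU with TensorRT installed is required):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Export to a TensorRT engine; half=True requests FP16 precision (assumed settings)
engine_path = model.export(format="engine", half=True, device=0)
```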
### How do I enable INT8 quantization when exporting my YOLOv8 model?
@@ -145,7 +145,7 @@ INT8 quantization is an excellent way to compress the model and speed up inferen
yolo export model=yolov8n.pt format=onnx int8=True # export model with INT8 quantization
```
INT8 quantization can be applied to various formats, such as TensorRT and CoreML. More details can be found in the [Export](../modes/export.md) section.
INT8 quantization can be applied to various formats, such as TensorRT and CoreML. More details can be found in the [Export section](../modes/export.md).
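The CLI call above has a Python counterpart; a minimal sketch (whether INT8 calibration actually applies depends on the target format):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Mirror the CLI example: request INT8 quantization during export
model.export(format="onnx", int8=True)
```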
### Why is dynamic input size important when exporting models?
@@ -182,4 +182,4 @@ Understanding and configuring export arguments is crucial for optimizing model p
- **`optimize:`** Applies specific optimizations for mobile or constrained environments.
- **`int8:`** Enables INT8 quantization, highly beneficial for edge deployments.
For a detailed list and explanations of all the export arguments, visit the [Export Arguments](#arguments) section.
For a detailed list and explanations of all the export arguments, visit the [Export Arguments section](#arguments).
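To make the `optimize` argument concrete, a small illustrative sketch (the TorchScript format choice and image size are assumptions, not part of the original page):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# 'optimize' applies mobile-oriented optimizations (shown here with TorchScript)
model.export(format="torchscript", optimize=True, imgsz=640)
```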


@@ -68,7 +68,7 @@ Track mode is used for tracking objects in real-time using a YOLOv8 model. In th
## [Benchmark](benchmark.md)
Benchmark mode is used to profile the speed and accuracy of various export formats for YOLOv8. The benchmarks provide information on the size of the exported format, its `mAP50-95` metrics (for object detection, segmentation and pose) or `accuracy_top5` metrics (for classification), and the inference time in milliseconds per image across various export formats like ONNX, OpenVINO, TensorRT and others. This information can help users choose the optimal export format for their specific use case based on their requirements for speed and accuracy.
Benchmark mode is used to profile the speed and accuracy of various export formats for YOLOv8. The benchmarks provide information on the size of the exported format, its `mAP50-95` metrics (for object detection, segmentation, and pose) or `accuracy_top5` metrics (for classification), and the inference time in milliseconds per image across various formats like ONNX, OpenVINO, TensorRT, and others. This information can help users choose the optimal export format for their specific use case based on their requirements for speed and accuracy.
[Benchmark Examples](benchmark.md){ .md-button }
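A minimal sketch of running Benchmark mode from Python, assuming the `benchmark` helper in `ultralytics.utils.benchmarks` (dataset and image size are illustrative):

```python
from ultralytics.utils.benchmarks import benchmark

# Profile yolov8n.pt across export formats on COCO8 (illustrative settings)
benchmark(model="yolov8n.pt", data="coco8.yaml", imgsz=640, half=False)
```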


@@ -123,14 +123,14 @@ To enable training on Apple M1 and M2 chips, you should specify 'mps' as your de
# Load a model
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
# Train the model with 2 GPUs
# Train the model with MPS
results = model.train(data="coco8.yaml", epochs=100, imgsz=640, device="mps")
```
=== "CLI"
```bash
# Start training from a pretrained *.pt model using GPUs 0 and 1
# Start training from a pretrained *.pt model using MPS
yolo detect train data=coco8.yaml model=yolov8n.pt epochs=100 imgsz=640 device=mps
```
@@ -169,7 +169,7 @@ Below is an example of how to resume an interrupted training using Python and vi
By setting `resume=True`, the `train` function will continue training from where it left off, using the state stored in the 'path/to/last.pt' file. If the `resume` argument is omitted or set to `False`, the `train` function will start a new training session.
Remember that checkpoints are saved at the end of every epoch by default, or at fixed interval using the `save_period` argument, so you must complete at least 1 epoch to resume a training run.
Remember that checkpoints are saved at the end of every epoch by default, or at fixed intervals using the `save_period` argument, so you must complete at least 1 epoch to resume a training run.
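A minimal sketch of resuming from the last checkpoint (the checkpoint path is a placeholder):

```python
from ultralytics import YOLO

# Load the partially trained checkpoint (placeholder path)
model = YOLO("path/to/last.pt")

# Continue training from the stored epoch, optimizer, and scheduler state
results = model.train(resume=True)
```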
## Train Settings


@@ -47,7 +47,7 @@ These are the notable functionalities offered by YOLOv8's Val mode:
## Usage Examples
Validate trained YOLOv8n model accuracy on the COCO8 dataset. No argument need to passed as the `model` retains its training `data` and arguments as model attributes. See Arguments section below for a full list of export arguments.
Validate trained YOLOv8n model accuracy on the COCO8 dataset. No arguments are needed as the `model` retains its training `data` and arguments as model attributes. See Arguments section below for a full list of export arguments.
!!! example
@@ -165,7 +165,7 @@ These benefits ensure that your models are evaluated thoroughly and can be optim
### Can I validate my YOLOv8 model using a custom dataset?
Yes, you can validate your YOLOv8 model using a custom dataset. Specify the `data` argument with the path to your dataset configuration file. This file should include paths to the validation data, class names, and other relevant details.
Yes, you can validate your YOLOv8 model using a [custom dataset](https://docs.ultralytics.com/datasets/). Specify the `data` argument with the path to your dataset configuration file. This file should include paths to the validation data, class names, and other relevant details.
Example in Python:
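The page's own example is not shown in this excerpt; a minimal sketch, assuming trained weights and a dataset YAML at placeholder paths:

```python
from ultralytics import YOLO

# Trained weights and custom dataset YAML are placeholder paths
model = YOLO("path/to/best.pt")
metrics = model.val(data="path/to/custom_dataset.yaml")

print(metrics.box.map)  # mAP50-95 on the custom validation split
```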