Update to lowercase MkDocs admonitions (#15990)
Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
parent ce24c7273e
commit c2b647a768
133 changed files with 529 additions and 521 deletions
@@ -24,7 +24,7 @@ The output of an image classifier is a single class label and a confidence score
<strong>Watch:</strong> Explore Ultralytics YOLO Tasks: Image Classification using Ultralytics HUB
</p>

-!!! Tip "Tip"
+!!! tip

    YOLOv8 Classify models use the `-cls` suffix, i.e. `yolov8n-cls.pt` and are pretrained on [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/ImageNet.yaml).

@@ -49,7 +49,7 @@ YOLOv8 pretrained Classify models are shown here. Detect, Segment and Pose model

Train YOLOv8n-cls on the MNIST160 dataset for 100 epochs at image size 64. For a full list of available arguments see the [Configuration](../usage/cfg.md) page.

-!!! Example
+!!! example

=== "Python"
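
The diff's three-line context stops at the tab header, so the snippet itself is not shown; as a minimal sketch, a training call matching the prose above might look like this, assuming the standard `ultralytics` Python API and the `mnist160` dataset key:

```python
from ultralytics import YOLO

# Load a pretrained classification model
model = YOLO("yolov8n-cls.pt")

# Train on MNIST160 for 100 epochs at image size 64, per the text above
results = model.train(data="mnist160", epochs=100, imgsz=64)
```
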
@@ -86,7 +86,7 @@ YOLO classification dataset format can be found in detail in the [Dataset Guide]

Validate trained YOLOv8n-cls model accuracy on the MNIST160 dataset. No arguments need to be passed, as the `model` retains its training `data` and arguments as model attributes.

-!!! Example
+!!! example

=== "Python"
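
A hedged sketch of the validation step under this tab; the checkpoint path is a placeholder, and `top1`/`top5` are assumed accuracy attributes of the returned classification metrics:

```python
from ultralytics import YOLO

# Load a trained model; no data argument is needed, as noted above
model = YOLO("path/to/best.pt")  # placeholder checkpoint path

metrics = model.val()
print(metrics.top1, metrics.top5)  # assumed top-1 / top-5 accuracy fields
```
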
@@ -114,7 +114,7 @@ Validate trained YOLOv8n-cls model accuracy on the MNIST160 dataset. No argument

Use a trained YOLOv8n-cls model to run predictions on images.

-!!! Example
+!!! example

=== "Python"
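
Roughly what the prediction call looks like; the image URL is illustrative:

```python
from ultralytics import YOLO

model = YOLO("yolov8n-cls.pt")  # or a custom-trained checkpoint

# Run inference; each result carries per-class probabilities
results = model("https://ultralytics.com/images/bus.jpg")
print(results[0].probs)
```
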
@@ -142,7 +142,7 @@ See full `predict` mode details in the [Predict](../modes/predict.md) page.

Export a YOLOv8n-cls model to a different format like ONNX, CoreML, etc.

-!!! Example
+!!! example

=== "Python"
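
A minimal export sketch, with ONNX chosen from the formats named above:

```python
from ultralytics import YOLO

model = YOLO("yolov8n-cls.pt")

# Export to ONNX; other targets such as CoreML use format="coreml"
model.export(format="onnx")
```
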
@@ -180,7 +180,7 @@ YOLOv8 models, such as `yolov8n-cls.pt`, are designed for efficient image classi

To train a YOLOv8 model, you can use either Python or CLI commands. For example, to train a `yolov8n-cls` model on the MNIST160 dataset for 100 epochs at an image size of 64:

-!!! Example
+!!! example

=== "Python"
@@ -210,7 +210,7 @@ Pretrained YOLOv8 classification models can be found in the [Models](https://git

You can export a trained YOLOv8 model to various formats using Python or CLI commands. For instance, to export a model to ONNX format:

-!!! Example
+!!! example

=== "Python"
@@ -236,7 +236,7 @@ For detailed export options, refer to the [Export](../modes/export.md) page.

To validate a trained model's accuracy on a dataset like MNIST160, you can use the following Python or CLI commands:

-!!! Example
+!!! example

=== "Python"
@@ -23,7 +23,7 @@ The output of an object detector is a set of bounding boxes that enclose the obj
<strong>Watch:</strong> Object Detection with Pre-trained Ultralytics YOLOv8 Model.
</p>

-!!! Tip "Tip"
+!!! tip

    YOLOv8 Detect models are the default YOLOv8 models, i.e. `yolov8n.pt` and are pretrained on [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml).

@@ -48,7 +48,7 @@ YOLOv8 pretrained Detect models are shown here. Detect, Segment and Pose models

Train YOLOv8n on the COCO8 dataset for 100 epochs at image size 640. For a full list of available arguments see the [Configuration](../usage/cfg.md) page.

-!!! Example
+!!! example

=== "Python"
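
As a sketch, the training call for the settings named above (COCO8, 100 epochs, image size 640), assuming the standard `ultralytics` API:

```python
from ultralytics import YOLO

# Load a pretrained detection model
model = YOLO("yolov8n.pt")

# Train on COCO8 for 100 epochs at image size 640
results = model.train(data="coco8.yaml", epochs=100, imgsz=640)
```
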
@@ -85,7 +85,7 @@ YOLO detection dataset format can be found in detail in the [Dataset Guide](../d

Validate trained YOLOv8n model accuracy on the COCO8 dataset. No arguments need to be passed, as the `model` retains its training `data` and arguments as model attributes.

-!!! Example
+!!! example

=== "Python"
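
A minimal validation sketch; `metrics.box.map` is assumed to hold the mAP50-95 value referenced in the FAQ below:

```python
from ultralytics import YOLO

model = YOLO("path/to/best.pt")  # placeholder path to a trained detector

metrics = model.val()  # dataset and settings are recalled from training
print(metrics.box.map)  # assumed mAP50-95 field
```
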
@@ -115,7 +115,7 @@ Validate trained YOLOv8n model accuracy on the COCO8 dataset. No argument need t

Use a trained YOLOv8n model to run predictions on images.

-!!! Example
+!!! example

=== "Python"
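
A sketch of the prediction call, with an illustrative image URL:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

results = model("https://ultralytics.com/images/bus.jpg")
print(results[0].boxes)  # detected bounding boxes
```
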
@@ -143,7 +143,7 @@ See full `predict` mode details in the [Predict](../modes/predict.md) page.

Export a YOLOv8n model to a different format like ONNX, CoreML, etc.

-!!! Example
+!!! example

=== "Python"
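
A minimal sketch of the export call:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.export(format="onnx")  # or "coreml", "engine", etc., per the Export page
```
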
@@ -181,7 +181,7 @@ Training a YOLOv8 model on a custom dataset involves a few steps:
2. **Load the Model**: Use the Ultralytics YOLO library to load a pre-trained model or create a new model from a YAML file.
3. **Train the Model**: Execute the `train` method in Python or the `yolo detect train` command in CLI.

-!!! Example
+!!! example

=== "Python"
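
A sketch of the load-and-train steps for a custom dataset; the dataset YAML path is hypothetical:

```python
from ultralytics import YOLO

# Load a pretrained model (or build a new one from a YAML definition)
model = YOLO("yolov8n.pt")

# Train on a custom dataset described by a hypothetical YAML file
model.train(data="path/to/custom_data.yaml", epochs=100, imgsz=640)
```
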
@@ -219,7 +219,7 @@ For a detailed list and performance metrics, refer to the [Models](https://githu

To validate the accuracy of your trained YOLOv8 model, you can use the `.val()` method in Python or the `yolo detect val` command in CLI. This will provide metrics like mAP50-95, mAP50, and more.

-!!! Example
+!!! example

=== "Python"
@@ -246,7 +246,7 @@ For more validation details, visit the [Val](../modes/val.md) page.

Ultralytics YOLOv8 allows exporting models to various formats such as ONNX, TensorRT, CoreML, and more to ensure compatibility across different platforms and devices.

-!!! Example
+!!! example

=== "Python"
@@ -76,7 +76,7 @@ To use Ultralytics YOLOv8 for object detection, follow these steps:
2. Train the YOLOv8 model using the detection task.
3. Use the model to make predictions by feeding in new images or video frames.

-!!! Example
+!!! example

=== "Python"
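
A compact sketch of the steps above in Python, assuming the standard `ultralytics` API:

```python
from ultralytics import YOLO

# 1. Load a pretrained detection model (after installing the ultralytics package)
model = YOLO("yolov8n.pt")

# 2. Train it with the detection task
model.train(data="coco8.yaml", epochs=100, imgsz=640)

# 3. Feed in a new image to get predictions
results = model("https://ultralytics.com/images/bus.jpg")
```
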
@@ -15,7 +15,7 @@ The output of an oriented object detector is a set of rotated bounding boxes tha

<!-- youtube video link for obb task -->

-!!! Tip "Tip"
+!!! tip

    YOLOv8 OBB models use the `-obb` suffix, i.e. `yolov8n-obb.pt` and are pretrained on [DOTAv1](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/DOTAv1.yaml).

@@ -69,7 +69,7 @@ YOLOv8 pretrained OBB models are shown here, which are pretrained on the [DOTAv1

Train YOLOv8n-obb on the `dota8.yaml` dataset for 100 epochs at image size 640. For a full list of available arguments see the [Configuration](../usage/cfg.md) page.

-!!! Example
+!!! example

=== "Python"
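
A sketch of the training call for the settings above, assuming the standard API:

```python
from ultralytics import YOLO

model = YOLO("yolov8n-obb.pt")

# Train on the small DOTA8 dataset for 100 epochs at image size 640
results = model.train(data="dota8.yaml", epochs=100, imgsz=640)
```
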
@@ -107,7 +107,7 @@ OBB dataset format can be found in detail in the [Dataset Guide](../datasets/obb

Validate trained YOLOv8n-obb model accuracy on the DOTA8 dataset. No arguments need to be passed, as the `model` retains its training `data` and arguments as model attributes.

-!!! Example
+!!! example

=== "Python"
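
A minimal validation sketch; as noted above, no arguments are required:

```python
from ultralytics import YOLO

model = YOLO("path/to/best.pt")  # placeholder path to a trained OBB model

metrics = model.val()  # DOTA8 settings are recalled from training
```
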
@@ -137,7 +137,7 @@ retains its training `data` and arguments as model attributes.

Use a trained YOLOv8n-obb model to run predictions on images.

-!!! Example
+!!! example

=== "Python"
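
A sketch of the prediction call; the image URL is illustrative, and `.obb` is assumed to be the rotated-box results attribute:

```python
from ultralytics import YOLO

model = YOLO("yolov8n-obb.pt")

results = model("https://ultralytics.com/images/boats.jpg")  # illustrative image URL
print(results[0].obb)  # assumed rotated-bounding-box results
```
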
@@ -165,7 +165,7 @@ See full `predict` mode details in the [Predict](../modes/predict.md) page.

Export a YOLOv8n-obb model to a different format like ONNX, CoreML, etc.

-!!! Example
+!!! example

=== "Python"
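
A minimal export sketch, mirroring the classify and detect examples above:

```python
from ultralytics import YOLO

model = YOLO("yolov8n-obb.pt")
model.export(format="onnx")
```
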
@@ -203,7 +203,7 @@ Oriented Bounding Boxes (OBB) include an additional angle to enhance object loca

To train a YOLOv8n-obb model with a custom dataset, follow the example below using Python or CLI:

-!!! Example
+!!! example

=== "Python"
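
A sketch with a hypothetical custom dataset YAML in OBB format:

```python
from ultralytics import YOLO

model = YOLO("yolov8n-obb.pt")

# The dataset YAML path is hypothetical and must point to OBB-format labels
model.train(data="path/to/custom_obb.yaml", epochs=100, imgsz=640)
```
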
@@ -233,7 +233,7 @@ YOLOv8-OBB models are pretrained on datasets like [DOTAv1](https://github.com/ul

Exporting a YOLOv8-OBB model to ONNX format is straightforward using either Python or CLI:

-!!! Example
+!!! example

=== "Python"
@@ -259,7 +259,7 @@ For more export formats and details, refer to the [Export](../modes/export.md) p

To validate a YOLOv8n-obb model, you can use Python or CLI commands as shown below:

-!!! Example
+!!! example

=== "Python"
@@ -36,7 +36,7 @@ The output of a pose estimation model is a set of points that represent the keyp
</tr>
</table>

-!!! Tip "Tip"
+!!! tip

    YOLOv8 _pose_ models use the `-pose` suffix, i.e. `yolov8n-pose.pt`. These models are trained on the [COCO keypoints](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco-pose.yaml) dataset and are suitable for a variety of pose estimation tasks.

@@ -82,7 +82,7 @@ YOLOv8 pretrained Pose models are shown here. Detect, Segment and Pose models ar

Train a YOLOv8-pose model on the COCO128-pose dataset.

-!!! Example
+!!! example

=== "Python"
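
A sketch of the training call; the prose does not specify epochs or image size, so those values are assumptions, as is the `coco128-pose.yaml` dataset key matching the dataset named above:

```python
from ultralytics import YOLO

model = YOLO("yolov8n-pose.pt")

# Assumed dataset key and settings for the COCO128-pose dataset
results = model.train(data="coco128-pose.yaml", epochs=100, imgsz=640)
```
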
@@ -120,7 +120,7 @@ YOLO pose dataset format can be found in detail in the [Dataset Guide](../datase

Validate trained YOLOv8n-pose model accuracy on the COCO128-pose dataset. No arguments need to be passed, as the `model` retains its training `data` and arguments as model attributes.

-!!! Example
+!!! example

=== "Python"
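
A minimal validation sketch; no arguments are needed, per the text above:

```python
from ultralytics import YOLO

model = YOLO("path/to/best.pt")  # placeholder path to a trained pose model

metrics = model.val()
```
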
@@ -150,7 +150,7 @@ retains its training `data` and arguments as model attributes.

Use a trained YOLOv8n-pose model to run predictions on images.

-!!! Example
+!!! example

=== "Python"
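
A sketch of the prediction call; `.keypoints` is assumed to hold the predicted keypoints:

```python
from ultralytics import YOLO

model = YOLO("yolov8n-pose.pt")

results = model("https://ultralytics.com/images/bus.jpg")
print(results[0].keypoints)  # assumed keypoint results
```
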
@@ -178,7 +178,7 @@ See full `predict` mode details in the [Predict](../modes/predict.md) page.

Export a YOLOv8n Pose model to a different format like ONNX, CoreML, etc.

-!!! Example
+!!! example

=== "Python"
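
A minimal export sketch, matching the other tasks:

```python
from ultralytics import YOLO

model = YOLO("yolov8n-pose.pt")
model.export(format="onnx")
```
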
@@ -24,7 +24,7 @@ The output of an instance segmentation model is a set of masks or contours that
<strong>Watch:</strong> Run Segmentation with Pre-Trained Ultralytics YOLOv8 Model in Python.
</p>

-!!! Tip "Tip"
+!!! tip

    YOLOv8 Segment models use the `-seg` suffix, i.e. `yolov8n-seg.pt` and are pretrained on [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml).

@@ -49,7 +49,7 @@ YOLOv8 pretrained Segment models are shown here. Detect, Segment and Pose models

Train YOLOv8n-seg on the COCO128-seg dataset for 100 epochs at image size 640. For a full list of available arguments see the [Configuration](../usage/cfg.md) page.

-!!! Example
+!!! example

=== "Python"
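
A sketch of the training call for the settings above:

```python
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")

# Train on COCO128-seg for 100 epochs at image size 640
results = model.train(data="coco128-seg.yaml", epochs=100, imgsz=640)
```
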
@@ -87,7 +87,7 @@ YOLO segmentation dataset format can be found in detail in the [Dataset Guide](.

Validate trained YOLOv8n-seg model accuracy on the COCO128-seg dataset. No arguments need to be passed, as the `model` retains its training `data` and arguments as model attributes.

-!!! Example
+!!! example

=== "Python"
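
A minimal validation sketch; `metrics.box.map` and `metrics.seg.map` are assumed fields for box and mask mAP:

```python
from ultralytics import YOLO

model = YOLO("path/to/best.pt")  # placeholder path to a trained segmentation model

metrics = model.val()
print(metrics.box.map, metrics.seg.map)  # assumed box / mask mAP50-95 fields
```
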
@@ -121,7 +121,7 @@ retains its training `data` and arguments as model attributes.

Use a trained YOLOv8n-seg model to run predictions on images.

-!!! Example
+!!! example

=== "Python"
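
Roughly, the prediction call, with `.masks` assumed to hold the instance masks:

```python
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")

results = model("https://ultralytics.com/images/bus.jpg")
print(results[0].masks)  # assumed segmentation mask results
```
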
@@ -149,7 +149,7 @@ See full `predict` mode details in the [Predict](../modes/predict.md) page.

Export a YOLOv8n-seg model to a different format like ONNX, CoreML, etc.

-!!! Example
+!!! example

=== "Python"
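
A minimal export sketch:

```python
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")
model.export(format="onnx")
```
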
@@ -183,7 +183,7 @@ See full `export` details in the [Export](../modes/export.md) page.

To train a YOLOv8 segmentation model on a custom dataset, you first need to prepare your dataset in the YOLO segmentation format. You can use tools like [JSON2YOLO](https://github.com/ultralytics/JSON2YOLO) to convert datasets from other formats. Once your dataset is ready, you can train the model using Python or CLI commands:

-!!! Example
+!!! example

=== "Python"
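
A sketch of training on a converted custom dataset; the YAML path is hypothetical:

```python
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")

# Hypothetical YAML describing a dataset converted to YOLO segmentation format
model.train(data="path/to/custom_seg.yaml", epochs=100, imgsz=640)
```
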
@@ -217,7 +217,7 @@ Ultralytics YOLOv8 is a state-of-the-art model recognized for its high accuracy

Loading and validating a pretrained YOLOv8 segmentation model is straightforward. Here's how you can do it using both Python and CLI:

-!!! Example
+!!! example

=== "Python"
@@ -245,7 +245,7 @@ These steps will provide you with validation metrics like Mean Average Precision

Exporting a YOLOv8 segmentation model to ONNX format is simple and can be done using Python or CLI commands:

-!!! Example
+!!! example

=== "Python"