Update to lowercase MkDocs admonitions (#15990)

Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
MatthewNoyce 2024-09-06 16:33:26 +01:00 committed by GitHub
parent ce24c7273e
commit c2b647a768
133 changed files with 529 additions and 521 deletions


@@ -21,7 +21,7 @@ Welcome to the YOLOv8 Python Usage documentation! This guide is designed to help
For example, users can load a model, train it, evaluate its performance on a validation set, and even export it to ONNX format with just a few lines of code.
!!! Example "Python"
!!! example "Python"
```python
from ultralytics import YOLO
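# Not part of this diff: a minimal sketch of the load -> train -> validate -> export
# workflow described above, assuming the standard ultralytics Python API.
# The dataset YAML, epoch count, and export format below are illustrative values.
model = YOLO("yolov8n.pt")  # load a pretrained detection model
model.train(data="coco8.yaml", epochs=3)  # train briefly on a small sample dataset
metrics = model.val()  # evaluate on the validation split
onnx_path = model.export(format="onnx")  # export the trained model to ONNX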
@@ -49,7 +49,7 @@ For example, users can load a model, train it, evaluate its performance on a val
Train mode is used for training a YOLOv8 model on a custom dataset. In this mode, the model is trained using the specified dataset and hyperparameters. The training process involves optimizing the model's parameters so that it can accurately predict the classes and locations of objects in an image.
!!! Example "Train"
!!! example "Train"
=== "From pretrained(recommended)"
@@ -82,7 +82,7 @@ Train mode is used for training a YOLOv8 model on a custom dataset. In this mode
Val mode is used for validating a YOLOv8 model after it has been trained. In this mode, the model is evaluated on a validation set to measure its accuracy and generalization performance. This mode can be used to tune the hyperparameters of the model to improve its performance.
!!! Example "Val"
!!! example "Val"
=== "Val after training"
@@ -120,7 +120,7 @@ Val mode is used for validating a YOLOv8 model after it has been trained. In thi
Predict mode is used for making predictions using a trained YOLOv8 model on new images or videos. In this mode, the model is loaded from a checkpoint file, and the user can provide images or videos to perform inference. The model predicts the classes and locations of objects in the input images or videos.
!!! Example "Predict"
!!! example "Predict"
=== "From source"
@@ -191,7 +191,7 @@ Predict mode is used for making predictions using a trained YOLOv8 model on new
Export mode is used for exporting a YOLOv8 model to a format that can be used for deployment. In this mode, the model is converted to a format that can be used by other software applications or hardware devices. This mode is useful when deploying the model to production environments.
!!! Example "Export"
!!! example "Export"
=== "Export to ONNX"
@@ -219,7 +219,7 @@ Export mode is used for exporting a YOLOv8 model to a format that can be used fo
Track mode is used for tracking objects in real-time using a YOLOv8 model. In this mode, the model is loaded from a checkpoint file, and the user can provide a live video stream to perform real-time object tracking. This mode is useful for applications such as surveillance systems or self-driving cars.
!!! Example "Track"
!!! example "Track"
=== "Python"
@@ -242,7 +242,7 @@ Track mode is used for tracking objects in real-time using a YOLOv8 model. In th
Benchmark mode is used to profile the speed and accuracy of various export formats for YOLOv8. The benchmarks provide information on the size of the exported format, its `mAP50-95` metrics (for object detection and segmentation) or `accuracy_top5` metrics (for classification), and the inference time in milliseconds per image across various export formats like ONNX, OpenVINO, TensorRT and others. This information can help users choose the optimal export format for their specific use case based on their requirements for speed and accuracy.
!!! Example "Benchmark"
!!! example "Benchmark"
=== "Python"
@@ -260,7 +260,7 @@ Benchmark mode is used to profile the speed and accuracy of various export forma
Explorer API can be used to explore datasets with advanced semantic, vector-similarity and SQL search among other features. It also enables searching for images based on their content using natural language by utilizing the power of LLMs. The Explorer API allows you to write your own dataset exploration notebooks or scripts to get insights into your datasets.
!!! Example "Semantic Search Using Explorer"
!!! example "Semantic Search Using Explorer"
=== "Using Images"
@@ -304,7 +304,7 @@ Explorer API can be used to explore datasets with advanced semantic, vector-simi
The `YOLO` model class is a high-level wrapper around the Trainer classes. Each YOLO task has its own trainer that inherits from `BaseTrainer`.
!!! Tip "Detection Trainer Example"
!!! tip "Detection Trainer Example"
```python
from ultralytics.models.yolo import DetectionPredictor, DetectionTrainer, DetectionValidator
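# Not part of this diff: a hedged sketch of driving DetectionTrainer directly,
# using the import shown above; the overrides dict holds example values only.
args = dict(model="yolov8n.pt", data="coco8.yaml", epochs=3)
trainer = DetectionTrainer(overrides=args)
trainer.train()  # runs the training loop defined by BaseTrainer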