Update to lowercase MkDocs admonitions (#15990)
Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
parent ce24c7273e
commit c2b647a768

133 changed files with 529 additions and 521 deletions
@@ -43,7 +43,7 @@ Once your model is trained and validated, the next logical step is to evaluate i

- **OpenVINO:** For Intel hardware optimization
- **CoreML, TensorFlow SavedModel, and More:** For diverse deployment needs.

-!!! Tip "Tip"
+!!! tip "Tip"

* Export to ONNX or OpenVINO for up to 3x CPU speedup.
* Export to TensorRT for up to 5x GPU speedup.
@@ -52,7 +52,7 @@ Once your model is trained and validated, the next logical step is to evaluate i

Run YOLOv8n benchmarks on all supported export formats, including ONNX, TensorRT, etc. See the Arguments section below for a full list of export arguments.

-!!! Example
+!!! example

=== "Python"
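The tab body is elided by the diff view; a minimal sketch of the documented pattern, assuming the `benchmark` helper from `ultralytics.utils.benchmarks` (`device=0` selects the first GPU):

```python
from ultralytics.utils.benchmarks import benchmark

# Benchmark YOLOv8n across all supported export formats on GPU 0
benchmark(model="yolov8n.pt", data="coco8.yaml", imgsz=640, half=False, device=0)
```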
@@ -97,7 +97,7 @@ See full `export` details in the [Export](../modes/export.md) page.

Ultralytics YOLOv8 offers a Benchmark mode to assess your model's performance across different export formats. This mode provides insights into key metrics such as mean Average Precision (mAP50-95), accuracy, and inference time in milliseconds. To run benchmarks, you can use either Python or CLI commands. For example, to benchmark on a GPU:

-!!! Example
+!!! example

=== "Python"
@@ -39,7 +39,7 @@ Here are some of the standout functionalities:

- **Optimized Inference:** Exported models are optimized for quicker inference times.
- **Tutorial Videos:** In-depth guides and tutorials for a smooth exporting experience.

-!!! Tip "Tip"
+!!! tip "Tip"

* Export to [ONNX](../integrations/onnx.md) or [OpenVINO](../integrations/openvino.md) for up to 3x CPU speedup.
* Export to [TensorRT](../integrations/tensorrt.md) for up to 5x GPU speedup.
@@ -48,7 +48,7 @@ Here are some of the standout functionalities:

Export a YOLOv8n model to a different format like ONNX or TensorRT. See the Arguments section below for a full list of export arguments.

-!!! Example
+!!! example

=== "Python"
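The tab body is elided by the diff view; a minimal sketch of the documented export pattern:

```python
from ultralytics import YOLO

# Load a pretrained model and export it to ONNX
model = YOLO("yolov8n.pt")
model.export(format="onnx")  # creates 'yolov8n.onnx' alongside the weights
```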
@@ -90,7 +90,7 @@ Available YOLOv8 export formats are in the table below. You can export to any fo

Exporting a YOLOv8 model to ONNX format is straightforward with Ultralytics. It provides both Python and CLI methods for exporting models.

-!!! Example
+!!! example

=== "Python"
@@ -128,7 +128,7 @@ To learn more about integrating TensorRT, see the [TensorRT](../integrations/ten

INT8 quantization is an excellent way to compress the model and speed up inference, especially on edge devices. Here's how you can enable INT8 quantization:

-!!! Example
+!!! example

=== "Python"
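The tab body is elided here; a minimal sketch using the `int8=True` export flag, assuming a target format that supports INT8 (such as TensorRT engines, which may also want a calibration `data` argument in recent releases):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# Export with INT8 quantization enabled
model.export(format="engine", int8=True)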
@@ -153,7 +153,7 @@ Dynamic input size allows the exported model to handle varying image dimensions,

To enable this feature, use the `dynamic=True` flag during export:

-!!! Example
+!!! example

=== "Python"
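The tab body is elided here; in minimal form:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# Export an ONNX model that accepts varying input dimensions
model.export(format="onnx", dynamic=True)
```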
@@ -78,7 +78,7 @@ Benchmark mode is used to profile the speed and accuracy of various export forma

Training a custom object detection model with Ultralytics YOLOv8 involves using the train mode. You need a dataset formatted in YOLO format, containing images and corresponding annotation files. Use the following command to start the training process:

-!!! Example
+!!! example

=== "Python"
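The tab body is elided by the diff view; a minimal sketch, where 'path/to/dataset.yaml' is a placeholder for your own dataset config:

```python
from ultralytics import YOLO

# Start from pretrained weights and fine-tune on a custom dataset
model = YOLO("yolov8n.pt")
results = model.train(data="path/to/dataset.yaml", epochs=100, imgsz=640)
```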
@@ -108,7 +108,7 @@ Ultralytics YOLOv8 uses various metrics during the validation process to assess

You can run the following command to start the validation:

-!!! Example
+!!! example

=== "Python"
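The tab body is elided here; a minimal sketch of the documented validation call:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
metrics = model.val()  # dataset and settings are remembered from training
print(metrics.box.map)  # mAP50-95
```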
@@ -132,7 +132,7 @@ Refer to the [Validation Guide](../modes/val.md) for further details.

Ultralytics YOLOv8 offers export functionality to convert your trained model into various deployment formats such as ONNX, TensorRT, CoreML, and more. Use the following example to export your model:

-!!! Example
+!!! example

=== "Python"
@@ -156,7 +156,7 @@ Detailed steps for each export format can be found in the [Export Guide](../mode

Benchmark mode in Ultralytics YOLOv8 is used to analyze the speed and accuracy of various export formats such as ONNX, TensorRT, and OpenVINO. It provides metrics like model size, `mAP50-95` for object detection, and inference time across different hardware setups, helping you choose the most suitable format for your deployment needs.

-!!! Example
+!!! example

=== "Python"
@@ -179,7 +179,7 @@ For more details, refer to the [Benchmark Guide](../modes/benchmark.md).

Real-time object tracking can be achieved using the track mode in Ultralytics YOLOv8. This mode extends object detection capabilities to track objects across video frames or live feeds. Use the following example to enable tracking:

-!!! Example
+!!! example

=== "Python"
@@ -50,7 +50,7 @@ YOLOv8's predict mode is designed to be robust and versatile, featuring:

Ultralytics YOLO models return either a Python list of `Results` objects, or a memory-efficient Python generator of `Results` objects when `stream=True` is passed to the model during inference:

-!!! Example "Predict"
+!!! example "Predict"

=== "Return a list with `stream=False`"
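The two tab bodies are elided by the diff view; in sketch form, the call patterns they compare ('bus.jpg' stands in for any source):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# stream=False (default): all results are returned at once as a list
results = model(["bus.jpg", "bus.jpg"])

# stream=True: a generator that yields one Results object at a time
for result in model(["bus.jpg", "bus.jpg"], stream=True):
    boxes = result.boxes  # process each result, then let it be garbage-collected
```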
@@ -100,7 +100,7 @@ Ultralytics YOLO models return either a Python list of `Results` objects, or a m

YOLOv8 can process different types of input sources for inference, as shown in the table below. The sources include static images, video streams, and various data formats. The table also indicates whether each source can be used in streaming mode with the argument `stream=True` ✅. Streaming mode is beneficial for processing videos or live streams as it creates a generator of results instead of loading all frames into memory.

-!!! Tip "Tip"
+!!! tip "Tip"

Use `stream=True` for processing long videos or large datasets to efficiently manage memory. When `stream=False`, the results for all frames or data points are stored in memory, which can quickly add up and cause out-of-memory errors for large inputs. In contrast, `stream=True` utilizes a generator, which only keeps the results of the current frame or data point in memory, significantly reducing memory consumption and preventing out-of-memory issues.
@@ -123,7 +123,7 @@ YOLOv8 can process different types of input sources for inference, as shown in t

Below are code examples for using each source type:

-!!! Example "Prediction sources"
+!!! example "Prediction sources"

=== "image"
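The per-source tabs are elided by the diff view; a minimal sketch covering a few of the source types from the table above (the dummy numpy array is only illustrative):

```python
import numpy as np
from PIL import Image

from ultralytics import YOLO

model = YOLO("yolov8n.pt")

results = model("bus.jpg")  # image path
results = model(Image.open("bus.jpg"))  # PIL image
results = model(np.zeros((640, 640, 3), dtype=np.uint8))  # numpy array (HWC, BGR)
```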
@@ -351,7 +351,7 @@ Below are code examples for using each source type:

`model.predict()` accepts multiple arguments that can be passed at inference time to override defaults:

-!!! Example
+!!! example

```python
from ultralytics import YOLO
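# Continuation sketch (the docs example is truncated here by the diff view);
# conf and imgsz are standard predict() overrides, 'bus.jpg' a placeholder source
model = YOLO("yolov8n.pt")
results = model.predict(source="bus.jpg", conf=0.25, imgsz=640, save=True)
```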
@@ -442,7 +442,7 @@ The below table contains valid Ultralytics video formats.

All Ultralytics `predict()` calls will return a list of `Results` objects:

-!!! Example "Results"
+!!! example "Results"

```python
from ultralytics import YOLO
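# Continuation sketch (truncated by the diff view): run inference and
# inspect the attributes each Results object exposes
model = YOLO("yolov8n.pt")
results = model("bus.jpg")

for result in results:
    print(result.boxes)  # Boxes object for bounding boxes
    print(result.masks)  # Masks object (None for a plain detect model)
    print(result.probs)  # Probs object (classification models only)
```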
@@ -494,7 +494,7 @@ For more details see the [`Results` class documentation](../reference/engine/res

`Boxes` object can be used to index, manipulate, and convert bounding boxes to different formats.

-!!! Example "Boxes"
+!!! example "Boxes"

```python
from ultralytics import YOLO
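# Continuation sketch (truncated by the diff view): common Boxes attributes
model = YOLO("yolov8n.pt")
boxes = model("bus.jpg")[0].boxes

print(boxes.xyxy)  # boxes in (x1, y1, x2, y2) format
print(boxes.xywh)  # boxes in (x-center, y-center, width, height) format
print(boxes.conf)  # confidence scores
print(boxes.cls)   # class indices
```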
@@ -532,7 +532,7 @@ For more details see the [`Boxes` class documentation](../reference/engine/resul

`Masks` object can be used to index, manipulate, and convert masks to segments.

-!!! Example "Masks"
+!!! example "Masks"

```python
from ultralytics import YOLO
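# Continuation sketch (truncated by the diff view): common Masks attributes,
# using a segmentation model
model = YOLO("yolov8n-seg.pt")
masks = model("bus.jpg")[0].masks

print(masks.data)  # raw masks tensor
print(masks.xy)    # segment polygons in pixel coordinates
print(masks.xyn)   # segment polygons, normalized
```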
@@ -565,7 +565,7 @@ For more details see the [`Masks` class documentation](../reference/engine/resul

`Keypoints` object can be used to index, manipulate, and normalize coordinates.

-!!! Example "Keypoints"
+!!! example "Keypoints"

```python
from ultralytics import YOLO
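# Continuation sketch (truncated by the diff view): common Keypoints attributes,
# using a pose model
model = YOLO("yolov8n-pose.pt")
keypoints = model("bus.jpg")[0].keypoints

print(keypoints.xy)    # keypoints in pixel coordinates
print(keypoints.xyn)   # keypoints, normalized
print(keypoints.conf)  # per-keypoint confidence
```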
@@ -599,7 +599,7 @@ For more details see the [`Keypoints` class documentation](../reference/engine/r

`Probs` object can be used to index, and to get the `top1` and `top5` indices and scores of classification.

-!!! Example "Probs"
+!!! example "Probs"

```python
from ultralytics import YOLO
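# Continuation sketch (truncated by the diff view): common Probs attributes,
# using a classification model
model = YOLO("yolov8n-cls.pt")
probs = model("bus.jpg")[0].probs

print(probs.top1)      # index of the top-1 class
print(probs.top5)      # indices of the top-5 classes
print(probs.top1conf)  # top-1 confidence
print(probs.top5conf)  # top-5 confidences
```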
@@ -634,7 +634,7 @@ For more details see the [`Probs` class documentation](../reference/engine/resul

`OBB` object can be used to index, manipulate, and convert oriented bounding boxes to different formats.

-!!! Example "OBB"
+!!! example "OBB"

```python
from ultralytics import YOLO
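# Continuation sketch (truncated by the diff view): common OBB attributes,
# using an oriented-bounding-box model
model = YOLO("yolov8n-obb.pt")
obb = model("bus.jpg")[0].obb

print(obb.xyxyxyxy)  # boxes as four corner points
print(obb.xywhr)     # boxes as (x-center, y-center, width, height, rotation)
print(obb.conf)      # confidence scores
print(obb.cls)       # class indices
```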
@@ -672,7 +672,7 @@ For more details see the [`OBB` class documentation](../reference/engine/results

The `plot()` method in `Results` objects facilitates visualization of predictions by overlaying detected objects (such as bounding boxes, masks, keypoints, and probabilities) onto the original image. This method returns the annotated image as a NumPy array, allowing for easy display or saving.

-!!! Example "Plotting"
+!!! example "Plotting"

```python
from PIL import Image
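# Continuation sketch (truncated by the diff view): annotate and display
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
results = model("bus.jpg")

for r in results:
    im_bgr = r.plot()  # annotated image as a BGR numpy array
    Image.fromarray(im_bgr[..., ::-1]).show()  # reverse channels to RGB for PIL
```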
@@ -728,7 +728,7 @@ Ensuring thread safety during inference is crucial when you are running multiple

When using YOLO models in a multi-threaded application, it's important to instantiate separate model objects for each thread or employ thread-local storage to prevent conflicts:

-!!! Example "Thread-Safe Inference"
+!!! example "Thread-Safe Inference"

Instantiate a single model inside each thread for thread-safe inference:

```python
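# Continuation sketch (the block body is truncated by the diff view):
# each thread builds its own model so no state is shared
# ('image1.jpg' and 'image2.jpg' are placeholder paths)
from threading import Thread

from ultralytics import YOLO


def thread_safe_predict(image_path):
    """Instantiate a new model inside the thread and run prediction."""
    model = YOLO("yolov8n.pt")
    model.predict(image_path)


Thread(target=thread_safe_predict, args=("image1.jpg",)).start()
Thread(target=thread_safe_predict, args=("image2.jpg",)).start()
```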
@@ -755,7 +755,7 @@ For an in-depth look at thread-safe inference with YOLO models and step-by-step

Here's a Python script using OpenCV (`cv2`) and YOLOv8 to run inference on video frames. This script assumes you have already installed the necessary packages (`opencv-python` and `ultralytics`).

-!!! Example "Streaming for-loop"
+!!! example "Streaming for-loop"

```python
import cv2
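# Continuation sketch (truncated by the diff view); 'path/to/video.mp4' is a placeholder
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video.mp4")

while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    results = model(frame)
    annotated_frame = results[0].plot()
    cv2.imshow("YOLOv8 Inference", annotated_frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```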
@@ -56,13 +56,13 @@ The default tracker is BoT-SORT.

## Tracking

-!!! Warning "Tracker Threshold Information"
+!!! warning "Tracker Threshold Information"

If the object confidence score is low, i.e. lower than [`track_high_thresh`](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/trackers/bytetrack.yaml#L5), then no tracks will be successfully returned or updated.

To run the tracker on video streams, use a trained Detect, Segment or Pose model such as YOLOv8n, YOLOv8n-seg or YOLOv8n-pose.

-!!! Example
+!!! example

=== "Python"
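The tab body is elided by the diff view; a minimal sketch with 'path/to/video.mp4' as a placeholder source:

```python
from ultralytics import YOLO

# Load an official detection, segmentation or pose model
model = YOLO("yolov8n.pt")

# Track objects through the video; show=True opens a display window
results = model.track(source="path/to/video.mp4", show=True)
```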
@@ -97,7 +97,7 @@ As can be seen in the above usage, tracking is available for all Detect, Segment

## Configuration

-!!! Warning "Tracker Threshold Information"
+!!! warning "Tracker Threshold Information"

If the object confidence score is low, i.e. lower than [`track_high_thresh`](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/trackers/bytetrack.yaml#L5), then no tracks will be successfully returned or updated.
@@ -105,7 +105,7 @@ As can be seen in the above usage, tracking is available for all Detect, Segment

Tracking configuration shares properties with Predict mode, such as `conf`, `iou`, and `show`. For further configurations, refer to the [Predict](../modes/predict.md#inference-arguments) model page.

-!!! Example
+!!! example

=== "Python"
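The tab body is elided here; in minimal form ('path/to/video.mp4' is a placeholder):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Tracking with tuned confidence and IoU thresholds, displayed live
results = model.track(source="path/to/video.mp4", conf=0.3, iou=0.5, show=True)
```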
@@ -128,7 +128,7 @@ Tracking configuration shares properties with Predict mode, such as `conf`, `iou

Ultralytics also allows you to use a modified tracker configuration file. To do this, simply make a copy of a tracker config file (for example, `custom_tracker.yaml`) from [ultralytics/cfg/trackers](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/trackers) and modify any configurations (except the `tracker_type`) as per your needs.

-!!! Example
+!!! example

=== "Python"
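The tab body is elided here; a minimal sketch:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Point the tracker argument at your modified config file
results = model.track(source="path/to/video.mp4", tracker="custom_tracker.yaml")
```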
@@ -155,7 +155,7 @@ For a comprehensive list of tracking arguments, refer to the [ultralytics/cfg/tr

Here is a Python script using OpenCV (`cv2`) and YOLOv8 to run object tracking on video frames. This script again assumes you have already installed the necessary packages (`opencv-python` and `ultralytics`). The `persist=True` argument tells the tracker that the current image or frame is the next in a sequence and to expect tracks from the previous image in the current image.

-!!! Example "Streaming for-loop with tracking"
+!!! example "Streaming for-loop with tracking"

```python
import cv2
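# Continuation sketch (truncated by the diff view): same loop as the plain
# inference version, but track() with persist=True keeps identities across frames
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video.mp4")  # placeholder path

while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    results = model.track(frame, persist=True)
    cv2.imshow("YOLOv8 Tracking", results[0].plot())
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```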
@@ -204,7 +204,7 @@ Visualizing object tracks over consecutive frames can provide valuable insights

In the following example, we demonstrate how to utilize YOLOv8's tracking capabilities to plot the movement of detected objects across multiple video frames. This script involves opening a video file, reading it frame by frame, and using the YOLO model to identify and track various objects. By retaining the center points of the detected bounding boxes and connecting them, we can draw lines that represent the paths followed by the tracked objects.

-!!! Example "Plotting tracks over multiple video frames"
+!!! example "Plotting tracks over multiple video frames"

```python
from collections import defaultdict
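# Continuation sketch (truncated by the diff view): keep a per-ID history of
# box centers and draw each track as a polyline ('path/to/video.mp4' is a placeholder)
import cv2
import numpy as np

from ultralytics import YOLO

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video.mp4")
track_history = defaultdict(list)

while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    results = model.track(frame, persist=True)
    annotated_frame = results[0].plot()
    if results[0].boxes.id is not None:
        boxes = results[0].boxes.xywh.cpu()
        track_ids = results[0].boxes.id.int().cpu().tolist()
        for box, track_id in zip(boxes, track_ids):
            x, y, w, h = box
            track = track_history[track_id]
            track.append((float(x), float(y)))  # center point of the box
            if len(track) > 30:  # retain the last 30 points
                track.pop(0)
            points = np.hstack(track).astype(np.int32).reshape((-1, 1, 2))
            cv2.polylines(annotated_frame, [points], isClosed=False, color=(230, 230, 230), thickness=10)
    cv2.imshow("YOLOv8 Tracking", annotated_frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```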
@@ -281,7 +281,7 @@ The `daemon=True` parameter in `threading.Thread` means that these threads will

Finally, after all threads have completed their task, the windows displaying the results are closed using `cv2.destroyAllWindows()`.

-!!! Example "Streaming for-loop with tracking"
+!!! example "Streaming for-loop with tracking"

```python
import threading
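# Continuation sketch (truncated by the diff view): one tracker per thread,
# each with its own model instance and video source (placeholder paths)
import cv2

from ultralytics import YOLO


def run_tracker_in_thread(filename, model_name):
    """Run model.track() over one video file in its own thread."""
    model = YOLO(model_name)
    results = model.track(filename, save=True, stream=True)
    for _ in results:
        pass  # consume the generator frame by frame


t1 = threading.Thread(target=run_tracker_in_thread, args=("video1.mp4", "yolov8n.pt"), daemon=True)
t2 = threading.Thread(target=run_tracker_in_thread, args=("video2.mp4", "yolov8n-seg.pt"), daemon=True)
t1.start()
t2.start()
t1.join()
t2.join()

cv2.destroyAllWindows()
```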
@@ -378,7 +378,7 @@ Multi-object tracking in video analytics involves both identifying objects and m

You can configure a custom tracker by copying an existing tracker configuration file (e.g., `custom_tracker.yaml`) from the [Ultralytics tracker configuration directory](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/trackers) and modifying parameters as needed, except for the `tracker_type`. Use this file in your tracking model like so:

-!!! Example
+!!! example

=== "Python"
@@ -399,7 +399,7 @@ You can configure a custom tracker by copying an existing tracker configuration

To run object tracking on multiple video streams simultaneously, you can use Python's `threading` module. Each thread will handle a separate video stream. Here's an example of how you can set this up:

-!!! Example "Multithreaded Tracking"
+!!! example "Multithreaded Tracking"

```python
import threading
@@ -454,7 +454,7 @@ These applications benefit from Ultralytics YOLO's ability to process high-frame

To visualize object tracks over multiple video frames, you can use the YOLO model's tracking features along with OpenCV to draw the paths of detected objects. Here's an example script that demonstrates this:

-!!! Example "Plotting tracks over multiple video frames"
+!!! example "Plotting tracks over multiple video frames"

```python
from collections import defaultdict
@@ -41,7 +41,7 @@ The following are some notable features of YOLOv8's Train mode:

- **Hyperparameter Configuration:** The option to modify hyperparameters through YAML configuration files or CLI arguments.
- **Visualization and Monitoring:** Real-time tracking of training metrics and visualization of the learning process for better insights.

-!!! Tip "Tip"
+!!! tip "Tip"

* YOLOv8 datasets like COCO, VOC, ImageNet and many others automatically download on first use, i.e. `yolo train data=coco.yaml`
@@ -49,7 +49,7 @@ The following are some notable features of YOLOv8's Train mode:

Train YOLOv8n on the COCO8 dataset for 100 epochs at image size 640. The training device can be specified using the `device` argument. If no argument is passed, GPU `device=0` will be used if available, otherwise `device='cpu'` will be used. See the Arguments section below for a full list of training arguments.

-!!! Example "Single-GPU and CPU Training Example"
+!!! example "Single-GPU and CPU Training Example"

The device is determined automatically. If a GPU is available then it will be used, otherwise training will start on CPU.
@@ -84,7 +84,7 @@ Train YOLOv8n on the COCO8 dataset for 100 epochs at image size 640. The trainin

Multi-GPU training allows for more efficient utilization of available hardware resources by distributing the training load across multiple GPUs. This feature is available through both the Python API and the command-line interface. To enable multi-GPU training, specify the GPU device IDs you wish to use.

-!!! Example "Multi-GPU Training Example"
+!!! example "Multi-GPU Training Example"

To train with 2 GPUs, CUDA devices 0 and 1, use the following commands. Expand to additional GPUs as required.
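The example body is elided by the diff view; in minimal form:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# Train on CUDA devices 0 and 1
results = model.train(data="coco8.yaml", epochs=100, imgsz=640, device=[0, 1])
```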
@@ -113,7 +113,7 @@ With the support for Apple M1 and M2 chips integrated in the Ultralytics YOLO mo

To enable training on Apple M1 and M2 chips, you should specify 'mps' as your device when initiating the training process. Below is an example of how you could do this in Python and via the command line:

-!!! Example "MPS Training Example"
+!!! example "MPS Training Example"

=== "Python"
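The tab body is elided here; a minimal sketch:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# Train using Apple Metal Performance Shaders
results = model.train(data="coco8.yaml", epochs=100, imgsz=640, device="mps")
```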
@@ -146,7 +146,7 @@ You can easily resume training in Ultralytics YOLO by setting the `resume` argum

Below is an example of how to resume an interrupted training run using Python and via the command line:

-!!! Example "Resume Training Example"
+!!! example "Resume Training Example"

=== "Python"
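The tab body is elided here; in minimal form, where 'path/to/last.pt' is the placeholder checkpoint path:

```python
from ultralytics import YOLO

# Load the last checkpoint of the interrupted run and resume
model = YOLO("path/to/last.pt")
results = model.train(resume=True)
```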
@@ -276,7 +276,7 @@ To use a logger, select it from the dropdown menu in the code snippet above and

To use Comet:

-!!! Example
+!!! example

=== "Python"
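The tab body is elided here; a minimal sketch, assuming the `comet_ml` package is installed (`pip install comet_ml`):

```python
import comet_ml

# Initialize your Comet project; you'll be prompted for your API key on first use
comet_ml.init()
```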
@@ -295,7 +295,7 @@ Remember to sign in to your Comet account on their website and get your API key.

To use ClearML:

-!!! Example
+!!! example

=== "Python"
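The tab body is elided here; a minimal sketch, assuming the `clearml` package is installed (`pip install clearml`):

```python
import clearml

# Open a browser window to authenticate with your ClearML account
clearml.browser_login()
```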
@@ -314,7 +314,7 @@ After running this script, you will need to sign in to your ClearML account on t

To use TensorBoard in [Google Colab](https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb):

-!!! Example
+!!! example

=== "CLI"
@@ -325,7 +325,7 @@ To use TensorBoard in [Google Colab](https://colab.research.google.com/github/ul

To use TensorBoard locally, run the command below and view results at http://localhost:6006/.

-!!! Example
+!!! example

=== "CLI"
@@ -343,7 +343,7 @@ After setting up your logger, you can then proceed with your model training. All

To train an object detection model using Ultralytics YOLOv8, you can either use the Python API or the CLI. Below is an example for both:

-!!! Example "Single-GPU and CPU Training Example"
+!!! example "Single-GPU and CPU Training Example"

=== "Python"
@@ -380,7 +380,7 @@ These features make training efficient and customizable to your needs. For more

To resume training from an interrupted session, set the `resume` argument to `True` and specify the path to the last saved checkpoint.

-!!! Example "Resume Training Example"
+!!! example "Resume Training Example"

=== "Python"
@@ -406,7 +406,7 @@ Check the section on [Resuming Interrupted Trainings](#resuming-interrupted-trai

Yes, Ultralytics YOLOv8 supports training on Apple M1 and M2 chips utilizing the Metal Performance Shaders (MPS) framework. Specify 'mps' as your training device.

-!!! Example "MPS Training Example"
+!!! example "MPS Training Example"

=== "Python"
@@ -41,7 +41,7 @@ These are the notable functionalities offered by YOLOv8's Val mode:

- **CLI and Python API:** Choose from command-line interface or Python API based on your preference for validation.
- **Data Compatibility:** Works seamlessly with datasets used during the training phase as well as custom datasets.

-!!! Tip "Tip"
+!!! tip "Tip"

* YOLOv8 models automatically remember their training settings, so you can validate a model at the same image size and on the original dataset easily with just `yolo val model=yolov8n.pt` or `model('yolov8n.pt').val()`
@@ -49,7 +49,7 @@ These are the notable functionalities offered by YOLOv8's Val mode:

Validate trained YOLOv8n model accuracy on the COCO8 dataset. No arguments need to be passed, as the `model` retains its training `data` and arguments as model attributes. See the Arguments section below for a full list of validation arguments.

-!!! Example
+!!! example

=== "Python"
@@ -102,7 +102,7 @@ Each of these settings plays a vital role in the validation process, allowing fo

The examples below showcase YOLO model validation with custom arguments in Python and CLI.

-!!! Example
+!!! example

=== "Python"
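The tab body is elided here; a minimal sketch of validation with custom arguments:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# Validate with a custom image size, batch size, and thresholds
metrics = model.val(data="coco8.yaml", imgsz=640, batch=16, conf=0.25, iou=0.6, device="cpu")
print(metrics.box.map)  # mAP50-95
```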