YOLO11 Tasks, Modes, Usage, Macros and Solutions Updates (#16593)

Signed-off-by: UltralyticsAssistant <web@ultralytics.com>
Ultralytics Assistant 2024-10-01 15:41:15 +02:00 committed by GitHub
parent 3093fc9ec2
commit 51e93d6111
31 changed files with 541 additions and 541 deletions

View file

@ -1,7 +1,7 @@
---
comments: true
description: Learn how to evaluate your YOLO11 model's performance in real-world scenarios using benchmark mode. Optimize speed, accuracy, and resource allocation across export formats.
keywords: model benchmarking, YOLO11, Ultralytics, performance evaluation, export formats, ONNX, TensorRT, OpenVINO, CoreML, TensorFlow, optimization, mAP50-95, inference time
---
# Model Benchmarking with Ultralytics YOLO
@ -10,7 +10,7 @@ keywords: model benchmarking, YOLOv8, Ultralytics, performance evaluation, expor
## Introduction
Once your model is trained and validated, the next logical step is to evaluate its performance in various real-world scenarios. Benchmark mode in Ultralytics YOLO11 serves this purpose by providing a robust framework for assessing the speed and [accuracy](https://www.ultralytics.com/glossary/accuracy) of your model across a range of export formats.
<p align="center">
<br>
@ -50,7 +50,7 @@ Once your model is trained and validated, the next logical step is to evaluate i
## Usage Examples
Run YOLO11n benchmarks on all supported export formats, including ONNX, TensorRT, and more. See the Arguments section below for a full list of export arguments.
!!! example
@ -60,13 +60,13 @@ Run YOLOv8n benchmarks on all supported export formats including ONNX, TensorRT
from ultralytics.utils.benchmarks import benchmark
# Benchmark on GPU
benchmark(model="yolo11n.pt", data="coco8.yaml", imgsz=640, half=False, device=0)
```
=== "CLI"
```bash
yolo benchmark model=yolo11n.pt data='coco8.yaml' imgsz=640 half=False device=0
```
## Arguments
@ -75,7 +75,7 @@ Arguments such as `model`, `data`, `imgsz`, `half`, `device`, and `verbose` prov
| Key | Default Value | Description |
| --------- | ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `model` | `None` | Specifies the path to the model file. Accepts both `.pt` and `.yaml` formats, e.g., `"yolo11n.pt"` for pre-trained models or configuration files. |
| `data` | `None` | Path to a YAML file defining the dataset for benchmarking, typically including paths and settings for [validation data](https://www.ultralytics.com/glossary/validation-data). Example: `"coco8.yaml"`. |
| `imgsz` | `640` | The input image size for the model. Can be a single integer for square images or a tuple `(width, height)` for non-square, e.g., `(640, 480)`. |
| `half` | `False` | Enables FP16 (half-precision) inference, reducing memory usage and possibly increasing speed on compatible hardware. Use `half=True` to enable. |
@ -93,9 +93,9 @@ See full `export` details in the [Export](../modes/export.md) page.
## FAQ
### How do I benchmark my YOLO11 model's performance using Ultralytics?
Ultralytics YOLO11 offers a Benchmark mode to assess your model's performance across different export formats. This mode provides insights into key metrics such as [mean Average Precision](https://www.ultralytics.com/glossary/mean-average-precision-map) (mAP50-95), accuracy, and inference time in milliseconds. To run benchmarks, you can use either Python or CLI commands. For example, to benchmark on a GPU:
!!! example
@ -105,29 +105,29 @@ Ultralytics YOLOv8 offers a Benchmark mode to assess your model's performance ac
from ultralytics.utils.benchmarks import benchmark
# Benchmark on GPU
benchmark(model="yolo11n.pt", data="coco8.yaml", imgsz=640, half=False, device=0)
```
=== "CLI"
```bash
yolo benchmark model=yolo11n.pt data='coco8.yaml' imgsz=640 half=False device=0
```
For more details on benchmark arguments, visit the [Arguments](#arguments) section.
### What are the benefits of exporting YOLO11 models to different formats?
Exporting YOLO11 models to different formats such as ONNX, TensorRT, and OpenVINO allows you to optimize performance based on your deployment environment. For instance:
- **ONNX:** Provides up to 3x CPU speedup.
- **TensorRT:** Offers up to 5x GPU speedup.
- **OpenVINO:** Specifically optimized for Intel hardware.
These formats enhance both the speed and accuracy of your models, making them more efficient for various real-world applications. Visit the [Export](../modes/export.md) page for complete details.
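As a minimal sketch (assuming the standard `ultralytics` Python API and a local `yolo11n.pt` checkpoint), a model can be exported to any of these formats with a single call:

```python
from ultralytics import YOLO

# Load a pretrained model and export it to several deployment formats
model = YOLO("yolo11n.pt")

model.export(format="onnx")  # ONNX for broad CPU compatibility
model.export(format="engine")  # TensorRT engine for NVIDIA GPUs (requires TensorRT installed)
model.export(format="openvino")  # OpenVINO for Intel hardware
```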
### Why is benchmarking crucial in evaluating YOLO11 models?
Benchmarking your YOLO11 models is essential for several reasons:
- **Informed Decisions:** Understand the trade-offs between speed and accuracy.
- **Resource Allocation:** Gauge the performance across different hardware options.
@ -135,9 +135,9 @@ Benchmarking your YOLOv8 models is essential for several reasons:
- **Cost Efficiency:** Optimize hardware usage based on benchmark results.
Key metrics such as mAP50-95, Top-5 accuracy, and inference time help in making these evaluations. Refer to the [Key Metrics](#key-metrics-in-benchmark-mode) section for more information.
### Which export formats are supported by YOLO11, and what are their advantages?
YOLO11 supports a variety of export formats, each tailored for specific hardware and use cases:
- **ONNX:** Best for CPU performance.
- **TensorRT:** Ideal for GPU efficiency.
@ -145,11 +145,11 @@ YOLOv8 supports a variety of export formats, each tailored for specific hardware
- **CoreML & [TensorFlow](https://www.ultralytics.com/glossary/tensorflow):** Useful for iOS and general ML applications.
For a complete list of supported formats and their respective advantages, check out the [Supported Export Formats](#supported-export-formats) section.
### What arguments can I use to fine-tune my YOLO11 benchmarks?
When running benchmarks, several arguments can be customized to suit specific needs:
- **model:** Path to the model file (e.g., "yolo11n.pt").
- **data:** Path to a YAML file defining the dataset (e.g., "coco8.yaml").
- **imgsz:** The input image size, either as a single integer or a tuple.
- **half:** Enable FP16 inference for better performance.
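Put together, a hedged example of combining these arguments in a single benchmark run (the values here are illustrative) might look like:

```python
from ultralytics.utils.benchmarks import benchmark

# Benchmark a pretrained model on COCO8 with FP16 enabled on the first GPU
benchmark(model="yolo11n.pt", data="coco8.yaml", imgsz=640, half=True, device=0)
```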

View file

@ -1,7 +1,7 @@
---
comments: true
description: Learn how to export your YOLO11 model to various formats like ONNX, TensorRT, and CoreML. Achieve maximum compatibility and performance.
keywords: YOLO11, Model Export, ONNX, TensorRT, CoreML, Ultralytics, AI, Machine Learning, Inference, Deployment
---
# Model Export with Ultralytics YOLO
@ -10,7 +10,7 @@ keywords: YOLOv8, Model Export, ONNX, TensorRT, CoreML, Ultralytics, AI, Machine
## Introduction
The ultimate goal of training a model is to deploy it for real-world applications. Export mode in Ultralytics YOLO11 offers a versatile range of options for exporting your trained model to different formats, making it deployable across various platforms and devices. This comprehensive guide aims to walk you through the nuances of model exporting, showcasing how to achieve maximum compatibility and performance.
<p align="center">
<br>
@ -23,7 +23,7 @@ The ultimate goal of training a model is to deploy it for real-world application
<strong>Watch:</strong> How To Export Custom Trained Ultralytics YOLOv8 Model and Run Live Inference on Webcam.
</p>
## Why Choose YOLO11's Export Mode?
- **Versatility:** Export to multiple formats including ONNX, TensorRT, CoreML, and more.
- **Performance:** Gain up to 5x GPU speedup with TensorRT and 3x CPU speedup with ONNX or OpenVINO.
@ -46,7 +46,7 @@ Here are some of the standout functionalities:
## Usage Examples
Export a YOLO11n model to a different format like ONNX or TensorRT. See the Arguments section below for a full list of export arguments.
!!! example
@ -56,7 +56,7 @@ Export a YOLOv8n model to a different format like ONNX or TensorRT. See the Argu
from ultralytics import YOLO
# Load a model
model = YOLO("yolo11n.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom trained model
# Export the model
@ -66,7 +66,7 @@ Export a YOLOv8n model to a different format like ONNX or TensorRT. See the Argu
=== "CLI"
```bash
yolo export model=yolo11n.pt format=onnx # export official model
yolo export model=path/to/best.pt format=onnx # export custom trained model
```
@ -80,15 +80,15 @@ Adjusting these parameters allows for customization of the export process to fit
## Export Formats
Available YOLO11 export formats are in the table below. You can export to any format using the `format` argument, i.e. `format='onnx'` or `format='engine'`. You can predict or validate directly on exported models, i.e. `yolo predict model=yolo11n.onnx`. Usage examples are shown for your model after export completes.
{% include "macros/export-table.md" %}
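For instance, a minimal sketch (assuming the `ultralytics` Python package and internet access for the sample image) of exporting to ONNX and then predicting directly with the exported file:

```python
from ultralytics import YOLO

# Export a pretrained model to ONNX; export() returns the path of the exported file
model = YOLO("yolo11n.pt")
onnx_path = model.export(format="onnx")

# Exported models can be loaded and used for prediction or validation directly
onnx_model = YOLO(onnx_path)
results = onnx_model("https://ultralytics.com/images/bus.jpg")
```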
## FAQ
### How do I export a YOLO11 model to ONNX format?
Exporting a YOLO11 model to ONNX format is straightforward with Ultralytics. It provides both Python and CLI methods for exporting models.
!!! example
@ -98,7 +98,7 @@ Exporting a YOLOv8 model to ONNX format is straightforward with Ultralytics. It
from ultralytics import YOLO
# Load a model
model = YOLO("yolo11n.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom trained model
# Export the model
@ -108,7 +108,7 @@ Exporting a YOLOv8 model to ONNX format is straightforward with Ultralytics. It
=== "CLI"
```bash
yolo export model=yolo11n.pt format=onnx # export official model
yolo export model=path/to/best.pt format=onnx # export custom trained model
```
@ -116,7 +116,7 @@ For more details on the process, including advanced options like handling differ
### What are the benefits of using TensorRT for model export?
Using TensorRT for model export offers significant performance improvements. YOLO11 models exported to TensorRT can achieve up to a 5x GPU speedup, making it ideal for real-time inference applications.
- **Versatility:** Optimize models for a specific hardware setup.
- **Speed:** Achieve faster inference through advanced optimizations.
@ -124,7 +124,7 @@ Using TensorRT for model export offers significant performance improvements. YOL
To learn more about integrating TensorRT, see the [TensorRT integration guide](../integrations/tensorrt.md).
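As a hedged sketch (assuming an NVIDIA GPU with TensorRT installed), exporting to a TensorRT engine and running inference on it looks like:

```python
from ultralytics import YOLO

# Export a pretrained model to a TensorRT engine (creates e.g. yolo11n.engine)
model = YOLO("yolo11n.pt")
model.export(format="engine")

# Load the exported engine and run GPU inference with it
trt_model = YOLO("yolo11n.engine")
results = trt_model("https://ultralytics.com/images/bus.jpg")
```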
### How do I enable INT8 quantization when exporting my YOLO11 model?
INT8 quantization is an excellent way to compress the model and speed up inference, especially on edge devices. Here's how you can enable INT8 quantization:
@ -135,14 +135,14 @@ INT8 quantization is an excellent way to compress the model and speed up inferen
```python
from ultralytics import YOLO
model = YOLO("yolo11n.pt") # Load a model
model.export(format="onnx", int8=True)
```
=== "CLI"
```bash
yolo export model=yolo11n.pt format=onnx int8=True # export model with INT8 quantization
```
INT8 quantization can be applied to various formats, such as TensorRT and CoreML. More details can be found in the [Export section](../modes/export.md).
@ -160,14 +160,14 @@ To enable this feature, use the `dynamic=True` flag during export:
```python
from ultralytics import YOLO
model = YOLO("yolo11n.pt")
model.export(format="onnx", dynamic=True)
```
=== "CLI"
```bash
yolo export model=yolo11n.pt format=onnx dynamic=True
```
For additional context, refer to the [dynamic input size configuration](#arguments).

View file

@ -1,7 +1,7 @@
---
comments: true
description: Harness the power of Ultralytics YOLO11 for real-time, high-speed inference on various data sources. Learn about predict mode, key features, and practical applications.
keywords: Ultralytics, YOLO11, model prediction, inference, predict mode, real-time inference, computer vision, machine learning, streaming, high performance
---
# Model Prediction with Ultralytics YOLO
@ -10,7 +10,7 @@ keywords: Ultralytics, YOLOv8, model prediction, inference, predict mode, real-t
## Introduction
In the world of [machine learning](https://www.ultralytics.com/glossary/machine-learning-ml) and [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv), the process of making sense out of visual data is called 'inference' or 'prediction'. Ultralytics YOLO11 offers a powerful feature known as **predict mode** that is tailored for high-performance, real-time inference on a wide range of data sources.
<p align="center">
<br>
@ -32,7 +32,7 @@ In the world of [machine learning](https://www.ultralytics.com/glossary/machine-
## Why Use Ultralytics YOLO for Inference?
Here's why you should consider YOLO11's predict mode for your various inference needs:
- **Versatility:** Capable of making inferences on images, videos, and even live streams.
- **Performance:** Engineered for real-time, high-speed processing without sacrificing [accuracy](https://www.ultralytics.com/glossary/accuracy).
@ -41,7 +41,7 @@ Here's why you should consider YOLOv8's predict mode for your various inference
### Key Features of Predict Mode
YOLO11's predict mode is designed to be robust and versatile, featuring:
- **Multiple Data Source Compatibility:** Whether your data is in the form of individual images, a collection of images, video files, or real-time video streams, predict mode has you covered.
- **Streaming Mode:** Use the streaming feature to generate a memory-efficient generator of `Results` objects. Enable this by setting `stream=True` in the predictor's call method.
@ -58,7 +58,7 @@ Ultralytics YOLO models return either a Python list of `Results` objects, or a m
from ultralytics import YOLO
# Load a model
model = YOLO("yolo11n.pt") # pretrained YOLO11n model
# Run batched inference on a list of images
results = model(["image1.jpg", "image2.jpg"]) # return a list of Results objects
@ -80,7 +80,7 @@ Ultralytics YOLO models return either a Python list of `Results` objects, or a m
from ultralytics import YOLO
# Load a model
model = YOLO("yolo11n.pt") # pretrained YOLO11n model
# Run batched inference on a list of images
results = model(["image1.jpg", "image2.jpg"], stream=True) # return a generator of Results objects
@ -98,7 +98,7 @@ Ultralytics YOLO models return either a Python list of `Results` objects, or a m
## Inference Sources
YOLO11 can process different types of input sources for inference, as shown in the table below. The sources include static images, video streams, and various data formats. The table also indicates whether each source can be used in streaming mode with the argument `stream=True` ✅. Streaming mode is beneficial for processing videos or live streams as it creates a generator of results instead of loading all frames into memory.
!!! tip
@ -131,8 +131,8 @@ Below are code examples for using each source type:
```python
from ultralytics import YOLO
# Load a pretrained YOLO11n model
model = YOLO("yolo11n.pt")
# Define path to the image file
source = "path/to/image.jpg"
@ -147,8 +147,8 @@ Below are code examples for using each source type:
```python
from ultralytics import YOLO
# Load a pretrained YOLO11n model
model = YOLO("yolo11n.pt")
# Define current screenshot as source
source = "screen"
@ -163,8 +163,8 @@ Below are code examples for using each source type:
```python
from ultralytics import YOLO
# Load a pretrained YOLO11n model
model = YOLO("yolo11n.pt")
# Define remote image or video URL
source = "https://ultralytics.com/images/bus.jpg"
@ -181,8 +181,8 @@ Below are code examples for using each source type:
from ultralytics import YOLO
# Load a pretrained YOLO11n model
model = YOLO("yolo11n.pt")
# Open an image using PIL
source = Image.open("path/to/image.jpg")
@ -199,8 +199,8 @@ Below are code examples for using each source type:
from ultralytics import YOLO
# Load a pretrained YOLO11n model
model = YOLO("yolo11n.pt")
# Read an image using OpenCV
source = cv2.imread("path/to/image.jpg")
@ -217,8 +217,8 @@ Below are code examples for using each source type:
from ultralytics import YOLO
# Load a pretrained YOLO11n model
model = YOLO("yolo11n.pt")
# Create a random numpy array of HWC shape (640, 640, 3) with values in range [0, 255] and type uint8
source = np.random.randint(low=0, high=255, size=(640, 640, 3), dtype="uint8")
@ -235,8 +235,8 @@ Below are code examples for using each source type:
from ultralytics import YOLO
# Load a pretrained YOLO11n model
model = YOLO("yolo11n.pt")
# Create a random torch tensor of BCHW shape (1, 3, 640, 640) with values in range [0, 1] and type float32
source = torch.rand(1, 3, 640, 640, dtype=torch.float32)
@ -251,8 +251,8 @@ Below are code examples for using each source type:
```python
from ultralytics import YOLO
# Load a pretrained YOLO11n model
model = YOLO("yolo11n.pt")
# Define a path to a CSV file with images, URLs, videos and directories
source = "path/to/file.csv"
@ -267,8 +267,8 @@ Below are code examples for using each source type:
```python
from ultralytics import YOLO
# Load a pretrained YOLO11n model
model = YOLO("yolo11n.pt")
# Define path to video file
source = "path/to/video.mp4"
@ -283,8 +283,8 @@ Below are code examples for using each source type:
```python
from ultralytics import YOLO
# Load a pretrained YOLO11n model
model = YOLO("yolo11n.pt")
# Define path to directory containing images and videos for inference
source = "path/to/dir"
@ -299,8 +299,8 @@ Below are code examples for using each source type:
```python
from ultralytics import YOLO
# Load a pretrained YOLO11n model
model = YOLO("yolo11n.pt")
# Define a glob search for all JPG files in a directory
source = "path/to/dir/*.jpg"
@ -318,8 +318,8 @@ Below are code examples for using each source type:
```python
from ultralytics import YOLO
# Load a pretrained YOLO11n model
model = YOLO("yolo11n.pt")
# Define source as YouTube video URL
source = "https://youtu.be/LNwODJXcvt4"
@ -335,8 +335,8 @@ Below are code examples for using each source type:
```python
from ultralytics import YOLO
# Load a pretrained YOLO11n model
model = YOLO("yolo11n.pt")
# Single stream with batch-size 1 inference
source = "rtsp://example.com/media.mp4" # RTSP, RTMP, TCP, or IP streaming address
@ -354,8 +354,8 @@ Below are code examples for using each source type:
```python
from ultralytics import YOLO
# Load a pretrained YOLO11n model
model = YOLO("yolo11n.pt")
# Multiple streams with batched inference (e.g., batch-size 8 for 8 streams)
source = "path/to/list.streams" # *.streams text file with one streaming address per line
@ -385,8 +385,8 @@ Below are code examples for using each source type:
```python
from ultralytics import YOLO
# Load a pretrained YOLO11n model
model = YOLO("yolo11n.pt")
# Run inference on 'bus.jpg' with arguments
model.predict("bus.jpg", save=True, imgsz=320, conf=0.5)
@ -402,7 +402,7 @@ Visualization arguments:
## Image and Video Formats
YOLO11 supports various image and video formats, as specified in [ultralytics/data/utils.py](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/data/utils.py). See the tables below for the valid suffixes and example predict commands.
### Images
@ -449,8 +449,8 @@ All Ultralytics `predict()` calls will return a list of `Results` objects:
```python
from ultralytics import YOLO
# Load a pretrained YOLO11n model
model = YOLO("yolo11n.pt")
# Run inference on an image
results = model("bus.jpg") # list of 1 Results object
@ -501,8 +501,8 @@ For more details see the [`Results` class documentation](../reference/engine/res
```python
from ultralytics import YOLO
# Load a pretrained YOLO11n model
model = YOLO("yolo11n.pt")
# Run inference on an image
results = model("bus.jpg") # results list
@ -540,7 +540,7 @@ For more details see the [`Boxes` class documentation](../reference/engine/resul
from ultralytics import YOLO
# Load a pretrained YOLO11n-seg Segment model
model = YOLO("yolo11n-seg.pt")
# Run inference on an image
results = model("bus.jpg") # results list
@ -573,7 +573,7 @@ For more details see the [`Masks` class documentation](../reference/engine/resul
from ultralytics import YOLO
# Load a pretrained YOLO11n-pose Pose model
model = YOLO("yolo11n-pose.pt")
# Run inference on an image
results = model("bus.jpg") # results list
@ -607,7 +607,7 @@ For more details see the [`Keypoints` class documentation](../reference/engine/r
from ultralytics import YOLO
# Load a pretrained YOLO11n-cls Classify model
model = YOLO("yolo11n-cls.pt")
# Run inference on an image
results = model("bus.jpg") # results list
@ -642,7 +642,7 @@ For more details see the [`Probs` class documentation](../reference/engine/resul
from ultralytics import YOLO
# Load a pretrained YOLO11n-obb OBB model
model = YOLO("yolo11n-obb.pt")
# Run inference on an image
results = model("bus.jpg") # results list
@ -682,7 +682,7 @@ The `plot()` method in `Results` objects facilitates visualization of prediction
from ultralytics import YOLO
# Load a pretrained YOLO11n model
model = YOLO("yolo11n.pt")
# Run inference on 'bus.jpg'
results = model(["bus.jpg", "zidane.jpg"]) # results list
@ -747,8 +747,8 @@ When using YOLO models in a multi-threaded application, it's important to instan
# Starting threads that each have their own model instance
Thread(target=thread_safe_predict, args=("yolo11n.pt", "image1.jpg")).start()
Thread(target=thread_safe_predict, args=("yolo11n.pt", "image2.jpg")).start()
```
For an in-depth look at thread-safe inference with YOLO models and step-by-step instructions, please refer to our [YOLO Thread-Safe Inference Guide](../guides/yolo-thread-safe-inference.md). This guide will provide you with all the necessary information to avoid common pitfalls and ensure that your multi-threaded inference runs smoothly.
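For reference, a minimal `thread_safe_predict` helper matching the call signature above (this sketch is an assumption; see the linked guide for the full pattern) could be:

```python
from ultralytics import YOLO


def thread_safe_predict(model_name, image_path):
    """Instantiate a fresh model inside the thread so no model instance is shared across threads."""
    model = YOLO(model_name)
    results = model.predict(image_path)
    return results
```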
@ -765,7 +765,7 @@ Here's a Python script using OpenCV (`cv2`) and YOLOv8 to run inference on video
from ultralytics import YOLO
# Load the YOLO11 model
model = YOLO("yolo11n.pt")
# Open the video file
video_path = "path/to/your/video/file.mp4"

View file

@ -60,7 +60,7 @@ The default tracker is BoT-SORT.
If the object confidence score is low, i.e. lower than [`track_high_thresh`](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/trackers/bytetrack.yaml#L5), then no tracks will be successfully returned and updated.
To run the tracker on video streams, use a trained Detect, Segment, or Pose model such as YOLO11n, YOLO11n-seg, or YOLO11n-pose.
!!! example
@ -70,9 +70,9 @@ To run the tracker on video streams, use a trained Detect, Segment or Pose model
from ultralytics import YOLO
# Load an official or custom model
model = YOLO("yolo11n.pt") # Load an official Detect model
model = YOLO("yolo11n-seg.pt") # Load an official Segment model
model = YOLO("yolo11n-pose.pt") # Load an official Pose model
model = YOLO("path/to/best.pt") # Load a custom trained model
# Perform tracking with the model
@ -84,9 +84,9 @@ To run the tracker on video streams, use a trained Detect, Segment or Pose model
```bash
# Perform tracking with various models using the command line interface
yolo track model=yolo11n.pt source="https://youtu.be/LNwODJXcvt4" # Official Detect model
yolo track model=yolo11n-seg.pt source="https://youtu.be/LNwODJXcvt4" # Official Segment model
yolo track model=yolo11n-pose.pt source="https://youtu.be/LNwODJXcvt4" # Official Pose model
yolo track model=path/to/best.pt source="https://youtu.be/LNwODJXcvt4" # Custom trained model
# Track using ByteTrack tracker
@ -113,7 +113,7 @@ Tracking configuration shares properties with Predict mode, such as `conf`, `iou
from ultralytics import YOLO
# Configure the tracking parameters and run the tracker
model = YOLO("yolo11n.pt")
results = model.track(source="https://youtu.be/LNwODJXcvt4", conf=0.3, iou=0.5, show=True)
```
@ -121,7 +121,7 @@ Tracking configuration shares properties with Predict mode, such as `conf`, `iou
```bash
# Configure tracking parameters and run the tracker using the command line interface
yolo track model=yolo11n.pt source="https://youtu.be/LNwODJXcvt4" conf=0.3 iou=0.5 show
```
### Tracker Selection
@ -136,7 +136,7 @@ Ultralytics also allows you to use a modified tracker configuration file. To do
from ultralytics import YOLO
# Load the model and run the tracker with a custom configuration file
model = YOLO("yolo11n.pt")
results = model.track(source="https://youtu.be/LNwODJXcvt4", tracker="custom_tracker.yaml")
```
@ -144,7 +144,7 @@ Ultralytics also allows you to use a modified tracker configuration file. To do
```bash
# Load the model and run the tracker with a custom configuration file using the command line interface
yolo track model=yolo11n.pt source="https://youtu.be/LNwODJXcvt4" tracker='custom_tracker.yaml'
```
For a comprehensive list of tracking arguments, refer to the [ultralytics/cfg/trackers](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/trackers) page.
@ -153,7 +153,7 @@ For a comprehensive list of tracking arguments, refer to the [ultralytics/cfg/tr
### Persisting Tracks Loop
Here is a Python script using [OpenCV](https://www.ultralytics.com/glossary/opencv) (`cv2`) and YOLO11 to run object tracking on video frames. This script still assumes you have already installed the necessary packages (`opencv-python` and `ultralytics`). The `persist=True` argument tells the tracker that the current image or frame is the next in a sequence and to expect tracks from the previous image in the current image.
!!! example "Streaming for-loop with tracking"
@ -162,8 +162,8 @@ Here is a Python script using [OpenCV](https://www.ultralytics.com/glossary/open
from ultralytics import YOLO
# Load the YOLO11 model
model = YOLO("yolo11n.pt")
# Open the video file
video_path = "path/to/video.mp4"
@ -175,14 +175,14 @@ Here is a Python script using [OpenCV](https://www.ultralytics.com/glossary/open
success, frame = cap.read()
if success:
# Run YOLO11 tracking on the frame, persisting tracks between frames
results = model.track(frame, persist=True)
# Visualize the results on the frame
annotated_frame = results[0].plot()
# Display the annotated frame
cv2.imshow("YOLO11 Tracking", annotated_frame)
# Break the loop if 'q' is pressed
if cv2.waitKey(1) & 0xFF == ord("q"):
@ -200,9 +200,9 @@ Please note the change from `model(frame)` to `model.track(frame)`, which enable
### Plotting Tracks Over Time
Visualizing object tracks over consecutive frames can provide valuable insights into the movement patterns and behavior of detected objects within a video. With Ultralytics YOLO11, plotting these tracks is a seamless and efficient process.
In the following example, we demonstrate how to utilize YOLO11's tracking capabilities to plot the movement of detected objects across multiple video frames. This script involves opening a video file, reading it frame by frame, and utilizing the YOLO model to identify and track various objects. By retaining the center points of the detected bounding boxes and connecting them, we can draw lines that represent the paths followed by the tracked objects.
!!! example "Plotting tracks over multiple video frames"
@ -214,8 +214,8 @@ In the following example, we demonstrate how to utilize YOLOv8's tracking capabi
from ultralytics import YOLO
# Load the YOLO11 model
model = YOLO("yolo11n.pt")
# Open the video file
video_path = "path/to/video.mp4"
@ -230,7 +230,7 @@ In the following example, we demonstrate how to utilize YOLOv8's tracking capabi
success, frame = cap.read()
if success:
# Run YOLO11 tracking on the frame, persisting tracks between frames
results = model.track(frame, persist=True)
# Get the boxes and track IDs
@ -253,7 +253,7 @@ In the following example, we demonstrate how to utilize YOLOv8's tracking capabi
cv2.polylines(annotated_frame, [points], isClosed=False, color=(230, 230, 230), thickness=10)
# Display the annotated frame
cv2.imshow("YOLO11 Tracking", annotated_frame)
# Break the loop if 'q' is pressed
if cv2.waitKey(1) & 0xFF == ord("q"):
@ -275,7 +275,7 @@ In the provided Python script, we make use of Python's `threading` module to run
To ensure that each thread receives the correct parameters (the video file, the model to use and the file index), we define a function `run_tracker_in_thread` that accepts these parameters and contains the main tracking loop. This function reads the video frame by frame, runs the tracker, and displays the results.
Two different models are used in this example: `yolo11n.pt` and `yolo11n-seg.pt`, each tracking objects in a different video file. The video files are specified in `video_file1` and `video_file2`.
The `daemon=True` parameter in `threading.Thread` means that these threads will be closed as soon as the main program finishes. We then start the threads with `start()` and use `join()` to make the main thread wait until both tracker threads have finished.
@ -291,7 +291,7 @@ Finally, after all threads have completed their task, the windows displaying the
from ultralytics import YOLO
# Define model names and video sources
MODEL_NAMES = ["yolo11n.pt", "yolo11n-seg.pt"]
SOURCES = ["path/to/video.mp4", "0"] # local video, 0 for webcam
@ -300,7 +300,7 @@ Finally, after all threads have completed their task, the windows displaying the
Run YOLO tracker in its own thread for concurrent processing.
Args:
model_name (str): The name or path of the YOLO11 model file.
filename (str): The path to the video file or the identifier for the webcam/external camera source.
"""
model = YOLO(model_name)
@ -357,14 +357,14 @@ You can configure a custom tracker by copying an existing tracker configuration
```python
from ultralytics import YOLO
model = YOLO("yolo11n.pt")
results = model.track(source="https://youtu.be/LNwODJXcvt4", tracker="custom_tracker.yaml")
```
=== "CLI"
```bash
yolo track model=yolo11n.pt source="https://youtu.be/LNwODJXcvt4" tracker='custom_tracker.yaml'
```
### How can I run object tracking on multiple video streams simultaneously?
@ -381,7 +381,7 @@ To run object tracking on multiple video streams simultaneously, you can use Pyt
from ultralytics import YOLO
# Define model names and video sources
MODEL_NAMES = ["yolo11n.pt", "yolo11n-seg.pt"]
SOURCES = ["path/to/video.mp4", "0"] # local video, 0 for webcam
@ -390,7 +390,7 @@ To run object tracking on multiple video streams simultaneously, you can use Pyt
Run YOLO tracker in its own thread for concurrent processing.
Args:
model_name (str): The name or path of the YOLO11 model file.
filename (str): The path to the video file or the identifier for the webcam/external camera source.
"""
model = YOLO(model_name)
@ -438,7 +438,7 @@ To visualize object tracks over multiple video frames, you can use the YOLO mode
from ultralytics import YOLO
model = YOLO("yolo11n.pt")
video_path = "path/to/video.mp4"
cap = cv2.VideoCapture(video_path)
track_history = defaultdict(lambda: [])
@ -458,7 +458,7 @@ To visualize object tracks over multiple video frames, you can use the YOLO mode
track.pop(0)
points = np.hstack(track).astype(np.int32).reshape((-1, 1, 2))
cv2.polylines(annotated_frame, [points], isClosed=False, color=(230, 230, 230), thickness=10)
cv2.imshow("YOLO11 Tracking", annotated_frame)
if cv2.waitKey(1) & 0xFF == ord("q"):
break
else:

View file

@ -1,7 +1,7 @@
---
comments: true
description: Learn how to efficiently train object detection models using YOLO11 with comprehensive instructions on settings, augmentation, and hardware utilization.
keywords: Ultralytics, YOLO11, model training, deep learning, object detection, GPU training, dataset augmentation, hyperparameter tuning, model performance, M1 M2 training
---
# Model Training with Ultralytics YOLO
@ -10,7 +10,7 @@ keywords: Ultralytics, YOLOv8, model training, deep learning, object detection,
## Introduction
Training a [deep learning](https://www.ultralytics.com/glossary/deep-learning-dl) model involves feeding it data and adjusting its parameters so that it can make accurate predictions. Train mode in Ultralytics YOLO11 is engineered for effective and efficient training of object detection models, fully utilizing modern hardware capabilities. This guide aims to cover all the details you need to get started with training your own models using YOLO11's robust set of features.
<p align="center">
<br>
@ -20,12 +20,12 @@ Training a [deep learning](https://www.ultralytics.com/glossary/deep-learning-dl
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> How to Train a YOLO model on Your Custom Dataset in Google Colab.
</p>
## Why Choose Ultralytics YOLO for Training?
Here are some compelling reasons to opt for YOLO11's Train mode:
- **Efficiency:** Make the most out of your hardware, whether you're on a single-GPU setup or scaling across multiple GPUs.
- **Versatility:** Train on custom datasets in addition to readily available ones like COCO, VOC, and ImageNet.
@ -34,7 +34,7 @@ Here are some compelling reasons to opt for YOLOv8's Train mode:
### Key Features of Train Mode
The following are some notable features of YOLO11's Train mode:
- **Automatic Dataset Download:** Standard datasets like COCO, VOC, and ImageNet are downloaded automatically on first use.
- **Multi-GPU Support:** Scale your training efforts seamlessly across multiple GPUs to expedite the process.
@ -43,11 +43,11 @@ The following are some notable features of YOLOv8's Train mode:
!!! tip
* YOLO11 datasets like COCO, VOC, ImageNet and many others automatically download on first use, i.e. `yolo train data=coco.yaml`
## Usage Examples
Train YOLO11n on the COCO8 dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) at image size 640. The training device can be specified using the `device` argument. If no argument is passed, GPU `device=0` will be used if available; otherwise `device='cpu'` will be used. See the Arguments section below for a full list of training arguments.
!!! example "Single-GPU and CPU Training Example"
@ -59,9 +59,9 @@ Train YOLOv8n on the COCO8 dataset for 100 [epochs](https://www.ultralytics.com/
from ultralytics import YOLO
# Load a model
model = YOLO("yolo11n.yaml") # build a new model from YAML
model = YOLO("yolo11n.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n.yaml").load("yolo11n.pt") # build from YAML and transfer weights
# Train the model
results = model.train(data="coco8.yaml", epochs=100, imgsz=640)
@ -71,13 +71,13 @@ Train YOLOv8n on the COCO8 dataset for 100 [epochs](https://www.ultralytics.com/
```bash
# Build a new model from YAML and start training from scratch
yolo detect train data=coco8.yaml model=yolo11n.yaml epochs=100 imgsz=640
# Start training from a pretrained *.pt model
yolo detect train data=coco8.yaml model=yolo11n.pt epochs=100 imgsz=640
# Build a new model from YAML, transfer pretrained weights to it and start training
yolo detect train data=coco8.yaml model=yolo11n.yaml pretrained=yolo11n.pt epochs=100 imgsz=640
```
### Multi-GPU Training
@ -94,7 +94,7 @@ Multi-GPU training allows for more efficient utilization of available hardware r
from ultralytics import YOLO
# Load a model
model = YOLO("yolo11n.pt") # load a pretrained model (recommended for training)
# Train the model with 2 GPUs
results = model.train(data="coco8.yaml", epochs=100, imgsz=640, device=[0, 1])
@ -104,7 +104,7 @@ Multi-GPU training allows for more efficient utilization of available hardware r
```bash
# Start training from a pretrained *.pt model using GPUs 0 and 1
yolo detect train data=coco8.yaml model=yolo11n.pt epochs=100 imgsz=640 device=0,1
```
### Apple M1 and M2 MPS Training
@ -121,7 +121,7 @@ To enable training on Apple M1 and M2 chips, you should specify 'mps' as your de
from ultralytics import YOLO
# Load a model
model = YOLO("yolo11n.pt") # load a pretrained model (recommended for training)
# Train the model with MPS
results = model.train(data="coco8.yaml", epochs=100, imgsz=640, device="mps")
@ -131,7 +131,7 @@ To enable training on Apple M1 and M2 chips, you should specify 'mps' as your de
```bash
# Start training from a pretrained *.pt model using MPS
yolo detect train data=coco8.yaml model=yolo11n.pt epochs=100 imgsz=640 device=mps
```
While leveraging the computational power of the M1/M2 chips, this enables more efficient processing of the training tasks. For more detailed guidance and advanced configuration options, please refer to the [PyTorch MPS documentation](https://pytorch.org/docs/stable/notes/mps.html).
@ -199,7 +199,7 @@ These settings can be adjusted to meet the specific requirements of the dataset
## Logging
In training a YOLO11 model, you might find it valuable to keep track of the model's performance over time. This is where logging comes into play. Ultralytics YOLO provides support for three types of loggers: Comet, ClearML, and TensorBoard.
To use a logger, select it from the dropdown menu in the code snippet above and run it. The chosen logger will be installed and initialized.
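As one hedged example (assuming a recent `ultralytics` release where logger integrations are toggled through the package settings, and that `tensorboard` is one of the available settings keys), TensorBoard logging can be enabled before training like this:

```python
from ultralytics import YOLO, settings

# Enable the TensorBoard integration via the Ultralytics settings (assumed settings key)
settings.update({"tensorboard": True})

# Train as usual; TensorBoard event files are written to the run directory
model = YOLO("yolo11n.pt")
model.train(data="coco8.yaml", epochs=3, imgsz=640)
```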
@ -272,9 +272,9 @@ After setting up your logger, you can then proceed with your model training. All
## FAQ
### How do I train an [object detection](https://www.ultralytics.com/glossary/object-detection) model using Ultralytics YOLO11?
To train an object detection model using Ultralytics YOLO11, you can either use the Python API or the CLI. Below is an example for both:
!!! example "Single-GPU and CPU Training Example"
@ -284,7 +284,7 @@ To train an object detection model using Ultralytics YOLOv8, you can either use
from ultralytics import YOLO
# Load a model
model = YOLO("yolo11n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="coco8.yaml", epochs=100, imgsz=640)
@ -293,14 +293,14 @@ To train an object detection model using Ultralytics YOLOv8, you can either use
=== "CLI"
```bash
yolo detect train data=coco8.yaml model=yolo11n.pt epochs=100 imgsz=640
```
For more details, refer to the [Train Settings](#train-settings) section.
### What are the key features of Ultralytics YOLO11's Train mode?
The key features of Ultralytics YOLO11's Train mode include:
- **Automatic Dataset Download:** Automatically downloads standard datasets like COCO, VOC, and ImageNet.
- **Multi-GPU Support:** Scale training across multiple GPUs for faster processing.
@ -309,7 +309,7 @@ The key features of Ultralytics YOLOv8's Train mode include:
These features make training efficient and customizable to your needs. For more details, see the [Key Features of Train Mode](#key-features-of-train-mode) section.
### How do I resume training from an interrupted session in Ultralytics YOLO11?
To resume training from an interrupted session, set the `resume` argument to `True` and specify the path to the last saved checkpoint.
@ -335,9 +335,9 @@ To resume training from an interrupted session, set the `resume` argument to `Tr
Check the section on [Resuming Interrupted Trainings](#resuming-interrupted-trainings) for more information.
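A minimal sketch of resuming (the checkpoint path below is a placeholder for your own `last.pt`):

```python
from ultralytics import YOLO

# Load the last checkpoint saved by the interrupted run
model = YOLO("path/to/last.pt")

# Resume training from the saved epoch, optimizer state, and arguments
results = model.train(resume=True)
```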
### Can I train YOLO11 models on Apple M1 and M2 chips?
Yes, Ultralytics YOLO11 supports training on Apple M1 and M2 chips utilizing the Metal Performance Shaders (MPS) framework. Specify 'mps' as your training device.
!!! example "MPS Training Example"
@ -347,7 +347,7 @@ Yes, Ultralytics YOLOv8 supports training on Apple M1 and M2 chips utilizing the
from ultralytics import YOLO
# Load a pretrained model
model = YOLO("yolo11n.pt")
# Train the model on M1/M2 chip
results = model.train(data="coco8.yaml", epochs=100, imgsz=640, device="mps")
@ -356,14 +356,14 @@ Yes, Ultralytics YOLOv8 supports training on Apple M1 and M2 chips utilizing the
=== "CLI"
```bash
yolo detect train data=coco8.yaml model=yolo11n.pt epochs=100 imgsz=640 device=mps
```
For more details, refer to the [Apple M1 and M2 MPS Training](#apple-m1-and-m2-mps-training) section.
### What are the common training settings, and how do I configure them?
Ultralytics YOLO11 allows you to configure a variety of training settings such as batch size, learning rate, epochs, and more through arguments. Here's a brief overview:
| Argument | Default | Description |
| -------- | ------- | ---------------------------------------------------------------------- |

View file

@ -1,7 +1,7 @@
---
comments: true
description: Learn how to validate your YOLO11 model with precise metrics, easy-to-use tools, and custom settings for optimal performance.
keywords: Ultralytics, YOLO11, model validation, machine learning, object detection, mAP metrics, Python API, CLI
---
# Model Validation with Ultralytics YOLO
@ -10,7 +10,7 @@ keywords: Ultralytics, YOLOv8, model validation, machine learning, object detect
## Introduction
Validation is a critical step in the [machine learning](https://www.ultralytics.com/glossary/machine-learning-ml) pipeline, allowing you to assess the quality of your trained models. Val mode in Ultralytics YOLO11 provides a robust suite of tools and metrics for evaluating the performance of your [object detection](https://www.ultralytics.com/glossary/object-detection) models. This guide serves as a complete resource for understanding how to effectively use the Val mode to ensure that your models are both accurate and reliable.
<p align="center">
<br>
@ -25,7 +25,7 @@ Validation is a critical step in the [machine learning](https://www.ultralytics.
## Why Validate with Ultralytics YOLO?
Here's why using YOLO11's Val mode is advantageous:
- **Precision:** Get accurate metrics like mAP50, mAP75, and mAP50-95 to comprehensively evaluate your model.
- **Convenience:** Utilize built-in features that remember training settings, simplifying the validation process.
@ -34,7 +34,7 @@ Here's why using YOLOv8's Val mode is advantageous:
### Key Features of Val Mode
These are the notable functionalities offered by YOLO11's Val mode:
- **Automated Settings:** Models remember their training configurations for straightforward validation.
- **Multi-Metric Support:** Evaluate your model based on a range of accuracy metrics.
@ -43,11 +43,11 @@ These are the notable functionalities offered by YOLOv8's Val mode:
!!! tip
* YOLO11 models automatically remember their training settings, so you can validate a model at the same image size and on the original dataset easily with just `yolo val model=yolo11n.pt` or `model('yolo11n.pt').val()`
## Usage Examples
Validate trained YOLO11n model [accuracy](https://www.ultralytics.com/glossary/accuracy) on the COCO8 dataset. No arguments are needed as the `model` retains its training `data` and arguments as model attributes. See the Arguments section below for a full list of validation arguments.
!!! example
@ -57,7 +57,7 @@ Validate trained YOLOv8n model [accuracy](https://www.ultralytics.com/glossary/a
from ultralytics import YOLO
# Load a model
model = YOLO("yolo11n.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom model
# Validate the model
@ -71,7 +71,7 @@ Validate trained YOLOv8n model [accuracy](https://www.ultralytics.com/glossary/a
=== "CLI"
```bash
yolo detect val model=yolo11n.pt # val official model
yolo detect val model=path/to/best.pt # val custom model
```
@ -95,7 +95,7 @@ The below examples showcase YOLO model validation with custom arguments in Pytho
from ultralytics import YOLO
# Load a model
model = YOLO("yolo11n.pt")
# Customize validation settings
validation_results = model.val(data="coco8.yaml", imgsz=640, batch=16, conf=0.25, iou=0.6, device="0")
@ -104,20 +104,20 @@ The below examples showcase YOLO model validation with custom arguments in Pytho
=== "CLI"
```bash
yolo val model=yolo11n.pt data=coco8.yaml imgsz=640 batch=16 conf=0.25 iou=0.6 device=0
```
## FAQ
### How do I validate my YOLO11 model with Ultralytics?
To validate your YOLO11 model, you can use the Val mode provided by Ultralytics. For example, using the Python API, you can load a model and run validation with:
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolo11n.pt")
# Validate the model
metrics = model.val()
@ -127,14 +127,14 @@ print(metrics.box.map) # map50-95
Alternatively, you can use the command-line interface (CLI):
```bash
yolo val model=yolo11n.pt
```
For further customization, you can adjust various arguments like `imgsz`, `batch`, and `conf` in both Python and CLI modes. Check the [Arguments for YOLO Model Validation](#arguments-for-yolo-model-validation) section for the full list of parameters.
### What metrics can I get from YOLO11 model validation?
YOLO11 model validation provides several key metrics to assess model performance. These include:
- mAP50 (mean Average Precision at IoU threshold 0.5)
- mAP75 (mean Average Precision at IoU threshold 0.75)
@ -156,16 +156,16 @@ For a complete performance evaluation, it's crucial to review all these metrics.
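These values are exposed on the object returned by `model.val()`; a brief sketch using the attribute names shown elsewhere on this page:

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
metrics = model.val(data="coco8.yaml")

print(metrics.box.map50)  # mAP at IoU threshold 0.5
print(metrics.box.map75)  # mAP at IoU threshold 0.75
print(metrics.box.map)  # mAP averaged over IoU 0.5-0.95
```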
Using Ultralytics YOLO for validation provides several advantages:
- **[Precision](https://www.ultralytics.com/glossary/precision):** YOLO11 offers accurate performance metrics including mAP50, mAP75, and mAP50-95.
- **Convenience:** The models remember their training settings, making validation straightforward.
- **Flexibility:** You can validate against the same or different datasets and image sizes.
- **Hyperparameter Tuning:** Validation metrics help in fine-tuning models for better performance.
These benefits ensure that your models are evaluated thoroughly and can be optimized for superior results. Learn more about these advantages in the [Why Validate with Ultralytics YOLO](#why-validate-with-ultralytics-yolo) section.
### Can I validate my YOLO11 model using a custom dataset?
Yes, you can validate your YOLO11 model using a [custom dataset](https://docs.ultralytics.com/datasets/). Specify the `data` argument with the path to your dataset configuration file. This file should include paths to the [validation data](https://www.ultralytics.com/glossary/validation-data), class names, and other relevant details.
Example in Python:
@ -173,7 +173,7 @@ Example in Python:
from ultralytics import YOLO
# Load a model
model = YOLO("yolo11n.pt")
# Validate with a custom dataset
metrics = model.val(data="path/to/your/custom_dataset.yaml")
@ -183,12 +183,12 @@ print(metrics.box.map) # map50-95
Example using CLI:
```bash
yolo val model=yolo11n.pt data=path/to/your/custom_dataset.yaml
```
For more customizable options during validation, see the [Example Validation with Arguments](#example-validation-with-arguments) section.
### How do I save validation results to a JSON file in YOLO11?
To save the validation results to a JSON file, you can set the `save_json` argument to `True` when running validation. This can be done in both the Python API and CLI.
@ -198,7 +198,7 @@ Example in Python:
from ultralytics import YOLO
# Load a model
model = YOLO("yolo11n.pt")
# Save validation results to JSON
metrics = model.val(save_json=True)
@ -207,7 +207,7 @@ metrics = model.val(save_json=True)
Example using CLI:
```bash
yolo val model=yolo11n.pt save_json=True
```
This functionality is particularly useful for further analysis or integration with other tools. Check the [Arguments for YOLO Model Validation](#arguments-for-yolo-model-validation) for more details.