Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Glenn Jocher 2023-11-22 20:45:46 +01:00 committed by GitHub
parent 0c4e97443b
commit 16a13a1ce0
178 changed files with 14224 additions and 561 deletions


@@ -61,7 +61,7 @@ The example showcases the variety and complexity of the objects in the Caltech-1
If you use the Caltech-101 dataset in your research or development work, please cite the following paper:
!!! Note ""
!!! Quote ""
=== "BibTeX"


@@ -61,7 +61,7 @@ The example showcases the diversity and complexity of the objects in the Caltech
If you use the Caltech-256 dataset in your research or development work, please cite the following paper:
!!! Note ""
!!! Quote ""
=== "BibTeX"


@@ -64,7 +64,7 @@ The example showcases the variety and complexity of the objects in the CIFAR-10
If you use the CIFAR-10 dataset in your research or development work, please cite the following paper:
!!! Note ""
!!! Quote ""
=== "BibTeX"


@@ -64,7 +64,7 @@ The example showcases the variety and complexity of the objects in the CIFAR-100
If you use the CIFAR-100 dataset in your research or development work, please cite the following paper:
!!! Note ""
!!! Quote ""
=== "BibTeX"


@@ -64,7 +64,7 @@ The example showcases the variety and complexity of the images in the ImageNet d
If you use the ImageNet dataset in your research or development work, please cite the following paper:
!!! Note ""
!!! Quote ""
=== "BibTeX"


@@ -59,7 +59,7 @@ The example showcases the variety and complexity of the images in the ImageNet10
If you use the ImageNet10 dataset in your research or development work, please cite the original ImageNet paper:
!!! Note ""
!!! Quote ""
=== "BibTeX"


@@ -80,7 +80,7 @@ In this example, the `train` directory contains subdirectories for each class in
## Usage
!!! Example ""
!!! Example
=== "Python"
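A minimal training sketch using the `ultralytics` Python API (the dataset path is illustrative):

```python
from ultralytics import YOLO

# Load a pretrained YOLOv8 classification model
model = YOLO('yolov8n-cls.pt')

# Train on a dataset folder organized as described above (path is illustrative)
results = model.train(data='path/to/dataset', epochs=100, imgsz=224)
```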


@@ -69,7 +69,7 @@ If you use the MNIST dataset in your
research or development work, please cite the following paper:
!!! Note ""
!!! Quote ""
=== "BibTeX"


@@ -80,7 +80,7 @@ The example showcases the variety and complexity of the data in the Argoverse da
If you use the Argoverse dataset in your research or development work, please cite the following paper:
!!! Note ""
!!! Quote ""
=== "BibTeX"


@@ -76,7 +76,7 @@ The example showcases the variety and complexity of the images in the COCO datas
If you use the COCO dataset in your research or development work, please cite the following paper:
!!! Note ""
!!! Quote ""
=== "BibTeX"


@@ -62,7 +62,7 @@ The example showcases the variety and complexity of the images in the COCO8 data
If you use the COCO dataset in your research or development work, please cite the following paper:
!!! Note ""
!!! Quote ""
=== "BibTeX"


@@ -75,7 +75,7 @@ The example showcases the variety and complexity of the data in the Global Wheat
If you use the Global Wheat Head Dataset in your research or development work, please cite the following paper:
!!! Note ""
!!! Quote ""
=== "BibTeX"


@@ -48,7 +48,7 @@ When using the Ultralytics YOLO format, organize your training and validation im
Here's how you can use these formats to train your model:
!!! Example ""
!!! Example
=== "Python"
@@ -93,7 +93,7 @@ If you have your own dataset and would like to use it for training detection mod
You can easily convert labels from the popular COCO dataset format to the YOLO format using the following code snippet:
!!! Example ""
!!! Example
=== "Python"
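The conversion likely boils down to a single call to the `convert_coco` helper (the annotations path is illustrative):

```python
from ultralytics.data.converter import convert_coco

# Convert COCO JSON annotations into YOLO-format labels (path is illustrative)
convert_coco(labels_dir='path/to/coco/annotations/')
```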


@@ -75,7 +75,7 @@ The example showcases the variety and complexity of the data in the Objects365 d
If you use the Objects365 dataset in your research or development work, please cite the following paper:
!!! Note ""
!!! Quote ""
=== "BibTeX"


@@ -94,7 +94,7 @@ Researchers can gain invaluable insights into the array of computer vision chall
For those employing Open Images V7 in their work, it's prudent to cite the relevant papers and acknowledge the creators:
!!! Note ""
!!! Quote ""
=== "BibTeX"


@@ -77,7 +77,7 @@ The example showcases the variety and complexity of the data in the SKU-110k dat
If you use the SKU-110k dataset in your research or development work, please cite the following paper:
!!! Note ""
!!! Quote ""
=== "BibTeX"


@@ -73,7 +73,7 @@ The example showcases the variety and complexity of the data in the VisDrone dat
If you use the VisDrone dataset in your research or development work, please cite the following paper:
!!! Note ""
!!! Quote ""
=== "BibTeX"


@@ -77,7 +77,7 @@ The example showcases the variety and complexity of the images in the VOC datase
If you use the VOC dataset in your research or development work, please cite the following paper:
!!! Note ""
!!! Quote ""
=== "BibTeX"


@@ -79,7 +79,7 @@ The example showcases the variety and complexity of the data in the xView datase
If you use the xView dataset in your research or development work, please cite the following paper:
!!! Note ""
!!! Quote ""
=== "BibTeX"


@@ -109,7 +109,7 @@ The dataset's richness offers invaluable insights into object detection challeng
For those leveraging DOTA v2 in their endeavors, it's pertinent to cite the relevant research papers:
!!! Note ""
!!! Quote ""
=== "BibTeX"


@@ -32,7 +32,7 @@ An example of a `*.txt` label file for the above image, which contains an object
To train a model using these OBB formats:
!!! Example ""
!!! Example
=== "Python"
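A minimal training sketch, assuming an OBB-capable YOLOv8 model and a DOTA-style dataset YAML (both names are illustrative):

```python
from ultralytics import YOLO

# Model and dataset names are illustrative placeholders
model = YOLO('yolov8n-obb.pt')
results = model.train(data='dota8.yaml', epochs=100, imgsz=640)
```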
@@ -69,7 +69,7 @@ For those looking to introduce their own datasets with oriented bounding boxes,
Transitioning labels from the DOTA dataset format to the YOLO OBB format can be achieved with this script:
!!! Example ""
!!! Example
=== "Python"
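That script likely reduces to a single helper call (the dataset root is illustrative):

```python
from ultralytics.data.converter import convert_dota_to_yolo_obb

# Convert DOTA annotations into YOLO OBB-format labels (root path is illustrative)
convert_dota_to_yolo_obb('path/to/DOTA')
```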


@@ -77,7 +77,7 @@ The example showcases the variety and complexity of the images in the COCO-Pose
If you use the COCO-Pose dataset in your research or development work, please cite the following paper:
!!! Note ""
!!! Quote ""
=== "BibTeX"


@@ -62,7 +62,7 @@ The example showcases the variety and complexity of the images in the COCO8-Pose
If you use the COCO dataset in your research or development work, please cite the following paper:
!!! Note ""
!!! Quote ""
=== "BibTeX"


@@ -64,7 +64,7 @@ The `train` and `val` fields specify the paths to the directories containing the
## Usage
!!! Example ""
!!! Example
=== "Python"
@@ -125,7 +125,7 @@ If you have your own dataset and would like to use it for training pose estimati
Ultralytics provides a convenient conversion tool to convert labels from the popular COCO dataset format to YOLO format:
!!! Example ""
!!! Example
=== "Python"
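A sketch of that conversion using the same `convert_coco` helper (the annotations path is illustrative):

```python
from ultralytics.data.converter import convert_coco

# use_keypoints=True emits pose labels instead of plain boxes (path is illustrative)
convert_coco(labels_dir='path/to/coco/annotations/', use_keypoints=True)
```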


@@ -76,7 +76,7 @@ The example showcases the variety and complexity of the images in the COCO-Seg d
If you use the COCO-Seg dataset in your research or development work, please cite the original COCO paper and acknowledge the extension to COCO-Seg:
!!! Note ""
!!! Quote ""
=== "BibTeX"


@@ -62,7 +62,7 @@ The example showcases the variety and complexity of the images in the COCO8-Seg
If you use the COCO dataset in your research or development work, please cite the following paper:
!!! Note ""
!!! Quote ""
=== "BibTeX"


@@ -66,7 +66,7 @@ The `train` and `val` fields specify the paths to the directories containing the
## Usage
!!! Example ""
!!! Example
=== "Python"
@@ -101,7 +101,7 @@ If you have your own dataset and would like to use it for training segmentation
You can easily convert labels from the popular COCO dataset format to the YOLO format using the following code snippet:
!!! Example ""
!!! Example
=== "Python"
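A sketch of that snippet, assuming the `convert_coco` helper (the annotations path is illustrative):

```python
from ultralytics.data.converter import convert_coco

# use_segments=True emits segmentation polygons instead of boxes (path is illustrative)
convert_coco(labels_dir='path/to/coco/annotations/', use_segments=True)
```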
@@ -123,7 +123,7 @@ Auto-annotation is an essential feature that allows you to generate a segmentati
To auto-annotate your dataset using the Ultralytics framework, you can use the `auto_annotate` function as shown below:
!!! Example ""
!!! Example
=== "Python"
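A minimal sketch of that call (the paths and model weights are illustrative):

```python
from ultralytics.data.annotator import auto_annotate

# Detect objects with a YOLOv8 model, then segment them with SAM (paths illustrative)
auto_annotate(data='path/to/images', det_model='yolov8x.pt', sam_model='sam_b.pt')
```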


@@ -12,7 +12,7 @@ Multi-Object Detector doesn't need standalone training and directly supports pre
## Usage
!!! Example ""
!!! Example
=== "Python"


@@ -69,7 +69,7 @@ The process is repeated until either the set number of iterations is reached or
Here's how to use the `model.tune()` method to utilize the `Tuner` class for hyperparameter tuning of YOLOv8n on COCO8 for 30 epochs with an AdamW optimizer, skipping plotting, checkpointing and validation except on the final epoch for faster tuning.
!!! Example ""
!!! Example
=== "Python"
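A sketch of that call, matching the description above:

```python
from ultralytics import YOLO

model = YOLO('yolov8n.pt')

# 30-epoch tuning run; plotting, checkpointing and per-epoch validation
# are disabled for speed, as described above
model.tune(data='coco8.yaml', epochs=30, iterations=300, optimizer='AdamW',
           plots=False, save=False, val=False)
```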


@@ -30,7 +30,7 @@ Without further ado, let's dive in!
- It includes 6 class labels, with the total instance counts for each listed below.
| Class Label | Instance Count |
|:------------|:--------------:|
| Apple | 7049 |
| Grapes | 7202 |
| Pineapple | 1613 |


@@ -167,7 +167,7 @@ That's it! Now you're equipped to use YOLOv8 with SAHI for both standard and sli
If you use SAHI in your research or development work, please cite the original SAHI paper and acknowledge the authors:
!!! Note ""
!!! Quote ""
=== "BibTeX"


@@ -27,7 +27,7 @@ OpenVINO, short for Open Visual Inference & Neural Network Optimization toolkit,
Export a YOLOv8n model to OpenVINO format and run inference with the exported model.
!!! Example ""
!!! Example
=== "Python"
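A minimal sketch of the export-then-infer flow (the image path is illustrative):

```python
from ultralytics import YOLO

# Export a YOLOv8n PyTorch model to OpenVINO format
model = YOLO('yolov8n.pt')
model.export(format='openvino')  # creates 'yolov8n_openvino_model/'

# Load the exported model and run inference (image path is illustrative)
ov_model = YOLO('yolov8n_openvino_model/')
results = ov_model('path/to/bus.jpg')
```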
@@ -251,7 +251,7 @@ Benchmarks below run on 13th Gen Intel® Core® i7-13700H CPU at FP32 precision.
To reproduce the Ultralytics benchmarks above on all export [formats](../modes/export.md) run this code:
!!! Example ""
!!! Example
=== "Python"
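A sketch of that reproduction code, assuming the `benchmark` utility and a CPU device to match the setup above:

```python
from ultralytics.utils.benchmarks import benchmark

# Benchmark YOLOv8n across all export formats on CPU at FP32
benchmark(model='yolov8n.pt', data='coco8.yaml', imgsz=640, half=False, device='cpu')
```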


@@ -30,17 +30,24 @@ FastSAM is designed to address the limitations of the [Segment Anything Model (S
7. **Model Compression Feasibility:** FastSAM demonstrates the feasibility of a path that can significantly reduce the computational effort by introducing an artificial prior to the structure, thus opening new possibilities for large model architecture for general vision tasks.
## Usage
## Available Models, Supported Tasks, and Operating Modes
### Python API
This table presents the available models with their specific pre-trained weights, the tasks they support, and their compatibility with different operating modes like [Inference](../modes/predict.md), [Validation](../modes/val.md), [Training](../modes/train.md), and [Export](../modes/export.md), indicated by ✅ emojis for supported modes and ❌ emojis for unsupported modes.
The FastSAM models are easy to integrate into your Python applications. Ultralytics provides a user-friendly Python API to streamline the process.
| Model Type | Pre-trained Weights | Tasks Supported | Inference | Validation | Training | Export |
|------------|---------------------|----------------------------------------------|-----------|------------|----------|--------|
| FastSAM-s | `FastSAM-s.pt` | [Instance Segmentation](../tasks/segment.md) | ✅ | ❌ | ❌ | ✅ |
| FastSAM-x | `FastSAM-x.pt` | [Instance Segmentation](../tasks/segment.md) | ✅ | ❌ | ❌ | ✅ |
#### Predict Usage
## Usage Examples
The FastSAM models are easy to integrate into your Python applications. Ultralytics provides user-friendly Python API and CLI commands to streamline development.
### Predict Usage
To perform object detection on an image, use the `predict` method as shown below:
!!! Example ""
!!! Example
=== "Python"
```python
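# Sketch of the truncated example; source path, sizes and thresholds are illustrative
from ultralytics import FastSAM
from ultralytics.models.fastsam import FastSAMPrompt

source = 'path/to/bus.jpg'

# Create a FastSAM model and segment everything in the image
model = FastSAM('FastSAM-s.pt')  # or FastSAM-x.pt
everything_results = model(source, device='cpu', retina_masks=True, imgsz=1024, conf=0.4, iou=0.9)

# Prompt the results, here with an everything prompt, and save the plot
prompt_process = FastSAMPrompt(source, everything_results, device='cpu')
ann = prompt_process.everything_prompt()
prompt_process.plot(annotations=ann, output='./')
```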
@@ -83,11 +90,11 @@ To perform object detection on an image, use the `predict` method as shown below
This snippet demonstrates the simplicity of loading a pre-trained model and running a prediction on an image.
#### Val Usage
### Val Usage
Validation of the model on a dataset can be done as follows:
!!! Example ""
!!! Example
=== "Python"
```python
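# Sketch of the truncated validation example (dataset name is illustrative)
from ultralytics import FastSAM

model = FastSAM('FastSAM-s.pt')
results = model.val(data='coco8-seg.yaml')
```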
@@ -108,11 +115,11 @@ Validation of the model on a dataset can be done as follows:
Please note that FastSAM only supports detection and segmentation of a single class of object. This means it will recognize and segment all objects as the same class. Therefore, when preparing the dataset, you need to convert all object category IDs to 0.
### FastSAM official Usage
## FastSAM Official Usage
FastSAM is also available directly from the [https://github.com/CASIA-IVA-Lab/FastSAM](https://github.com/CASIA-IVA-Lab/FastSAM) repository. Here is a brief overview of the typical steps you might take to use FastSAM:
#### Installation
### Installation
1. Clone the FastSAM repository:
```shell
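git clone https://github.com/CASIA-IVA-Lab/FastSAM.git
```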
@@ -136,7 +143,7 @@ FastSAM is also available directly from the [https://github.com/CASIA-IVA-Lab/Fa
pip install git+https://github.com/openai/CLIP.git
```
#### Example Usage
### Example Usage
1. Download a [model checkpoint](https://drive.google.com/file/d/1m1sjY4ihXBU1fZXdQ-Xdj-mDltW-2Rqv/view?usp=sharing).
@@ -168,7 +175,7 @@ Additionally, you can try FastSAM through a [Colab demo](https://colab.research.
We would like to acknowledge the FastSAM authors for their significant contributions in the field of real-time instance segmentation:
!!! Note ""
!!! Quote ""
=== "BibTeX"


@@ -17,7 +17,7 @@ Here are some of the key models supported:
3. **[YOLOv5](yolov5.md)**: An improved version of the YOLO architecture by Ultralytics, offering better performance and speed trade-offs compared to previous versions.
4. **[YOLOv6](yolov6.md)**: Released by [Meituan](https://about.meituan.com/) in 2022, and in use in many of the company's autonomous delivery robots.
5. **[YOLOv7](yolov7.md)**: Updated YOLO models released in 2022 by the authors of YOLOv4.
6. **[YOLOv8](yolov8.md)**: The latest version of the YOLO family, featuring enhanced capabilities such as instance segmentation, pose/keypoints estimation, and classification.
6. **[YOLOv8](yolov8.md) NEW 🚀**: The latest version of the YOLO family, featuring enhanced capabilities such as instance segmentation, pose/keypoints estimation, and classification.
7. **[Segment Anything Model (SAM)](sam.md)**: Meta's Segment Anything Model (SAM).
8. **[Mobile Segment Anything Model (MobileSAM)](mobile-sam.md)**: MobileSAM for mobile applications, by Kyung Hee University.
9. **[Fast Segment Anything Model (FastSAM)](fast-sam.md)**: FastSAM by Image & Video Analysis Group, Institute of Automation, Chinese Academy of Sciences.
@@ -37,7 +37,11 @@ Here are some of the key models supported:
## Getting Started: Usage Examples
!!! Example ""
This example provides simple YOLO training and inference examples. For full documentation on these and other [modes](../modes/index.md) see the [Predict](../modes/predict.md), [Train](../modes/train.md), [Val](../modes/val.md) and [Export](../modes/export.md) docs pages.
Note the below example is for YOLOv8 [Detect](../tasks/detect.md) models for object detection. For additional supported tasks see the [Segment](../tasks/segment.md), [Classify](../tasks/classify.md) and [Pose](../tasks/pose.md) docs.
!!! Example
=== "Python"
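A minimal sketch of the truncated Python tab (paths are illustrative):

```python
from ultralytics import YOLO

# Load a COCO-pretrained YOLOv8n model
model = YOLO('yolov8n.pt')

# Train on the COCO8 example dataset, then run inference (image path illustrative)
results = model.train(data='coco8.yaml', epochs=100, imgsz=640)
results = model('path/to/bus.jpg')
```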


@@ -16,6 +16,14 @@ MobileSAM is implemented in various projects including [Grounding-SAM](https://g
MobileSAM is trained on a single GPU with a 100k dataset (1% of the original images) in less than a day. The code for this training will be made available in the future.
## Available Models, Supported Tasks, and Operating Modes
This table presents the available models with their specific pre-trained weights, the tasks they support, and their compatibility with different operating modes like [Inference](../modes/predict.md), [Validation](../modes/val.md), [Training](../modes/train.md), and [Export](../modes/export.md), indicated by ✅ emojis for supported modes and ❌ emojis for unsupported modes.
| Model Type | Pre-trained Weights | Tasks Supported | Inference | Validation | Training | Export |
|------------|---------------------|----------------------------------------------|-----------|------------|----------|--------|
| MobileSAM | `mobile_sam.pt` | [Instance Segmentation](../tasks/segment.md) | ✅ | ❌ | ❌ | ✅ |
## Adapting from SAM to MobileSAM
Since MobileSAM retains the same pipeline as the original SAM, we have incorporated the original's pre-processing, post-processing, and all other interfaces. Consequently, those currently using the original SAM can transition to MobileSAM with minimal effort.
@@ -61,7 +69,7 @@ You can download the model [here](https://github.com/ChaoningZhang/MobileSAM/blo
### Point Prompt
!!! Example ""
!!! Example
=== "Python"
```python
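# Sketch of the truncated example; image path and coordinates are illustrative
from ultralytics import SAM

# Load the MobileSAM weights
model = SAM('mobile_sam.pt')

# Predict a segment from a single foreground point prompt
model.predict('path/to/image.jpg', points=[900, 370], labels=[1])
```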
@@ -76,7 +84,7 @@ You can download the model [here](https://github.com/ChaoningZhang/MobileSAM/blo
### Box Prompt
!!! Example ""
!!! Example
=== "Python"
```python
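# Sketch of the truncated example; image path and box are illustrative
from ultralytics import SAM

model = SAM('mobile_sam.pt')

# Predict a segment from a box prompt given as [x1, y1, x2, y2]
model.predict('path/to/image.jpg', bboxes=[439, 437, 524, 709])
```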
@@ -95,7 +103,7 @@ We have implemented `MobileSAM` and `SAM` using the same API. For more usage inf
If you find MobileSAM useful in your research or development work, please consider citing our paper:
!!! Note ""
!!! Quote ""
=== "BibTeX"


@@ -26,13 +26,11 @@ The Ultralytics Python API provides pre-trained PaddlePaddle RT-DETR models with
- RT-DETR-L: 53.0% AP on COCO val2017, 114 FPS on T4 GPU
- RT-DETR-X: 54.8% AP on COCO val2017, 74 FPS on T4 GPU
## Usage
## Usage Examples
You can use RT-DETR for object detection tasks using the `ultralytics` pip package. The following is a sample code snippet showing how to use RT-DETR models for training and inference:
This example provides simple RT-DETR training and inference examples. For full documentation on these and other [modes](../modes/index.md) see the [Predict](../modes/predict.md), [Train](../modes/train.md), [Val](../modes/val.md) and [Export](../modes/export.md) docs pages.
!!! Example ""
This example provides simple inference code for RT-DETR. For more options including handling inference results see [Predict](../modes/predict.md) mode. For using RT-DETR with additional modes see [Train](../modes/train.md), [Val](../modes/val.md) and [Export](../modes/export.md).
!!! Example
=== "Python"
@@ -62,26 +60,20 @@ You can use RT-DETR for object detection tasks using the `ultralytics` pip packa
yolo predict model=rtdetr-l.pt source=path/to/bus.jpg
```
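The Python tab of this example likely follows this sketch (image path is illustrative):

```python
from ultralytics import RTDETR

# Load a COCO-pretrained RT-DETR-l model
model = RTDETR('rtdetr-l.pt')
model.info()

# Train on the COCO8 example dataset, then run inference (image path illustrative)
results = model.train(data='coco8.yaml', epochs=100, imgsz=640)
results = model('path/to/bus.jpg')
```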
### Supported Tasks
## Supported Tasks and Modes
| Model Type | Pre-trained Weights | Tasks Supported |
|---------------------|---------------------|------------------|
| RT-DETR Large | `rtdetr-l.pt` | Object Detection |
| RT-DETR Extra-Large | `rtdetr-x.pt` | Object Detection |
This table presents the model types, the specific pre-trained weights, the tasks supported by each model, and the various modes ([Train](../modes/train.md) , [Val](../modes/val.md), [Predict](../modes/predict.md), [Export](../modes/export.md)) that are supported, indicated by ✅ emojis.
### Supported Modes
| Mode | Supported |
|------------|-----------|
| Inference | ✅ |
| Validation | ✅ |
| Training | ✅ |
| Model Type | Pre-trained Weights | Tasks Supported | Inference | Validation | Training | Export |
|---------------------|---------------------|----------------------------------------|-----------|------------|----------|--------|
| RT-DETR Large | `rtdetr-l.pt` | [Object Detection](../tasks/detect.md) | ✅ | ✅ | ✅ | ✅ |
| RT-DETR Extra-Large | `rtdetr-x.pt` | [Object Detection](../tasks/detect.md) | ✅ | ✅ | ✅ | ✅ |
## Citations and Acknowledgements
If you use Baidu's RT-DETR in your research or development work, please cite the [original paper](https://arxiv.org/abs/2304.08069):
!!! Note ""
!!! Quote ""
=== "BibTeX"


@@ -26,6 +26,15 @@ Example images with overlaid masks from our newly introduced dataset, SA-1B. SA-
For an in-depth look at the Segment Anything Model and the SA-1B dataset, please visit the [Segment Anything website](https://segment-anything.com) and check out the research paper [Segment Anything](https://arxiv.org/abs/2304.02643).
## Available Models, Supported Tasks, and Operating Modes
This table presents the available models with their specific pre-trained weights, the tasks they support, and their compatibility with different operating modes like [Inference](../modes/predict.md), [Validation](../modes/val.md), [Training](../modes/train.md), and [Export](../modes/export.md), indicated by ✅ emojis for supported modes and ❌ emojis for unsupported modes.
| Model Type | Pre-trained Weights | Tasks Supported | Inference | Validation | Training | Export |
|------------|---------------------|----------------------------------------------|-----------|------------|----------|--------|
| SAM base | `sam_b.pt` | [Instance Segmentation](../tasks/segment.md) | ✅ | ❌ | ❌ | ✅ |
| SAM large | `sam_l.pt` | [Instance Segmentation](../tasks/segment.md) | ✅ | ❌ | ❌ | ✅ |
## How to Use SAM: Versatility and Power in Image Segmentation
The Segment Anything Model can be employed for a multitude of downstream tasks that go beyond its training data. This includes edge detection, object proposal generation, instance segmentation, and preliminary text-to-mask prediction. With prompt engineering, SAM can swiftly adapt to new tasks and data distributions in a zero-shot manner, establishing it as a versatile and potent tool for all your image segmentation needs.
@@ -122,21 +131,6 @@ The Segment Anything Model can be employed for a multitude of downstream tasks t
- More additional args for `Segment everything` see [`Predictor/generate` Reference](../reference/models/sam/predict.md).
## Available Models and Supported Tasks
| Model Type | Pre-trained Weights | Tasks Supported |
|------------|---------------------|-----------------------|
| SAM base | `sam_b.pt` | Instance Segmentation |
| SAM large | `sam_l.pt` | Instance Segmentation |
## Operating Modes
| Mode | Supported |
|------------|-----------|
| Inference | ✅ |
| Validation | ❌ |
| Training | ❌ |
## SAM comparison vs YOLOv8
Here we compare Meta's smallest SAM model, SAM-b, with Ultralytics' smallest segmentation model, [YOLOv8n-seg](../tasks/segment.md):
@@ -152,7 +146,7 @@ This comparison shows the order-of-magnitude differences in the model sizes and
Tests run on a 2023 Apple M2 MacBook with 16GB of RAM. To reproduce this test:
!!! Example ""
!!! Example
=== "Python"
```python
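# Sketch of the truncated comparison; profiles each model on the bundled assets
from ultralytics import SAM, YOLO

# Profile SAM-b
model = SAM('sam_b.pt')
model.info()
model('ultralytics/assets')

# Profile YOLOv8n-seg
model = YOLO('yolov8n-seg.pt')
model.info()
model('ultralytics/assets')
```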
@@ -187,7 +181,7 @@ Auto-annotation is a key feature of SAM, allowing users to generate a [segmentat
To auto-annotate your dataset with the Ultralytics framework, use the `auto_annotate` function as shown below:
!!! Example ""
!!! Example
=== "Python"
```python
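# Sketch of the truncated example (paths and weights are illustrative)
from ultralytics.data.annotator import auto_annotate

auto_annotate(data='path/to/images', det_model='yolov8x.pt', sam_model='sam_b.pt')
```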
@@ -212,7 +206,7 @@ Auto-annotation with pre-trained models can dramatically cut down the time and e
If you find SAM useful in your research or development work, please consider citing our paper:
!!! Note ""
!!! Quote ""
=== "BibTeX"


@@ -34,7 +34,7 @@ Experience the power of next-generation object detection with the pre-trained YO
Each model variant is designed to offer a balance between Mean Average Precision (mAP) and latency, helping you optimize your object detection tasks for both performance and speed.
## Usage
## Usage Examples
Ultralytics has made YOLO-NAS models easy to integrate into your Python applications via our `ultralytics` python package. The package provides a user-friendly Python API to streamline the process.
@@ -44,7 +44,7 @@ The following examples show how to use YOLO-NAS models with the `ultralytics` pa
In this example we validate YOLO-NAS-s on the COCO8 dataset.
!!! Example ""
!!! Example
This example provides simple inference and validation code for YOLO-NAS. For handling inference results see [Predict](../modes/predict.md) mode. For using YOLO-NAS with additional modes see [Val](../modes/val.md) and [Export](../modes/export.md). YOLO-NAS on the `ultralytics` package does not support training.
@@ -80,33 +80,27 @@ In this example we validate YOLO-NAS-s on the COCO8 dataset.
yolo predict model=yolo_nas_s.pt source=path/to/bus.jpg
```
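The Python tab likely resembles this sketch (image path is illustrative; training is not supported):

```python
from ultralytics import NAS

# Load a pretrained YOLO-NAS-s model
model = NAS('yolo_nas_s.pt')
model.info()

# Validate on the COCO8 example dataset, then run inference (image path illustrative)
results = model.val(data='coco8.yaml')
results = model('path/to/bus.jpg')
```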
### Supported Tasks
## Supported Tasks and Modes
The YOLO-NAS models are primarily designed for object detection tasks. You can download the pre-trained weights for each variant of the model as follows:
We offer three variants of the YOLO-NAS models: Small (s), Medium (m), and Large (l). Each variant is designed to cater to different computational and performance needs:
| Model Type | Pre-trained Weights | Tasks Supported |
|------------|-----------------------------------------------------------------------------------------------|------------------|
| YOLO-NAS-s | [yolo_nas_s.pt](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolo_nas_s.pt) | Object Detection |
| YOLO-NAS-m | [yolo_nas_m.pt](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolo_nas_m.pt) | Object Detection |
| YOLO-NAS-l | [yolo_nas_l.pt](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolo_nas_l.pt) | Object Detection |
- **YOLO-NAS-s**: Optimized for environments where computational resources are limited but efficiency is key.
- **YOLO-NAS-m**: Offers a balanced approach, suitable for general-purpose object detection with higher accuracy.
- **YOLO-NAS-l**: Tailored for scenarios requiring the highest accuracy, where computational resources are less of a constraint.
### Supported Modes
Below is a detailed overview of each model, including links to their pre-trained weights, the tasks they support, and their compatibility with different operating modes.
The YOLO-NAS models support both inference and validation modes, allowing you to predict and validate results with ease. Training mode, however, is currently not supported.
| Mode | Supported |
|------------|-----------|
| Inference | ✅ |
| Validation | ✅ |
| Training | ❌ |
Harness the power of the YOLO-NAS models to drive your object detection tasks to new heights of performance and speed.
| Model Type | Pre-trained Weights | Tasks Supported | Inference | Validation | Training | Export |
|------------|-----------------------------------------------------------------------------------------------|----------------------------------------|-----------|------------|----------|--------|
| YOLO-NAS-s | [yolo_nas_s.pt](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolo_nas_s.pt) | [Object Detection](../tasks/detect.md) | ✅ | ✅ | ❌ | ✅ |
| YOLO-NAS-m | [yolo_nas_m.pt](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolo_nas_m.pt) | [Object Detection](../tasks/detect.md) | ✅ | ✅ | ❌ | ✅ |
| YOLO-NAS-l | [yolo_nas_l.pt](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolo_nas_l.pt) | [Object Detection](../tasks/detect.md) | ✅ | ✅ | ❌ | ✅ |
## Citations and Acknowledgements
If you employ YOLO-NAS in your research or development work, please cite SuperGradients:
!!! Note ""
!!! Quote ""
=== "BibTeX"


@@ -26,34 +26,25 @@ This document presents an overview of three closely related object detection mod
- **YOLOv3u:** This updated model incorporates the anchor-free, objectness-free split head from YOLOv8. By eliminating the need for pre-defined anchor boxes and objectness scores, this detection head design can improve the model's ability to detect objects of varying sizes and shapes. This makes YOLOv3u more robust and accurate for object detection tasks.
## Supported Tasks
## Supported Tasks and Modes
YOLOv3, YOLOv3-Ultralytics, and YOLOv3u all support the following tasks:
The YOLOv3 series, including YOLOv3, YOLOv3-Ultralytics, and YOLOv3u, are designed specifically for object detection tasks. These models are renowned for their effectiveness in various real-world scenarios, balancing accuracy and speed. Each variant offers unique features and optimizations, making them suitable for a range of applications.
- Object Detection
All three models support a comprehensive set of modes, ensuring versatility in various stages of model deployment and development. These modes include [Inference](../modes/predict.md), [Validation](../modes/val.md), [Training](../modes/train.md), and [Export](../modes/export.md), providing users with a complete toolkit for effective object detection.
## Supported Modes
| Model Type | Tasks Supported | Inference | Validation | Training | Export |
|--------------------|----------------------------------------|-----------|------------|----------|--------|
| YOLOv3 | [Object Detection](../tasks/detect.md) | ✅ | ✅ | ✅ | ✅ |
| YOLOv3-Ultralytics | [Object Detection](../tasks/detect.md) | ✅ | ✅ | ✅ | ✅ |
| YOLOv3u | [Object Detection](../tasks/detect.md) | ✅ | ✅ | ✅ | ✅ |
All three models support the following modes:
This table provides an at-a-glance view of the capabilities of each YOLOv3 variant, highlighting their versatility and suitability for various tasks and operational modes in object detection workflows.
- Inference
- Validation
- Training
- Export
## Usage Examples
## Performance
This example provides simple YOLOv3 training and inference examples. For full documentation on these and other [modes](../modes/index.md) see the [Predict](../modes/predict.md), [Train](../modes/train.md), [Val](../modes/val.md) and [Export](../modes/export.md) docs pages.
Below is a comparison of the performance of the three models. The performance is measured in terms of the Mean Average Precision (mAP) on the COCO dataset:
TODO
## Usage
You can use YOLOv3 for object detection tasks using the Ultralytics repository. The following is a sample code snippet showing how to use YOLOv3 model for inference:
!!! Example ""
This example provides simple inference code for YOLOv3. For more options including handling inference results see [Predict](../modes/predict.md) mode. For using YOLOv3 with additional modes see [Train](../modes/train.md), [Val](../modes/val.md) and [Export](../modes/export.md).
!!! Example
=== "Python"
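A minimal sketch of the truncated Python tab (the weight name and paths are illustrative):

```python
from ultralytics import YOLO

# Load a COCO-pretrained YOLOv3u model (weight name is illustrative)
model = YOLO('yolov3u.pt')
results = model.train(data='coco8.yaml', epochs=100, imgsz=640)
results = model('path/to/bus.jpg')
```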
@@ -91,7 +82,7 @@ You can use YOLOv3 for object detection tasks using the Ultralytics repository.
If you use YOLOv3 in your research, please cite the original YOLO papers and the Ultralytics YOLOv3 repository:
!!! Note ""
!!! Quote ""
=== "BibTeX"


@@ -53,7 +53,7 @@ YOLOv4 is a powerful and efficient object detection model that strikes a balance
We would like to acknowledge the YOLOv4 authors for their significant contributions in the field of real-time object detection:
!!! Note ""
!!! Quote ""
=== "BibTeX"


@@ -20,24 +20,24 @@ YOLOv5u represents an advancement in object detection methodologies. Originating
- **Variety of Pre-trained Models:** Understanding that different tasks require different toolsets, YOLOv5u provides a plethora of pre-trained models. Whether you're focusing on Inference, Validation, or Training, there's a tailor-made model awaiting you. This variety ensures you're not just using a one-size-fits-all solution, but a model specifically fine-tuned for your unique challenge.
## Supported Tasks
## Supported Tasks and Modes
| Model Type | Pre-trained Weights | Task |
|------------|-----------------------------------------------------------------------------------------------------------------------------|-----------|
| YOLOv5u | `yolov5nu`, `yolov5su`, `yolov5mu`, `yolov5lu`, `yolov5xu`, `yolov5n6u`, `yolov5s6u`, `yolov5m6u`, `yolov5l6u`, `yolov5x6u` | Detection |
The YOLOv5u models, with various pre-trained weights, excel in [Object Detection](../tasks/detect.md) tasks. They support a comprehensive range of modes, making them suitable for diverse applications, from development to deployment.
## Supported Modes
| Model Type | Pre-trained Weights | Task | Inference | Validation | Training | Export |
|------------|-----------------------------------------------------------------------------------------------------------------------------|----------------------------------------|-----------|------------|----------|--------|
| YOLOv5u | `yolov5nu`, `yolov5su`, `yolov5mu`, `yolov5lu`, `yolov5xu`, `yolov5n6u`, `yolov5s6u`, `yolov5m6u`, `yolov5l6u`, `yolov5x6u` | [Object Detection](../tasks/detect.md) | ✅ | ✅ | ✅ | ✅ |
| Mode | Supported |
|------------|-----------|
| Inference | ✅ |
| Validation | ✅ |
| Training | ✅ |
This table provides a detailed overview of the YOLOv5u model variants, highlighting their applicability in object detection tasks and support for various operational modes such as [Inference](../modes/predict.md), [Validation](../modes/val.md), [Training](../modes/train.md), and [Export](../modes/export.md). This comprehensive support ensures that users can fully leverage the capabilities of YOLOv5u models in a wide range of object detection scenarios.
## Performance Metrics
!!! Performance
=== "Detection"
See [Detection Docs](https://docs.ultralytics.com/tasks/detect/) for usage examples with these models trained on [COCO](https://docs.ultralytics.com/datasets/detect/coco/), which include 80 pre-trained classes.
| Model | YAML | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
|---------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------|-----------------------|----------------------|--------------------------------|-------------------------------------|--------------------|-------------------|
| [yolov5nu.pt](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov5nu.pt) | [yolov5n.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/models/v5/yolov5.yaml) | 640 | 34.3 | 73.6 | 1.06 | 2.6 | 7.7 |
@@ -52,13 +52,11 @@ YOLOv5u represents an advancement in object detection methodologies. Originating
| [yolov5l6u.pt](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov5l6u.pt) | [yolov5l6.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/models/v5/yolov5-p6.yaml) | 1280 | 55.7 | 1470.9 | 5.47 | 86.1 | 137.4 |
| [yolov5x6u.pt](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov5x6u.pt) | [yolov5x6.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/models/v5/yolov5-p6.yaml) | 1280 | 56.8 | 2436.5 | 8.98 | 155.4 | 250.7 |
## Usage
## Usage Examples
You can use YOLOv5u for object detection tasks using the Ultralytics repository. The following is a sample code snippet showing how to use YOLOv5u model for inference:
This example provides simple YOLOv5 training and inference examples. For full documentation on these and other [modes](../modes/index.md) see the [Predict](../modes/predict.md), [Train](../modes/train.md), [Val](../modes/val.md) and [Export](../modes/export.md) docs pages.
!!! Example ""
This example provides simple inference code for YOLOv5. For more options including handling inference results see [Predict](../modes/predict.md) mode. For using YOLOv5 with additional modes see [Train](../modes/train.md), [Val](../modes/val.md) and [Export](../modes/export.md).
!!! Example
=== "Python"
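A minimal sketch of the truncated Python tab, using a weight name from the table above (paths are illustrative):

```python
from ultralytics import YOLO

# Load a COCO-pretrained YOLOv5nu model
model = YOLO('yolov5nu.pt')
results = model.train(data='coco8.yaml', epochs=100, imgsz=640)
results = model('path/to/bus.jpg')
```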
@@ -96,7 +94,7 @@ You can use YOLOv5u for object detection tasks using the Ultralytics repository.
If you use YOLOv5 or YOLOv5u in your research, please cite the Ultralytics YOLOv5 repository as follows:
!!! Note ""
!!! Quote ""
=== "BibTeX"
```bibtex
@@ -112,4 +110,4 @@ If you use YOLOv5 or YOLOv5u in your research, please cite the Ultralytics YOLOv
}
```
Special thanks to Glenn Jocher and the Ultralytics team for their work on developing and maintaining the YOLOv5 and YOLOv5u models.
Please note that YOLOv5 models are provided under [AGPL-3.0](https://github.com/ultralytics/ultralytics/blob/main/LICENSE) and [Enterprise](https://ultralytics.com/license) licenses.


@@ -21,7 +21,7 @@ keywords: Meituan YOLOv6, object detection, Ultralytics, YOLOv6 docs, Bi-directi
- **Enhanced Backbone and Neck Design:** By deepening YOLOv6 to include another stage in the backbone and neck, this model achieves state-of-the-art performance on the COCO dataset at high-resolution input.
- **Self-Distillation Strategy:** A new self-distillation strategy is implemented to boost the performance of smaller models of YOLOv6, enhancing the auxiliary regression branch during training and removing it at inference to avoid a marked speed decline.
## Pre-trained Models
## Performance Metrics
YOLOv6 provides various pre-trained models with different scales:
@@ -33,13 +33,11 @@ YOLOv6 provides various pre-trained models with different scales:
YOLOv6 also provides quantized models for different precisions and models optimized for mobile platforms.
## Usage
## Usage Examples
You can use YOLOv6 for object detection tasks using the Ultralytics pip package. The following is a sample code snippet showing how to use YOLOv6 models for training:
This example provides simple YOLOv6 training and inference examples. For full documentation on these and other [modes](../modes/index.md) see the [Predict](../modes/predict.md), [Train](../modes/train.md), [Val](../modes/val.md) and [Export](../modes/export.md) docs pages.
!!! Example ""
This example provides simple training code for YOLOv6. For more options including training settings see [Train](../modes/train.md) mode. For using YOLOv6 with additional modes see [Predict](../modes/predict.md), [Val](../modes/val.md) and [Export](../modes/export.md).
!!! Example
=== "Python"
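A minimal sketch of the truncated Python tab, building the model from its YAML definition to match the CLI example below (paths are illustrative):

```python
from ultralytics import YOLO

# Build a YOLOv6n model from its YAML definition
model = YOLO('yolov6n.yaml')
results = model.train(data='coco8.yaml', epochs=100, imgsz=640)
results = model('path/to/bus.jpg')
```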
@@ -73,29 +71,25 @@ You can use YOLOv6 for object detection tasks using the Ultralytics pip package.
yolo predict model=yolov6n.yaml source=path/to/bus.jpg
```
### Supported Tasks
## Supported Tasks and Modes
| Model Type | Pre-trained Weights | Tasks Supported |
|------------|---------------------|------------------|
| YOLOv6-N | `yolov6-n.pt` | Object Detection |
| YOLOv6-S | `yolov6-s.pt` | Object Detection |
| YOLOv6-M | `yolov6-m.pt` | Object Detection |
| YOLOv6-L | `yolov6-l.pt` | Object Detection |
| YOLOv6-L6 | `yolov6-l6.pt` | Object Detection |
The YOLOv6 series offers a range of models, each optimized for high-performance [Object Detection](../tasks/detect.md). These models cater to varying computational needs and accuracy requirements, making them versatile for a wide array of applications.
## Supported Modes
| Model Type | Pre-trained Weights | Tasks Supported | Inference | Validation | Training | Export |
|------------|---------------------|----------------------------------------|-----------|------------|----------|--------|
| YOLOv6-N | `yolov6-n.pt` | [Object Detection](../tasks/detect.md) | ✅ | ✅ | ✅ | ✅ |
| YOLOv6-S | `yolov6-s.pt` | [Object Detection](../tasks/detect.md) | ✅ | ✅ | ✅ | ✅ |
| YOLOv6-M | `yolov6-m.pt` | [Object Detection](../tasks/detect.md) | ✅ | ✅ | ✅ | ✅ |
| YOLOv6-L | `yolov6-l.pt` | [Object Detection](../tasks/detect.md) | ✅ | ✅ | ✅ | ✅ |
| YOLOv6-L6 | `yolov6-l6.pt` | [Object Detection](../tasks/detect.md) | ✅ | ✅ | ✅ | ✅ |
| Mode | Supported |
|------------|-----------|
| Inference | ✅ |
| Validation | ✅ |
| Training | ✅ |
This table provides a detailed overview of the YOLOv6 model variants, highlighting their capabilities in object detection tasks and their compatibility with various operational modes such as [Inference](../modes/predict.md), [Validation](../modes/val.md), [Training](../modes/train.md), and [Export](../modes/export.md). This comprehensive support ensures that users can fully leverage the capabilities of YOLOv6 models in a broad range of object detection scenarios.
## Citations and Acknowledgements
We would like to acknowledge the authors for their significant contributions in the field of real-time object detection:
!!! Note ""
!!! Quote ""
=== "BibTeX"


@@ -49,7 +49,7 @@ We regret any inconvenience this may cause and will strive to update this docume
We would like to acknowledge the YOLOv7 authors for their significant contributions in the field of real-time object detection:
!!! Note ""
!!! Quote ""
=== "BibTeX"


@@ -19,27 +19,29 @@ YOLOv8 is the latest iteration in the YOLO series of real-time object detectors,
- **Optimized Accuracy-Speed Tradeoff:** With a focus on maintaining an optimal balance between accuracy and speed, YOLOv8 is suitable for real-time object detection tasks in diverse application areas.
- **Variety of Pre-trained Models:** YOLOv8 offers a range of pre-trained models to cater to various tasks and performance requirements, making it easier to find the right model for your specific use case.
## Supported Tasks
## Supported Tasks and Modes
| Model Type | Pre-trained Weights | Task |
|-------------|---------------------------------------------------------------------------------------------------------------------|-----------------------|
| YOLOv8 | `yolov8n.pt`, `yolov8s.pt`, `yolov8m.pt`, `yolov8l.pt`, `yolov8x.pt` | Detection |
| YOLOv8-seg | `yolov8n-seg.pt`, `yolov8s-seg.pt`, `yolov8m-seg.pt`, `yolov8l-seg.pt`, `yolov8x-seg.pt` | Instance Segmentation |
| YOLOv8-pose | `yolov8n-pose.pt`, `yolov8s-pose.pt`, `yolov8m-pose.pt`, `yolov8l-pose.pt`, `yolov8x-pose.pt`, `yolov8x-pose-p6.pt` | Pose/Keypoints |
| YOLOv8-cls | `yolov8n-cls.pt`, `yolov8s-cls.pt`, `yolov8m-cls.pt`, `yolov8l-cls.pt`, `yolov8x-cls.pt` | Classification |
The YOLOv8 series offers a diverse range of models, each specialized for specific tasks in computer vision. These models are designed to cater to various requirements, from object detection to more complex tasks like instance segmentation, pose/keypoints detection, and classification.
## Supported Modes
Each variant of the YOLOv8 series is optimized for its respective task, ensuring high performance and accuracy. Additionally, these models are compatible with various operational modes including [Inference](../modes/predict.md), [Validation](../modes/val.md), [Training](../modes/train.md), and [Export](../modes/export.md), facilitating their use in different stages of deployment and development.
| Mode | Supported |
|------------|-----------|
| Inference | ✅ |
| Validation | ✅ |
| Training | ✅ |
| Model | Filenames | Task | Inference | Validation | Training | Export |
|-------------|----------------------------------------------------------------------------------------------------------------|----------------------------------------------|-----------|------------|----------|--------|
| YOLOv8 | `yolov8n.pt` `yolov8s.pt` `yolov8m.pt` `yolov8l.pt` `yolov8x.pt` | [Detection](../tasks/detect.md) | ✅ | ✅ | ✅ | ✅ |
| YOLOv8-seg | `yolov8n-seg.pt` `yolov8s-seg.pt` `yolov8m-seg.pt` `yolov8l-seg.pt` `yolov8x-seg.pt` | [Instance Segmentation](../tasks/segment.md) | ✅ | ✅ | ✅ | ✅ |
| YOLOv8-pose | `yolov8n-pose.pt` `yolov8s-pose.pt` `yolov8m-pose.pt` `yolov8l-pose.pt` `yolov8x-pose.pt` `yolov8x-pose-p6.pt` | [Pose/Keypoints](../tasks/pose.md) | ✅ | ✅ | ✅ | ✅ |
| YOLOv8-cls | `yolov8n-cls.pt` `yolov8s-cls.pt` `yolov8m-cls.pt` `yolov8l-cls.pt` `yolov8x-cls.pt` | [Classification](../tasks/classify.md) | ✅ | ✅ | ✅ | ✅ |
This table provides an overview of the YOLOv8 model variants, highlighting their applicability in specific tasks and their compatibility with various operational modes such as Inference, Validation, Training, and Export. It showcases the versatility and robustness of the YOLOv8 series, making them suitable for a variety of applications in computer vision.
## Performance Metrics
!!! Performance
=== "Detection (COCO)"
See [Detection Docs](https://docs.ultralytics.com/tasks/detect/) for usage examples with these models trained on [COCO](https://docs.ultralytics.com/datasets/detect/coco/), which include 80 pre-trained classes.
| Model | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| ------------------------------------------------------------------------------------ | --------------------- | -------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLOv8n](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n.pt) | 640 | 37.3 | 80.4 | 0.99 | 3.2 | 8.7 |
@@ -62,6 +64,8 @@ YOLOv8 is the latest iteration in the YOLO series of real-time object detectors,
=== "Segmentation (COCO)"
See [Segmentation Docs](https://docs.ultralytics.com/tasks/segment/) for usage examples with these models trained on [COCO](https://docs.ultralytics.com/datasets/segment/coco/), which include 80 pre-trained classes.
| Model | size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| -------------------------------------------------------------------------------------------- | --------------------- | -------------------- | --------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLOv8n-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-seg.pt) | 640 | 36.7 | 30.5 | 96.1 | 1.21 | 3.4 | 12.6 |
@@ -72,6 +76,8 @@ YOLOv8 is the latest iteration in the YOLO series of real-time object detectors,
=== "Classification (ImageNet)"
See [Classification Docs](https://docs.ultralytics.com/tasks/classify/) for usage examples with these models trained on [ImageNet](https://docs.ultralytics.com/datasets/classify/imagenet/), which include 1000 pre-trained classes.
| Model | size<br><sup>(pixels) | acc<br><sup>top1 | acc<br><sup>top5 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) at 640 |
| -------------------------------------------------------------------------------------------- | --------------------- | ---------------- | ---------------- | ------------------------------ | ----------------------------------- | ------------------ | ------------------------ |
| [YOLOv8n-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-cls.pt) | 224 | 66.6 | 87.0 | 12.9 | 0.31 | 2.7 | 4.3 |
@@ -82,6 +88,8 @@ YOLOv8 is the latest iteration in the YOLO series of real-time object detectors,
=== "Pose (COCO)"
See [Pose Estimation Docs](https://docs.ultralytics.com/tasks/pose/) for usage examples with these models trained on [COCO](https://docs.ultralytics.com/datasets/pose/coco/), which include 1 pre-trained class, 'person'.
| Model | size<br><sup>(pixels) | mAP<sup>pose<br>50-95 | mAP<sup>pose<br>50 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| ---------------------------------------------------------------------------------------------------- | --------------------- | --------------------- | ------------------ | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLOv8n-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-pose.pt) | 640 | 50.4 | 80.1 | 131.8 | 1.18 | 3.3 | 9.2 |
@@ -91,13 +99,13 @@ YOLOv8 is the latest iteration in the YOLO series of real-time object detectors,
| [YOLOv8x-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-pose.pt) | 640 | 69.2 | 90.2 | 1607.1 | 3.73 | 69.4 | 263.2 |
| [YOLOv8x-pose-p6](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-pose-p6.pt) | 1280 | 71.6 | 91.2 | 4088.7 | 10.04 | 99.1 | 1066.4 |
## Usage
## Usage Examples
You can use YOLOv8 for object detection tasks using the Ultralytics pip package. The following is a sample code snippet showing how to use YOLOv8 models for inference:
This example provides simple YOLOv8 training and inference examples. For full documentation on these and other [modes](../modes/index.md) see the [Predict](../modes/predict.md), [Train](../modes/train.md), [Val](../modes/val.md) and [Export](../modes/export.md) docs pages.
!!! Example ""
Note the below example is for YOLOv8 [Detect](../tasks/detect.md) models for object detection. For additional supported tasks see the [Segment](../tasks/segment.md), [Classify](../tasks/classify.md) and [Pose](../tasks/pose.md) docs.
This example provides simple inference code for YOLOv8. For more options including handling inference results see [Predict](../modes/predict.md) mode. For using YOLOv8 with additional modes see [Train](../modes/train.md), [Val](../modes/val.md) and [Export](../modes/export.md).
!!! Example
=== "Python"
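A minimal sketch of the truncated Python tab (paths are illustrative):

```python
from ultralytics import YOLO

# Load a COCO-pretrained YOLOv8n model
model = YOLO('yolov8n.pt')
results = model.train(data='coco8.yaml', epochs=100, imgsz=640)
results = model('path/to/bus.jpg')
```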
@@ -135,7 +143,7 @@ You can use YOLOv8 for object detection tasks using the Ultralytics pip package.
If you use the YOLOv8 model or any other software from this repository in your work, please cite it using the following format:
!!! Note ""
!!! Quote ""
=== "BibTeX"
@@ -151,4 +159,4 @@ If you use the YOLOv8 model or any other software from this repository in your w
}
```
Please note that the DOI is pending and will be added to the citation once it is available. The usage of the software is in accordance with the AGPL-3.0 license.
Please note that the DOI is pending and will be added to the citation once it is available. YOLOv8 models are provided under [AGPL-3.0](https://github.com/ultralytics/ultralytics/blob/main/LICENSE) and [Enterprise](https://ultralytics.com/license) licenses.


@@ -41,7 +41,7 @@ Once your model is trained and validated, the next logical step is to evaluate i
Run YOLOv8n benchmarks on all supported export formats including ONNX, TensorRT, etc. See the Arguments section below for a full list of export arguments.
!!! Example ""
!!! Example
=== "Python"
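A sketch of such a benchmark run, assuming the `benchmark` utility (device choice is illustrative):

```python
from ultralytics.utils.benchmarks import benchmark

# Benchmark YOLOv8n across all supported export formats on GPU 0
benchmark(model='yolov8n.pt', data='coco8.yaml', imgsz=640, half=False, device=0)
```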


@@ -48,7 +48,7 @@ Here are some of the standout functionalities:
Export a YOLOv8n model to a different format like ONNX or TensorRT. See Arguments section below for a full list of export arguments.
!!! Example ""
!!! Example
=== "Python"
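A minimal export sketch:

```python
from ultralytics import YOLO

# Load an official model and export it to ONNX
model = YOLO('yolov8n.pt')
model.export(format='onnx')
```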


@@ -28,7 +28,7 @@ In the world of machine learning and computer vision, the process of making sens
| Manufacturing | Sports | Safety |
|:-------------------------------------------------:|:----------------------------------------------------:|:-------------------------------------------:|
| ![Vehicle Spare Parts Detection][car spare parts] | ![Football Player Detection][football player detect] | ![People Fall Detection][human fall detect] |
| Vehicle Spare Parts Detection | Football Player Detection | People Fall Detection |
## Why Use Ultralytics YOLO for Inference?
@@ -715,5 +715,7 @@ Here's a Python script using OpenCV (`cv2`) and YOLOv8 to run inference on video
This script will run predictions on each frame of the video, visualize the results, and display them in a window. The loop can be exited by pressing 'q'.
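The script likely follows this standard pattern (the video path is illustrative):

```python
import cv2
from ultralytics import YOLO

# Load the model and open the video (path is illustrative)
model = YOLO('yolov8n.pt')
cap = cv2.VideoCapture('path/to/video.mp4')

while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    # Run inference on the frame and draw the results
    results = model(frame)
    annotated_frame = results[0].plot()
    cv2.imshow('YOLOv8 Inference', annotated_frame)
    # Exit when 'q' is pressed
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```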
[car spare parts]: https://github.com/RizwanMunawar/ultralytics/assets/62513924/a0f802a8-0776-44cf-8f17-93974a4a28a1
[football player detect]: https://github.com/RizwanMunawar/ultralytics/assets/62513924/7d320e1f-fc57-4d7f-a691-78ee579c3442
[human fall detect]: https://github.com/RizwanMunawar/ultralytics/assets/62513924/86437c4a-3227-4eee-90ef-9efb697bdb43


@@ -32,10 +32,10 @@ The output from Ultralytics trackers is consistent with standard object detectio
## Real-world Applications
| Transportation | Retail | Aquaculture |
|:----------------------------------:|:--------------------------------:|:----------------------------:|
| ![Vehicle Tracking][vehicle track] | ![People Tracking][people track] | ![Fish Tracking][fish track] |
| Vehicle Tracking | People Tracking | Fish Tracking |
## Features at a Glance
@@ -58,7 +58,7 @@ The default tracker is BoT-SORT.
To run the tracker on video streams, use a trained Detect, Segment or Pose model such as YOLOv8n, YOLOv8n-seg and YOLOv8n-pose.
!!! Example ""
!!! Example
=== "Python"
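A minimal tracking sketch (the video source is illustrative):

```python
from ultralytics import YOLO

# Load an official detection model (segment and pose models also work)
model = YOLO('yolov8n.pt')

# Track with the default BoT-SORT tracker, or pass tracker='bytetrack.yaml'
results = model.track(source='path/to/video.mp4', show=True)
```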
@@ -97,7 +97,7 @@ As can be seen in the above usage, tracking is available for all Detect, Segment
Tracking configuration shares properties with Predict mode, such as `conf`, `iou`, and `show`. For further configurations, refer to the [Predict](../modes/predict.md#inference-arguments) model page.
!!! Example ""
!!! Example
=== "Python"
@@ -120,7 +120,7 @@ Tracking configuration shares properties with Predict mode, such as `conf`, `iou
Ultralytics also allows you to use a modified tracker configuration file. To do this, simply make a copy of a tracker config file (for example, `custom_tracker.yaml`) from [ultralytics/cfg/trackers](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/trackers) and modify any configurations (except the `tracker_type`) as per your needs.
!!! Example ""
!!! Example
=== "Python"
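A sketch of pointing the `tracker` argument at the modified config (paths are illustrative):

```python
from ultralytics import YOLO

model = YOLO('yolov8n.pt')

# Use the customized tracker configuration file
results = model.track(source='path/to/video.mp4', tracker='custom_tracker.yaml')
```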
@@ -354,5 +354,7 @@ To initiate your contribution, please refer to our [Contributing Guide](https://
Together, let's enhance the tracking capabilities of the Ultralytics YOLO ecosystem 🙏!
[vehicle track]: https://github.com/RizwanMunawar/ultralytics/assets/62513924/ee6e6038-383b-4f21-ac29-b2a1c7d386ab
[people track]: https://github.com/RizwanMunawar/ultralytics/assets/62513924/93bb4ee2-77a0-4e4e-8eb6-eb8f527f0527
[fish track]: https://github.com/RizwanMunawar/ultralytics/assets/62513924/a5146d0f-bfa8-4e0a-b7df-3c1446cd8142


@@ -236,7 +236,7 @@ To use a logger, select it from the dropdown menu in the code snippet above and
To use Comet:
!!! Example ""
!!! Example
=== "Python"
```python
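# Sketch of the documented Comet setup; requires `pip install comet_ml`
import comet_ml

comet_ml.init()
```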
@@ -254,7 +254,7 @@ Remember to sign in to your Comet account on their website and get your API key.
To use ClearML:
!!! Example ""
!!! Example
=== "Python"
```python
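# Sketch of the documented ClearML setup; requires `pip install clearml`
import clearml

clearml.browser_login()
```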
@@ -272,7 +272,7 @@ After running this script, you will need to sign in to your ClearML account on t
To use TensorBoard in [Google Colab](https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb):
!!! Example ""
!!! Example
=== "CLI"
```bash
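# Notebook magics to launch TensorBoard in Colab (run directory is illustrative)
%load_ext tensorboard
%tensorboard --logdir path/to/runs
```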
@@ -282,7 +282,7 @@ To use TensorBoard in [Google Colab](https://colab.research.google.com/github/ul
To use TensorBoard locally, run the command below and view results at http://localhost:6006/.
!!! Example ""
!!! Example
=== "CLI"
```bash
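# Launch TensorBoard locally (run directory is illustrative)
tensorboard --logdir path/to/runs
```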


@@ -38,7 +38,7 @@ These are the notable functionalities offered by YOLOv8's Val mode:
Validate trained YOLOv8n model accuracy on the COCO128 dataset. No arguments need to be passed as the `model` retains its training `data` and arguments as model attributes. See the Arguments section below for a full list of validation arguments.
!!! Example ""
!!! Example
=== "Python"
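A minimal validation sketch:

```python
from ultralytics import YOLO

# Load a trained model
model = YOLO('yolov8n.pt')

# Validate; no arguments needed, dataset and settings are remembered
metrics = model.val()
print(metrics.box.map)  # mAP50-95
```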


@@ -94,7 +94,6 @@ keywords: Ultralytics, Utils, utilitarian functions, colorstr, yaml_save, set_lo
<br><br>
---
## ::: ultralytics.utils.is_github_action_running
<br><br>


@@ -40,7 +40,7 @@ YOLOv8 pretrained Classify models are shown here. Detect, Segment and Pose model
Train YOLOv8n-cls on the MNIST160 dataset for 100 epochs at image size 64. For a full list of available arguments see the [Configuration](../usage/cfg.md) page.
!!! Example ""
!!! Example
=== "Python"
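A minimal sketch of that training run:

```python
from ultralytics import YOLO

# Train a pretrained classification model on MNIST160 at image size 64
model = YOLO('yolov8n-cls.pt')
results = model.train(data='mnist160', epochs=100, imgsz=64)
```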
@@ -77,7 +77,7 @@ YOLO classification dataset format can be found in detail in the [Dataset Guide]
Validate trained YOLOv8n-cls model accuracy on the MNIST160 dataset. No arguments need to be passed as the `model` retains its training `data` and arguments as model attributes.
!!! Example ""
!!! Example
=== "Python"
@@ -104,7 +104,7 @@ Validate trained YOLOv8n-cls model accuracy on the MNIST160 dataset. No argument
Use a trained YOLOv8n-cls model to run predictions on images.
!!! Example ""
!!! Example
=== "Python"
@@ -131,7 +131,7 @@ See full `predict` mode details in the [Predict](https://docs.ultralytics.com/mo
Export a YOLOv8n-cls model to a different format like ONNX, CoreML, etc.
!!! Example ""
!!! Example
=== "Python"


@@ -51,7 +51,7 @@ YOLOv8 pretrained Detect models are shown here. Detect, Segment and Pose models
Train YOLOv8n on the COCO128 dataset for 100 epochs at image size 640. For a full list of available arguments see the [Configuration](../usage/cfg.md) page.
!!! Example ""
!!! Example
=== "Python"
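A minimal sketch of that training run:

```python
from ultralytics import YOLO

# Train a pretrained detection model on COCO128 at image size 640
model = YOLO('yolov8n.pt')
results = model.train(data='coco128.yaml', epochs=100, imgsz=640)
```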
@@ -87,7 +87,7 @@ YOLO detection dataset format can be found in detail in the [Dataset Guide](../d
Validate trained YOLOv8n model accuracy on the COCO128 dataset. No arguments need to be passed as the `model` retains its training `data` and arguments as model attributes.
!!! Example ""
!!! Example
=== "Python"
@@ -116,7 +116,7 @@ Validate trained YOLOv8n model accuracy on the COCO128 dataset. No argument need
Use a trained YOLOv8n model to run predictions on images.
!!! Example ""
!!! Example
=== "Python"
@@ -143,7 +143,7 @@ See full `predict` mode details in the [Predict](https://docs.ultralytics.com/mo
Export a YOLOv8n model to a different format like ONNX, CoreML, etc.
!!! Example ""
!!! Example
=== "Python"


@@ -54,7 +54,7 @@ YOLOv8 pretrained Pose models are shown here. Detect, Segment and Pose models ar
Train a YOLOv8-pose model on the COCO128-pose dataset.
!!! Example ""
!!! Example
=== "Python"
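A minimal training sketch (the dataset YAML name is illustrative):

```python
from ultralytics import YOLO

# Train a pretrained pose model (dataset YAML name is illustrative)
model = YOLO('yolov8n-pose.pt')
results = model.train(data='coco8-pose.yaml', epochs=100, imgsz=640)
```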
@@ -91,7 +91,7 @@ YOLO pose dataset format can be found in detail in the [Dataset Guide](../datase
Validate trained YOLOv8n-pose model accuracy on the COCO128-pose dataset. No arguments need to be passed as the `model`
retains its training `data` and arguments as model attributes.
!!! Example ""
!!! Example
=== "Python"
@@ -120,7 +120,7 @@ retains its training `data` and arguments as model attributes.
Use a trained YOLOv8n-pose model to run predictions on images.
!!! Example ""
!!! Example
=== "Python"
@@ -147,7 +147,7 @@ See full `predict` mode details in the [Predict](https://docs.ultralytics.com/mo
Export a YOLOv8n Pose model to a different format like ONNX, CoreML, etc.
!!! Example ""
!!! Example
=== "Python"


@@ -51,7 +51,7 @@ YOLOv8 pretrained Segment models are shown here. Detect, Segment and Pose models
Train YOLOv8n-seg on the COCO128-seg dataset for 100 epochs at image size 640. For a full list of available arguments see the [Configuration](../usage/cfg.md) page.
!!! Example ""
!!! Example
=== "Python"
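A minimal sketch of that training run:

```python
from ultralytics import YOLO

# Train a pretrained segmentation model on COCO128-seg at image size 640
model = YOLO('yolov8n-seg.pt')
results = model.train(data='coco128-seg.yaml', epochs=100, imgsz=640)
```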
@@ -88,7 +88,7 @@ YOLO segmentation dataset format can be found in detail in the [Dataset Guide](.
Validate trained YOLOv8n-seg model accuracy on the COCO128-seg dataset. No arguments need to be passed as the `model`
retains its training `data` and arguments as model attributes.
!!! Example ""
!!! Example
=== "Python"
@@ -121,7 +121,7 @@ retains its training `data` and arguments as model attributes.
Use a trained YOLOv8n-seg model to run predictions on images.
!!! Example ""
!!! Example
=== "Python"
@@ -148,7 +148,7 @@ See full `predict` mode details in the [Predict](https://docs.ultralytics.com/mo
Export a YOLOv8n-seg model to a different format like ONNX, CoreML, etc.
!!! Example ""
!!! Example
=== "Python"


@@ -8,7 +8,7 @@ YOLO settings and hyperparameters play a critical role in the model's performanc
YOLOv8 `yolo` CLI commands use the following syntax:
!!! Example ""
!!! Example
=== "CLI"
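The general syntax, with a concrete training invocation as a sketch (dataset and values are illustrative):

```bash
yolo TASK MODE ARGS

# for example, train a detection model (values illustrative)
yolo detect train data=coco128.yaml model=yolov8n.pt epochs=100 imgsz=640
```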


@@ -218,7 +218,7 @@ To do this first create a copy of `default.yaml` in your current working dir wit
This will create `default_copy.yaml`, which you can then pass as `cfg=default_copy.yaml` along with any additional args, like `imgsz=320` in this example:
!!! Example ""
!!! Example
=== "CLI"
```bash
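# Sketch: copy the default config, then run with overrides from the copy
yolo copy-cfg
yolo cfg=default_copy.yaml imgsz=320
```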