Update to lowercase MkDocs admonitions (#15990)

Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Authored by MatthewNoyce on 2024-09-06 16:33:26 +01:00; committed by GitHub
parent ce24c7273e
commit c2b647a768
GPG key ID: B5690EEEBB952194
133 changed files with 529 additions and 521 deletions
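Nearly every hunk below applies the same one-line change: a Material for MkDocs admonition marker written with a capitalized type qualifier (`!!! Example`, `!!! Quote`, `!!! Note`, `!!! Tip`, `!!! Warning`) is rewritten with the lowercase qualifier used throughout the Material for MkDocs reference. The Python-Markdown admonition extension lowercases the qualifier when deriving the CSS class, so the rendered pages should be unchanged; this is a style normalization. A minimal before/after sketch (the title string is illustrative, taken from the first hunk):

```markdown
<!-- before: capitalized type qualifier with an optional quoted title -->
!!! Example "Train Example"

<!-- after: lowercase type qualifier, same title and same rendering -->
!!! example "Train Example"

<!-- an empty quoted title, as used for the citation blocks below, should render without a title bar -->
!!! quote ""
```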

@ -28,7 +28,7 @@ The Caltech-101 dataset is extensively used for training and evaluating deep lea
To train a YOLO model on the Caltech-101 dataset for 100 epochs, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -61,7 +61,7 @@ The example showcases the variety and complexity of the objects in the Caltech-1
If you use the Caltech-101 dataset in your research or development work, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -90,7 +90,7 @@ The [Caltech-101](https://data.caltech.edu/records/mzrjq-6wc02) dataset is widel
To train an Ultralytics YOLO model on the Caltech-101 dataset, you can use the provided code snippets. For example, to train for 100 epochs:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -128,7 +128,7 @@ These features make it an excellent choice for training and evaluating object re
Citing the Caltech-101 dataset in your research acknowledges the creators' contributions and provides a reference for others who might use the dataset. The recommended citation is:
!!! Quote ""
!!! quote ""
=== "BibTeX"

@ -39,7 +39,7 @@ The Caltech-256 dataset is extensively used for training and evaluating deep lea
To train a YOLO model on the Caltech-256 dataset for 100 epochs, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -72,7 +72,7 @@ The example showcases the diversity and complexity of the objects in the Caltech
If you use the Caltech-256 dataset in your research or development work, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -98,7 +98,7 @@ The [Caltech-256](https://data.caltech.edu/records/nyy15-4j048) dataset is a lar
To train a YOLO model on the Caltech-256 dataset for 100 epochs, you can use the following code snippets. Refer to the model [Training](../../modes/train.md) page for additional options.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"

@ -42,7 +42,7 @@ The CIFAR-10 dataset is widely used for training and evaluating deep learning mo
To train a YOLO model on the CIFAR-10 dataset for 100 epochs with an image size of 32x32, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -75,7 +75,7 @@ The example showcases the variety and complexity of the objects in the CIFAR-10
If you use the CIFAR-10 dataset in your research or development work, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -96,7 +96,7 @@ We would like to acknowledge Alex Krizhevsky for creating and maintaining the CI
To train a YOLO model on the CIFAR-10 dataset using Ultralytics, you can follow the examples provided for both Python and CLI. Here is a basic example to train your model for 100 epochs with an image size of 32x32 pixels:
!!! Example
!!! example
=== "Python"
@ -153,7 +153,7 @@ Each subset comprises images categorized into 10 classes, with their annotations
If you use the CIFAR-10 dataset in your research or development projects, make sure to cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"

@ -31,7 +31,7 @@ The CIFAR-100 dataset is extensively used for training and evaluating deep learn
To train a YOLO model on the CIFAR-100 dataset for 100 epochs with an image size of 32x32, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -64,7 +64,7 @@ The example showcases the variety and complexity of the objects in the CIFAR-100
If you use the CIFAR-100 dataset in your research or development work, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -89,7 +89,7 @@ The [CIFAR-100 dataset](https://www.cs.toronto.edu/~kriz/cifar.html) is a large
You can train a YOLO model on the CIFAR-100 dataset using either Python or CLI commands. Here's how:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"

@ -56,7 +56,7 @@ The Fashion-MNIST dataset is widely used for training and evaluating deep learni
To train a CNN model on the Fashion-MNIST dataset for 100 epochs with an image size of 28x28, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -99,7 +99,7 @@ The [Fashion-MNIST](https://github.com/zalandoresearch/fashion-mnist) dataset is
To train an Ultralytics YOLO model on the Fashion-MNIST dataset, you can use both Python and CLI commands. Here's a quick example to get you started:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"

@ -41,7 +41,7 @@ The ImageNet dataset is widely used for training and evaluating deep learning mo
To train a deep learning model on the ImageNet dataset for 100 epochs with an image size of 224x224, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -74,7 +74,7 @@ The example showcases the variety and complexity of the images in the ImageNet d
If you use the ImageNet dataset in your research or development work, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -102,7 +102,7 @@ The [ImageNet dataset](https://www.image-net.org/) is a large-scale database con
To use a pretrained Ultralytics YOLO model for image classification on the ImageNet dataset, follow these steps:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"

@ -27,7 +27,7 @@ The ImageNet10 dataset is useful for quickly testing and debugging computer visi
To test a deep learning model on the ImageNet10 dataset with an image size of 224x224, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Test Example"
!!! example "Test Example"
=== "Python"
@ -58,7 +58,7 @@ The ImageNet10 dataset contains a subset of images from the original ImageNet da
If you use the ImageNet10 dataset in your research or development work, please cite the original ImageNet paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -86,7 +86,7 @@ The [ImageNet10](https://github.com/ultralytics/assets/releases/download/v0.0.0/
To test your deep learning model on the ImageNet10 dataset with an image size of 224x224, use the following code snippets.
!!! Example "Test Example"
!!! example "Test Example"
=== "Python"

@ -29,7 +29,7 @@ The ImageNette dataset is widely used for training and evaluating deep learning
To train a model on the ImageNette dataset for 100 epochs with a standard image size of 224x224, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -64,7 +64,7 @@ For faster prototyping and training, the ImageNette dataset is also available in
To use these datasets, simply replace 'imagenette' with 'imagenette160' or 'imagenette320' in the training command. The following code snippets illustrate this:
!!! Example "Train Example with ImageNette160"
!!! example "Train Example with ImageNette160"
=== "Python"
@ -85,7 +85,7 @@ To use these datasets, simply replace 'imagenette' with 'imagenette160' or 'imag
yolo classify train data=imagenette160 model=yolov8n-cls.pt epochs=100 imgsz=160
```
!!! Example "Train Example with ImageNette320"
!!! example "Train Example with ImageNette320"
=== "Python"
@ -122,7 +122,7 @@ The [ImageNette dataset](https://github.com/fastai/imagenette) is a simplified s
To train a YOLO model on the ImageNette dataset for 100 epochs, you can use the following commands. Make sure to have the Ultralytics YOLO environment set up.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -159,7 +159,7 @@ For more details on model training and dataset management, explore the [Dataset
Yes, the ImageNette dataset is also available in two resized versions: ImageNette160 and ImageNette320. These versions help in faster prototyping and are especially useful when computational resources are limited.
!!! Example "Train Example with ImageNette160"
!!! example "Train Example with ImageNette160"
=== "Python"

@ -26,7 +26,7 @@ The ImageWoof dataset is widely used for training and evaluating deep learning m
To train a CNN model on the ImageWoof dataset for 100 epochs with an image size of 224x224, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -59,7 +59,7 @@ ImageWoof dataset comes in three different sizes to accommodate various research
To use these variants in your training, simply replace 'imagewoof' in the dataset argument with 'imagewoof320' or 'imagewoof160'. For example:
!!! Example "Example"
!!! example "Example"
=== "Python"
@ -109,7 +109,7 @@ The [ImageWoof](https://github.com/fastai/imagenette) dataset is a challenging s
To train a Convolutional Neural Network (CNN) model on the ImageWoof dataset using Ultralytics YOLO for 100 epochs at an image size of 224x224, you can use the following code:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"

@ -78,7 +78,7 @@ This structured approach ensures that the model can effectively learn from well-
## Usage
!!! Example
!!! example
=== "Python"
@ -194,7 +194,7 @@ For additional insights and real-world applications, you can explore [Ultralytic
Training a model using Ultralytics YOLO can be done easily in both Python and CLI. Here's an example:
!!! Example
!!! example
=== "Python"

@ -34,7 +34,7 @@ The MNIST dataset is widely used for training and evaluating deep learning model
To train a CNN model on the MNIST dataset for 100 epochs with an image size of 32x32, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -69,7 +69,7 @@ If you use the MNIST dataset in your
research or development work, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -95,7 +95,7 @@ The [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, or Modified National Ins
To train a model on the MNIST dataset using Ultralytics YOLO, you can follow these steps:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"

@ -35,7 +35,7 @@ This dataset can be applied in various computer vision tasks such as object dete
A YAML (Yet Another Markup Language) file defines the dataset configuration, including paths, classes, and other pertinent details. For the African wildlife dataset, the `african-wildlife.yaml` file is located at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/african-wildlife.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/african-wildlife.yaml).
!!! Example "ultralytics/cfg/datasets/african-wildlife.yaml"
!!! example "ultralytics/cfg/datasets/african-wildlife.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/african-wildlife.yaml"
@ -45,7 +45,7 @@ A YAML (Yet Another Markup Language) file defines the dataset configuration, inc
To train a YOLOv8n model on the African wildlife dataset for 100 epochs with an image size of 640, use the provided code samples. For a comprehensive list of available parameters, refer to the model's [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -66,7 +66,7 @@ To train a YOLOv8n model on the African wildlife dataset for 100 epochs with an
yolo detect train data=african-wildlife.yaml model=yolov8n.pt epochs=100 imgsz=640
```
!!! Example "Inference Example"
!!! example "Inference Example"
=== "Python"
@ -111,7 +111,7 @@ The African Wildlife Dataset includes images of four common animal species found
You can train a YOLOv8 model on the African Wildlife Dataset by using the `african-wildlife.yaml` configuration file. Below is an example of how to train the YOLOv8n model for 100 epochs with an image size of 640:
!!! Example
!!! example
=== "Python"

@ -8,7 +8,7 @@ keywords: Argoverse dataset, autonomous driving, 3D tracking, motion forecasting
The [Argoverse](https://www.argoverse.org/) dataset is a collection of data designed to support research in autonomous driving tasks, such as 3D tracking, motion forecasting, and stereo depth estimation. Developed by Argo AI, the dataset provides a wide range of high-quality sensor data, including high-resolution images, LiDAR point clouds, and map data.
!!! Note
!!! note
The Argoverse dataset `*.zip` file required for training was removed from Amazon S3 after the shutdown of Argo AI by Ford, but we have made it available for manual download on [Google Drive](https://drive.google.com/file/d/1st9qW3BeIwQsnR0t8mRpvbsSWIo16ACi/view?usp=drive_link).
@ -35,7 +35,7 @@ The Argoverse dataset is widely used for training and evaluating deep learning m
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. For the case of the Argoverse dataset, the `Argoverse.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/Argoverse.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/Argoverse.yaml).
!!! Example "ultralytics/cfg/datasets/Argoverse.yaml"
!!! example "ultralytics/cfg/datasets/Argoverse.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/Argoverse.yaml"
@ -45,7 +45,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n model on the Argoverse dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -80,7 +80,7 @@ The example showcases the variety and complexity of the data in the Argoverse da
If you use the Argoverse dataset in your research or development work, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -106,7 +106,7 @@ The [Argoverse](https://www.argoverse.org/) dataset, developed by Argo AI, suppo
To train a YOLOv8 model with the Argoverse dataset, use the provided YAML configuration file and the following code:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"

@ -34,7 +34,7 @@ The application of brain tumor detection using computer vision enables early dia
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the brain tumor dataset, the `brain-tumor.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/brain-tumor.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/brain-tumor.yaml).
!!! Example "ultralytics/cfg/datasets/brain-tumor.yaml"
!!! example "ultralytics/cfg/datasets/brain-tumor.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/brain-tumor.yaml"
@ -44,7 +44,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n model on the brain tumor dataset for 100 epochs with an image size of 640, utilize the provided code snippets. For a detailed list of available arguments, consult the model's [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -65,7 +65,7 @@ To train a YOLOv8n model on the brain tumor dataset for 100 epochs with an image
yolo detect train data=brain-tumor.yaml model=yolov8n.pt epochs=100 imgsz=640
```
!!! Example "Inference Example"
!!! example "Inference Example"
=== "Python"
@ -110,7 +110,7 @@ The brain tumor dataset is divided into two subsets: the **training set** consis
You can train a YOLOv8 model on the brain tumor dataset for 100 epochs with an image size of 640px using both Python and CLI methods. Below are the examples for both:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -142,7 +142,7 @@ Using the brain tumor dataset in AI projects enables early diagnosis and treatme
Inference using a fine-tuned YOLOv8 model can be performed with either Python or CLI approaches. Here are the examples:
!!! Example "Inference Example"
!!! example "Inference Example"
=== "Python"

@ -52,7 +52,7 @@ The COCO dataset is widely used for training and evaluating deep learning models
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the COCO dataset, the `coco.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml).
!!! Example "ultralytics/cfg/datasets/coco.yaml"
!!! example "ultralytics/cfg/datasets/coco.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/coco.yaml"
@ -62,7 +62,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n model on the COCO dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -97,7 +97,7 @@ The example showcases the variety and complexity of the images in the COCO datas
If you use the COCO dataset in your research or development work, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -124,7 +124,7 @@ The [COCO dataset](https://cocodataset.org/#home) (Common Objects in Context) is
To train a YOLOv8 model using the COCO dataset, you can use the following code snippets:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"

@ -27,7 +27,7 @@ This dataset is intended for use with Ultralytics [HUB](https://hub.ultralytics.
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the COCO8 dataset, the `coco8.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco8.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco8.yaml).
!!! Example "ultralytics/cfg/datasets/coco8.yaml"
!!! example "ultralytics/cfg/datasets/coco8.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/coco8.yaml"
@ -37,7 +37,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n model on the COCO8 dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -72,7 +72,7 @@ The example showcases the variety and complexity of the images in the COCO8 data
If you use the COCO dataset in your research or development work, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -99,7 +99,7 @@ The Ultralytics COCO8 dataset is a compact yet versatile object detection datase
To train a YOLOv8 model using the COCO8 dataset, you can employ either Python or CLI commands. Here's how you can start:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"

@ -30,7 +30,7 @@ The Global Wheat Head Dataset is widely used for training and evaluating deep le
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. For the case of the Global Wheat Head Dataset, the `GlobalWheat2020.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/GlobalWheat2020.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/GlobalWheat2020.yaml).
!!! Example "ultralytics/cfg/datasets/GlobalWheat2020.yaml"
!!! example "ultralytics/cfg/datasets/GlobalWheat2020.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/GlobalWheat2020.yaml"
@ -40,7 +40,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n model on the Global Wheat Head Dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -75,7 +75,7 @@ The example showcases the variety and complexity of the data in the Global Wheat
If you use the Global Wheat Head Dataset in your research or development work, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -100,7 +100,7 @@ The Global Wheat Head Dataset is primarily used for developing and training deep
To train a YOLOv8n model on the Global Wheat Head Dataset, you can use the following code snippets. Make sure you have the `GlobalWheat2020.yaml` configuration file specifying dataset paths and classes:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"

@ -48,7 +48,7 @@ When using the Ultralytics YOLO format, organize your training and validation im
Here's how you can use these formats to train your model:
!!! Example
!!! example
=== "Python"
@ -100,7 +100,7 @@ If you have your own dataset and would like to use it for training detection mod
You can easily convert labels from the popular COCO dataset format to the YOLO format using the following code snippet:
!!! Example
!!! example
=== "Python"
@ -164,7 +164,7 @@ Each dataset page provides detailed information on the structure and usage tailo
To start training a YOLOv8 model, ensure your dataset is formatted correctly and the paths are defined in a YAML file. Use the following script to begin training:
!!! Example
!!! example
=== "Python"

@ -48,7 +48,7 @@ The LVIS dataset is widely used for training and evaluating deep learning models
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the LVIS dataset, the `lvis.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/lvis.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/lvis.yaml).
!!! Example "ultralytics/cfg/datasets/lvis.yaml"
!!! example "ultralytics/cfg/datasets/lvis.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/lvis.yaml"
@ -58,7 +58,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n model on the LVIS dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -93,7 +93,7 @@ The example showcases the variety and complexity of the images in the LVIS datas
If you use the LVIS dataset in your research or development work, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -118,7 +118,7 @@ The [LVIS dataset](https://www.lvisdataset.org/) is a large-scale dataset with f
To train a YOLOv8n model on the LVIS dataset for 100 epochs with an image size of 640, follow the example below. This process utilizes Ultralytics' framework, which offers comprehensive training features.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"

@ -30,7 +30,7 @@ The Objects365 dataset is widely used for training and evaluating deep learning
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. For the case of the Objects365 Dataset, the `Objects365.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/Objects365.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/Objects365.yaml).
!!! Example "ultralytics/cfg/datasets/Objects365.yaml"
!!! example "ultralytics/cfg/datasets/Objects365.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/Objects365.yaml"
@ -40,7 +40,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n model on the Objects365 dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -75,7 +75,7 @@ The example showcases the variety and complexity of the data in the Objects365 d
If you use the Objects365 dataset in your research or development work, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -101,7 +101,7 @@ The [Objects365 dataset](https://www.objects365.org/) is designed for object det
To train a YOLOv8n model using the Objects365 dataset for 100 epochs with an image size of 640, follow these instructions:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"

@ -61,7 +61,7 @@ Open Images V7 is a cornerstone for training and evaluating state-of-the-art mod
Typically, datasets come with a YAML (Yet Another Markup Language) file that delineates the dataset's configuration. For the case of Open Images V7, a hypothetical `OpenImagesV7.yaml` might exist. For accurate paths and configurations, one should refer to the dataset's official repository or documentation.
!!! Example "OpenImagesV7.yaml"
!!! example "OpenImagesV7.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/open-images-v7.yaml"
@ -71,7 +71,7 @@ Typically, datasets come with a YAML (Yet Another Markup Language) file that del
To train a YOLOv8n model on the Open Images V7 dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Warning
!!! warning
The complete Open Images V7 dataset comprises 1,743,042 training images and 41,620 validation images, requiring approximately **561 GB of storage space** upon download.
@ -80,7 +80,7 @@ To train a YOLOv8n model on the Open Images V7 dataset for 100 epochs with an im
- Verify that your device has enough storage capacity.
- Ensure a robust and speedy internet connection.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -115,7 +115,7 @@ Researchers can gain invaluable insights into the array of computer vision chall
For those employing Open Images V7 in their work, it's prudent to cite the relevant papers and acknowledge the creators:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -140,7 +140,7 @@ Open Images V7 is an extensive and versatile dataset created by Google, designed
To train a YOLOv8 model on the Open Images V7 dataset, you can use both Python and CLI commands. Here's an example of training the YOLOv8n model for 100 epochs with an image size of 640:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"

@ -37,11 +37,11 @@ This structure enables a diverse and extensive testing ground for object detecti
Dataset benchmarking evaluates machine learning model performance on specific datasets using standardized metrics like accuracy, mean average precision and F1-score.
!!! Tip "Benchmarking"
!!! tip "Benchmarking"
Benchmarking results will be stored in "ultralytics-benchmarks/evaluation.txt"
!!! Example "Benchmarking example"
!!! example "Benchmarking example"
=== "Python"
@ -113,7 +113,7 @@ The diversity in the Roboflow 100 benchmark that can be seen above is a signific
If you use the Roboflow 100 dataset in your research or development work, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -139,7 +139,7 @@ The **Roboflow 100** dataset, developed by [Roboflow](https://roboflow.com/?ref=
To use the Roboflow 100 dataset for benchmarking, you can implement the RF100Benchmark class from the Ultralytics library. Here's a brief example:
!!! Example "Benchmarking example"
!!! example "Benchmarking example"
=== "Python"
@ -203,7 +203,7 @@ The **Roboflow 100** dataset is accessible on [GitHub](https://github.com/robofl
When using the Roboflow 100 dataset in your research, ensure to properly cite it. Here is the recommended citation:
!!! Quote ""
!!! quote ""
=== "BibTeX"

@ -23,7 +23,7 @@ This dataset can be applied in various computer vision tasks such as object dete
A YAML (Yet Another Markup Language) file defines the dataset configuration, including paths and classes information. For the signature detection dataset, the `signature.yaml` file is located at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/signature.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/signature.yaml).
!!! Example "ultralytics/cfg/datasets/signature.yaml"
!!! example "ultralytics/cfg/datasets/signature.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/signature.yaml"
@ -33,7 +33,7 @@ A YAML (Yet Another Markup Language) file defines the dataset configuration, inc
To train a YOLOv8n model on the signature detection dataset for 100 epochs with an image size of 640, use the provided code samples. For a comprehensive list of available parameters, refer to the model's [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -54,7 +54,7 @@ To train a YOLOv8n model on the signature detection dataset for 100 epochs with
yolo detect train data=signature.yaml model=yolov8n.pt epochs=100 imgsz=640
```
!!! Example "Inference Example"
!!! example "Inference Example"
=== "Python"
@ -102,7 +102,7 @@ To train a YOLOv8n model on the Signature Detection Dataset, follow these steps:
1. Download the `signature.yaml` dataset configuration file from [signature.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/signature.yaml).
2. Use the following Python script or CLI command to start training:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -140,7 +140,7 @@ To perform inference using a model trained on the Signature Detection Dataset, f
1. Load your fine-tuned model.
2. Use the below Python script or CLI command to perform inference:
!!! Example "Inference Example"
!!! example "Inference Example"
=== "Python"

@ -43,7 +43,7 @@ The SKU-110k dataset is widely used for training and evaluating deep learning mo
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. For the case of the SKU-110K dataset, the `SKU-110K.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/SKU-110K.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/SKU-110K.yaml).
!!! Example "ultralytics/cfg/datasets/SKU-110K.yaml"
!!! example "ultralytics/cfg/datasets/SKU-110K.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/SKU-110K.yaml"
@ -53,7 +53,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n model on the SKU-110K dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -88,7 +88,7 @@ The example showcases the variety and complexity of the data in the SKU-110k dat
If you use the SKU-110k dataset in your research or development work, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -113,7 +113,7 @@ The SKU-110k dataset consists of densely packed retail shelf images designed to
Training a YOLOv8 model on the SKU-110k dataset is straightforward. Here's an example to train a YOLOv8n model for 100 epochs with an image size of 640:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -165,7 +165,7 @@ These features make the SKU-110k dataset particularly valuable for training and
If you use the SKU-110k dataset in your research or development work, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"

@ -39,7 +39,7 @@ The VisDrone dataset is widely used for training and evaluating deep learning mo
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the Visdrone dataset, the `VisDrone.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/VisDrone.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/VisDrone.yaml).
!!! Example "ultralytics/cfg/datasets/VisDrone.yaml"
!!! example "ultralytics/cfg/datasets/VisDrone.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/VisDrone.yaml"
@ -49,7 +49,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n model on the VisDrone dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -84,7 +84,7 @@ The example showcases the variety and complexity of the data in the VisDrone dat
If you use the VisDrone dataset in your research or development work, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -117,7 +117,7 @@ The [VisDrone Dataset](https://github.com/VisDrone/VisDrone-Dataset) is a large-
To train a YOLOv8 model on the VisDrone dataset for 100 epochs with an image size of 640, you can follow these steps:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -161,7 +161,7 @@ The configuration file for the VisDrone dataset, `VisDrone.yaml`, can be found i
If you use the VisDrone dataset in your research or development work, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"

@ -31,7 +31,7 @@ The VOC dataset is widely used for training and evaluating deep learning models
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the VOC dataset, the `VOC.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/VOC.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/VOC.yaml).
!!! Example "ultralytics/cfg/datasets/VOC.yaml"
!!! example "ultralytics/cfg/datasets/VOC.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/VOC.yaml"
@ -41,7 +41,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n model on the VOC dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -76,7 +76,7 @@ The example showcases the variety and complexity of the images in the VOC datase
If you use the VOC dataset in your research or development work, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -103,7 +103,7 @@ The [PASCAL VOC](http://host.robots.ox.ac.uk/pascal/VOC/) (Visual Object Classes
To train a YOLOv8 model with the VOC dataset, you need the dataset configuration in a YAML file. Here's an example to start training a YOLOv8n model for 100 epochs with an image size of 640:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"

@ -34,7 +34,7 @@ The xView dataset is widely used for training and evaluating deep learning model
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the xView dataset, the `xView.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/xView.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/xView.yaml).
!!! Example "ultralytics/cfg/datasets/xView.yaml"
!!! example "ultralytics/cfg/datasets/xView.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/xView.yaml"
@ -44,7 +44,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a model on the xView dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -79,7 +79,7 @@ The example showcases the variety and complexity of the data in the xView datase
If you use the xView dataset in your research or development work, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -106,7 +106,7 @@ The [xView](http://xviewdataset.org/) dataset is one of the largest publicly ava
To train a model on the xView dataset using Ultralytics YOLO, follow these steps:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -147,7 +147,7 @@ The xView dataset comprises high-resolution satellite images collected from Worl
If you utilize the xView dataset in your research, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"

@ -48,7 +48,7 @@ dataframe = explorer.get_similar(img="path/to/image.jpg")
dataframe = explorer.get_similar(idx=0)
```
!!! Tip "Note"
!!! note
Embeddings table for a given dataset and model pair is only created once and reused. These use [LanceDB](https://lancedb.github.io/lancedb/) under the hood, which scales on-disk, so you can create and reuse embeddings for large datasets like COCO without running out of memory.
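Two hunks in this file change slightly more than letter case. Here, `!!! Tip "Note"` becomes `!!! note`: the admonition type switches from tip to note and the quoted custom title is dropped (with no quoted title, the header should default to the capitalized type name, so it still reads "Note"). A later hunk in this file moves a long quoted title into the indented admonition body instead. An illustrative sketch, with wording abbreviated from those hunks:

```markdown
<!-- before: a tip-styled box whose visible header reads "Note" -->
!!! Tip "Note"

<!-- after: a note-styled box; with no quoted title the header defaults to "Note" -->
!!! note

<!-- text in quotes after the marker is the title; indented text below it is the body -->
!!! tip
    Explorer works on LanceDB tables internally.
```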
@ -67,7 +67,7 @@ In case of multiple inputs, the aggregate of their embeddings is used.
You get a pandas dataframe with the `limit` number of most similar data points to the input, along with their distance in the embedding space. You can use this dataset to perform further filtering
!!! Example "Semantic Search"
!!! example "Semantic Search"
=== "Using Images"
@ -110,7 +110,7 @@ You get a pandas dataframe with the `limit` number of most similar data points t
You can also plot the similar images using the `plot_similar` method. This method takes the same arguments as `get_similar` and plots the similar images in a grid.
!!! Example "Plotting Similar Images"
!!! example "Plotting Similar Images"
=== "Using Images"
@ -143,7 +143,7 @@ You can also plot the similar images using the `plot_similar` method. This metho
This allows you to write how you want to filter your dataset using natural language. You don't have to be proficient in writing SQL queries. Our AI powered query generator will automatically do that under the hood. For example - you can say - "show me 100 images with exactly one person and 2 dogs. There can be other objects too" and it'll internally generate the query and show you those results.
Note: This works using LLMs under the hood so the results are probabilistic and might get things wrong sometimes
!!! Example "Ask AI"
!!! example "Ask AI"
```python
from ultralytics import Explorer
@ -165,7 +165,7 @@ Note: This works using LLMs under the hood so the results are probabilistic and
You can run SQL queries on your dataset using the `sql_query` method. This method takes a SQL query as input and returns a pandas dataframe with the results.
!!! Example "SQL Query"
!!! example "SQL Query"
```python
from ultralytics import Explorer
@ -182,7 +182,7 @@ You can run SQL queries on your dataset using the `sql_query` method. This metho
You can also plot the results of a SQL query using the `plot_sql_query` method. This method takes the same arguments as `sql_query` and plots the results in a grid.
!!! Example "Plotting SQL Query Results"
!!! example "Plotting SQL Query Results"
```python
from ultralytics import Explorer
@ -199,7 +199,9 @@ You can also plot the results of a SQL query using the `plot_sql_query` method.
You can also work with the embeddings table directly. Once the embeddings table is created, you can access it using the `Explorer.table`
!!! Tip "Explorer works on [LanceDB](https://lancedb.github.io/lancedb/) tables internally. You can access this table directly, using `Explorer.table` object and run raw queries, push down pre- and post-filters, etc."
!!! tip
Explorer works on [LanceDB](https://lancedb.github.io/lancedb/) tables internally. You can access this table directly, using `Explorer.table` object and run raw queries, push down pre- and post-filters, etc.
```python
from ultralytics import Explorer
@ -213,7 +215,7 @@ Here are some examples of what you can do with the table:
### Get raw Embeddings
!!! Example
!!! example
```python
from ultralytics import Explorer
@ -228,7 +230,7 @@ Here are some examples of what you can do with the table:
### Advanced Querying with pre- and post-filters
!!! Example
!!! example
```python
from ultralytics import Explorer
@ -270,11 +272,11 @@ It returns a pandas dataframe with the following columns:
- `count`: Number of images in the dataset that are closer than `max_dist` to the current image
- `sim_im_files`: List of paths to the `count` similar images
!!! Tip
!!! tip
For a given dataset, model, `max_dist` & `top_k` the similarity index once generated will be reused. In case, your dataset has changed, or you simply need to regenerate the similarity index, you can pass `force=True`.
!!! Example "Similarity Index"
!!! example "Similarity Index"
```python
from ultralytics import Explorer

@ -127,7 +127,7 @@ Contributing a new dataset involves several steps to ensure that it aligns well
### Example Code to Optimize and Zip a Dataset
!!! Example "Optimize and Zip a Dataset"
!!! example "Optimize and Zip a Dataset"
=== "Python"
@ -205,7 +205,7 @@ Discover more about YOLO on the [Ultralytics YOLO](https://www.ultralytics.com/y
To optimize and zip a dataset using Ultralytics tools, follow this example code:
!!! Example "Optimize and Zip a Dataset"
!!! example "Optimize and Zip a Dataset"
=== "Python"

@ -60,7 +60,7 @@ DOTA serves as a benchmark for training and evaluating models specifically tailo
Typically, datasets incorporate a YAML (Yet Another Markup Language) file detailing the dataset's configuration. For DOTA v1 and DOTA v1.5, Ultralytics provides `DOTAv1.yaml` and `DOTAv1.5.yaml` files. For additional details on these as well as DOTA v2 please consult DOTA's official repository and documentation.
!!! Example "DOTAv1.yaml"
!!! example "DOTAv1.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/DOTAv1.yaml"
@ -70,7 +70,7 @@ Typically, datasets incorporate a YAML (Yet Another Markup Language) file detail
To train DOTA dataset, we split original DOTA images with high-resolution into images with 1024x1024 resolution in multiscale way.
!!! Example "Split images"
!!! example "Split images"
=== "Python"
@ -97,11 +97,11 @@ To train DOTA dataset, we split original DOTA images with high-resolution into i
To train a model on the DOTA v1 dataset, you can utilize the following code snippets. Always refer to your model's documentation for a thorough list of available arguments.
!!! Warning
!!! warning
Please note that all images and associated annotations in the DOTAv1 dataset can be used for academic purposes, but commercial use is prohibited. Your understanding and respect for the dataset creators' wishes are greatly appreciated!
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -136,7 +136,7 @@ The dataset's richness offers invaluable insights into object detection challeng
For those leveraging DOTA in their endeavors, it's pertinent to cite the relevant research papers:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -169,7 +169,7 @@ DOTA utilizes Oriented Bounding Boxes (OBB) for annotation, which are represente
To train a model on the DOTA dataset, you can use the following example with Ultralytics YOLO:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -204,7 +204,7 @@ For a detailed comparison and additional specifics, check the [dataset versions
DOTA images, which can be very large, are split into smaller resolutions for manageable training. Here's a Python snippet to split images:
!!! Example
!!! example
=== "Python"

@ -16,7 +16,7 @@ This dataset is intended for use with Ultralytics [HUB](https://hub.ultralytics.
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the DOTA8 dataset, the `dota8.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/dota8.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/dota8.yaml).
!!! Example "ultralytics/cfg/datasets/dota8.yaml"
!!! example "ultralytics/cfg/datasets/dota8.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/dota8.yaml"
@ -26,7 +26,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n-obb model on the DOTA8 dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -61,7 +61,7 @@ The example showcases the variety and complexity of the images in the DOTA8 data
If you use the DOTA dataset in your research or development work, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -90,7 +90,7 @@ The DOTA8 dataset is a small, versatile oriented object detection dataset made u
To train a YOLOv8n-obb model on the DOTA8 dataset for 100 epochs with an image size of 640, you can use the following code snippets. For comprehensive argument options, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"

@ -32,7 +32,7 @@ An example of a `*.txt` label file for the above image, which contains an object
To train a model using these OBB formats:
!!! Example
!!! example
=== "Python"
@ -70,7 +70,7 @@ For those looking to introduce their own datasets with oriented bounding boxes,
Transitioning labels from the DOTA dataset format to the YOLO OBB format can be achieved with this script:
!!! Example
!!! example
=== "Python"
@ -106,7 +106,7 @@ This script will reformat your DOTA annotations into a YOLO-compatible format.
Training a YOLOv8 model with OBBs involves ensuring your dataset is in the YOLO OBB format and then using the Ultralytics API to train the model. Here's an example in both Python and CLI:
!!! Example
!!! example
=== "Python"

@ -43,7 +43,7 @@ The COCO-Pose dataset is specifically used for training and evaluating deep lear
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the COCO-Pose dataset, the `coco-pose.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco-pose.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco-pose.yaml).
!!! Example "ultralytics/cfg/datasets/coco-pose.yaml"
!!! example "ultralytics/cfg/datasets/coco-pose.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/coco-pose.yaml"
@ -53,7 +53,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n-pose model on the COCO-Pose dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -88,7 +88,7 @@ The example showcases the variety and complexity of the images in the COCO-Pose
If you use the COCO-Pose dataset in your research or development work, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -115,7 +115,7 @@ The [COCO-Pose](https://cocodataset.org/#keypoints-2017) dataset is a specialize
Training a YOLOv8 model on the COCO-Pose dataset can be accomplished using either Python or CLI commands. For example, to train a YOLOv8n-pose model for 100 epochs with an image size of 640, you can follow the steps below:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"

@ -16,7 +16,7 @@ This dataset is intended for use with Ultralytics [HUB](https://hub.ultralytics.
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the COCO8-Pose dataset, the `coco8-pose.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco8-pose.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco8-pose.yaml).
!!! Example "ultralytics/cfg/datasets/coco8-pose.yaml"
!!! example "ultralytics/cfg/datasets/coco8-pose.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/coco8-pose.yaml"
@ -26,7 +26,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n-pose model on the COCO8-Pose dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -61,7 +61,7 @@ The example showcases the variety and complexity of the images in the COCO8-Pose
If you use the COCO dataset in your research or development work, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -88,7 +88,7 @@ The COCO8-Pose dataset is a small, versatile pose detection dataset that include
To train a YOLOv8n-pose model on the COCO8-Pose dataset for 100 epochs with an image size of 640, follow these examples:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"

@ -64,7 +64,7 @@ The `train` and `val` fields specify the paths to the directories containing the
## Usage
!!! Example
!!! example
=== "Python"
@ -126,7 +126,7 @@ If you have your own dataset and would like to use it for training pose estimati
Ultralytics provides a convenient conversion tool to convert labels from the popular COCO dataset format to YOLO format:
!!! Example
!!! example
=== "Python"

@ -29,7 +29,7 @@ This dataset is intended for use with [Ultralytics HUB](https://hub.ultralytics.
A YAML (Yet Another Markup Language) file serves as the means to specify the configuration details of a dataset. It encompasses crucial data such as file paths, class definitions, and other pertinent information. Specifically, for the `tiger-pose.yaml` file, you can check [Ultralytics Tiger-Pose Dataset Configuration File](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/tiger-pose.yaml).
!!! Example "ultralytics/cfg/datasets/tiger-pose.yaml"
!!! example "ultralytics/cfg/datasets/tiger-pose.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/tiger-pose.yaml"
@ -39,7 +39,7 @@ A YAML (Yet Another Markup Language) file serves as the means to specify the con
To train a YOLOv8n-pose model on the Tiger-Pose dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -72,7 +72,7 @@ The example showcases the variety and complexity of the images in the Tiger-Pose
## Inference Example
!!! Example "Inference Example"
!!! example "Inference Example"
=== "Python"
@ -107,7 +107,7 @@ The Ultralytics Tiger-Pose dataset is designed for pose estimation tasks, consis
To train a YOLOv8n-pose model on the Tiger-Pose dataset for 100 epochs with an image size of 640, use the following code snippets. For more details, visit the [Training](../../modes/train.md) page:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -137,7 +137,7 @@ The `tiger-pose.yaml` file is used to specify the configuration details of the T
To perform inference using a YOLOv8 model trained on the Tiger-Pose dataset, you can use the following code snippets. For a detailed guide, visit the [Prediction](../../modes/predict.md) page:
!!! Example "Inference Example"
!!! example "Inference Example"
=== "Python"

@ -37,7 +37,7 @@ Carparts Segmentation finds applications in automotive quality control, auto rep
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the Package Segmentation dataset, the `carparts-seg.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/carparts-seg.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/carparts-seg.yaml).
!!! Example "ultralytics/cfg/datasets/carparts-seg.yaml"
!!! example "ultralytics/cfg/datasets/carparts-seg.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/carparts-seg.yaml"
@ -47,7 +47,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train Ultralytics YOLOv8n model on the Carparts Segmentation dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -81,7 +81,7 @@ The Carparts Segmentation dataset includes a diverse array of images and videos
If you integrate the Carparts Segmentation dataset into your research or development projects, please make reference to the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -112,7 +112,7 @@ The [Roboflow Carparts Segmentation Dataset](https://universe.roboflow.com/gianm
To train a YOLOv8 model on the Carparts Segmentation dataset, you can follow these steps:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"

@ -41,7 +41,7 @@ COCO-Seg is widely used for training and evaluating deep learning models in inst
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the COCO-Seg dataset, the `coco.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml).
!!! Example "ultralytics/cfg/datasets/coco.yaml"
!!! example "ultralytics/cfg/datasets/coco.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/coco.yaml"
@ -51,7 +51,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n-seg model on the COCO-Seg dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -86,7 +86,7 @@ The example showcases the variety and complexity of the images in the COCO-Seg d
If you use the COCO-Seg dataset in your research or development work, please cite the original COCO paper and acknowledge the extension to COCO-Seg:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -113,7 +113,7 @@ The [COCO-Seg](https://cocodataset.org/#home) dataset is an extension of the ori
To train a YOLOv8n-seg model on the COCO-Seg dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a detailed list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"


@ -16,7 +16,7 @@ This dataset is intended for use with Ultralytics [HUB](https://hub.ultralytics.
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the COCO8-Seg dataset, the `coco8-seg.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco8-seg.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco8-seg.yaml).
!!! Example "ultralytics/cfg/datasets/coco8-seg.yaml"
!!! example "ultralytics/cfg/datasets/coco8-seg.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/coco8-seg.yaml"
@ -26,7 +26,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n-seg model on the COCO8-Seg dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -61,7 +61,7 @@ The example showcases the variety and complexity of the images in the COCO8-Seg
If you use the COCO dataset in your research or development work, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -88,7 +88,7 @@ The **COCO8-Seg dataset** is a compact instance segmentation dataset by Ultralyt
To train a **YOLOv8n-seg** model on the COCO8-Seg dataset for 100 epochs with an image size of 640, you can use Python or CLI commands. Here's a quick example:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"


@ -26,7 +26,7 @@ Crack segmentation finds practical applications in infrastructure maintenance, a
A YAML (Yet Another Markup Language) file is employed to outline the configuration of the dataset, encompassing details about paths, classes, and other pertinent information. Specifically, for the Crack Segmentation dataset, the `crack-seg.yaml` file is managed and accessible at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/crack-seg.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/crack-seg.yaml).
!!! Example "ultralytics/cfg/datasets/crack-seg.yaml"
!!! example "ultralytics/cfg/datasets/crack-seg.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/crack-seg.yaml"
@ -36,7 +36,7 @@ A YAML (Yet Another Markup Language) file is employed to outline the configurati
To train an Ultralytics YOLOv8n model on the Crack Segmentation dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -71,7 +71,7 @@ The Crack Segmentation dataset comprises a varied collection of images and video
If you incorporate the crack segmentation dataset into your research or development endeavors, kindly reference the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -102,7 +102,7 @@ The [Roboflow Crack Segmentation Dataset](https://universe.roboflow.com/universi
To train an Ultralytics YOLOv8 model on the Crack Segmentation dataset, use the following code snippets. Detailed instructions and further parameters can be found on the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -135,7 +135,7 @@ Ultralytics YOLO offers advanced real-time object detection, segmentation, and c
If you incorporate the Crack Segmentation Dataset into your research, please use the following BibTeX reference:
!!! Quote ""
!!! quote ""
=== "BibTeX"


@ -33,7 +33,7 @@ Here is an example of the YOLO dataset format for a single image with two object
1 0.504 0.000 0.501 0.004 0.498 0.004 0.493 0.010 0.492 0.0104
```
!!! Tip "Tip"
!!! tip "Tip"
- The length of each row does **not** have to be equal.
- Each segmentation label must have a **minimum of 3 xy points**: `<class-index> <x1> <y1> <x2> <y2> <x3> <y3>`
@ -66,7 +66,7 @@ The `train` and `val` fields specify the paths to the directories containing the
## Usage
!!! Example
!!! example
=== "Python"
@ -108,7 +108,7 @@ If you have your own dataset and would like to use it for training segmentation
You can easily convert labels from the popular COCO dataset format to the YOLO format using the following code snippet:
!!! Example
!!! example
=== "Python"
@ -130,7 +130,7 @@ Auto-annotation is an essential feature that allows you to generate a segmentati
To auto-annotate your dataset using the Ultralytics framework, you can use the `auto_annotate` function as shown below:
!!! Example
!!! example
=== "Python"


@ -26,7 +26,7 @@ Package segmentation, facilitated by the Package Segmentation Dataset, is crucia
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the Package Segmentation dataset, the `package-seg.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/package-seg.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/package-seg.yaml).
!!! Example "ultralytics/cfg/datasets/package-seg.yaml"
!!! example "ultralytics/cfg/datasets/package-seg.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/package-seg.yaml"
@ -36,7 +36,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train an Ultralytics YOLOv8n model on the Package Segmentation dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -70,7 +70,7 @@ The Package Segmentation dataset comprises a varied collection of images and vid
If you integrate the Package Segmentation dataset into your research or development initiatives, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -101,7 +101,7 @@ The [Roboflow Package Segmentation Dataset](https://universe.roboflow.com/factor
You can train an Ultralytics YOLOv8n model using both Python and CLI methods. Use the snippets below:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"


@ -12,7 +12,7 @@ Multi-Object Detector doesn't need standalone training and directly supports pre
## Usage
!!! Example
!!! example
=== "Python"
@ -35,7 +35,7 @@ Multi-Object Detector doesn't need standalone training and directly supports pre
To use Multi-Object Tracking with Ultralytics YOLO, you can start by using the Python or CLI examples provided. Here is how you can get started:
!!! Example
!!! example
=== "Python"