Fix gitignore to format Docs datasets (#16071)

Signed-off-by: UltralyticsAssistant <web@ultralytics.com>
Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
Glenn Jocher 2024-09-06 17:17:33 +02:00 committed by GitHub
parent 6f5c3c8cea
commit ce24c7273e
41 changed files with 767 additions and 744 deletions


@@ -114,7 +114,7 @@ You can train a YOLOv8 model on the African Wildlife Dataset by using the `afric
!!! Example
=== "Python"
```python
from ultralytics import YOLO
@@ -126,7 +126,7 @@ You can train a YOLOv8 model on the African Wildlife Dataset by using the `afric
```
=== "CLI"
```bash
# Start training from a pretrained *.pt model
yolo detect train data=african-wildlife.yaml model=yolov8n.pt epochs=100 imgsz=640


@@ -109,7 +109,7 @@ To train a YOLOv8 model with the Argoverse dataset, use the provided YAML config
!!! Example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
@@ -119,10 +119,10 @@ To train a YOLOv8 model with the Argoverse dataset, use the provided YAML config
# Train the model
results = model.train(data="Argoverse.yaml", epochs=100, imgsz=640)
```
=== "CLI"
```bash
# Start training from a pretrained *.pt model
yolo detect train data=Argoverse.yaml model=yolov8n.pt epochs=100 imgsz=640


@@ -113,7 +113,7 @@ You can train a YOLOv8 model on the brain tumor dataset for 100 epochs with an i
!!! Example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
@@ -123,10 +123,10 @@ You can train a YOLOv8 model on the brain tumor dataset for 100 epochs with an i
# Train the model
results = model.train(data="brain-tumor.yaml", epochs=100, imgsz=640)
```
=== "CLI"
```bash
# Start training from a pretrained *.pt model
yolo detect train data=brain-tumor.yaml model=yolov8n.pt epochs=100 imgsz=640
@@ -157,7 +157,7 @@ Inference using a fine-tuned YOLOv8 model can be performed with either Python or
```
=== "CLI"
```bash
# Start prediction with a finetuned *.pt model
yolo detect predict model='path/to/best.pt' imgsz=640 source="https://ultralytics.com/assets/brain-tumor-sample.jpg"
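The Python tab of this example is truncated in the hunk above; a minimal sketch of the equivalent call (the checkpoint path is illustrative) would be:

```python
from ultralytics import YOLO

# Load a fine-tuned model (path is illustrative)
model = YOLO("path/to/best.pt")

# Run inference on the hosted sample image
results = model.predict("https://ultralytics.com/assets/brain-tumor-sample.jpg", imgsz=640)
```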


@@ -22,7 +22,7 @@ The [COCO](https://cocodataset.org/#home) (Common Objects in Context) dataset is
## COCO Pretrained Models
| Model | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
-|--------------------------------------------------------------------------------------|-----------------------|----------------------|--------------------------------|-------------------------------------|--------------------|-------------------|
+| ------------------------------------------------------------------------------------ | --------------------- | -------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLOv8n](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n.pt) | 640 | 37.3 | 80.4 | 0.99 | 3.2 | 8.7 |
| [YOLOv8s](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s.pt) | 640 | 44.9 | 128.4 | 1.20 | 11.2 | 28.6 |
| [YOLOv8m](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m.pt) | 640 | 50.2 | 234.7 | 1.83 | 25.9 | 78.9 |
@@ -127,7 +127,7 @@ To train a YOLOv8 model using the COCO dataset, you can use the following code s
!!! Example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
@@ -139,7 +139,7 @@ To train a YOLOv8 model using the COCO dataset, you can use the following code s
```
=== "CLI"
```bash
# Start training from a pretrained *.pt model
yolo detect train data=coco.yaml model=yolov8n.pt epochs=100 imgsz=640


@@ -102,7 +102,7 @@ To train a YOLOv8 model using the COCO8 dataset, you can employ either Python or
!!! Example "Train Example"
=== "Python"
```python
from ultralytics import YOLO


@@ -103,7 +103,7 @@ To train a YOLOv8n model on the Global Wheat Head Dataset, you can use the follo
!!! Example "Train Example"
=== "Python"
```python
from ultralytics import YOLO


@@ -16,20 +16,20 @@ The Ultralytics YOLO format is a dataset configuration format that allows you to
```yaml
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/coco8 # dataset root dir
train: images/train # train images (relative to 'path') 4 images
val: images/val # val images (relative to 'path') 4 images
test: # test images (optional)
# Classes (80 COCO classes)
names:
0: person
1: bicycle
2: car
# ...
77: teddy bear
78: hair drier
79: toothbrush
```
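As the comment at the top of that block notes, `train`, `val`, and `test` can also be given as a `*.txt` file of image paths or as a list of directories. A hypothetical sketch of those two forms (paths are illustrative):

```yaml
train: ../datasets/custom/train_images.txt # 2) file: one image path per line
val: [images/val1, images/val2] # 3) list: multiple image directories
```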
Labels for this format should be exported to YOLO format with one `*.txt` file per image. If there are no objects in an image, no `*.txt` file is required. The `*.txt` file should be formatted with one row per object in `class x_center y_center width height` format. Box coordinates must be in **normalized xywh** format (from 0 to 1). If your boxes are in pixels, you should divide `x_center` and `width` by image width, and `y_center` and `height` by image height. Class numbers should be zero-indexed (start with 0).
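A minimal sketch of that pixel-to-normalized conversion (the helper name and example box are illustrative, not from the docs):

```python
def to_yolo_row(cls, x1, y1, x2, y2, img_w, img_h):
    """Convert a pixel xyxy box to a normalized 'class x_center y_center width height' row."""
    x_center = (x1 + x2) / 2 / img_w
    y_center = (y1 + y2) / 2 / img_h
    width = (x2 - x1) / img_w
    height = (y2 - y1) / img_h
    return f"{cls} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"


# A box from (64, 96) to (320, 288) in a 640x480 image:
print(to_yolo_row(0, 64, 96, 320, 288, 640, 480))  # "0 0.300000 0.400000 0.400000 0.400000"
```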
@@ -121,15 +121,15 @@ Remember to double-check if the dataset you want to use is compatible with your
The Ultralytics YOLO format is a structured configuration for defining datasets in your training projects. It involves setting paths to your training, validation, and testing images and corresponding labels. For example:
```yaml
path: ../datasets/coco8 # dataset root directory
train: images/train # training images (relative to 'path')
val: images/val # validation images (relative to 'path')
test: # optional test images
names:
0: person
1: bicycle
2: car
# ...
```
Labels are saved in `*.txt` files with one file per image, formatted as `class x_center y_center width height` with normalized coordinates. For a detailed guide, see the [COCO8 dataset example](coco8.md).
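Reading a label file back is symmetrical; a minimal sketch (the file path is illustrative):

```python
from pathlib import Path

# Each row: class x_center y_center width height, with all but class normalized to [0, 1]
boxes = []
for line in Path("path/to/labels/image0.txt").read_text().splitlines():
    c, xc, yc, w, h = line.split()
    boxes.append((int(c), float(xc), float(yc), float(w), float(h)))
```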
@@ -167,7 +167,7 @@ To start training a YOLOv8 model, ensure your dataset is formatted correctly and
!!! Example
=== "Python"
```python
from ultralytics import YOLO
@@ -176,7 +176,7 @@ To start training a YOLOv8 model, ensure your dataset is formatted correctly and
```
=== "CLI"
```bash
yolo detect train data=path/to/your_dataset.yaml model=yolov8n.pt epochs=100 imgsz=640
```


@@ -121,7 +121,7 @@ To train a YOLOv8n model on the LVIS dataset for 100 epochs with an image size o
!!! Example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
@@ -131,10 +131,10 @@ To train a YOLOv8n model on the LVIS dataset for 100 epochs with an image size o
# Train the model
results = model.train(data="lvis.yaml", epochs=100, imgsz=640)
```
=== "CLI"
```bash
# Start training from a pretrained *.pt model
yolo detect train data=lvis.yaml model=yolov8n.pt epochs=100 imgsz=640


@@ -127,6 +127,7 @@ Refer to the [Training](../../modes/train.md) page for a comprehensive list of a
### Why should I use the Objects365 dataset for my object detection projects?
The Objects365 dataset offers several advantages for object detection tasks:
1. **Diversity**: It includes 2 million images with objects in diverse scenarios, covering 365 categories.
2. **High-quality Annotations**: Over 30 million bounding boxes provide comprehensive ground truth data.
3. **Performance**: Models pre-trained on Objects365 significantly outperform those trained on datasets like ImageNet, leading to better generalization.


@@ -22,7 +22,7 @@ keywords: Open Images V7, Google dataset, computer vision, YOLOv8 models, object
## Open Images V7 Pretrained Models
| Model | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
-|-------------------------------------------------------------------------------------------|-----------------------|----------------------|--------------------------------|-------------------------------------|--------------------|-------------------|
+| ----------------------------------------------------------------------------------------- | --------------------- | -------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLOv8n](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n-oiv7.pt) | 640 | 18.4 | 142.4 | 1.21 | 3.5 | 10.5 |
| [YOLOv8s](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s-oiv7.pt) | 640 | 27.7 | 183.1 | 1.40 | 11.4 | 29.7 |
| [YOLOv8m](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m-oiv7.pt) | 640 | 33.6 | 408.5 | 2.26 | 26.2 | 80.6 |
@@ -141,10 +141,9 @@ Open Images V7 is an extensive and versatile dataset created by Google, designed
To train a YOLOv8 model on the Open Images V7 dataset, you can use both Python and CLI commands. Here's an example of training the YOLOv8n model for 100 epochs with an image size of 640:
!!! Example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
@@ -154,10 +153,10 @@ To train a YOLOv8 model on the Open Images V7 dataset, you can use both Python a
# Train the model on the Open Images V7 dataset
results = model.train(data="open-images-v7.yaml", epochs=100, imgsz=640)
```
=== "CLI"
```bash
# Train a COCO-pretrained YOLOv8n model on the Open Images V7 dataset
yolo detect train data=open-images-v7.yaml model=yolov8n.pt epochs=100 imgsz=640
@@ -168,6 +167,7 @@ For more details on arguments and settings, refer to the [Training](../../modes/
### What are some key features of the Open Images V7 dataset?
The Open Images V7 dataset includes approximately 9 million images with various annotations:
- **Bounding Boxes**: 16 million bounding boxes across 600 object classes.
- **Segmentation Masks**: Masks for 2.8 million objects across 350 classes.
- **Visual Relationships**: 3.3 million annotations indicating relationships, properties, and actions.
@@ -179,17 +179,18 @@ The Open Images V7 dataset includes approximately 9 million images with various
Ultralytics provides several YOLOv8 pretrained models for the Open Images V7 dataset, each with different sizes and performance metrics:
| Model | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
-|-------|-----------------------|----------------------|--------------------------------|-------------------------------------|--------------------|-------------------|
+| ----------------------------------------------------------------------------------------- | --------------------- | -------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLOv8n](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n-oiv7.pt) | 640 | 18.4 | 142.4 | 1.21 | 3.5 | 10.5 |
| [YOLOv8s](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s-oiv7.pt) | 640 | 27.7 | 183.1 | 1.40 | 11.4 | 29.7 |
| [YOLOv8m](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m-oiv7.pt) | 640 | 33.6 | 408.5 | 2.26 | 26.2 | 80.6 |
| [YOLOv8l](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8l-oiv7.pt) | 640 | 34.9 | 596.9 | 2.43 | 44.1 | 167.4 |
| [YOLOv8x](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x-oiv7.pt) | 640 | 36.3 | 860.6 | 3.56 | 68.7 | 260.6 |
### What applications can the Open Images V7 dataset be used for?
The Open Images V7 dataset supports a variety of computer vision tasks including:
- **Image Classification**
- **Object Detection**
- **Instance Segmentation**


@@ -142,7 +142,7 @@ To use the Roboflow 100 dataset for benchmarking, you can implement the RF100Ben
!!! Example "Benchmarking example"
=== "Python"
```python
import os
import shutil


@@ -116,7 +116,7 @@ Training a YOLOv8 model on the SKU-110k dataset is straightforward. Here's an ex
!!! Example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
@@ -126,10 +126,10 @@ Training a YOLOv8 model on the SKU-110k dataset is straightforward. Here's an ex
# Train the model
results = model.train(data="SKU-110K.yaml", epochs=100, imgsz=640)
```
=== "CLI"
```bash
# Start training from a pretrained *.pt model
yolo detect train data=SKU-110K.yaml model=yolov8n.pt epochs=100 imgsz=640


@@ -107,6 +107,7 @@ We would like to acknowledge the AISKYEYE team at the Lab of Machine Learning an
### What is the VisDrone Dataset and what are its key features?
The [VisDrone Dataset](https://github.com/VisDrone/VisDrone-Dataset) is a large-scale benchmark created by the AISKYEYE team at Tianjin University, China. It is designed for various computer vision tasks related to drone-based image and video analysis. Key features include:
- **Composition**: 288 video clips with 261,908 frames and 10,209 static images.
- **Annotations**: Over 2.6 million bounding boxes for objects like pedestrians, cars, bicycles, and tricycles.
- **Diversity**: Collected across 14 cities, in urban and rural settings, under different weather and lighting conditions.
@@ -119,7 +120,7 @@ To train a YOLOv8 model on the VisDrone dataset for 100 epochs with an image siz
!!! Example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
@@ -131,7 +132,7 @@ To train a YOLOv8 model on the VisDrone dataset for 100 epochs with an image siz
```
=== "CLI"
```bash
# Start training from a pretrained *.pt model
yolo detect train data=VisDrone.yaml model=yolov8n.pt epochs=100 imgsz=640
@@ -142,6 +143,7 @@ For additional configuration options, please refer to the model [Training](../..
### What are the main subsets of the VisDrone dataset and their applications?
The VisDrone dataset is divided into five main subsets, each tailored for a specific computer vision task:
1. **Task 1**: Object detection in images.
2. **Task 2**: Object detection in videos.
3. **Task 3**: Single-object tracking.


@@ -109,7 +109,7 @@ To train a model on the xView dataset using Ultralytics YOLO, follow these steps
!!! Example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
@@ -119,10 +119,10 @@ To train a model on the xView dataset using Ultralytics YOLO, follow these steps
# Train the model
results = model.train(data="xView.yaml", epochs=100, imgsz=640)
```
=== "CLI"
```bash
# Start training from a pretrained *.pt model
yolo detect train data=xView.yaml model=yolov8n.pt epochs=100 imgsz=640
@@ -133,6 +133,7 @@ For detailed arguments and settings, refer to the model [Training](../../modes/t
### What are the key features of the xView dataset?
The xView dataset stands out due to its comprehensive set of features:
- Over 1 million object instances across 60 distinct classes.
- High-resolution imagery at 0.3 meters.
- Diverse object types including small, rare, and fine-grained objects, all annotated with bounding boxes.
@@ -160,5 +161,5 @@ If you utilize the xView dataset in your research, please cite the following pap
primaryClass={cs.CV}
}
```
For more information about the xView dataset, visit the official [xView dataset website](http://xviewdataset.org/).