ultralytics 8.2.2 replace COCO128 with COCO8 (#10167)

Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Glenn Jocher 2024-04-18 20:47:21 -07:00 committed by GitHub
parent 626309d221
commit 1110258d37
43 changed files with 154 additions and 156 deletions


@@ -87,7 +87,7 @@ The training settings for YOLO models encompass various hyperparameters and conf
| Argument | Default | Description |
|-------------------|----------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `model` | `None` | Specifies the model file for training. Accepts a path to either a `.pt` pretrained model or a `.yaml` configuration file. Essential for defining the model structure or initializing weights. |
| `data` | `None` | Path to the dataset configuration file (e.g., `coco128.yaml`). This file contains dataset-specific parameters, including paths to training and validation data, class names, and number of classes. |
| `data` | `None` | Path to the dataset configuration file (e.g., `coco8.yaml`). This file contains dataset-specific parameters, including paths to training and validation data, class names, and number of classes. |
| `epochs` | `100` | Total number of training epochs. Each epoch represents a full pass over the entire dataset. Adjusting this value can affect training duration and model performance. |
| `time` | `None` | Maximum training time in hours. If set, this overrides the `epochs` argument, allowing training to automatically stop after the specified duration. Useful for time-constrained training scenarios. |
| `patience` | `100` | Number of epochs to wait without improvement in validation metrics before early stopping the training. Helps prevent overfitting by stopping training when performance plateaus. |
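
For context, a minimal Python sketch of how a few of the training arguments above are passed in practice; the values simply mirror the table defaults and the `coco8.yaml`/`yolov8n.pt` names used throughout this commit, not a recommended configuration:

```python
from ultralytics import YOLO

# Load a pretrained model and train it with a few of the arguments described above
model = YOLO('yolov8n.pt')
results = model.train(
    data='coco8.yaml',  # dataset configuration file
    epochs=100,         # total number of training epochs
    patience=100,       # epochs to wait for improvement before early stopping
)
```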
@@ -182,22 +182,22 @@ Visualization arguments:
The val (validation) settings for YOLO models involve various hyperparameters and configurations used to evaluate the model's performance on a validation dataset. These settings influence the model's performance, speed, and accuracy. Common YOLO validation settings include batch size, validation frequency during training, and performance evaluation metrics. Other factors affecting the validation process include the validation dataset's size and composition, as well as the specific task the model is employed for.
| Argument | Type | Default | Description |
|---------------|---------|---------|---------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `data` | `str` | `None` | Specifies the path to the dataset configuration file (e.g., `coco128.yaml`). This file includes paths to validation data, class names, and number of classes. |
| `imgsz` | `int` | `640` | Defines the size of input images. All images are resized to this dimension before processing. |
| `batch` | `int` | `16` | Sets the number of images per batch. Use `-1` for AutoBatch, which automatically adjusts based on GPU memory availability. |
| `save_json` | `bool` | `False` | If `True`, saves the results to a JSON file for further analysis or integration with other tools. |
| `save_hybrid` | `bool` | `False` | If `True`, saves a hybrid version of labels that combines original annotations with additional model predictions. |
| `conf` | `float` | `0.001` | Sets the minimum confidence threshold for detections. Detections with confidence below this threshold are discarded. |
| `iou` | `float` | `0.6` | Sets the Intersection Over Union (IoU) threshold for Non-Maximum Suppression (NMS). Helps in reducing duplicate detections. |
| `max_det` | `int` | `300` | Limits the maximum number of detections per image. Useful in dense scenes to prevent excessive detections. |
| `half` | `bool` | `True` | Enables half-precision (FP16) computation, reducing memory usage and potentially increasing speed with minimal impact on accuracy. |
| `device` | `str` | `None` | Specifies the device for validation (`cpu`, `cuda:0`, etc.). Allows flexibility in utilizing CPU or GPU resources. |
| `dnn` | `bool` | `False` | If `True`, uses the OpenCV DNN module for ONNX model inference, offering an alternative to PyTorch inference methods. |
| `plots` | `bool` | `False` | When set to `True`, generates and saves plots of predictions versus ground truth for visual evaluation of the model's performance. |
| `rect` | `bool` | `False` | If `True`, uses rectangular inference for batching, reducing padding and potentially increasing speed and efficiency. |
| `split` | `str` | `val` | Determines the dataset split to use for validation (`val`, `test`, or `train`). Allows flexibility in choosing the data segment for performance evaluation. |
| Argument | Type | Default | Description |
|---------------|---------|---------|-------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `data` | `str` | `None` | Specifies the path to the dataset configuration file (e.g., `coco8.yaml`). This file includes paths to validation data, class names, and number of classes. |
| `imgsz` | `int` | `640` | Defines the size of input images. All images are resized to this dimension before processing. |
| `batch` | `int` | `16` | Sets the number of images per batch. Use `-1` for AutoBatch, which automatically adjusts based on GPU memory availability. |
| `save_json` | `bool` | `False` | If `True`, saves the results to a JSON file for further analysis or integration with other tools. |
| `save_hybrid` | `bool` | `False` | If `True`, saves a hybrid version of labels that combines original annotations with additional model predictions. |
| `conf` | `float` | `0.001` | Sets the minimum confidence threshold for detections. Detections with confidence below this threshold are discarded. |
| `iou` | `float` | `0.6` | Sets the Intersection Over Union (IoU) threshold for Non-Maximum Suppression (NMS). Helps in reducing duplicate detections. |
| `max_det` | `int` | `300` | Limits the maximum number of detections per image. Useful in dense scenes to prevent excessive detections. |
| `half` | `bool` | `True` | Enables half-precision (FP16) computation, reducing memory usage and potentially increasing speed with minimal impact on accuracy. |
| `device` | `str` | `None` | Specifies the device for validation (`cpu`, `cuda:0`, etc.). Allows flexibility in utilizing CPU or GPU resources. |
| `dnn` | `bool` | `False` | If `True`, uses the OpenCV DNN module for ONNX model inference, offering an alternative to PyTorch inference methods. |
| `plots` | `bool` | `False` | When set to `True`, generates and saves plots of predictions versus ground truth for visual evaluation of the model's performance. |
| `rect` | `bool` | `False` | If `True`, uses rectangular inference for batching, reducing padding and potentially increasing speed and efficiency. |
| `split` | `str` | `val` | Determines the dataset split to use for validation (`val`, `test`, or `train`). Allows flexibility in choosing the data segment for performance evaluation. |
Careful tuning and experimentation with these settings are crucial to ensure optimal performance on the validation dataset and to detect and prevent overfitting.
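
As a quick illustration of these validation settings in the Python API, the sketch below passes a handful of them to `model.val()`; the values simply mirror the defaults listed in the table:

```python
from ultralytics import YOLO

# Validate a pretrained model with a few of the settings listed above
model = YOLO('yolov8n.pt')
metrics = model.val(
    data='coco8.yaml',  # dataset configuration file
    imgsz=640,          # input image size
    batch=16,           # images per batch (-1 for AutoBatch)
    conf=0.001,         # minimum confidence threshold
    iou=0.6,            # NMS IoU threshold for non-maximum suppression
    plots=True,         # save prediction-vs-ground-truth plots
)
```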


@@ -37,7 +37,7 @@ The YOLO command line interface (CLI) allows for simple single-line commands wit
Train a detection model for 10 epochs with an initial learning_rate of 0.01
```bash
yolo train data=coco128.yaml model=yolov8n.pt epochs=10 lr0=0.01
yolo train data=coco8.yaml model=yolov8n.pt epochs=10 lr0=0.01
```
=== "Predict"
@@ -51,7 +51,7 @@ The YOLO command line interface (CLI) allows for simple single-line commands wit
Validate a pretrained detection model at batch-size 1 and image size 640:
```bash
yolo val model=yolov8n.pt data=coco128.yaml batch=1 imgsz=640
yolo val model=yolov8n.pt data=coco8.yaml batch=1 imgsz=640
```
=== "Export"
@@ -90,15 +90,15 @@ Where:
## Train
Train YOLOv8n on the COCO128 dataset for 100 epochs at image size 640. For a full list of available arguments see the [Configuration](cfg.md) page.
Train YOLOv8n on the COCO8 dataset for 100 epochs at image size 640. For a full list of available arguments see the [Configuration](cfg.md) page.
!!! Example "Example"
=== "Train"
Start training YOLOv8n on COCO128 for 100 epochs at image-size 640.
Start training YOLOv8n on COCO8 for 100 epochs at image-size 640.
```bash
yolo detect train data=coco128.yaml model=yolov8n.pt epochs=100 imgsz=640
yolo detect train data=coco8.yaml model=yolov8n.pt epochs=100 imgsz=640
```
=== "Resume"
@@ -110,7 +110,7 @@ Train YOLOv8n on the COCO128 dataset for 100 epochs at image size 640. For a ful
## Val
Validate trained YOLOv8n model accuracy on the COCO128 dataset. No arguments need to be passed as the `model` retains its training `data` and arguments as model attributes.
Validate trained YOLOv8n model accuracy on the COCO8 dataset. No arguments need to be passed as the `model` retains its training `data` and arguments as model attributes.
!!! Example "Example"
@@ -196,7 +196,7 @@ Default arguments can be overridden by simply passing them as arguments in the C
Train a detection model for `10 epochs` with `learning_rate` of `0.01`
```bash
yolo detect train data=coco128.yaml model=yolov8n.pt epochs=10 lr0=0.01
yolo detect train data=coco8.yaml model=yolov8n.pt epochs=10 lr0=0.01
```
=== "Predict"
@@ -210,7 +210,7 @@ Default arguments can be overridden by simply passing them as arguments in the C
Validate a pretrained detection model at batch-size 1 and image size 640:
```bash
yolo detect val model=yolov8n.pt data=coco128.yaml batch=1 imgsz=640
yolo detect val model=yolov8n.pt data=coco8.yaml batch=1 imgsz=640
```
## Overriding default config file


@@ -32,8 +32,8 @@ For example, users can load a model, train it, evaluate its performance on a val
# Load a pretrained YOLO model (recommended for training)
model = YOLO('yolov8n.pt')
# Train the model using the 'coco128.yaml' dataset for 3 epochs
results = model.train(data='coco128.yaml', epochs=3)
# Train the model using the 'coco8.yaml' dataset for 3 epochs
results = model.train(data='coco8.yaml', epochs=3)
# Evaluate the model's performance on the validation set
results = model.val()
@@ -66,7 +66,7 @@ Train mode is used for training a YOLOv8 model on a custom dataset. In this mode
from ultralytics import YOLO
model = YOLO('yolov8n.yaml')
results = model.train(data='coco128.yaml', epochs=5)
results = model.train(data='coco8.yaml', epochs=5)
```
=== "Resume"
@@ -90,7 +90,7 @@ Val mode is used for validating a YOLOv8 model after it has been trained. In thi
from ultralytics import YOLO
model = YOLO('yolov8n.yaml')
model.train(data='coco128.yaml', epochs=5)
model.train(data='coco8.yaml', epochs=5)
model.val()  # It'll automatically evaluate the data you trained on.
```
@@ -103,7 +103,7 @@ Val mode is used for validating a YOLOv8 model after it has been trained. In thi
# It'll use the data YAML file in model.pt if you don't set data.
model.val()
# or you can set the data you want to val
model.val(data='coco128.yaml')
model.val(data='coco8.yaml')
```
[Val Examples](../modes/val.md){ .md-button }
@@ -259,7 +259,7 @@ Explorer API can be used to explore datasets with advanced semantic, vector-simi
from ultralytics import Explorer
# create an Explorer object
exp = Explorer(data='coco128.yaml', model='yolov8n.pt')
exp = Explorer(data='coco8.yaml', model='yolov8n.pt')
exp.create_embeddings_table()
similar = exp.get_similar(img='https://ultralytics.com/images/bus.jpg', limit=10)
@@ -280,7 +280,7 @@ Explorer API can be used to explore datasets with advanced semantic, vector-simi
from ultralytics import Explorer
# create an Explorer object
exp = Explorer(data='coco128.yaml', model='yolov8n.pt')
exp = Explorer(data='coco8.yaml', model='yolov8n.pt')
exp.create_embeddings_table()
similar = exp.get_similar(idx=1, limit=10)


@@ -233,7 +233,7 @@ boxes.bboxes
See the [`Bboxes` reference section](../reference/utils/instance.md#ultralytics.utils.instance.Bboxes) for more attributes and methods available.
!!! tip
Many of the following functions (and more) can be accessed using the [`Bboxes` class](#bounding-box-horizontal-instances), but if you prefer to work with the functions directly, see the next subsections on how to import them independently.
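
For example, a box-format conversion can be performed without constructing a `Bboxes` object at all. A minimal sketch, assuming `xyxy2xywh` from `ultralytics.utils.ops` is one such directly importable helper (the coordinates are illustrative):

```python
import numpy as np

from ultralytics.utils.ops import xyxy2xywh

# One box in xyxy (top-left, bottom-right) format; values are illustrative
xyxy = np.array([[22.878, 231.27, 804.98, 756.83]])

# Convert to xywh (center-x, center-y, width, height) without a Bboxes object
xywh = xyxy2xywh(xyxy)
```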
### Scaling Boxes