ultralytics 8.0.196 instance-mean Segment loss (#5285)
Co-authored-by: Andy <39454881+yermandy@users.noreply.github.com>
parent 7517667a33
commit e7f0658744
72 changed files with 369 additions and 493 deletions
@@ -6,9 +6,7 @@ keywords: Ultralytics, YOLO, callbacks guide, training callback, validation call

 ## Callbacks

-Ultralytics framework supports callbacks as entry points in strategic stages of train, val, export, and predict modes.
-Each callback accepts a `Trainer`, `Validator`, or `Predictor` object depending on the operation type. All properties of
-these objects can be found in the Reference section of the docs.
+Ultralytics framework supports callbacks as entry points in strategic stages of train, val, export, and predict modes. Each callback accepts a `Trainer`, `Validator`, or `Predictor` object depending on the operation type. All properties of these objects can be found in the Reference section of the docs.

 ## Examples
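A minimal sketch of how such a callback is attached through the Python API; the event name, source path and printed attribute are illustrative, and the comment notes the assumption about what the `Predictor` exposes at that point:

```python
from ultralytics import YOLO


def on_predict_batch_end(predictor):
    """Called after each prediction batch; receives the `Predictor` object described above."""
    # Assumption: `predictor.results` holds the results of the current batch at this event
    print(f"Batch produced {len(predictor.results)} results")


model = YOLO("yolov8n.pt")
model.add_callback("on_predict_batch_end", on_predict_batch_end)
model.predict(source="path/to/images")
```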
@@ -6,8 +6,7 @@ keywords: Ultralytics, YOLO, CLI, train, validation, prediction, command line in

 # Command Line Interface Usage

-The YOLO command line interface (CLI) allows for simple single-line commands without the need for a Python environment.
-CLI requires no customization or Python code. You can simply run all tasks from the terminal with the `yolo` command.
+The YOLO command line interface (CLI) allows for simple single-line commands without the need for a Python environment. CLI requires no customization or Python code. You can simply run all tasks from the terminal with the `yolo` command.

 !!! example
@@ -65,11 +64,9 @@ CLI requires no customization or Python code. You can simply run all tasks from

 Where:

-- `TASK` (optional) is one of `[detect, segment, classify]`. If it is not passed explicitly, YOLOv8 will try to guess
-the `TASK` from the model type.
+- `TASK` (optional) is one of `[detect, segment, classify]`. If it is not passed explicitly, YOLOv8 will try to guess the `TASK` from the model type.
 - `MODE` (required) is one of `[train, val, predict, export, track]`
-- `ARGS` (optional) are any number of custom `arg=value` pairs like `imgsz=320` that override defaults.
-For a full list of available `ARGS` see the [Configuration](cfg.md) page and `default.yaml`
+- `ARGS` (optional) are any number of custom `arg=value` pairs like `imgsz=320` that override defaults. For a full list of available `ARGS` see the [Configuration](cfg.md) page and `default.yaml`
 GitHub [source](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/default.yaml).

 !!! warning "Warning"
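For readers who prefer the Python API, `TASK`, `MODE` and `ARGS` map roughly onto the model you load, the method you call and its keyword overrides. A minimal sketch (file names and values are illustrative):

```python
from ultralytics import YOLO

# TASK is inferred from the checkpoint, e.g. a "-seg" model implies the segment task
model = YOLO("yolov8n-seg.pt")
print(model.task)  # expected: "segment"

# MODE corresponds to the method you call, ARGS to keyword overrides such as imgsz=320
model.predict(source="path/to/image.jpg", imgsz=320)
```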
@@ -82,8 +79,7 @@ Where:

 ## Train

-Train YOLOv8n on the COCO128 dataset for 100 epochs at image size 640. For a full list of available arguments see
-the [Configuration](cfg.md) page.
+Train YOLOv8n on the COCO128 dataset for 100 epochs at image size 640. For a full list of available arguments see the [Configuration](cfg.md) page.

 !!! example "Example"
@@ -103,8 +99,7 @@ the [Configuration](cfg.md) page.

 ## Val

-Validate trained YOLOv8n model accuracy on the COCO128 dataset. No arguments need to be passed as the `model` retains its
-training `data` and arguments as model attributes.
+Validate trained YOLOv8n model accuracy on the COCO128 dataset. No arguments need to be passed as the `model` retains its training `data` and arguments as model attributes.

 !!! example "Example"
@@ -162,8 +157,7 @@ Export a YOLOv8n model to a different format like ONNX, CoreML, etc.

 yolo export model=path/to/best.pt format=onnx
 ```

-Available YOLOv8 export formats are in the table below. You can export to any format using the `format` argument,
-i.e. `format='onnx'` or `format='engine'`.
+Available YOLOv8 export formats are in the table below. You can export to any format using the `format` argument, i.e. `format='onnx'` or `format='engine'`.

 | Format                                                             | `format` Argument | Model                     | Metadata | Arguments                                            |
 |--------------------------------------------------------------------|-------------------|---------------------------|----------|------------------------------------------------------|
@@ -207,13 +201,11 @@ Default arguments can be overridden by simply passing them as arguments in the C

 ## Overriding default config file

-You can override the `default.yaml` config file entirely by passing a new file with the `cfg` argument,
-i.e. `cfg=custom.yaml`.
+You can override the `default.yaml` config file entirely by passing a new file with the `cfg` argument, i.e. `cfg=custom.yaml`.

 To do this, first create a copy of `default.yaml` in your current working dir with the `yolo copy-cfg` command.

-This will create `default_copy.yaml`, which you can then pass as `cfg=default_copy.yaml` along with any additional args,
-like `imgsz=320` in this example:
+This will create `default_copy.yaml`, which you can then pass as `cfg=default_copy.yaml` along with any additional args, like `imgsz=320` in this example:

 !!! example ""
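The same override flow can be sketched from Python, assuming the `cfg` keyword is also accepted as a training override there (file names and values are illustrative):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# Assumption: `cfg` takes a full config file, mirroring `yolo cfg=default_copy.yaml imgsz=320` on the CLI
model.train(cfg="default_copy.yaml", data="coco128.yaml", imgsz=320)
```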
@@ -4,18 +4,14 @@ description: Discover how to customize and extend base Ultralytics YOLO Trainer
 keywords: Ultralytics, YOLO, trainer engines, BaseTrainer, DetectionTrainer, customizing trainers, extending trainers, custom model, custom dataloader
 ---

-Both the Ultralytics YOLO command-line and Python interfaces are simply a high-level abstraction on the base engine
-executors. Let's take a look at the Trainer engine.
+Both the Ultralytics YOLO command-line and Python interfaces are simply a high-level abstraction on the base engine executors. Let's take a look at the Trainer engine.

 ## BaseTrainer

-BaseTrainer contains the generic boilerplate training routine. It can be customized for any task based on overriding
-the required functions or operations, as long as the correct formats are followed. For example, you can support your own
-custom model and dataloader by just overriding these functions:
+BaseTrainer contains the generic boilerplate training routine. It can be customized for any task based on overriding the required functions or operations, as long as the correct formats are followed. For example, you can support your own custom model and dataloader by just overriding these functions:

 * `get_model(cfg, weights)` - The function that builds the model to be trained
-* `get_dataloader()` - The function that builds the dataloader
-More details and source code can be found in [`BaseTrainer` Reference](../reference/engine/trainer.md)
+* `get_dataloader()` - The function that builds the dataloader. More details and source code can be found in [`BaseTrainer` Reference](../reference/engine/trainer.md)

 ## DetectionTrainer
@@ -31,8 +27,7 @@ trained_model = trainer.best # get best model

 ### Customizing the DetectionTrainer

-Let's customize the trainer **to train a custom detection model** that is not supported directly. You can do this by
-simply overloading the existing `get_model` functionality:
+Let's customize the trainer **to train a custom detection model** that is not supported directly. You can do this by simply overloading the existing `get_model` functionality:

 ```python
 from ultralytics.models.yolo.detect import DetectionTrainer
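# A minimal sketch of the override described above; the class name, placeholder body
# and override values are illustrative.
class CustomTrainer(DetectionTrainer):
    def get_model(self, cfg=None, weights=None, verbose=True):
        """Build and return a custom detection model; load `weights` into it if given."""
        ...


trainer = CustomTrainer(overrides={"data": "coco128.yaml", "epochs": 3})
trainer.train()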
@@ -6,14 +6,9 @@ keywords: YOLOv8, Ultralytics, Python, object detection, segmentation, classific

 # Python Usage

-Welcome to the YOLOv8 Python Usage documentation! This guide is designed to help you seamlessly integrate YOLOv8 into
-your Python projects for object detection, segmentation, and classification. Here, you'll learn how to load and use
-pretrained models, train new models, and perform predictions on images. The easy-to-use Python interface is a valuable
-resource for anyone looking to incorporate YOLOv8 into their Python projects, allowing you to quickly implement advanced
-object detection capabilities. Let's get started!
+Welcome to the YOLOv8 Python Usage documentation! This guide is designed to help you seamlessly integrate YOLOv8 into your Python projects for object detection, segmentation, and classification. Here, you'll learn how to load and use pretrained models, train new models, and perform predictions on images. The easy-to-use Python interface is a valuable resource for anyone looking to incorporate YOLOv8 into their Python projects, allowing you to quickly implement advanced object detection capabilities. Let's get started!

-For example, users can load a model, train it, evaluate its performance on a validation set, and even export it to ONNX
-format with just a few lines of code.
+For example, users can load a model, train it, evaluate its performance on a validation set, and even export it to ONNX format with just a few lines of code.

 !!! example "Python"
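A minimal end-to-end sketch of that workflow (the model and dataset names are the standard small examples used throughout these docs):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # load a pretrained model
model.train(data="coco128.yaml", epochs=3)  # train on a small sample dataset
metrics = model.val()  # evaluate on the validation split
path = model.export(format="onnx")  # export for deployment
```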
@@ -41,9 +36,7 @@ format with just a few lines of code.

 ## [Train](../modes/train.md)

-Train mode is used for training a YOLOv8 model on a custom dataset. In this mode, the model is trained using the
-specified dataset and hyperparameters. The training process involves optimizing the model's parameters so that it can
-accurately predict the classes and locations of objects in an image.
+Train mode is used for training a YOLOv8 model on a custom dataset. In this mode, the model is trained using the specified dataset and hyperparameters. The training process involves optimizing the model's parameters so that it can accurately predict the classes and locations of objects in an image.

 !!! example "Train"
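A short sketch of the two common starting points, building from a YAML definition or fine-tuning pretrained weights (argument values are illustrative):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.yaml")  # build a new model from scratch
model = YOLO("yolov8n.pt")  # or load pretrained weights (recommended for fine-tuning)

results = model.train(data="coco128.yaml", epochs=100, imgsz=640)
```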
@@ -73,9 +66,7 @@ accurately predict the classes and locations of objects in an image.

 ## [Val](../modes/val.md)

-Val mode is used for validating a YOLOv8 model after it has been trained. In this mode, the model is evaluated on a
-validation set to measure its accuracy and generalization performance. This mode can be used to tune the hyperparameters
-of the model to improve its performance.
+Val mode is used for validating a YOLOv8 model after it has been trained. In this mode, the model is evaluated on a validation set to measure its accuracy and generalization performance. This mode can be used to tune the hyperparameters of the model to improve its performance.

 !!! example "Val"
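A hedged sketch of a validation call; the weights path is illustrative, and the printed attributes assume the detection metrics object exposes `map` and `map50`:

```python
from ultralytics import YOLO

model = YOLO("path/to/best.pt")  # weights saved from a previous training run
metrics = model.val()  # dataset and settings are remembered from training
print(metrics.box.map)  # mAP50-95
print(metrics.box.map50)  # mAP50
```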
@@ -103,9 +94,7 @@ of the model to improve its performance.

 ## [Predict](../modes/predict.md)

-Predict mode is used for making predictions using a trained YOLOv8 model on new images or videos. In this mode, the
-model is loaded from a checkpoint file, and the user can provide images or videos to perform inference. The model
-predicts the classes and locations of objects in the input images or videos.
+Predict mode is used for making predictions using a trained YOLOv8 model on new images or videos. In this mode, the model is loaded from a checkpoint file, and the user can provide images or videos to perform inference. The model predicts the classes and locations of objects in the input images or videos.

 !!! example "Predict"
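A minimal sketch of a prediction call and of reading the returned results (the source path is illustrative):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
results = model("path/to/image.jpg")  # also accepts video paths, URLs and streams

for r in results:
    print(r.boxes.xyxy)  # predicted box coordinates
    print(r.boxes.cls)  # predicted class indices
```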
@@ -173,9 +162,7 @@ predicts the classes and locations of objects in the input images or videos.

 ## [Export](../modes/export.md)

-Export mode is used for exporting a YOLOv8 model to a format that can be used for deployment. In this mode, the model is
-converted to a format that can be used by other software applications or hardware devices. This mode is useful when
-deploying the model to production environments.
+Export mode is used for exporting a YOLOv8 model to a format that can be used for deployment. In this mode, the model is converted to a format that can be used by other software applications or hardware devices. This mode is useful when deploying the model to production environments.

 !!! example "Export"
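A minimal export sketch; any value from the export formats table can be passed as `format`:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.export(format="onnx")  # e.g. "engine" for TensorRT or "coreml" for CoreML
```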
@@ -203,9 +190,7 @@ deploying the model to production environments.

 ## [Track](../modes/track.md)

-Track mode is used for tracking objects in real-time using a YOLOv8 model. In this mode, the model is loaded from a
-checkpoint file, and the user can provide a live video stream to perform real-time object tracking. This mode is useful
-for applications such as surveillance systems or self-driving cars.
+Track mode is used for tracking objects in real-time using a YOLOv8 model. In this mode, the model is loaded from a checkpoint file, and the user can provide a live video stream to perform real-time object tracking. This mode is useful for applications such as surveillance systems or self-driving cars.

 !!! example "Track"
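A minimal tracking sketch; the video path is illustrative, and the `tracker` value assumes the bundled ByteTrack settings file:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
results = model.track(source="path/to/video.mp4", show=True, tracker="bytetrack.yaml")
```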
@@ -228,11 +213,8 @@ for applications such as surveillance systems or self-driving cars.

 ## [Benchmark](../modes/benchmark.md)

-Benchmark mode is used to profile the speed and accuracy of various export formats for YOLOv8. The benchmarks provide
-information on the size of the exported format, its `mAP50-95` metrics (for object detection and segmentation)
-or `accuracy_top5` metrics (for classification), and the inference time in milliseconds per image across various export
-formats like ONNX, OpenVINO, TensorRT and others. This information can help users choose the optimal export format for
-their specific use case based on their requirements for speed and accuracy.
+Benchmark mode is used to profile the speed and accuracy of various export formats for YOLOv8. The benchmarks provide information on the size of the exported format, its `mAP50-95` metrics (for object detection and segmentation)
+or `accuracy_top5` metrics (for classification), and the inference time in milliseconds per image across various export formats like ONNX, OpenVINO, TensorRT and others. This information can help users choose the optimal export format for their specific use case based on their requirements for speed and accuracy.

 !!! example "Benchmark"
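A sketch of a benchmark run, assuming the helper lives at `ultralytics.utils.benchmarks` in this release (dataset and device values are illustrative):

```python
from ultralytics.utils.benchmarks import benchmark

# Profile YOLOv8n across export formats on GPU 0
benchmark(model="yolov8n.pt", data="coco8.yaml", imgsz=640, half=False, device=0)
```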
@@ -250,8 +232,7 @@ their specific use case based on their requirements for speed and accuracy.

 ## Using Trainers

-The `YOLO` model class is a high-level wrapper on the Trainer classes. Each YOLO task has its own trainer that inherits
-from `BaseTrainer`.
+The `YOLO` model class is a high-level wrapper on the Trainer classes. Each YOLO task has its own trainer that inherits from `BaseTrainer`.

 !!! tip "Detection Trainer Example"
@@ -276,8 +257,6 @@ from `BaseTrainer`.

 trainer = detect.DetectionTrainer(overrides=overrides)
 ```

-You can easily customize Trainers to support custom tasks or explore R&D ideas.
-Learn more about customizing `Trainers`, `Validators` and `Predictors` to suit your project needs in the Customization
-Section.
+You can easily customize Trainers to support custom tasks or explore R&D ideas. Learn more about customizing `Trainers`, `Validators` and `Predictors` to suit your project needs in the Customization Section.

 [Customization tutorials](engine.md){ .md-button .md-button--primary}