Add FAQ sections to Modes and Tasks (#14181)

Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
Co-authored-by: Abirami Vina <abirami.vina@gmail.com>
Co-authored-by: RizwanMunawar <chr043416@gmail.com>
Co-authored-by: Muhammad Rizwan Munawar <muhammadrizwanmunawar123@gmail.com>
Glenn Jocher committed 2024-07-04 17:16:16 +02:00 (via GitHub)
parent e285d3d1b2
commit 6c13bea7b8
39 changed files with 2247 additions and 481 deletions

@@ -179,3 +179,93 @@ Available YOLOv8-cls export formats are in the table below. You can export to an
| [NCNN](../integrations/ncnn.md) | `ncnn` | `yolov8n-cls_ncnn_model/` | ✅ | `imgsz`, `half`, `batch` |
See full `export` details in the [Export](../modes/export.md) page.
## FAQ
### What is the purpose of YOLOv8 in image classification?
YOLOv8 models, such as `yolov8n-cls.pt`, are designed for efficient image classification. They assign a single class label to an entire image along with a confidence score. This is particularly useful for applications where knowing the specific class of an image is sufficient, rather than identifying the location or shape of objects within the image.
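For example, a minimal prediction sketch (the image path below is a placeholder) that reads the top-1 class and its confidence from the results:

```python
from ultralytics import YOLO

# Load a pretrained classification model
model = YOLO("yolov8n-cls.pt")

# Classify an image (placeholder path)
results = model("image.jpg")

# Read the top-1 class name and its confidence score
probs = results[0].probs
print(results[0].names[probs.top1], probs.top1conf.item())
```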
### How do I train a YOLOv8 model for image classification?
To train a YOLOv8 model, you can use either Python or CLI commands. For example, to train a `yolov8n-cls` model on the MNIST160 dataset for 100 epochs at an image size of 64:
!!! Example

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolov8n-cls.pt")  # load a pretrained model (recommended for training)

        # Train the model
        results = model.train(data="mnist160", epochs=100, imgsz=64)
        ```

    === "CLI"

        ```bash
        yolo classify train data=mnist160 model=yolov8n-cls.pt epochs=100 imgsz=64
        ```
For more configuration options, visit the [Configuration](../usage/cfg.md) page.
### Where can I find pretrained YOLOv8 classification models?
Pretrained YOLOv8 classification models can be found in the [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models/v8) section. Models like `yolov8n-cls.pt`, `yolov8s-cls.pt`, `yolov8m-cls.pt`, etc., are pretrained on the [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/ImageNet.yaml) dataset and can be easily downloaded and used for various image classification tasks.
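As a quick sketch, passing one of these checkpoint names to `YOLO()` downloads the weights automatically if they are not already cached locally (the image path below is a placeholder):

```python
from ultralytics import YOLO

# The checkpoint is downloaded automatically on first use
model = YOLO("yolov8m-cls.pt")

# Classify an image with the pretrained model
results = model("image.jpg")  # placeholder image path
```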
### How can I export a trained YOLOv8 model to different formats?
You can export a trained YOLOv8 model to various formats using Python or CLI commands. For instance, to export a model to ONNX format:
!!! Example

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolov8n-cls.pt")  # load the trained model

        # Export the model to ONNX
        model.export(format="onnx")
        ```

    === "CLI"

        ```bash
        yolo export model=yolov8n-cls.pt format=onnx  # export the trained model to ONNX format
        ```
For detailed export options, refer to the [Export](../modes/export.md) page.
### How do I validate a trained YOLOv8 classification model?
To validate a trained model's accuracy on a dataset like MNIST160, you can use the following Python or CLI commands:
!!! Example

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolov8n-cls.pt")  # load the trained model

        # Validate the model
        metrics = model.val()  # no arguments needed, uses the dataset and settings from training
        metrics.top1  # top1 accuracy
        metrics.top5  # top5 accuracy
        ```

    === "CLI"

        ```bash
        yolo classify val model=yolov8n-cls.pt  # validate the trained model
        ```
For more information, visit the [Validate](#val) section.

@@ -180,3 +180,111 @@ Available YOLOv8 export formats are in the table below. You can export to any fo
| [NCNN](../integrations/ncnn.md) | `ncnn` | `yolov8n_ncnn_model/` | ✅ | `imgsz`, `half`, `batch` |
See full `export` details in the [Export](../modes/export.md) page.
## FAQ
### How do I train a YOLOv8 model on my custom dataset?
Training a YOLOv8 model on a custom dataset involves a few steps:
1. **Prepare the Dataset**: Ensure your dataset is in the YOLO format. For guidance, refer to our [Dataset Guide](../datasets/detect/index.md).
2. **Load the Model**: Use the Ultralytics YOLO library to load a pre-trained model or create a new model from a YAML file.
3. **Train the Model**: Execute the `train` method in Python or the `yolo detect train` command in CLI.
!!! Example

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a pretrained model
        model = YOLO("yolov8n.pt")

        # Train the model on your custom dataset
        model.train(data="my_custom_dataset.yaml", epochs=100, imgsz=640)
        ```

    === "CLI"

        ```bash
        yolo detect train data=my_custom_dataset.yaml model=yolov8n.pt epochs=100 imgsz=640
        ```
For detailed configuration options, visit the [Configuration](../usage/cfg.md) page.
### What pretrained models are available in YOLOv8?
Ultralytics YOLOv8 offers various pretrained models for object detection, segmentation, and pose estimation, pretrained on the COCO dataset; the classification variants are pretrained on ImageNet. Here are some of the available detection models:
- [YOLOv8n](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n.pt)
- [YOLOv8s](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s.pt)
- [YOLOv8m](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m.pt)
- [YOLOv8l](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8l.pt)
- [YOLOv8x](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x.pt)
For a detailed list and performance metrics, refer to the [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models/v8) section.
### How can I validate the accuracy of my trained YOLOv8 model?
To validate the accuracy of your trained YOLOv8 model, you can use the `.val()` method in Python or the `yolo detect val` command in CLI. This will provide metrics like mAP50-95, mAP50, and more.
!!! Example

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load the model
        model = YOLO("path/to/best.pt")

        # Validate the model
        metrics = model.val()
        print(metrics.box.map)  # mAP50-95
        ```

    === "CLI"

        ```bash
        yolo detect val model=path/to/best.pt
        ```
For more validation details, visit the [Val](../modes/val.md) page.
### What formats can I export a YOLOv8 model to?
Ultralytics YOLOv8 allows exporting models to various formats such as ONNX, TensorRT, CoreML, and more to ensure compatibility across different platforms and devices.
!!! Example

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load the model
        model = YOLO("yolov8n.pt")

        # Export the model to ONNX format
        model.export(format="onnx")
        ```

    === "CLI"

        ```bash
        yolo export model=yolov8n.pt format=onnx
        ```
Check the full list of supported formats and instructions on the [Export](../modes/export.md) page.
### Why should I use Ultralytics YOLOv8 for object detection?
Ultralytics YOLOv8 is designed to offer state-of-the-art performance for object detection, segmentation, and pose estimation. Here are some key advantages:
1. **Pretrained Models**: Utilize models pretrained on popular datasets like COCO and ImageNet for faster development.
2. **High Accuracy**: Achieves impressive mAP scores, ensuring reliable object detection.
3. **Speed**: Optimized for real-time inference, making it ideal for applications requiring swift processing.
4. **Flexibility**: Export models to various formats like ONNX and TensorRT for deployment across multiple platforms.
Explore our [Blog](https://www.ultralytics.com/blog) for use cases and success stories showcasing YOLOv8 in action.

@@ -55,3 +55,68 @@ Oriented object detection goes a step further than regular object detection with
## Conclusion
YOLOv8 supports multiple tasks, including detection, segmentation, classification, oriented object detection, and keypoint detection. Each of these tasks has different objectives and use cases. By understanding the differences between them, you can choose the appropriate task for your computer vision application.
## FAQ
### What tasks can Ultralytics YOLOv8 perform?
Ultralytics YOLOv8 is a versatile AI framework capable of performing various computer vision tasks with high accuracy and speed. These tasks include:
- **[Detection](detect.md):** Identifying and localizing objects in images or video frames by drawing bounding boxes around them.
- **[Segmentation](segment.md):** Segmenting images into different regions based on their content, useful for applications like medical imaging.
- **[Classification](classify.md):** Categorizing entire images based on their content, assigning a single class label with a confidence score.
- **[Pose estimation](pose.md):** Detecting specific keypoints in an image or video frame to track movements or poses.
- **[Oriented Object Detection (OBB)](obb.md):** Detecting rotated objects with an added orientation angle for enhanced accuracy.
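The task is selected by the checkpoint (or YAML) you load; as a rough sketch:

```python
from ultralytics import YOLO

# Each task has a dedicated model variant, chosen via the checkpoint suffix
detect_model = YOLO("yolov8n.pt")  # detection
segment_model = YOLO("yolov8n-seg.pt")  # segmentation
classify_model = YOLO("yolov8n-cls.pt")  # classification
pose_model = YOLO("yolov8n-pose.pt")  # pose estimation
obb_model = YOLO("yolov8n-obb.pt")  # oriented object detection

print(detect_model.task, segment_model.task)  # -> detect segment
```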
### How do I use Ultralytics YOLOv8 for object detection?
To use Ultralytics YOLOv8 for object detection, follow these steps:
1. Prepare your dataset in the appropriate format.
2. Train the YOLOv8 model using the detection task.
3. Use the model to make predictions by feeding in new images or video frames.
!!! Example

    === "Python"

        ```python
        from ultralytics import YOLO

        model = YOLO("yolov8n.pt")  # Load pretrained model
        results = model.predict(source="image.jpg")  # Perform object detection
        results[0].show()
        ```

    === "CLI"

        ```bash
        yolo detect predict model=yolov8n.pt source='image.jpg'
        ```
For more detailed instructions, check out our [detection examples](detect.md).
### What are the benefits of using YOLOv8 for segmentation tasks?
Using YOLOv8 for segmentation tasks provides several advantages:
1. **High Accuracy:** YOLOv8 segmentation models produce precise per-instance masks that closely follow object contours.
2. **Speed:** YOLOv8 is optimized for real-time applications, offering quick processing even for high-resolution images.
3. **Multiple Applications:** It is ideal for medical imaging, autonomous driving, and other applications requiring detailed image segmentation.
Learn more about the benefits and use cases of YOLOv8 for segmentation in the [segmentation section](segment.md).
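As a minimal sketch (the image path is a placeholder), per-instance masks can be read directly from the prediction results:

```python
from ultralytics import YOLO

# Load a pretrained segmentation model and run prediction
model = YOLO("yolov8n-seg.pt")
results = model("image.jpg")  # placeholder image path

# Masks are None when no objects are detected
masks = results[0].masks
if masks is not None:
    print(len(masks.xy))  # number of instances, each with a mask polygon
```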
### Can Ultralytics YOLOv8 handle pose estimation and keypoint detection?
Yes, Ultralytics YOLOv8 can effectively perform pose estimation and keypoint detection with high accuracy and speed. This feature is particularly useful for tracking movements in sports analytics, healthcare, and human-computer interaction applications. YOLOv8 detects keypoints in an image or video frame, allowing for precise pose estimation.
For more details and implementation tips, visit our [pose estimation examples](pose.md).
### Why should I choose Ultralytics YOLOv8 for oriented object detection (OBB)?
Oriented Object Detection (OBB) with YOLOv8 provides enhanced precision by detecting objects with an additional angle parameter. This feature is beneficial for applications requiring accurate localization of rotated objects, such as aerial imagery analysis and warehouse automation.
- **Increased Precision:** The angle component reduces false positives for rotated objects.
- **Versatile Applications:** Useful for tasks in geospatial analysis, robotics, etc.
Check out the [Oriented Object Detection section](obb.md) for more details and examples.

@@ -201,3 +201,91 @@ Available YOLOv8-obb export formats are in the table below. You can export to an
| [NCNN](../integrations/ncnn.md) | `ncnn` | `yolov8n-obb_ncnn_model/` | ✅ | `imgsz`, `half`, `batch` |
See full `export` details in the [Export](../modes/export.md) page.
## FAQ
### What are Oriented Bounding Boxes (OBB) and how do they differ from regular bounding boxes?
Oriented Bounding Boxes (OBB) include an additional angle to enhance object localization accuracy in images. Unlike regular bounding boxes, which are axis-aligned rectangles, OBBs can rotate to fit the orientation of the object better. This is particularly useful for applications requiring precise object placement, such as aerial or satellite imagery ([Dataset Guide](../datasets/obb/index.md)).
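To see the angle component in practice, here is a minimal prediction sketch (the image path is a placeholder):

```python
from ultralytics import YOLO

# Load a pretrained OBB model and run prediction
model = YOLO("yolov8n-obb.pt")
results = model("aerial.jpg")  # placeholder image path

# Each row is [x_center, y_center, width, height, rotation], angle in radians
print(results[0].obb.xywhr)
```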
### How do I train a YOLOv8n-obb model using a custom dataset?
To train a YOLOv8n-obb model with a custom dataset, follow the example below using Python or CLI:
!!! Example

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a pretrained model
        model = YOLO("yolov8n-obb.pt")

        # Train the model
        results = model.train(data="path/to/custom_dataset.yaml", epochs=100, imgsz=640)
        ```

    === "CLI"

        ```bash
        yolo obb train data=path/to/custom_dataset.yaml model=yolov8n-obb.pt epochs=100 imgsz=640
        ```
For more training arguments, check the [Configuration](../usage/cfg.md) section.
### What datasets can I use for training YOLOv8-OBB models?
YOLOv8-OBB models are pretrained on datasets like [DOTAv1](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/DOTAv1.yaml), but you can use any dataset formatted for OBB. Detailed information on OBB dataset formats can be found in the [Dataset Guide](../datasets/obb/index.md).
### How can I export a YOLOv8-OBB model to ONNX format?
Exporting a YOLOv8-OBB model to ONNX format is straightforward using either Python or CLI:
!!! Example

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolov8n-obb.pt")

        # Export the model
        model.export(format="onnx")
        ```

    === "CLI"

        ```bash
        yolo export model=yolov8n-obb.pt format=onnx
        ```
For more export formats and details, refer to the [Export](../modes/export.md) page.
### How do I validate the accuracy of a YOLOv8n-obb model?
To validate a YOLOv8n-obb model, you can use Python or CLI commands as shown below:
!!! Example

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolov8n-obb.pt")

        # Validate the model
        metrics = model.val(data="dota8.yaml")
        ```

    === "CLI"

        ```bash
        yolo obb val model=yolov8n-obb.pt data=dota8.yaml
        ```
See full validation details in the [Val](../modes/val.md) section.

@@ -195,3 +195,64 @@ Available YOLOv8-pose export formats are in the table below. You can export to a
| [NCNN](../integrations/ncnn.md) | `ncnn` | `yolov8n-pose_ncnn_model/` | ✅ | `imgsz`, `half`, `batch` |
See full `export` details in the [Export](../modes/export.md) page.
## FAQ
### What is Pose Estimation with Ultralytics YOLOv8 and how does it work?
Pose estimation with Ultralytics YOLOv8 involves identifying specific points, known as keypoints, in an image. These keypoints typically represent joints or other important features of the object. The output includes the `[x, y]` coordinates and confidence scores for each point. YOLOv8-pose models are specifically designed for this task and use the `-pose` suffix, such as `yolov8n-pose.pt`. These models are pretrained on datasets like [COCO keypoints](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco-pose.yaml) and can be used for various pose estimation tasks. For more information, visit the [Pose Estimation Page](#pose-estimation).
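As a quick sketch (the image path is a placeholder), the `[x, y]` coordinates and confidence scores can be read from the prediction results:

```python
from ultralytics import YOLO

# Load a pretrained pose model and run prediction
model = YOLO("yolov8n-pose.pt")
results = model("image.jpg")  # placeholder image path

# Keypoints hold per-person [x, y] coordinates and per-keypoint confidences
keypoints = results[0].keypoints
print(keypoints.xy)
print(keypoints.conf)
```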
### How can I train a YOLOv8-pose model on a custom dataset?
Training a YOLOv8-pose model on a custom dataset involves loading a model, either a new model defined by a YAML file or a pretrained model. You can then start the training process using your specified dataset and parameters.
```python
from ultralytics import YOLO

# Load a model
model = YOLO("yolov8n-pose.yaml")  # build a new model from YAML
model = YOLO("yolov8n-pose.pt")  # load a pretrained model (recommended for training)

# Train the model
results = model.train(data="your-dataset.yaml", epochs=100, imgsz=640)
```
For comprehensive details on training, refer to the [Train Section](#train).
### How do I validate a trained YOLOv8-pose model?
Validating a YOLOv8-pose model involves assessing its accuracy using the dataset and settings retained from training. Here's an example:
```python
from ultralytics import YOLO

# Load a model
model = YOLO("yolov8n-pose.pt")  # load an official model
model = YOLO("path/to/best.pt")  # load a custom model

# Validate the model
metrics = model.val()  # no arguments needed, dataset and settings remembered
```
For more information, visit the [Val Section](#val).
### Can I export a YOLOv8-pose model to other formats, and how?
Yes, you can export a YOLOv8-pose model to various formats like ONNX, CoreML, TensorRT, and more. This can be done using either Python or the Command Line Interface (CLI).
```python
from ultralytics import YOLO

# Load a model
model = YOLO("yolov8n-pose.pt")  # load an official model
model = YOLO("path/to/best.pt")  # load a custom trained model

# Export the model
model.export(format="onnx")
```
Refer to the [Export Section](#export) for more details.
### What are the available Ultralytics YOLOv8-pose models and their performance metrics?
Ultralytics YOLOv8 offers various pretrained pose models such as YOLOv8n-pose, YOLOv8s-pose, and YOLOv8m-pose, among others. These models differ in size, accuracy (mAP), and speed. For instance, the YOLOv8n-pose model achieves an mAP<sup>pose</sup>50-95 of 50.4 and an mAP<sup>pose</sup>50 of 80.1. For a complete list and performance details, visit the [Models Section](#models).

@@ -185,3 +185,93 @@ Available YOLOv8-seg export formats are in the table below. You can export to an
| [NCNN](../integrations/ncnn.md) | `ncnn` | `yolov8n-seg_ncnn_model/` | ✅ | `imgsz`, `half`, `batch` |
See full `export` details in the [Export](../modes/export.md) page.
## FAQ
### How do I train a YOLOv8 segmentation model on a custom dataset?
To train a YOLOv8 segmentation model on a custom dataset, you first need to prepare your dataset in the YOLO segmentation format. You can use tools like [JSON2YOLO](https://github.com/ultralytics/JSON2YOLO) to convert datasets from other formats. Once your dataset is ready, you can train the model using Python or CLI commands:
!!! Example

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a pretrained YOLOv8 segment model
        model = YOLO("yolov8n-seg.pt")

        # Train the model
        results = model.train(data="path/to/your_dataset.yaml", epochs=100, imgsz=640)
        ```

    === "CLI"

        ```bash
        yolo segment train data=path/to/your_dataset.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
        ```
Check the [Configuration](../usage/cfg.md) page for more available arguments.
### What is the difference between object detection and instance segmentation in YOLOv8?
Object detection identifies and localizes objects within an image by drawing bounding boxes around them, whereas instance segmentation not only identifies the bounding boxes but also delineates the exact shape of each object. YOLOv8 instance segmentation models provide masks or contours that outline each detected object, which is particularly useful for tasks where knowing the precise shape of objects is important, such as medical imaging or autonomous driving.
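The difference shows up directly in the results objects; as a rough sketch (the image path is a placeholder):

```python
from ultralytics import YOLO

# A segmentation model returns boxes (as in detection) plus per-instance masks
model = YOLO("yolov8n-seg.pt")
results = model("image.jpg")  # placeholder image path

print(results[0].boxes.xyxy)  # bounding boxes, also produced by detection models
if results[0].masks is not None:
    print(results[0].masks.xy)  # mask polygons, produced only by segmentation models
```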
### Why use YOLOv8 for instance segmentation?
Ultralytics YOLOv8 is a state-of-the-art model recognized for its high accuracy and real-time performance, making it ideal for instance segmentation tasks. YOLOv8 Segment models come pretrained on the [COCO dataset](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml), ensuring robust performance across a variety of objects. Additionally, YOLOv8 supports training, validation, prediction, and export functionalities with seamless integration, making it highly versatile for both research and industry applications.
### How do I load and validate a pretrained YOLOv8 segmentation model?
Loading and validating a pretrained YOLOv8 segmentation model is straightforward. Here's how you can do it using both Python and CLI:
!!! Example

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a pretrained model
        model = YOLO("yolov8n-seg.pt")

        # Validate the model
        metrics = model.val()
        print("Mean Average Precision for boxes:", metrics.box.map)
        print("Mean Average Precision for masks:", metrics.seg.map)
        ```

    === "CLI"

        ```bash
        yolo segment val model=yolov8n-seg.pt
        ```
These steps will provide you with validation metrics like Mean Average Precision (mAP), crucial for assessing model performance.
### How can I export a YOLOv8 segmentation model to ONNX format?
Exporting a YOLOv8 segmentation model to ONNX format is simple and can be done using Python or CLI commands:
!!! Example

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a pretrained model
        model = YOLO("yolov8n-seg.pt")

        # Export the model to ONNX format
        model.export(format="onnx")
        ```

    === "CLI"

        ```bash
        yolo export model=yolov8n-seg.pt format=onnx
        ```
For more details on exporting to various formats, refer to the [Export](../modes/export.md) page.