Add FAQ sections to Modes and Tasks (#14181)

Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
Co-authored-by: Abirami Vina <abirami.vina@gmail.com>
Co-authored-by: RizwanMunawar <chr043416@gmail.com>
Co-authored-by: Muhammad Rizwan Munawar <muhammadrizwanmunawar123@gmail.com>

parent e285d3d1b2
commit 6c13bea7b8

39 changed files with 2247 additions and 481 deletions
@@ -104,3 +104,70 @@ Benchmarks will attempt to run automatically on all possible export formats belo

| [NCNN](../integrations/ncnn.md) | `ncnn` | `yolov8n_ncnn_model/` | ✅ | `imgsz`, `half`, `batch` |

See full `export` details in the [Export](../modes/export.md) page.

## FAQ

### How do I benchmark my YOLOv8 model's performance using Ultralytics?

Ultralytics YOLOv8 offers a Benchmark mode to assess your model's performance across different export formats. This mode provides insights into key metrics such as mean Average Precision (mAP50-95), accuracy, and inference time in milliseconds. To run benchmarks, you can use either Python or CLI commands. For example, to benchmark on a GPU:

!!! Example

    === "Python"

        ```python
        from ultralytics.utils.benchmarks import benchmark

        # Benchmark on GPU
        benchmark(model="yolov8n.pt", data="coco8.yaml", imgsz=640, half=False, device=0)
        ```

    === "CLI"

        ```bash
        yolo benchmark model=yolov8n.pt data='coco8.yaml' imgsz=640 half=False device=0
        ```

For more details on benchmark arguments, visit the [Arguments](#arguments) section.

### What are the benefits of exporting YOLOv8 models to different formats?

Exporting YOLOv8 models to different formats such as ONNX, TensorRT, and OpenVINO allows you to optimize performance based on your deployment environment. For instance:

- **ONNX:** Provides up to 3x CPU speedup.
- **TensorRT:** Offers up to 5x GPU speedup.
- **OpenVINO:** Specifically optimized for Intel hardware.

These formats enhance both the speed and accuracy of your models, making them more efficient for various real-world applications. Visit the [Export](../modes/export.md) page for complete details.
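
A minimal sketch of what this looks like in practice, assuming a local `yolov8n.pt` checkpoint; `"onnx"`, `"engine"` (TensorRT), and `"openvino"` are the standard Ultralytics export format keys:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Export the same model to several formats to compare deployment options
model.export(format="onnx")  # ONNX for CPU speedup
model.export(format="engine")  # TensorRT for GPU speedup (requires an NVIDIA GPU)
model.export(format="openvino")  # OpenVINO for Intel hardware
```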

### Why is benchmarking crucial in evaluating YOLOv8 models?

Benchmarking your YOLOv8 models is essential for several reasons:

- **Informed Decisions:** Understand the trade-offs between speed and accuracy.
- **Resource Allocation:** Gauge performance across different hardware options.
- **Optimization:** Determine which export format offers the best performance for specific use cases.
- **Cost Efficiency:** Optimize hardware usage based on benchmark results.

Key metrics such as mAP50-95, Top-5 accuracy, and inference time help in making these evaluations. Refer to the [Key Metrics](#key-metrics-in-benchmark-mode) section for more information.

### Which export formats are supported by YOLOv8, and what are their advantages?

YOLOv8 supports a variety of export formats, each tailored for specific hardware and use cases:

- **ONNX:** Best for CPU performance.
- **TensorRT:** Ideal for GPU efficiency.
- **OpenVINO:** Optimized for Intel hardware.
- **CoreML & TensorFlow:** Useful for iOS and general ML applications.

For a complete list of supported formats and their respective advantages, check out the [Supported Export Formats](#supported-export-formats) section.

### What arguments can I use to fine-tune my YOLOv8 benchmarks?

When running benchmarks, several arguments can be customized to suit specific needs:

- **model:** Path to the model file (e.g., `yolov8n.pt`).
- **data:** Path to a YAML file defining the dataset (e.g., `coco8.yaml`).
- **imgsz:** The input image size, either as a single integer or a tuple.
- **half:** Enable FP16 inference for better performance.
- **int8:** Activate INT8 quantization for edge devices.
- **device:** Specify the computation device (e.g., `"cpu"`, `"cuda:0"`).
- **verbose:** Control the level of logging detail.

For a full list of arguments, refer to the [Arguments](#arguments) section.
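
As a concrete illustration, here is a sketch combining these arguments in a single call; the values are illustrative, not recommendations:

```python
from ultralytics.utils.benchmarks import benchmark

# Illustrative combination of the arguments listed above
benchmark(
    model="yolov8n.pt",  # path to the model file
    data="coco8.yaml",  # dataset definition YAML
    imgsz=640,  # input image size
    half=True,  # FP16 inference
    int8=False,  # INT8 quantization disabled
    device="cpu",  # computation device
    verbose=False,  # minimal logging
)
```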

@@ -110,3 +110,103 @@ Available YOLOv8 export formats are in the table below. You can export to any fo

| [TF.js](../integrations/tfjs.md) | `tfjs` | `yolov8n_web_model/` | ✅ | `imgsz`, `half`, `int8`, `batch` |
| [PaddlePaddle](../integrations/paddlepaddle.md) | `paddle` | `yolov8n_paddle_model/` | ✅ | `imgsz`, `batch` |
| [NCNN](../integrations/ncnn.md) | `ncnn` | `yolov8n_ncnn_model/` | ✅ | `imgsz`, `half`, `batch` |

## FAQ

### How do I export a YOLOv8 model to ONNX format?

Exporting a YOLOv8 model to ONNX format is straightforward with Ultralytics. It provides both Python and CLI methods for exporting models.

!!! Example

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolov8n.pt")  # load an official model
        model = YOLO("path/to/best.pt")  # load a custom trained model

        # Export the model
        model.export(format="onnx")
        ```

    === "CLI"

        ```bash
        yolo export model=yolov8n.pt format=onnx  # export official model
        yolo export model=path/to/best.pt format=onnx  # export custom trained model
        ```

For more details on the process, including advanced options like handling different input sizes, refer to the [ONNX](../integrations/onnx.md) section.

### What are the benefits of using TensorRT for model export?

Using TensorRT for model export offers significant performance improvements. YOLOv8 models exported to TensorRT can achieve up to a 5x GPU speedup, making it ideal for real-time inference applications.

- **Versatility:** Optimize models for a specific hardware setup.
- **Speed:** Achieve faster inference through advanced optimizations.
- **Compatibility:** Integrate smoothly with NVIDIA hardware.

To learn more about integrating TensorRT, see the [TensorRT](../integrations/tensorrt.md) integration guide.
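
As a quick sketch, TensorRT export uses the `engine` format key (this assumes an NVIDIA GPU with TensorRT installed):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Export to a TensorRT engine; half=True adds FP16 for an extra speedup
model.export(format="engine", half=True)
```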

### How do I enable INT8 quantization when exporting my YOLOv8 model?

INT8 quantization is an excellent way to compress the model and speed up inference, especially on edge devices. Here's how you can enable INT8 quantization:

!!! Example

    === "Python"

        ```python
        from ultralytics import YOLO

        model = YOLO("yolov8n.pt")  # Load a model
        model.export(format="onnx", int8=True)
        ```

    === "CLI"

        ```bash
        yolo export model=yolov8n.pt format=onnx int8=True  # export model with INT8 quantization
        ```

INT8 quantization can be applied to various formats, such as TensorRT and CoreML. More details can be found in the [Export](../modes/export.md) section.

### Why is dynamic input size important when exporting models?

Dynamic input size allows the exported model to handle varying image dimensions, providing flexibility and optimizing processing efficiency for different use cases. When exporting to formats like ONNX or TensorRT, enabling dynamic input size ensures that the model can adapt to different input shapes seamlessly.

To enable this feature, use the `dynamic=True` flag during export:

!!! Example

    === "Python"

        ```python
        from ultralytics import YOLO

        model = YOLO("yolov8n.pt")
        model.export(format="onnx", dynamic=True)
        ```

    === "CLI"

        ```bash
        yolo export model=yolov8n.pt format=onnx dynamic=True
        ```

For additional context, refer to the [dynamic input size configuration](#arguments).

### What are the key export arguments to consider for optimizing model performance?

Understanding and configuring export arguments is crucial for optimizing model performance:

- **`format`:** The target format for the exported model (e.g., `onnx`, `torchscript`, `tensorflow`).
- **`imgsz`:** Desired image size for the model input (e.g., `640` or `(height, width)`).
- **`half`:** Enables FP16 quantization, reducing model size and potentially speeding up inference.
- **`optimize`:** Applies specific optimizations for mobile or constrained environments.
- **`int8`:** Enables INT8 quantization, highly beneficial for edge deployments.

For a detailed list and explanations of all the export arguments, visit the [Export Arguments](#arguments) section.
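
For instance, a sketch combining several of these arguments in one export call (values are illustrative):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Export to ONNX at 640x640 with FP16 enabled
model.export(format="onnx", imgsz=640, half=True)
```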

@@ -71,3 +71,130 @@ Track mode is used for tracking objects in real-time using a YOLOv8 model. In th

Benchmark mode is used to profile the speed and accuracy of various export formats for YOLOv8. The benchmarks provide information on the size of the exported format, its `mAP50-95` metrics (for object detection, segmentation and pose) or `accuracy_top5` metrics (for classification), and the inference time in milliseconds per image across various export formats like ONNX, OpenVINO, TensorRT and others. This information can help users choose the optimal export format for their specific use case based on their requirements for speed and accuracy.

[Benchmark Examples](benchmark.md){ .md-button }

## FAQ

### How do I train a custom object detection model with Ultralytics YOLOv8?

Training a custom object detection model with Ultralytics YOLOv8 involves using the train mode. You need a dataset formatted in YOLO format, containing images and corresponding annotation files. Use the following command to start the training process:

!!! Example

    === "Python"

        ```python
        from ultralytics import YOLO

        # Train a custom model
        model = YOLO("yolov8n.pt")
        model.train(data="path/to/dataset.yaml", epochs=100, imgsz=640)
        ```

    === "CLI"

        ```bash
        yolo train model=yolov8n.pt data=path/to/dataset.yaml epochs=100 imgsz=640
        ```

For more detailed instructions, you can refer to the [Ultralytics Train Guide](../modes/train.md).

### What metrics does Ultralytics YOLOv8 use to validate the model's performance?

Ultralytics YOLOv8 uses various metrics during the validation process to assess model performance. These include:

- **mAP (mean Average Precision):** Evaluates the accuracy of object detection.
- **IoU (Intersection over Union):** Measures the overlap between predicted and ground-truth bounding boxes.
- **Precision and Recall:** Precision measures the ratio of true positive detections to the total detected positives, while recall measures the ratio of true positive detections to the total actual positives.

You can run the following command to start the validation:

!!! Example

    === "Python"

        ```python
        from ultralytics import YOLO

        # Validate the model
        model = YOLO("yolov8n.pt")
        model.val(data="path/to/validation.yaml")
        ```

    === "CLI"

        ```bash
        yolo val model=yolov8n.pt data=path/to/validation.yaml
        ```

Refer to the [Validation Guide](../modes/val.md) for further details.

### How can I export my YOLOv8 model for deployment?

Ultralytics YOLOv8 offers export functionality to convert your trained model into various deployment formats such as ONNX, TensorRT, CoreML, and more. Use the following example to export your model:

!!! Example

    === "Python"

        ```python
        from ultralytics import YOLO

        # Export the model
        model = YOLO("yolov8n.pt")
        model.export(format="onnx")
        ```

    === "CLI"

        ```bash
        yolo export model=yolov8n.pt format=onnx
        ```

Detailed steps for each export format can be found in the [Export Guide](../modes/export.md).

### What is the purpose of the benchmark mode in Ultralytics YOLOv8?

Benchmark mode in Ultralytics YOLOv8 is used to analyze the speed and accuracy of various export formats such as ONNX, TensorRT, and OpenVINO. It provides metrics like model size, `mAP50-95` for object detection, and inference time across different hardware setups, helping you choose the most suitable format for your deployment needs.

!!! Example

    === "Python"

        ```python
        from ultralytics.utils.benchmarks import benchmark

        # Benchmark on GPU
        benchmark(model="yolov8n.pt", data="coco8.yaml", imgsz=640, half=False, device=0)
        ```

    === "CLI"

        ```bash
        yolo benchmark model=yolov8n.pt data='coco8.yaml' imgsz=640 half=False device=0
        ```

For more details, refer to the [Benchmark Guide](../modes/benchmark.md).

### How can I perform real-time object tracking using Ultralytics YOLOv8?

Real-time object tracking can be achieved using the track mode in Ultralytics YOLOv8. This mode extends object detection capabilities to track objects across video frames or live feeds. Use the following example to enable tracking:

!!! Example

    === "Python"

        ```python
        from ultralytics import YOLO

        # Track objects in a video
        model = YOLO("yolov8n.pt")
        model.track(source="path/to/video.mp4")
        ```

    === "CLI"

        ```bash
        yolo track model=yolov8n.pt source=path/to/video.mp4
        ```

For in-depth instructions, visit the [Track Guide](../modes/track.md).

@@ -800,3 +800,25 @@ This script will run predictions on each frame of the video, visualize the resul

[car spare parts]: https://github.com/RizwanMunawar/ultralytics/assets/62513924/a0f802a8-0776-44cf-8f17-93974a4a28a1
[football player detect]: https://github.com/RizwanMunawar/ultralytics/assets/62513924/7d320e1f-fc57-4d7f-a691-78ee579c3442
[human fall detect]: https://github.com/RizwanMunawar/ultralytics/assets/62513924/86437c4a-3227-4eee-90ef-9efb697bdb43

## FAQ

### What is Ultralytics YOLOv8 and its predict mode for real-time inference?

Ultralytics YOLOv8 is a state-of-the-art model for real-time object detection, segmentation, and classification. Its **predict mode** allows users to perform high-speed inference on various data sources such as images, videos, and live streams. Designed for performance and versatility, it also offers batch processing and streaming modes. For more details on its features, check out the [Ultralytics YOLOv8 predict mode](#key-features-of-predict-mode).

### How can I run inference using Ultralytics YOLOv8 on different data sources?

Ultralytics YOLOv8 can process a wide range of data sources, including individual images, videos, directories, URLs, and streams. You can specify the data source in the `model.predict()` call. For example, use `'image.jpg'` for a local image or `'https://ultralytics.com/images/bus.jpg'` for a URL. Check out the detailed examples for various [inference sources](#inference-sources) in the documentation.
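
A brief sketch of these two cases, assuming a local `image.jpg` exists:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Local image file
results = model.predict(source="image.jpg")

# Remote image URL
results = model.predict(source="https://ultralytics.com/images/bus.jpg")
```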

### How do I optimize YOLOv8 inference speed and memory usage?

To optimize inference speed and manage memory efficiently, you can use the streaming mode by setting `stream=True` in the predictor's call method. The streaming mode generates a memory-efficient generator of `Results` objects instead of loading all frames into memory. For processing long videos or large datasets, streaming mode is particularly useful. Learn more about [streaming mode](#key-features-of-predict-mode).
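
For example, a minimal streaming loop over a video (the path is a placeholder):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# stream=True yields Results one frame at a time instead of holding all frames in memory
for result in model.predict(source="path/to/video.mp4", stream=True):
    boxes = result.boxes  # process detections for this frame, then discard it
```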

### What inference arguments does Ultralytics YOLOv8 support?

The `model.predict()` method in YOLOv8 supports various arguments such as `conf`, `iou`, `imgsz`, `device`, and more. These arguments allow you to customize the inference process, setting parameters like confidence thresholds, image size, and the device used for computation. Detailed descriptions of these arguments can be found in the [inference arguments](#inference-arguments) section.
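
A sketch with illustrative values for the arguments named above:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

results = model.predict(
    source="image.jpg",
    conf=0.25,  # confidence threshold
    iou=0.7,  # NMS IoU threshold
    imgsz=640,  # inference image size
    device="cpu",  # computation device
)
```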

### How can I visualize and save the results of YOLOv8 predictions?

After running inference with YOLOv8, the `Results` objects contain methods for displaying and saving annotated images. You can use methods like `result.show()` and `result.save(filename="result.jpg")` to visualize and save the results. For a comprehensive list of these methods, refer to the [working with results](#working-with-results) section.
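
Putting those two methods together in a short sketch:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
results = model.predict(source="image.jpg")

for result in results:
    result.show()  # display the annotated image
    result.save(filename="result.jpg")  # save the annotated image to disk
```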

@@ -367,3 +367,130 @@ Together, let's enhance the tracking capabilities of the Ultralytics YOLO ecosys

[fish track]: https://github.com/RizwanMunawar/ultralytics/assets/62513924/a5146d0f-bfa8-4e0a-b7df-3c1446cd8142
[people track]: https://github.com/RizwanMunawar/ultralytics/assets/62513924/93bb4ee2-77a0-4e4e-8eb6-eb8f527f0527
[vehicle track]: https://github.com/RizwanMunawar/ultralytics/assets/62513924/ee6e6038-383b-4f21-ac29-b2a1c7d386ab

## FAQ

### What is Multi-Object Tracking and how does Ultralytics YOLO support it?

Multi-object tracking in video analytics involves both identifying objects and maintaining a unique ID for each detected object across video frames. Ultralytics YOLO supports this by providing real-time tracking along with object IDs, facilitating tasks such as security surveillance and sports analytics. The system uses trackers like BoT-SORT and ByteTrack, which can be configured via YAML files.
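
As a quick sketch, selecting ByteTrack explicitly looks like this (BoT-SORT is the default; the video path is a placeholder):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Track with ByteTrack instead of the default BoT-SORT
results = model.track(source="path/to/video.mp4", tracker="bytetrack.yaml")
```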

### How do I configure a custom tracker for Ultralytics YOLO?

You can configure a custom tracker by copying an existing tracker configuration file from the [Ultralytics tracker configuration directory](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/trackers), saving it under a new name (e.g., `custom_tracker.yaml`), and modifying parameters as needed, except for the `tracker_type`. Use this file in your tracking model like so:

!!! Example

    === "Python"

        ```python
        from ultralytics import YOLO

        model = YOLO("yolov8n.pt")
        results = model.track(source="https://youtu.be/LNwODJXcvt4", tracker="custom_tracker.yaml")
        ```

    === "CLI"

        ```bash
        yolo track model=yolov8n.pt source="https://youtu.be/LNwODJXcvt4" tracker='custom_tracker.yaml'
        ```

### How can I run object tracking on multiple video streams simultaneously?

To run object tracking on multiple video streams simultaneously, you can use Python's `threading` module. Each thread will handle a separate video stream. Here's an example of how you can set this up:

!!! Example "Multithreaded Tracking"

    ```python
    import threading

    import cv2

    from ultralytics import YOLO


    def run_tracker_in_thread(filename, model, file_index):
        """Read frames from a video source, track objects, and display the annotated results."""
        video = cv2.VideoCapture(filename)
        while True:
            ret, frame = video.read()
            if not ret:
                break
            results = model.track(frame, persist=True)  # persist=True keeps track IDs across frames
            res_plotted = results[0].plot()
            cv2.imshow(f"Tracking_Stream_{file_index}", res_plotted)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
        video.release()


    model1 = YOLO("yolov8n.pt")
    model2 = YOLO("yolov8n-seg.pt")
    video_file1 = "path/to/video1.mp4"
    video_file2 = 0  # Path to a second video file, or 0 for a webcam

    tracker_thread1 = threading.Thread(target=run_tracker_in_thread, args=(video_file1, model1, 1), daemon=True)
    tracker_thread2 = threading.Thread(target=run_tracker_in_thread, args=(video_file2, model2, 2), daemon=True)

    tracker_thread1.start()
    tracker_thread2.start()

    tracker_thread1.join()
    tracker_thread2.join()

    cv2.destroyAllWindows()
    ```

### What are the real-world applications of multi-object tracking with Ultralytics YOLO?

Multi-object tracking with Ultralytics YOLO has numerous applications, including:

- **Transportation:** Vehicle tracking for traffic management and autonomous driving.
- **Retail:** People tracking for in-store analytics and security.
- **Aquaculture:** Fish tracking for monitoring aquatic environments.

These applications benefit from Ultralytics YOLO's ability to process high-frame-rate videos in real time.

### How can I visualize object tracks over multiple video frames with Ultralytics YOLO?

To visualize object tracks over multiple video frames, you can use the YOLO model's tracking features along with OpenCV to draw the paths of detected objects. Here's an example script that demonstrates this:

!!! Example "Plotting tracks over multiple video frames"

    ```python
    from collections import defaultdict

    import cv2
    import numpy as np

    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")
    video_path = "path/to/video.mp4"
    cap = cv2.VideoCapture(video_path)
    track_history = defaultdict(lambda: [])  # track ID -> list of (x, y) center points

    while cap.isOpened():
        success, frame = cap.read()
        if success:
            results = model.track(frame, persist=True)
            boxes = results[0].boxes.xywh.cpu()
            track_ids = results[0].boxes.id.int().cpu().tolist()
            annotated_frame = results[0].plot()
            for box, track_id in zip(boxes, track_ids):
                x, y, w, h = box
                track = track_history[track_id]
                track.append((float(x), float(y)))  # store the box center
                if len(track) > 30:  # keep only the last 30 points
                    track.pop(0)
                points = np.hstack(track).astype(np.int32).reshape((-1, 1, 2))
                cv2.polylines(annotated_frame, [points], isClosed=False, color=(230, 230, 230), thickness=10)
            cv2.imshow("YOLOv8 Tracking", annotated_frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
        else:
            break
    cap.release()
    cv2.destroyAllWindows()
    ```

This script will plot the tracking lines showing the movement paths of the tracked objects over time.

@@ -336,3 +336,110 @@ To use TensorBoard locally run the below command and view results at http://loca

This will load TensorBoard and direct it to the directory where your training logs are saved.

After setting up your logger, you can then proceed with your model training. All training metrics will be automatically logged in your chosen platform, and you can access these logs to monitor your model's performance over time, compare different models, and identify areas for improvement.

## FAQ

### How do I train an object detection model using Ultralytics YOLOv8?

To train an object detection model using Ultralytics YOLOv8, you can either use the Python API or the CLI. Below is an example for both:

!!! Example "Single-GPU and CPU Training Example"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolov8n.pt")  # load a pretrained model (recommended for training)

        # Train the model
        results = model.train(data="coco8.yaml", epochs=100, imgsz=640)
        ```

    === "CLI"

        ```bash
        yolo detect train data=coco8.yaml model=yolov8n.pt epochs=100 imgsz=640
        ```

For more details, refer to the [Train Settings](#train-settings) section.

### What are the key features of Ultralytics YOLOv8's Train mode?

The key features of Ultralytics YOLOv8's Train mode include:

- **Automatic Dataset Download:** Automatically downloads standard datasets like COCO, VOC, and ImageNet.
- **Multi-GPU Support:** Scale training across multiple GPUs for faster processing.
- **Hyperparameter Configuration:** Customize hyperparameters through YAML files or CLI arguments.
- **Visualization and Monitoring:** Real-time tracking of training metrics for better insights.

These features make training efficient and customizable to your needs. For more details, see the [Key Features of Train Mode](#key-features-of-train-mode) section.

### How do I resume training from an interrupted session in Ultralytics YOLOv8?

To resume training from an interrupted session, set the `resume` argument to `True` and specify the path to the last saved checkpoint.

!!! Example "Resume Training Example"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load the partially trained model
        model = YOLO("path/to/last.pt")

        # Resume training
        results = model.train(resume=True)
        ```

    === "CLI"

        ```bash
        yolo train resume model=path/to/last.pt
        ```

Check the section on [Resuming Interrupted Trainings](#resuming-interrupted-trainings) for more information.

### Can I train YOLOv8 models on Apple M1 and M2 chips?

Yes, Ultralytics YOLOv8 supports training on Apple M1 and M2 chips using the Metal Performance Shaders (MPS) framework. Specify `mps` as your training device.

!!! Example "MPS Training Example"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a pretrained model
        model = YOLO("yolov8n.pt")

        # Train the model on M1/M2 chip
        results = model.train(data="coco8.yaml", epochs=100, imgsz=640, device="mps")
        ```

    === "CLI"

        ```bash
        yolo detect train data=coco8.yaml model=yolov8n.pt epochs=100 imgsz=640 device=mps
        ```

For more details, refer to the [Apple M1 and M2 MPS Training](#apple-m1-and-m2-mps-training) section.

### What are the common training settings, and how do I configure them?

Ultralytics YOLOv8 allows you to configure a variety of training settings such as batch size, learning rate, epochs, and more through arguments. Here's a brief overview:

| Argument | Default | Description                                                             |
| -------- | ------- | ----------------------------------------------------------------------- |
| `model`  | `None`  | Path to the model file for training.                                    |
| `data`   | `None`  | Path to the dataset configuration file (e.g., `coco8.yaml`).            |
| `epochs` | `100`   | Total number of training epochs.                                        |
| `batch`  | `16`    | Batch size, adjustable as an integer or in auto mode.                   |
| `imgsz`  | `640`   | Target image size for training.                                         |
| `device` | `None`  | Computational device(s) for training, like `cpu`, `0`, `0,1`, or `mps`. |
| `save`   | `True`  | Enables saving of training checkpoints and final model weights.         |

For an in-depth guide on training settings, check the [Train Settings](#train-settings) section.
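
For example, a sketch applying several of these settings in one training call (values are illustrative):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

results = model.train(
    data="coco8.yaml",  # dataset configuration
    epochs=100,  # total training epochs
    batch=16,  # batch size
    imgsz=640,  # training image size
    device=0,  # first CUDA GPU
    save=True,  # save checkpoints and final weights
)
```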

@@ -121,3 +121,108 @@ The below examples showcase YOLO model validation with custom arguments in Pytho

```bash
yolo val model=yolov8n.pt data=coco8.yaml imgsz=640 batch=16 conf=0.25 iou=0.6 device=0
```

## FAQ

### How do I validate my YOLOv8 model with Ultralytics?

To validate your YOLOv8 model, you can use the Val mode provided by Ultralytics. For example, using the Python API, you can load a model and run validation with:

```python
from ultralytics import YOLO

# Load a model
model = YOLO("yolov8n.pt")

# Validate the model
metrics = model.val()
print(metrics.box.map)  # mAP50-95
```

Alternatively, you can use the command-line interface (CLI):

```bash
yolo val model=yolov8n.pt
```

For further customization, you can adjust various arguments like `imgsz`, `batch`, and `conf` in both Python and CLI modes. Check the [Arguments for YOLO Model Validation](#arguments-for-yolo-model-validation) section for the full list of parameters.

### What metrics can I get from YOLOv8 model validation?

YOLOv8 model validation provides several key metrics to assess model performance. These include:

- mAP50 (mean Average Precision at IoU threshold 0.5)
- mAP75 (mean Average Precision at IoU threshold 0.75)
- mAP50-95 (mean Average Precision across multiple IoU thresholds from 0.5 to 0.95)

Using the Python API, you can access these metrics as follows:

```python
metrics = model.val()  # assumes `model` has been loaded
print(metrics.box.map)  # mAP50-95
print(metrics.box.map50)  # mAP50
print(metrics.box.map75)  # mAP75
print(metrics.box.maps)  # list of mAP50-95 for each category
```

For a complete performance evaluation, it's crucial to review all these metrics. For more details, refer to the [Key Features of Val Mode](#key-features-of-val-mode).

### What are the advantages of using Ultralytics YOLO for validation?

Using Ultralytics YOLO for validation provides several advantages:

- **Precision:** YOLOv8 offers accurate performance metrics, including mAP50, mAP75, and mAP50-95.
- **Convenience:** The models remember their training settings, making validation straightforward.
- **Flexibility:** You can validate against the same or different datasets and image sizes.
- **Hyperparameter Tuning:** Validation metrics help in fine-tuning models for better performance.

These benefits ensure that your models are evaluated thoroughly and can be optimized for superior results. Learn more about these advantages in the [Why Validate with Ultralytics YOLO](#why-validate-with-ultralytics-yolo) section.

### Can I validate my YOLOv8 model using a custom dataset?

Yes, you can validate your YOLOv8 model using a custom dataset. Specify the `data` argument with the path to your dataset configuration file. This file should include paths to the validation data, class names, and other relevant details.

Example in Python:

```python
from ultralytics import YOLO

# Load a model
model = YOLO("yolov8n.pt")

# Validate with a custom dataset
metrics = model.val(data="path/to/your/custom_dataset.yaml")
print(metrics.box.map)  # mAP50-95
```

Example using CLI:

```bash
yolo val model=yolov8n.pt data=path/to/your/custom_dataset.yaml
```

For more customizable options during validation, see the [Example Validation with Arguments](#example-validation-with-arguments) section.

### How do I save validation results to a JSON file in YOLOv8?

To save the validation results to a JSON file, set the `save_json` argument to `True` when running validation. This can be done in both the Python API and CLI.

Example in Python:

```python
from ultralytics import YOLO

# Load a model
model = YOLO("yolov8n.pt")

# Save validation results to JSON
metrics = model.val(save_json=True)
```

Example using CLI:

```bash
yolo val model=yolov8n.pt save_json=True
```

This functionality is particularly useful for further analysis or integration with other tools. Check the [Arguments for YOLO Model Validation](#arguments-for-yolo-model-validation) for more details.