Fix mkdocs.yml raw image URLs (#14213)

Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
Co-authored-by: Burhan <62214284+Burhan-Q@users.noreply.github.com>
This commit is contained in:
Glenn Jocher 2024-07-05 02:25:02 +02:00 committed by GitHub
parent d5db9c916f
commit 5d479c73c2
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
69 changed files with 4767 additions and 223 deletions

View file

@ -4,7 +4,7 @@ description: Learn to create line graphs, bar plots, and pie charts using Python
keywords: Ultralytics, YOLOv8, data visualization, line graphs, bar plots, pie charts, Python, analytics, tutorial, guide
---
# Analytics using Ultralytics YOLOv8 📊
# Analytics using Ultralytics YOLOv8
## Introduction
@ -324,3 +324,170 @@ Here's a table with the `Analytics` arguments:
## Conclusion
Understanding when and how to use different types of visualizations is crucial for effective data analysis. Line graphs, bar plots, and pie charts are fundamental tools that can help you convey your data's story more clearly and effectively.
## FAQ
### How do I create a line graph using Ultralytics YOLOv8 Analytics?
To create a line graph using Ultralytics YOLOv8 Analytics, follow these steps:
1. Load a YOLOv8 model and open your video file.
2. Initialize the `Analytics` class with the type set to `"line"`.
3. Iterate through video frames, updating the line graph with relevant data, such as object counts per frame.
4. Save the output video displaying the line graph.
Example:
```python
import cv2

from ultralytics import YOLO, solutions

model = YOLO("yolov8s.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
out = cv2.VideoWriter("line_plot.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))

analytics = solutions.Analytics(type="line", writer=out, im0_shape=(w, h), view_img=True)

frame_count = 0
while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    frame_count += 1
    results = model.track(frame, persist=True)
    total_counts = len(results[0].boxes)  # objects detected in this frame
    analytics.update_line(frame_count, total_counts)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
out.release()
cv2.destroyAllWindows()
```
For further details on configuring the `Analytics` class, visit the [Analytics using Ultralytics YOLOv8 📊](#analytics-using-ultralytics-yolov8) section.
### What are the benefits of using Ultralytics YOLOv8 for creating bar plots?
Using Ultralytics YOLOv8 for creating bar plots offers several benefits:
1. **Real-time Data Visualization**: Seamlessly integrate object detection results into bar plots for dynamic updates.
2. **Ease of Use**: Simple API and functions make it straightforward to implement and visualize data.
3. **Customization**: Customize titles, labels, colors, and more to fit your specific requirements.
4. **Efficiency**: Efficiently handle large amounts of data and update plots in real-time during video processing.
Use the following example to generate a bar plot:
```python
import cv2

from ultralytics import YOLO, solutions

model = YOLO("yolov8s.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
out = cv2.VideoWriter("bar_plot.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))

analytics = solutions.Analytics(type="bar", writer=out, im0_shape=(w, h), view_img=True)

while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    results = model.track(frame, persist=True)
    clswise_count = {}  # per-class object counts for the current frame
    for cls in results[0].boxes.cls.tolist():
        name = model.names[int(cls)]
        clswise_count[name] = clswise_count.get(name, 0) + 1
    analytics.update_bar(clswise_count)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
out.release()
cv2.destroyAllWindows()
```
To learn more, visit the [Bar Plot](#visual-samples) section in the guide.
### Why should I use Ultralytics YOLOv8 for creating pie charts in my data visualization projects?
Ultralytics YOLOv8 is an excellent choice for creating pie charts because:
1. **Integration with Object Detection**: Directly integrate object detection results into pie charts for immediate insights.
2. **User-Friendly API**: Simple to set up and use with minimal code.
3. **Customizable**: Various customization options for colors, labels, and more.
4. **Real-time Updates**: Handle and visualize data in real-time, which is ideal for video analytics projects.
Here's a quick example:
```python
import cv2

from ultralytics import YOLO, solutions

model = YOLO("yolov8s.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
out = cv2.VideoWriter("pie_chart.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))

analytics = solutions.Analytics(type="pie", writer=out, im0_shape=(w, h), view_img=True)

while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    results = model.track(frame, persist=True)
    clswise_count = {}  # per-class object counts for the current frame
    for cls in results[0].boxes.cls.tolist():
        name = model.names[int(cls)]
        clswise_count[name] = clswise_count.get(name, 0) + 1
    analytics.update_pie(clswise_count)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
out.release()
cv2.destroyAllWindows()
```
For more information, refer to the [Pie Chart](#visual-samples) section in the guide.
### Can Ultralytics YOLOv8 be used to track objects and dynamically update visualizations?
Yes, Ultralytics YOLOv8 can be used to track objects and dynamically update visualizations. It supports tracking multiple objects in real-time and can update various visualizations like line graphs, bar plots, and pie charts based on the tracked objects' data.
Example for tracking and updating a line graph:
```python
import cv2

from ultralytics import YOLO, solutions

model = YOLO("yolov8s.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
out = cv2.VideoWriter("line_plot.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))

analytics = solutions.Analytics(type="line", writer=out, im0_shape=(w, h), view_img=True)

frame_count = 0
while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    frame_count += 1
    results = model.track(frame, persist=True)
    total_counts = len(results[0].boxes)  # objects tracked in this frame
    analytics.update_line(frame_count, total_counts)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
out.release()
cv2.destroyAllWindows()
```
To learn about the complete functionality, see the [Tracking](../modes/track.md) section.
### What makes Ultralytics YOLOv8 different from other object detection solutions like OpenCV and TensorFlow?
Ultralytics YOLOv8 stands out from other object detection solutions like OpenCV and TensorFlow for multiple reasons:
1. **State-of-the-art Accuracy**: YOLOv8 provides superior accuracy in object detection, segmentation, and classification tasks.
2. **Ease of Use**: User-friendly API allows for quick implementation and integration without extensive coding.
3. **Real-time Performance**: Optimized for high-speed inference, suitable for real-time applications.
4. **Diverse Applications**: Supports various tasks including multi-object tracking, custom model training, and exporting to different formats like ONNX, TensorRT, and CoreML.
5. **Comprehensive Documentation**: Extensive [documentation](https://docs.ultralytics.com/) and [blog resources](https://www.ultralytics.com/blog) to guide users through every step.
For more detailed comparisons and use cases, explore our [Ultralytics Blog](https://www.ultralytics.com/blog/ai-use-cases-transforming-your-future).

View file

@ -150,3 +150,78 @@ This guide serves as an introduction to get you up and running with YOLOv8 on Az
- [Register a Model](https://learn.microsoft.com/azure/machine-learning/how-to-manage-models): Familiarize yourself with model management practices including registration, versioning, and deployment.
- [Train YOLOv8 with AzureML Python SDK](https://medium.com/@ouphi/how-to-train-the-yolov8-model-with-azure-machine-learning-python-sdk-8268696be8ba): Explore a step-by-step guide on using the AzureML Python SDK to train your YOLOv8 models.
- [Train YOLOv8 with AzureML CLI](https://medium.com/@ouphi/how-to-train-the-yolov8-model-with-azureml-and-the-az-cli-73d3c870ba8e): Discover how to utilize the command-line interface for streamlined training and management of YOLOv8 models on AzureML.
## FAQ
### How do I run YOLOv8 on AzureML for model training?
Running YOLOv8 on AzureML for model training involves several steps:
1. **Create a Compute Instance**: From your AzureML workspace, navigate to Compute > Compute instances > New, and select the required instance.
2. **Setup Environment**: Start your compute instance, open a terminal, and create a conda environment:
```bash
conda create --name yolov8env -y
conda activate yolov8env
conda install pip -y
pip install ultralytics "onnx>=1.12.0"
```
3. **Run YOLOv8 Tasks**: Use the Ultralytics CLI to train your model:
```bash
yolo train data=coco8.yaml model=yolov8n.pt epochs=10 lr0=0.01
```
For more details, you can refer to the [instructions to use the Ultralytics CLI](../quickstart.md#use-ultralytics-with-cli).
### What are the benefits of using AzureML for YOLOv8 training?
AzureML provides a robust and efficient ecosystem for training YOLOv8 models:
- **Scalability**: Easily scale your compute resources as your data and model complexity grows.
- **MLOps Integration**: Utilize features like versioning, monitoring, and auditing to streamline ML operations.
- **Collaboration**: Share and manage resources within teams, enhancing collaborative workflows.
These advantages make AzureML an ideal platform for projects ranging from quick prototypes to large-scale deployments. For more tips, check out [AzureML Jobs](https://learn.microsoft.com/azure/machine-learning/how-to-train-model).
### How do I troubleshoot common issues when running YOLOv8 on AzureML?
Troubleshooting common issues with YOLOv8 on AzureML can involve the following steps:
- **Dependency Issues**: Ensure all required packages are installed. Refer to the `requirements.txt` file for dependencies.
- **Environment Setup**: Verify that your conda environment is correctly activated before running commands.
- **Resource Allocation**: Make sure your compute instances have sufficient resources to handle the training workload.
For additional guidance, review our [YOLO Common Issues](https://docs.ultralytics.com/guides/yolo-common-issues/) documentation.
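As a quick sanity check for the dependency and environment points above, you can run the built-in `checks()` helper from a Python session inside the activated `yolov8env` environment (exact output varies by version):

```python
# Verify the Ultralytics installation, Python/torch versions, and CUDA availability
import ultralytics

ultralytics.checks()
```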
### Can I use both the Ultralytics CLI and Python interface on AzureML?
Yes, AzureML allows you to use both the Ultralytics CLI and the Python interface seamlessly:
- **CLI**: Ideal for quick tasks and running standard scripts directly from the terminal.
```bash
yolo predict model=yolov8n.pt source='https://ultralytics.com/images/bus.jpg'
```
- **Python Interface**: Useful for more complex tasks requiring custom coding and integration within notebooks.
```python
from ultralytics import YOLO
model = YOLO("yolov8n.pt")
model.train(data="coco8.yaml", epochs=3)
```
Refer to the quickstart guides for more detailed instructions [here](../quickstart.md#use-ultralytics-with-cli) and [here](../quickstart.md#use-ultralytics-with-python).
### What is the advantage of using Ultralytics YOLOv8 over other object detection models?
Ultralytics YOLOv8 offers several unique advantages over competing object detection models:
- **Speed**: Faster inference and training times compared to models like Faster R-CNN and SSD.
- **Accuracy**: High accuracy in detection tasks with features like anchor-free design and enhanced augmentation strategies.
- **Ease of Use**: Intuitive API and CLI for quick setup, making it accessible both to beginners and experts.
To explore more about YOLOv8's features, visit the [Ultralytics YOLO](https://www.ultralytics.com/yolo) page for detailed insights.

View file

@ -127,3 +127,66 @@ And that's it! Your Conda installation will now use `libmamba` as the solver, wh
---
Congratulations! You have successfully set up a Conda environment, installed the Ultralytics package, and are now ready to explore its rich functionalities. Feel free to dive deeper into the [Ultralytics documentation](../index.md) for more advanced tutorials and examples.
## FAQ
### What is the process for setting up a Conda environment for Ultralytics projects?
Setting up a Conda environment for Ultralytics projects is straightforward and ensures smooth package management. First, create a new Conda environment using the following command:
```bash
conda create --name ultralytics-env python=3.8 -y
```
Then, activate the new environment with:
```bash
conda activate ultralytics-env
```
Finally, install Ultralytics from the conda-forge channel:
```bash
conda install -c conda-forge ultralytics
```
### Why should I use Conda over pip for managing dependencies in Ultralytics projects?
Conda is a robust package and environment management system that offers several advantages over pip. It manages dependencies efficiently and ensures that all necessary libraries are compatible. Conda's isolated environments prevent conflicts between packages, which is crucial in data science and machine learning projects. Additionally, Conda supports binary package distribution, speeding up the installation process.
### Can I use Ultralytics YOLO in a CUDA-enabled environment for faster performance?
Yes, you can enhance performance by utilizing a CUDA-enabled environment. Ensure that you install `ultralytics`, `pytorch`, and `pytorch-cuda` together to avoid conflicts:
```bash
conda install -c pytorch -c nvidia -c conda-forge pytorch torchvision pytorch-cuda=11.8 ultralytics
```
This setup enables GPU acceleration, crucial for intensive tasks like deep learning model training and inference. For more information, visit the [Ultralytics installation guide](../quickstart.md).
### What are the benefits of using Ultralytics Docker images with a Conda environment?
Using Ultralytics Docker images ensures a consistent and reproducible environment, eliminating "it works on my machine" issues. These images include a pre-configured Conda environment, simplifying the setup process. You can pull and run the latest Ultralytics Docker image with the following commands:
```bash
sudo docker pull ultralytics/ultralytics:latest-conda
sudo docker run -it --ipc=host --gpus all ultralytics/ultralytics:latest-conda
```
This approach is ideal for deploying applications in production or running complex workflows without manual configuration. Learn more about [Ultralytics Conda Docker Image](../quickstart.md).
### How can I speed up Conda package installation in my Ultralytics environment?
You can speed up the package installation process by using `libmamba`, a fast dependency solver for Conda. First, install the `conda-libmamba-solver` package:
```bash
conda install conda-libmamba-solver
```
Then configure Conda to use `libmamba` as the solver:
```bash
conda config --set solver libmamba
```
This setup provides faster and more efficient package management. For more tips on optimizing your environment, read about [libmamba installation](../quickstart.md).

View file

@ -138,3 +138,87 @@ Find comprehensive information on the [Predict](../modes/predict.md) page for fu
```
If you want a `tflite-runtime` wheel for `tensorflow` 2.15.0, download it from [here](https://github.com/feranick/TFlite-builds/releases) and install it using `pip` or your package manager of choice.
## FAQ
### What is a Coral Edge TPU and how does it enhance Raspberry Pi's performance with Ultralytics YOLOv8?
The Coral Edge TPU is a compact device designed to add an Edge TPU coprocessor to your system. This coprocessor enables low-power, high-performance machine learning inference, particularly optimized for TensorFlow Lite models. When using a Raspberry Pi, the Edge TPU accelerates ML model inference, significantly boosting performance, especially for Ultralytics YOLOv8 models. You can read more about the Coral Edge TPU on their [home page](https://coral.ai/products/accelerator).
### How do I install the Coral Edge TPU runtime on a Raspberry Pi?
To install the Coral Edge TPU runtime on your Raspberry Pi, download the appropriate `.deb` package for your Raspberry Pi OS version from [this link](https://github.com/feranick/libedgetpu/releases). Once downloaded, use the following command to install it:
```bash
sudo dpkg -i path/to/package.deb
```
Make sure to uninstall any previous Coral Edge TPU runtime versions by following the steps outlined in the [Installation Walkthrough](#installation-walkthrough) section.
### Can I export my Ultralytics YOLOv8 model to be compatible with Coral Edge TPU?
Yes, you can export your Ultralytics YOLOv8 model to be compatible with the Coral Edge TPU. It is recommended to perform the export on Google Colab, an x86_64 Linux machine, or using the [Ultralytics Docker container](docker-quickstart.md). You can also use Ultralytics HUB for exporting. Here is how you can export your model using Python and CLI:
!!! Example "Exporting the model"
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("path/to/model.pt") # Load an official model or custom model
# Export the model
model.export(format="edgetpu")
```
=== "CLI"
```bash
yolo export model=path/to/model.pt format=edgetpu # Export an official model or custom model
```
For more information, refer to the [Export Mode](../modes/export.md) documentation.
### What should I do if TensorFlow is already installed on my Raspberry Pi but I want to use tflite-runtime instead?
If you have TensorFlow installed on your Raspberry Pi and need to switch to `tflite-runtime`, you'll need to uninstall TensorFlow first using:
```bash
pip uninstall tensorflow tensorflow-aarch64
```
Then, install or update `tflite-runtime` with the following command:
```bash
pip install -U tflite-runtime
```
For a specific wheel, such as the `tflite-runtime` build for TensorFlow 2.15.0, you can download it from [this link](https://github.com/feranick/TFlite-builds/releases) and install it using `pip`. Detailed instructions are available in the [Running the Model](#running-the-model) section.
### How do I run inference with an exported YOLOv8 model on a Raspberry Pi using the Coral Edge TPU?
After exporting your YOLOv8 model to an Edge TPU-compatible format, you can run inference using the following code snippets:
!!! Example "Running the model"
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("path/to/edgetpu_model.tflite") # Load an official model or custom model
# Run Prediction
model.predict("path/to/source.png")
```
=== "CLI"
```bash
yolo predict model=path/to/edgetpu_model.tflite source=path/to/source.png # Load an official model or custom model
```
Comprehensive details on full prediction mode features can be found on the [Predict Page](../modes/predict.md).

View file

@ -303,3 +303,39 @@ The following table summarizes how YOLOv8s models perform at different TensorRT
### Acknowledgements
This guide was initially created by our friends at Seeed Studio, Lakshantha and Elaine.
## FAQ
### How do I set up Ultralytics YOLOv8 on an NVIDIA Jetson device?
To set up Ultralytics YOLOv8 on an [NVIDIA Jetson](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/) device, you first need to install the [DeepStream SDK](https://developer.nvidia.com/deepstream-getting-started) compatible with your JetPack version. Follow the step-by-step guide in our [Quick Start Guide](nvidia-jetson.md) to configure your NVIDIA Jetson for YOLOv8 deployment.
### What is the benefit of using TensorRT with YOLOv8 on NVIDIA Jetson?
Using TensorRT with YOLOv8 optimizes the model for inference, significantly reducing latency and improving throughput on NVIDIA Jetson devices. TensorRT provides high-performance, low-latency deep learning inference through layer fusion, precision calibration, and kernel auto-tuning. This leads to faster and more efficient execution, particularly useful for real-time applications like video analytics and autonomous machines.
### Can I run Ultralytics YOLOv8 with DeepStream SDK across different NVIDIA Jetson hardware?
Yes, the guide for deploying Ultralytics YOLOv8 with the DeepStream SDK and TensorRT is compatible across the entire NVIDIA Jetson lineup. This includes devices like the Jetson Orin NX 16GB with [JetPack 5.1.3](https://developer.nvidia.com/embedded/jetpack-sdk-513) and the Jetson Nano 4GB with [JetPack 4.6.4](https://developer.nvidia.com/jetpack-sdk-464). Refer to the section [DeepStream Configuration for YOLOv8](#deepstream-configuration-for-yolov8) for detailed steps.
### How can I convert a YOLOv8 model to ONNX for DeepStream?
To convert a YOLOv8 model to ONNX format for deployment with DeepStream, use the `utils/export_yoloV8.py` script from the [DeepStream-Yolo](https://github.com/marcoslucianops/DeepStream-Yolo) repository.
Here's an example command:
```bash
python3 utils/export_yoloV8.py -w yolov8s.pt --opset 12 --simplify
```
For more details on model conversion, check out our [model export section](../modes/export.md).
### What are the performance benchmarks for YOLOv8 on NVIDIA Jetson Orin NX?
The performance of YOLOv8 models on NVIDIA Jetson Orin NX 16GB varies based on TensorRT precision levels. For example, YOLOv8s models achieve:
- **FP32 Precision**: 15.63 ms/im, 64 FPS
- **FP16 Precision**: 7.94 ms/im, 126 FPS
- **INT8 Precision**: 5.53 ms/im, 181 FPS
These benchmarks underscore the efficiency and capability of using TensorRT-optimized YOLOv8 models on NVIDIA Jetson hardware. For further details, see our [Benchmark Results](#benchmark-results) section.

View file

@ -4,7 +4,7 @@ description: Learn how to calculate distances between objects using Ultralytics
keywords: Ultralytics, YOLOv8, distance calculation, computer vision, object tracking, spatial positioning
---
# Distance Calculation using Ultralytics YOLOv8 🚀
# Distance Calculation using Ultralytics YOLOv8
## What is Distance Calculation?
@ -101,3 +101,38 @@ Measuring the gap between two objects is known as distance calculation within a
| `iou` | `float` | `0.5` | IOU Threshold |
| `classes` | `list` | `None` | filter results by class, i.e. classes=0, or classes=[0,2,3] |
| `verbose` | `bool` | `True` | Display the object tracking results |
## FAQ
### How do I calculate distances between objects using Ultralytics YOLOv8?
To calculate distances between objects using [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics), you need to identify the bounding box centroids of the detected objects. This process involves initializing the `DistanceCalculation` class from Ultralytics' `solutions` module and using the model's tracking outputs to calculate the distances. You can refer to the implementation in the [distance calculation example](#distance-calculation-using-ultralytics-yolov8).
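A minimal sketch of that workflow is shown below; it assumes the `DistanceCalculation` class exposes a `start_process(im0, tracks)` method that annotates each frame, and uses a placeholder video path:

```python
import cv2

from ultralytics import YOLO, solutions

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")  # placeholder path

# names maps class indices to labels; see the DistanceCalculation arguments below for more options
dist_obj = solutions.DistanceCalculation(names=model.names, view_img=True)

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        break
    tracks = model.track(im0, persist=True, show=False)  # tracking outputs with persistent object IDs
    im0 = dist_obj.start_process(im0, tracks)  # assumed entry point that computes and draws distances

cap.release()
cv2.destroyAllWindows()
```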
### What are the advantages of using distance calculation with Ultralytics YOLOv8?
Using distance calculation with Ultralytics YOLOv8 offers several advantages:
- **Localization Precision:** Provides accurate spatial positioning for objects.
- **Size Estimation:** Helps estimate physical sizes, contributing to better contextual understanding.
- **Scene Understanding:** Enhances 3D scene comprehension, aiding improved decision-making in applications like autonomous driving and surveillance.
### Can I perform distance calculation in real-time video streams with Ultralytics YOLOv8?
Yes, you can perform distance calculation in real-time video streams with Ultralytics YOLOv8. The process involves capturing video frames using OpenCV, running YOLOv8 object detection, and using the `DistanceCalculation` class to calculate distances between objects in successive frames. For a detailed implementation, see the [video stream example](#distance-calculation-using-ultralytics-yolov8).
### How do I delete points drawn during distance calculation using Ultralytics YOLOv8?
To delete points drawn during distance calculation with Ultralytics YOLOv8, you can use a right mouse click. This action will clear all the points you have drawn. For more details, refer to the note section under the [distance calculation example](#distance-calculation-using-ultralytics-yolov8).
### What are the key arguments for initializing the DistanceCalculation class in Ultralytics YOLOv8?
The key arguments for initializing the `DistanceCalculation` class in Ultralytics YOLOv8 include:
- `names`: Dictionary mapping class indices to class names.
- `pixels_per_meter`: Conversion factor from pixels to meters.
- `view_img`: Flag to indicate if the video stream should be displayed.
- `line_thickness`: Thickness of the lines drawn on the image.
- `line_color`: Color of the lines drawn on the image (BGR format).
- `centroid_color`: Color of the centroids (BGR format).
For an exhaustive list and default values, see the [arguments of DistanceCalculation](#arguments-distancecalculation).
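As an illustrative sketch, initializing the class with these arguments might look like the following (the values shown are placeholders, not required settings):

```python
from ultralytics import YOLO, solutions

model = YOLO("yolov8n.pt")
dist_obj = solutions.DistanceCalculation(
    names=model.names,  # class index -> class name mapping
    pixels_per_meter=10,  # pixel-to-meter conversion factor
    view_img=True,  # display the annotated stream
    line_thickness=2,  # thickness of drawn lines
    line_color=(255, 255, 0),  # BGR line color
    centroid_color=(255, 0, 255),  # BGR centroid color
)
```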

View file

@ -224,3 +224,60 @@ yolo predict model=yolov8n.pt show=True
---
Congratulations! You're now set up to use Ultralytics with Docker and ready to take advantage of its powerful capabilities. For alternate installation methods, feel free to explore the [Ultralytics quickstart documentation](../quickstart.md).
## FAQ
### How do I set up Ultralytics with Docker?
To set up Ultralytics with Docker, first ensure that Docker is installed on your system. If you have an NVIDIA GPU, install the NVIDIA Docker runtime to enable GPU support. Then, pull the latest Ultralytics Docker image from Docker Hub using the following command:
```bash
sudo docker pull ultralytics/ultralytics:latest
```
For detailed steps, refer to our [Docker Quickstart Guide](../quickstart.md).
### What are the benefits of using Ultralytics Docker images for machine learning projects?
Using Ultralytics Docker images ensures a consistent environment across different machines, replicating the same software and dependencies. This is particularly useful for collaborating across teams, running models on various hardware, and maintaining reproducibility. For GPU-based training, Ultralytics provides optimized Docker images such as `Dockerfile` for general GPU usage and `Dockerfile-jetson` for NVIDIA Jetson devices. Explore [Ultralytics Docker Hub](https://hub.docker.com/r/ultralytics/ultralytics) for more details.
### How can I run Ultralytics YOLO in a Docker container with GPU support?
First, ensure that the NVIDIA Docker runtime is installed and configured. Then, use the following command to run Ultralytics YOLO with GPU support:
```bash
sudo docker run -it --ipc=host --gpus all ultralytics/ultralytics:latest
```
This command sets up a Docker container with GPU access. For additional details, see the [Docker Quickstart Guide](../quickstart.md).
### How do I visualize YOLO prediction results in a Docker container with a display server?
To visualize YOLO prediction results with a GUI in a Docker container, you need to allow Docker to access your display server. For systems running X11, the command is:
```bash
xhost +local:docker && docker run -e DISPLAY=$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v ~/.Xauthority:/root/.Xauthority \
-it --ipc=host ultralytics/ultralytics:latest
```
For systems running Wayland, use:
```bash
xhost +local:docker && docker run -e DISPLAY=$DISPLAY \
-v $XDG_RUNTIME_DIR/$WAYLAND_DISPLAY:/tmp/$WAYLAND_DISPLAY \
--net=host -it --ipc=host ultralytics/ultralytics:latest
```
More information can be found in the [Run graphical user interface (GUI) applications in a Docker Container](#run-graphical-user-interface-gui-applications-in-a-docker-container) section.
### Can I mount local directories into the Ultralytics Docker container?
Yes, you can mount local directories into the Ultralytics Docker container using the `-v` flag:
```bash
sudo docker run -it --ipc=host --gpus all -v /path/on/host:/path/in/container ultralytics/ultralytics:latest
```
Replace `/path/on/host` with the directory on your local machine and `/path/in/container` with the desired path inside the container. This setup allows you to work with your local files within the container. For more information, refer to the relevant section on [mounting local directories](../usage/python.md).

View file

@ -330,3 +330,74 @@ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ult
| `cv::COLORMAP_DEEPGREEN` | Deep Green color map |
These colormaps are commonly used for visualizing data with different color representations.
## FAQ
### How does Ultralytics YOLOv8 generate heatmaps and what are their benefits?
Ultralytics YOLOv8 generates heatmaps by transforming complex data into a color-coded matrix where different hues represent data intensities. Heatmaps make it easier to visualize patterns, correlations, and anomalies in the data. Warmer hues indicate higher values, while cooler tones represent lower values. The primary benefits include intuitive visualization of data distribution, efficient pattern detection, and enhanced spatial analysis for decision-making. For more details and configuration options, refer to the [Heatmap Configuration](#arguments-heatmap) section.
### Can I use Ultralytics YOLOv8 to perform object tracking and generate a heatmap simultaneously?
Yes, Ultralytics YOLOv8 supports object tracking and heatmap generation concurrently. This can be achieved through its `Heatmap` solution integrated with object tracking models. To do so, you need to initialize the heatmap object and use YOLOv8's tracking capabilities. Here's a simple example:
```python
import cv2

from ultralytics import YOLO, solutions

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
heatmap_obj = solutions.Heatmap(colormap=cv2.COLORMAP_PARULA, view_img=True, shape="circle", classes_names=model.names)

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        break
    tracks = model.track(im0, persist=True, show=False)
    im0 = heatmap_obj.generate_heatmap(im0, tracks)
    cv2.imshow("Heatmap", im0)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```
For further guidance, check the [Tracking Mode](../modes/track.md) page.
### What makes Ultralytics YOLOv8 heatmaps different from other data visualization tools like those from OpenCV or Matplotlib?
Ultralytics YOLOv8 heatmaps are specifically designed for integration with its object detection and tracking models, providing an end-to-end solution for real-time data analysis. Unlike generic visualization tools like OpenCV or Matplotlib, YOLOv8 heatmaps are optimized for performance and automated processing, supporting features like persistent tracking, decay factor adjustment, and real-time video overlay. For more information on YOLOv8's unique features, visit the [Ultralytics YOLOv8 Introduction](https://www.ultralytics.com/blog/introducing-ultralytics-yolov8).
### How can I visualize only specific object classes in heatmaps using Ultralytics YOLOv8?
You can visualize specific object classes by specifying the desired classes in the `track()` method of the YOLO model. For instance, if you only want to visualize persons and cars (class indices 0 and 2 in the COCO-pretrained model), you can set the `classes` parameter accordingly.
```python
import cv2

from ultralytics import YOLO, solutions

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
heatmap_obj = solutions.Heatmap(colormap=cv2.COLORMAP_PARULA, view_img=True, shape="circle", classes_names=model.names)
classes_for_heatmap = [0, 2]  # classes to visualize

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        break
    tracks = model.track(im0, persist=True, show=False, classes=classes_for_heatmap)
    im0 = heatmap_obj.generate_heatmap(im0, tracks)
    cv2.imshow("Heatmap", im0)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```
### Why should businesses choose Ultralytics YOLOv8 for heatmap generation in data analysis?
Ultralytics YOLOv8 offers seamless integration of advanced object detection and real-time heatmap generation, making it an ideal choice for businesses looking to visualize data more effectively. The key advantages include intuitive data distribution visualization, efficient pattern detection, and enhanced spatial analysis for better decision-making. Additionally, YOLOv8's cutting-edge features such as persistent tracking, customizable colormaps, and support for various export formats make it superior to other tools like TensorFlow and OpenCV for comprehensive data analysis. Learn more about business applications at [Ultralytics Plans](https://www.ultralytics.com/plans).

View file

@ -205,3 +205,57 @@ The hyperparameter tuning process in Ultralytics YOLO is simplified yet powerful
3. [Efficient Hyperparameter Tuning with Ray Tune and YOLOv8](../integrations/ray-tune.md)
For deeper insights, you can explore the `Tuner` class source code and accompanying documentation. Should you have any questions, feature requests, or need further assistance, feel free to reach out to us on [GitHub](https://github.com/ultralytics/ultralytics/issues/new/choose) or [Discord](https://ultralytics.com/discord).
## FAQ
### How do I optimize the learning rate for Ultralytics YOLO during hyperparameter tuning?
To optimize the learning rate for Ultralytics YOLO, start by setting an initial learning rate using the `lr0` parameter. Common values range from `0.001` to `0.01`. During the hyperparameter tuning process, this value will be mutated to find the optimal setting. You can utilize the `model.tune()` method to automate this process. For example:
!!! Example
=== "Python"
```python
from ultralytics import YOLO
# Initialize the YOLO model
model = YOLO("yolov8n.pt")
# Tune hyperparameters on COCO8 for 30 epochs
model.tune(data="coco8.yaml", epochs=30, iterations=300, optimizer="AdamW", plots=False, save=False, val=False)
```
For more details, check the [Ultralytics YOLO configuration page](../usage/cfg.md#augmentation-settings).
### What are the benefits of using genetic algorithms for hyperparameter tuning in YOLOv8?
Genetic algorithms in Ultralytics YOLOv8 provide a robust method for exploring the hyperparameter space, leading to highly optimized model performance. Key benefits include:
- **Efficient Search**: Genetic algorithms like mutation can quickly explore a large set of hyperparameters.
- **Avoiding Local Minima**: By introducing randomness, they help in avoiding local minima, ensuring better global optimization.
- **Performance Metrics**: They adapt based on performance metrics such as AP50 and F1-score.
To see how genetic algorithms can optimize hyperparameters, check out the [hyperparameter evolution guide](../yolov5/tutorials/hyperparameter_evolution.md).
### How long does the hyperparameter tuning process take for Ultralytics YOLO?
The time required for hyperparameter tuning with Ultralytics YOLO largely depends on several factors such as the size of the dataset, the complexity of the model architecture, the number of iterations, and the computational resources available. For instance, tuning YOLOv8n on a dataset like COCO8 for 30 epochs might take several hours to days, depending on the hardware.
To effectively manage tuning time, define a clear tuning budget beforehand ([internal section link](#preparing-for-hyperparameter-tuning)). This helps in balancing resource allocation and optimization goals.
### What metrics should I use to evaluate model performance during hyperparameter tuning in YOLO?
When evaluating model performance during hyperparameter tuning in YOLO, you can use several key metrics:
- **AP50**: The average precision at IoU threshold of 0.50.
- **F1-Score**: The harmonic mean of precision and recall.
- **Precision and Recall**: Individual metrics indicating the model's accuracy in identifying true positives versus false positives and false negatives.
These metrics help you understand different aspects of your model's performance. Refer to the [Ultralytics YOLO performance metrics](../guides/yolo-performance-metrics.md) guide for a comprehensive overview.
### Can I use Ultralytics HUB for hyperparameter tuning of YOLO models?
Yes, you can use Ultralytics HUB for hyperparameter tuning of YOLO models. The HUB offers a no-code platform to easily upload datasets, train models, and perform hyperparameter tuning efficiently. It provides real-time tracking and visualization of tuning progress and results.
Explore more about using Ultralytics HUB for hyperparameter tuning in the [Ultralytics HUB Cloud Training](../hub/cloud-training.md) documentation.

View file

@ -60,3 +60,44 @@ We welcome contributions from the community! If you've mastered a particular asp
To get started, please read our [Contributing Guide](../help/contributing.md) for guidelines on how to open up a Pull Request (PR) 🛠️. We look forward to your contributions!
Let's work together to make the Ultralytics YOLO ecosystem more robust and versatile 🙏!
## FAQ
### How do I train a custom object detection model using Ultralytics YOLO?
Training a custom object detection model with Ultralytics YOLO is straightforward. Start by preparing your dataset in the correct format and installing the Ultralytics package. Use the following code to initiate training:
!!! Example
=== "Python"
```python
from ultralytics import YOLO
model = YOLO("yolov8s.pt") # Load a pre-trained YOLO model
model.train(data="path/to/dataset.yaml", epochs=50) # Train on custom dataset
```
=== "CLI"
```bash
yolo task=detect mode=train model=yolov8s.pt data=path/to/dataset.yaml epochs=50
```
For detailed dataset formatting and additional options, refer to our [Tips for Model Training](model-training-tips.md) guide.
### What performance metrics should I use to evaluate my YOLO model?
Evaluating your YOLO model performance is crucial to understanding its efficacy. Key metrics include Mean Average Precision (mAP), Intersection over Union (IoU), and F1 score. These metrics help assess the accuracy and precision of object detection tasks. You can learn more about these metrics and how to improve your model in our [YOLO Performance Metrics](yolo-performance-metrics.md) guide.
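As a sketch, these metrics can be computed with the Python API by validating a trained model; the attribute names below (`box.map50`, `box.map`) follow the detection metrics object returned by `model.val()` and may differ across versions:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
metrics = model.val(data="coco8.yaml")  # run validation on a small sample dataset

print(metrics.box.map50)  # mAP at IoU threshold 0.50
print(metrics.box.map)  # mAP averaged over IoU thresholds 0.50-0.95
```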
### Why should I use Ultralytics HUB for my computer vision projects?
Ultralytics HUB is a no-code platform that simplifies managing, training, and deploying YOLO models. It supports seamless integration, real-time tracking, and cloud training, making it ideal for both beginners and professionals. Discover more about its features and how it can streamline your workflow with our [Ultralytics HUB](https://docs.ultralytics.com/hub/) quickstart guide.
### What are the common issues faced during YOLO model training, and how can I resolve them?
Common issues during YOLO model training include data formatting errors, model architecture mismatches, and insufficient training data. To address these, ensure your dataset is correctly formatted, check for compatible model versions, and augment your training data. For a comprehensive list of solutions, refer to our [YOLO Common Issues](yolo-common-issues.md) guide.
### How can I deploy my YOLO model for real-time object detection on edge devices?
Deploying YOLO models on edge devices like NVIDIA Jetson and Raspberry Pi requires converting the model to a compatible format such as TensorRT or TFLite. Follow our step-by-step guides for [NVIDIA Jetson](nvidia-jetson.md) and [Raspberry Pi](raspberry-pi.md) deployments to get started with real-time object detection on edge hardware. These guides will walk you through installation, configuration, and performance optimization.
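As a rough sketch, the conversion step usually amounts to a single export call; the format strings below follow the Ultralytics export table, and the resulting files are then copied to and run on the target device as the guides describe:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.export(format="engine")  # TensorRT engine, e.g. for NVIDIA Jetson
model.export(format="tflite")  # TensorFlow Lite, e.g. for Raspberry Pi
```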

View file

@ -135,3 +135,114 @@ There are two types of instance segmentation tracking available in the Ultralyti
## Note
For any inquiries, feel free to post your questions in the [Ultralytics Issue Section](https://github.com/ultralytics/ultralytics/issues/new/choose) or the discussion section mentioned below.
## FAQ
### How do I perform instance segmentation using Ultralytics YOLOv8?
To perform instance segmentation using Ultralytics YOLOv8, initialize the YOLO model with a segmentation version of YOLOv8 and process video frames through it. Here's a simplified code example:
!!! Example
=== "Python"
```python
import cv2

from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors

model = YOLO("yolov8n-seg.pt")  # segmentation model
cap = cv2.VideoCapture("path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
out = cv2.VideoWriter("instance-segmentation.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))

while True:
    ret, im0 = cap.read()
    if not ret:
        break

    results = model.predict(im0)
    annotator = Annotator(im0, line_width=2)

    if results[0].masks is not None:
        clss = results[0].boxes.cls.cpu().tolist()
        masks = results[0].masks.xy
        for mask, cls in zip(masks, clss):
            annotator.seg_bbox(mask=mask, mask_color=colors(int(cls), True), det_label=model.model.names[int(cls)])

    out.write(im0)
    cv2.imshow("instance-segmentation", im0)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

out.release()
cap.release()
cv2.destroyAllWindows()
```
Learn more about instance segmentation in the [Ultralytics YOLOv8 guide](#what-is-instance-segmentation).
### What is the difference between instance segmentation and object tracking in Ultralytics YOLOv8?
Instance segmentation identifies and outlines individual objects within an image, giving each object a unique label and mask. Object tracking extends this by assigning consistent labels to objects across video frames, facilitating continuous tracking of the same objects over time. Learn more about the distinctions in the [Ultralytics YOLOv8 documentation](#samples).
### Why should I use Ultralytics YOLOv8 for instance segmentation and tracking over other models like Mask R-CNN or Faster R-CNN?
Ultralytics YOLOv8 offers real-time performance, superior accuracy, and ease of use compared to other models like Mask R-CNN or Faster R-CNN. YOLOv8 provides a seamless integration with Ultralytics HUB, allowing users to manage models, datasets, and training pipelines efficiently. Discover more about the benefits of YOLOv8 in the [Ultralytics blog](https://www.ultralytics.com/blog/introducing-ultralytics-yolov8).
### How can I implement object tracking using Ultralytics YOLOv8?
To implement object tracking, use the `model.track` method and ensure that each object's ID is consistently assigned across frames. Below is a simple example:
!!! Example
=== "Python"
```python
from collections import defaultdict

import cv2

from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors

track_history = defaultdict(lambda: [])

model = YOLO("yolov8n-seg.pt")  # segmentation model
cap = cv2.VideoCapture("path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
out = cv2.VideoWriter("instance-segmentation-object-tracking.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))

while True:
    ret, im0 = cap.read()
    if not ret:
        break

    annotator = Annotator(im0, line_width=2)
    results = model.track(im0, persist=True)

    if results[0].boxes.id is not None and results[0].masks is not None:
        masks = results[0].masks.xy
        track_ids = results[0].boxes.id.int().cpu().tolist()
        for mask, track_id in zip(masks, track_ids):
            annotator.seg_bbox(mask=mask, mask_color=colors(track_id, True), track_label=str(track_id))

    out.write(im0)
    cv2.imshow("instance-segmentation-object-tracking", im0)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

out.release()
cap.release()
cv2.destroyAllWindows()
```
Find more in the [Instance Segmentation and Tracking section](#samples).
### Are there any datasets provided by Ultralytics suitable for training YOLOv8 models for instance segmentation and tracking?
Yes, Ultralytics offers several datasets suitable for training YOLOv8 models, including segmentation and tracking datasets. Dataset examples, structures, and instructions for use can be found in the [Ultralytics Datasets documentation](https://docs.ultralytics.com/datasets/).

View file

@ -14,7 +14,7 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
## Recipe Walk Through
1. See the [Ultralytics Quickstart Installation section](../quickstart.md/#install-ultralytics) for a quick walkthrough on installing the required libraries.
1. See the [Ultralytics Quickstart Installation section](../quickstart.md) for a quick walkthrough on installing the required libraries.
***
@ -307,3 +307,93 @@ for r in res:
4. See [Segment Task](../tasks/segment.md#models) for more information.
5. Learn more about [Working with Results](../modes/predict.md#working-with-results)
6. Learn more about [Segmentation Mask Results](../modes/predict.md#masks)
## FAQ
### How do I isolate objects using Ultralytics YOLOv8 for segmentation tasks?
To isolate objects using Ultralytics YOLOv8, follow these steps:
1. **Load the model and run inference:**
```python
from ultralytics import YOLO
model = YOLO("yolov8n-seg.pt")
results = model.predict(source="path/to/your/image.jpg")
```
2. **Generate a binary mask and draw contours:**
```python
import cv2
import numpy as np
img = np.copy(results[0].orig_img)
b_mask = np.zeros(img.shape[:2], np.uint8)
contour = results[0].masks.xy[0].astype(np.int32).reshape(-1, 1, 2)
cv2.drawContours(b_mask, [contour], -1, (255, 255, 255), cv2.FILLED)
```
3. **Isolate the object using the binary mask:**
```python
mask3ch = cv2.cvtColor(b_mask, cv2.COLOR_GRAY2BGR)
isolated = cv2.bitwise_and(mask3ch, img)
```
Refer to the guide on [Predict Mode](../modes/predict.md) and the [Segment Task](../tasks/segment.md) for more information.
### What options are available for saving the isolated objects after segmentation?
Ultralytics YOLOv8 offers two main options for saving isolated objects:
1. **With a Black Background:**
```python
mask3ch = cv2.cvtColor(b_mask, cv2.COLOR_GRAY2BGR)
isolated = cv2.bitwise_and(mask3ch, img)
```
2. **With a Transparent Background:**
```python
isolated = np.dstack([img, b_mask])
```
For further details, visit the [Predict Mode](../modes/predict.md) section.
### How can I crop isolated objects to their bounding boxes using Ultralytics YOLOv8?
To crop isolated objects to their bounding boxes:
1. **Retrieve bounding box coordinates:**
```python
x1, y1, x2, y2 = results[0].boxes.xyxy[0].cpu().numpy().astype(np.int32)
```
2. **Crop the isolated image:**
```python
iso_crop = isolated[y1:y2, x1:x2]
```
Learn more about bounding box results in the [Predict Mode](../modes/predict.md#boxes) documentation.
### Why should I use Ultralytics YOLOv8 for object isolation in segmentation tasks?
Ultralytics YOLOv8 provides:
- **High-speed** real-time object detection and segmentation.
- **Accurate bounding box and mask generation** for precise object isolation.
- **Comprehensive documentation** and easy-to-use API for efficient development.
Explore the benefits of using YOLO in the [Segment Task documentation](../tasks/segment.md).
### Can I save isolated objects including the background using Ultralytics YOLOv8?
Yes, this is a built-in feature in Ultralytics YOLOv8. Use the `save_crop` argument in the `predict()` method. For example:
```python
results = model.predict(source="path/to/your/image.jpg", save_crop=True)
```
Read more about the `save_crop` argument in the [Predict Mode Inference Arguments](../modes/predict.md#inference-arguments) section.

View file

@ -280,3 +280,33 @@ Finally, we implemented the actual model training using each split in a loop, sa
This technique of K-Fold cross-validation is a robust way of making the most out of your available data, and it helps to ensure that your model performance is reliable and consistent across different data subsets. This results in a more generalizable and reliable model that is less likely to overfit to specific data patterns.
Remember that although we used YOLO in this guide, these steps are mostly transferable to other machine learning models. Understanding these steps allows you to apply cross-validation effectively in your own machine learning projects. Happy coding!
## FAQ
### What is K-Fold Cross Validation and why is it useful in object detection?
K-Fold Cross Validation is a technique where the dataset is divided into 'k' subsets (folds) to evaluate model performance more reliably. Each fold serves as both training and validation data. In the context of object detection, using K-Fold Cross Validation helps to ensure your Ultralytics YOLO model's performance is robust and generalizable across different data splits, enhancing its reliability. For detailed instructions on setting up K-Fold Cross Validation with Ultralytics YOLO, refer to [K-Fold Cross Validation with Ultralytics](#introduction).
### How do I implement K-Fold Cross Validation using Ultralytics YOLO?
To implement K-Fold Cross Validation with Ultralytics YOLO, you need to follow these steps:
1. Verify annotations are in the [YOLO detection format](../datasets/detect/index.md).
2. Use Python libraries like `sklearn`, `pandas`, and `pyyaml`.
3. Create feature vectors from your dataset.
4. Split your dataset using `KFold` from `sklearn.model_selection`.
5. Train the YOLO model on each split.
For a comprehensive guide, see the [K-Fold Dataset Split](#k-fold-dataset-split) section in our documentation.
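For illustration, here is a minimal sketch of step 4 using `KFold` from `sklearn.model_selection`; the image list and per-fold YAML names are placeholders:

```python
from sklearn.model_selection import KFold

image_paths = [f"images/img_{i:04d}.jpg" for i in range(100)]  # placeholder dataset index

kf = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(kf.split(image_paths)):
    train_files = [image_paths[i] for i in train_idx]
    val_files = [image_paths[i] for i in val_idx]
    # Write train_files/val_files into a per-fold dataset YAML, then train on it, e.g.:
    # YOLO("yolov8n.pt").train(data=f"fold_{fold}.yaml", epochs=50)
```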
### Why should I use Ultralytics YOLO for object detection?
Ultralytics YOLO offers state-of-the-art, real-time object detection with high accuracy and efficiency. It's versatile, supporting multiple computer vision tasks such as detection, segmentation, and classification. Additionally, it integrates seamlessly with tools like Ultralytics HUB for no-code model training and deployment. For more details, explore the benefits and features on our [Ultralytics YOLO page](https://www.ultralytics.com/yolo).
### How can I ensure my annotations are in the correct format for Ultralytics YOLO?
Your annotations should follow the YOLO detection format. Each annotation file must list the object class, alongside its bounding box coordinates in the image. The YOLO format ensures streamlined and standardized data processing for training object detection models. For more information on proper annotation formatting, visit the [YOLO detection format guide](../datasets/detect/index.md).
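For reference, each line of a YOLO-format label file is `class x_center y_center width height`, with coordinates normalized to the image dimensions; a hypothetical two-object label file looks like this:

```
0 0.481 0.522 0.340 0.610
2 0.712 0.430 0.155 0.201
```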
### Can I use K-Fold Cross Validation with custom datasets other than Fruit Detection?
Yes, you can use K-Fold Cross Validation with any custom dataset as long as the annotations are in the YOLO detection format. Replace the dataset paths and class labels with those specific to your custom dataset. This flexibility ensures that any object detection project can benefit from robust model evaluation using K-Fold Cross Validation. For a practical example, review our [Generating Feature Vectors](#generating-feature-vectors-for-object-detection-dataset) section.

View file

@ -303,3 +303,68 @@ In this guide, we've explored the different deployment options for YOLOv8. We've
Don't forget that the YOLOv8 and Ultralytics community is a valuable source of help. Connect with other developers and experts to learn unique tips and solutions you might not find in regular documentation. Keep seeking knowledge, exploring new ideas, and sharing your experiences.
Happy deploying!
## FAQ
### What are the deployment options available for YOLOv8 on different hardware platforms?
Ultralytics YOLOv8 supports various deployment formats, each designed for specific environments and hardware platforms. Key formats include:
- **PyTorch** for research and prototyping, with excellent Python integration.
- **TorchScript** for production environments where Python is unavailable.
- **ONNX** for cross-platform compatibility and hardware acceleration.
- **OpenVINO** for optimized performance on Intel hardware.
- **TensorRT** for high-speed inference on NVIDIA GPUs.
Each format has unique advantages. For a detailed walkthrough, see our [export process documentation](../modes/export.md#usage-examples).
### How do I improve the inference speed of my YOLOv8 model on an Intel CPU?
To enhance inference speed on Intel CPUs, you can deploy your YOLOv8 model using Intel's OpenVINO toolkit. OpenVINO offers significant performance boosts by optimizing models to leverage Intel hardware efficiently.
1. Convert your YOLOv8 model to the OpenVINO format using the `model.export()` function.
2. Follow the detailed setup guide in the [Intel OpenVINO Export documentation](../integrations/openvino.md).
For more insights, check out our [blog post](https://www.ultralytics.com/blog/achieve-faster-inference-speeds-ultralytics-yolov8-openvino).
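As a sketch of step 1, the export call looks like the following (the `"openvino"` format string follows the Ultralytics export table):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.export(format="openvino")  # exports an OpenVINO IR model (typically saved as a *_openvino_model directory)
```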
### Can I deploy YOLOv8 models on mobile devices?
Yes, YOLOv8 models can be deployed on mobile devices using TensorFlow Lite (TF Lite) for both Android and iOS platforms. TF Lite is designed for mobile and embedded devices, providing efficient on-device inference.
!!! Example
=== "Python"
```python
# Export command for TFLite format
model.export(format="tflite")
```
=== "CLI"
```bash
# CLI command for TFLite export
yolo export --format tflite
```
For more details on deploying models to mobile, refer to our [TF Lite integration guide](../integrations/tflite.md).
### What factors should I consider when choosing a deployment format for my YOLOv8 model?
When choosing a deployment format for YOLOv8, consider the following factors:
- **Performance**: Some formats like TensorRT provide exceptional speeds on NVIDIA GPUs, while OpenVINO is optimized for Intel hardware.
- **Compatibility**: ONNX offers broad compatibility across different platforms.
- **Ease of Integration**: Formats like CoreML or TF Lite are tailored for specific ecosystems like iOS and Android, respectively.
- **Community Support**: Formats like PyTorch and TensorFlow have extensive community resources and support.
For a comparative analysis, refer to our [export formats documentation](../modes/export.md#export-formats).
### How can I deploy YOLOv8 models in a web application?
To deploy YOLOv8 models in a web application, you can use TensorFlow.js (TF.js), which allows for running machine learning models directly in the browser. This approach eliminates the need for backend infrastructure and provides real-time performance.
1. Export the YOLOv8 model to the TF.js format.
2. Integrate the exported model into your web application.
For step-by-step instructions, refer to our guide on [TensorFlow.js integration](../integrations/tfjs.md).

View file

@ -26,19 +26,19 @@ Choosing where to deploy your computer vision model depends on multiple factors.
Cloud deployment is great for applications that need to scale up quickly and handle large amounts of data. Platforms like AWS, [Google Cloud](../yolov5/environments/google_cloud_quickstart_tutorial.md), and Azure make it easy to manage your models from training to deployment. They offer services like [AWS SageMaker](../integrations/amazon-sagemaker.md), Google AI Platform, and [Azure Machine Learning](./azureml-quickstart.md) to help you throughout the process.
However, using the cloud can be expensive, especially with high data usage, and you might face latency issues if your users are far from the data centers. To manage costs and performance, it's important to optimize resource use and ensure compliance with data privacy rules.
#### Edge Deployment
Edge deployment works well for applications needing real-time responses and low latency, particularly in places with limited or no internet access. Deploying models on edge devices like smartphones or IoT gadgets ensures fast processing and keeps data local, which enhances privacy. Deploying on edge also saves bandwidth since less data is sent to the cloud.
Edge deployment works well for applications needing real-time responses and low latency, particularly in places with limited or no internet access. Deploying models on edge devices like smartphones or IoT gadgets ensures fast processing and keeps data local, which enhances privacy. Deploying on edge also saves bandwidth due to reduced data sent to the cloud.
However, edge devices often have limited processing power, so you'll need to optimize your models. Tools like [TensorFlow Lite](../integrations/tflite.md) and [NVIDIA Jetson](./nvidia-jetson.md) can help. Despite the benefits, maintaining and updating many devices can be challenging.
#### Local Deployment
Local Deployment is best when data privacy is critical or when there's unreliable or no internet access. Running models on local servers or desktops gives you full control and keeps your data secure. It can also reduce latency if the server is near the user.
However, scaling locally can be tough, and maintenance can be time-consuming. Using tools like [Docker](./docker-quickstart.md) for containerization and Kubernetes for management can help make local deployments more efficient. Regular updates and maintenance are necessary to keep everything running smoothly.
## Model Optimization Techniques
@ -46,7 +46,7 @@ Optimizing your computer vision model helps it runs efficiently, especially when
### Model Pruning
Pruning reduces the size of the model by removing weights that contribute little to the final output. It makes the model smaller and faster without significantly affecting accuracy. Pruning involves identifying and eliminating unnecessary parameters, resulting in a lighter model that requires less computational power. It is particularly useful for deploying models on devices with limited resources.
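For illustration, a minimal magnitude-based pruning pass over a YOLOv8 backbone might look like the sketch below. It assumes PyTorch's `torch.nn.utils.prune` utilities; unstructured pruning only zeroes weights, so fine-tuning and a sparsity-aware runtime are usually needed to realize real speedups:

```python
import torch
import torch.nn.utils.prune as prune

from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Zero out the 30% lowest-magnitude weights in every convolutional layer
for module in model.model.modules():
    if isinstance(module, torch.nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # make the pruning permanent
```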
<p align="center">
<img width="100%" src="https://miro.medium.com/v2/resize:fit:1400/format:webp/1*rw2zAHw9Xlm7nSq1PCKbzQ.png" alt="Model Pruning Overview">
@ -113,7 +113,7 @@ It's essential to control who can access your model and its data to prevent unau
### Model Obfuscation
Protecting your model from being reverse-engineered or misused can be done through model obfuscation. It involves encrypting model parameters, such as weights and biases in neural networks, to make it difficult for unauthorized individuals to understand or alter the model. You can also obfuscate the model's architecture by renaming layers and parameters or adding dummy layers, making it harder for attackers to reverse-engineer it. Finally, serving the model in a secure environment, such as a secure enclave or a trusted execution environment (TEE), can provide an extra layer of protection during inference.
## Share Ideas With Your Peers
@ -135,3 +135,25 @@ Using these resources will help you solve challenges and stay up-to-date with th
We walked through some best practices to follow when deploying computer vision models. By securing data, controlling access, and obfuscating model details, you can protect sensitive information while keeping your models running smoothly. We also discussed how to address common issues like reduced accuracy and slow inferences using strategies such as warm-up runs, optimizing engines, asynchronous processing, profiling pipelines, and choosing the right precision.
After deploying your model, the next step would be monitoring, maintaining, and documenting your application. Regular monitoring helps catch and fix issues quickly, maintenance keeps your models up-to-date and functional, and good documentation tracks all changes and updates. These steps will help you achieve the [goals of your computer vision project](./defining-project-goals.md).
## FAQ
### What are the best practices for deploying a machine learning model using Ultralytics YOLOv8?
Deploying a machine learning model, particularly with Ultralytics YOLOv8, involves several best practices to ensure efficiency and reliability. First, choose the deployment environment that suits your needs—cloud, edge, or local. Optimize your model through techniques like [pruning, quantization, and knowledge distillation](#model-optimization-techniques) for efficient deployment in resource-constrained environments. Lastly, ensure data consistency and preprocessing steps align with the training phase to maintain performance. You can also refer to [model deployment options](./model-deployment-options.md) for more detailed guidelines.
### How can I troubleshoot common deployment issues with Ultralytics YOLOv8 models?
Troubleshooting deployment issues can be broken down into a few key steps. If your model's accuracy drops after deployment, check for data consistency, validate preprocessing steps, and ensure the hardware/software environment matches what you used during training. For slow inference times, perform warm-up runs, optimize your inference engine, use asynchronous processing, and profile your inference pipeline. Refer to [troubleshooting deployment issues](#troubleshooting-deployment-issues) for a detailed guide on these best practices.
### How does Ultralytics YOLOv8 optimization enhance model performance on edge devices?
Optimizing Ultralytics YOLOv8 models for edge devices involves using techniques like pruning to reduce the model size, quantization to convert weights to lower precision, and knowledge distillation to train smaller models that mimic larger ones. These techniques ensure the model runs efficiently on devices with limited computational power. Tools like [TensorFlow Lite](../integrations/tflite.md) and [NVIDIA Jetson](./nvidia-jetson.md) are particularly useful for these optimizations. Learn more about these techniques in our section on [model optimization](#model-optimization-techniques).
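As a hedged example, lower-precision exports can be requested directly through the Ultralytics export API:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# FP16 TensorRT engine for NVIDIA edge devices such as Jetson
model.export(format="engine", half=True)

# INT8-quantized TFLite model for mobile and embedded targets
model.export(format="tflite", int8=True)
```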
### What are the security considerations for deploying machine learning models with Ultralytics YOLOv8?
Security is paramount when deploying machine learning models. Ensure secure data transmission using encryption protocols like TLS. Implement robust access controls, including strong authentication and role-based access control (RBAC). Model obfuscation techniques, such as encrypting model parameters and serving models in a secure environment like a trusted execution environment (TEE), offer additional protection. For detailed practices, refer to [security considerations](#security-considerations-in-model-deployment).
### How do I choose the right deployment environment for my Ultralytics YOLOv8 model?
Selecting the optimal deployment environment for your Ultralytics YOLOv8 model depends on your application's specific needs. Cloud deployment offers scalability and ease of access, making it ideal for applications with high data volumes. Edge deployment is best for low-latency applications requiring real-time responses, using tools like [TensorFlow Lite](../integrations/tflite.md). Local deployment suits scenarios needing stringent data privacy and control. For a comprehensive overview of each environment, check out our section on [choosing a deployment environment](#choosing-a-deployment-environment).
@ -39,7 +39,7 @@ Let's focus on two specific mAP metrics:
- *mAP@.5:* Measures the average precision at a single IoU (Intersection over Union) threshold of 0.5. This metric checks if the model can correctly find objects with a looser accuracy requirement. It focuses on whether the object is roughly in the right place, not needing perfect placement. It helps see if the model is generally good at spotting objects.
- *mAP@.5:.95:* Averages the mAP values calculated at multiple IoU thresholds, from 0.5 to 0.95 in 0.05 increments. This metric is more detailed and strict. It gives a fuller picture of how accurately the model can find objects at different levels of strictness and is especially useful for applications that need precise object detection.
Other mAP metrics include mAP@0.75, which uses a stricter IoU threshold of 0.75, and mAP@small, medium, and large, which evaluate precision across objects of different sizes.
<p align="center">
<img width="100%" src="https://a.storyblok.com/f/139616/1200x800/913f78e511/ways-to-improve-mean-average-precision.webp" alt="Mean Average Precision Overview">
@ -103,7 +103,7 @@ If you want to get a deeper understanding of your YOLOv8 model's performance, yo
The results object also includes speed metrics like preprocess time, inference time, loss, and postprocess time. By analyzing these metrics, you can fine-tune and optimize your YOLOv8 model for better performance, making it more effective for your specific use case.
## How Does Fine-Tuning Work?
Fine-tuning involves taking a pre-trained model and adjusting its parameters to improve performance on a specific task or dataset. The process, also known as model retraining, allows the model to better understand and predict outcomes for the specific data it will encounter in real-world applications. You can retrain your model based on your model evaluation to achieve optimal results.
@ -137,3 +137,52 @@ Sharing your ideas and questions with other computer vision enthusiasts can insp
## Final Thoughts
Evaluating and fine-tuning your computer vision model are important steps for successful model deployment. These steps help make sure that your model is accurate, efficient, and suited to your overall application. The key to training the best model possible is continuous experimentation and learning. Don't hesitate to tweak parameters, try new techniques, and explore different datasets. Keep experimenting and pushing the boundaries of what's possible!
## FAQ
### What are the key metrics for evaluating YOLOv8 model performance?
To evaluate YOLOv8 model performance, important metrics include Confidence Score, Intersection over Union (IoU), and Mean Average Precision (mAP). Confidence Score measures the model's certainty for each detected object class. IoU evaluates how well the predicted bounding box overlaps with the ground truth. Mean Average Precision (mAP) aggregates precision scores across classes, with mAP@.5 and mAP@.5:.95 being two common types for varying IoU thresholds. Learn more about these metrics in our [YOLOv8 performance metrics guide](./yolo-performance-metrics.md).
### How can I fine-tune a pre-trained YOLOv8 model for my specific dataset?
Fine-tuning a pre-trained YOLOv8 model involves adjusting its parameters to improve performance on a specific task or dataset. Start by evaluating your model with performance metrics, then adjust the training configuration, for example setting the `warmup_epochs` parameter to 0 so your chosen initial learning rate applies from the first epoch, and enabling `rect=True` to handle varied image sizes effectively. For more detailed guidance, refer to our section on [fine-tuning YOLOv8 models](#how-does-fine-tuning-work).
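A minimal fine-tuning sketch, assuming a hypothetical `custom.yaml` dataset file:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

results = model.train(
    data="custom.yaml",  # hypothetical dataset config
    epochs=50,
    warmup_epochs=0,  # skip warm-up so the configured learning rate applies immediately
    rect=True,  # rectangular batches for datasets with varied aspect ratios
)
```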
### How can I handle variable image sizes when evaluating my YOLOv8 model?
To handle variable image sizes during evaluation, use the `rect=True` parameter in YOLOv8, which adjusts the batch shape based on image aspect ratios rather than forcing a fixed square. The `imgsz` parameter sets the maximum dimension for image resizing, defaulting to 640. Adjust `imgsz` to suit your dataset and GPU memory. For more details, visit our [section on handling variable image sizes](#handling-variable-image-sizes).
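For example, a validation run with these settings might look like the following sketch:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# rect=True pads each batch to a common rectangular shape instead of a fixed square
metrics = model.val(data="coco8.yaml", imgsz=640, rect=True)
```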
### What practical steps can I take to improve mean average precision for my YOLOv8 model?
Improving mean average precision (mAP) for a YOLOv8 model involves several steps:
1. **Tuning Hyperparameters**: Experiment with different learning rates, batch sizes, and image augmentations.
2. **Data Augmentation**: Use techniques like Mosaic and MixUp to create diverse training samples.
3. **Image Tiling**: Split larger images into smaller tiles to improve detection accuracy for small objects.
Refer to our detailed guide on [model fine-tuning](#tips-for-fine-tuning-your-model) for specific strategies.
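As an illustrative starting point, several of these knobs are exposed directly as training arguments (the values below are placeholders to tune, not recommendations):

```python
from ultralytics import YOLO

model = YOLO("yolov8s.pt")

model.train(
    data="custom.yaml",  # hypothetical dataset config
    epochs=100,
    lr0=0.005,  # initial learning rate
    batch=32,
    mosaic=1.0,  # Mosaic augmentation strength
    mixup=0.1,  # MixUp augmentation probability
)
```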
### How do I access YOLOv8 model evaluation metrics in Python?
You can access YOLOv8 model evaluation metrics using Python with the following steps:
!!! Example "Usage"
=== "Python"
```python
from ultralytics import YOLO
# Load the model
model = YOLO("yolov8n.pt")
# Run the evaluation
results = model.val(data="coco8.yaml")
# Print specific metrics
print("Class indices with average precision:", results.ap_class_index)
print("Average precision for all classes:", results.box.all_ap)
print("Mean average precision at IoU=0.50:", results.box.map50)
print("Mean recall:", results.box.mr)
```
Analyzing these metrics helps fine-tune and optimize your YOLOv8 model. For a deeper dive, check out our guide on [YOLOv8 metrics](../modes/val.md).
@ -10,7 +10,7 @@ keywords: Overfitting and Underfitting in Machine Learning, Model Testing, Data
After [training](./model-training-tips.md) and [evaluating](./model-evaluation-insights.md) your model, it's time to test it. Model testing involves assessing how well it performs in real-world scenarios. Testing considers factors like accuracy, reliability, fairness, and how easy it is to understand the model's decisions. The goal is to make sure the model performs as intended, delivers the expected results, and fits into the [overall objective of your application](./defining-project-goals.md) or project.
Model testing is quite similar to model evaluation, but they are two distinct [steps in a computer vision project](./steps-of-a-cv-project.md). Model evaluation involves metrics and plots to assess the model's accuracy. On the other hand, model testing checks if the model's learned behavior is the same as expectations. In this guide, we'll explore strategies for testing your computer vision models.
## Model Testing Vs. Model Evaluation
@ -140,3 +140,61 @@ These resources will help you navigate challenges and remain updated on the late
## In Summary
Building trustworthy computer vision models relies on rigorous model testing. By testing the model with previously unseen data, we can analyze it and spot weaknesses like overfitting and data leakage. Addressing these issues before deployment helps the model perform well in real-world applications. It's important to remember that model testing is just as crucial as model evaluation in guaranteeing the model's long-term success and effectiveness.
## FAQ
### What are the key differences between model evaluation and model testing in computer vision?
Model evaluation and model testing are distinct steps in a computer vision project. Model evaluation involves using a labeled dataset to compute metrics such as accuracy, precision, recall, and F1 score, providing insights into the model's performance with a controlled dataset. Model testing, on the other hand, assesses the model's performance in real-world scenarios by applying it to new, unseen data, ensuring the model's learned behavior aligns with expectations outside the evaluation environment. For a detailed guide, refer to the [steps in a computer vision project](./steps-of-a-cv-project.md).
### How can I test my Ultralytics YOLOv8 model on multiple images?
To test your Ultralytics YOLOv8 model on multiple images, you can use the [prediction mode](../modes/predict.md). This mode allows you to run the model on new, unseen data to generate predictions without providing detailed metrics. This is ideal for real-world performance testing on larger image sets stored in a folder. For evaluating performance metrics, use the [validation mode](../modes/val.md) instead.
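A minimal sketch of batch prediction over a folder of images (the folder path is a placeholder):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Run inference on every image in a folder and save annotated copies
results = model.predict(source="path/to/test_images", save=True, conf=0.25)
for r in results:
    print(r.path, len(r.boxes))  # image path and number of detections
```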
### What should I do if my computer vision model shows signs of overfitting or underfitting?
To address **overfitting**:
- Use regularization techniques like dropout.
- Increase the size of the training dataset.
- Simplify the model architecture.
To address **underfitting**:
- Use a more complex model.
- Provide more relevant features.
- Increase training iterations or epochs.
Review misclassified images, perform thorough error analysis, and regularly track performance metrics to maintain a balance. For more information on these concepts, explore our section on [Overfitting and Underfitting](#overfitting-and-underfitting-in-machine-learning).
### How can I detect and avoid data leakage in computer vision?
To detect data leakage:
- Verify that the testing performance is not unusually high.
- Check feature importance for unexpected insights.
- Intuitively review model decisions.
- Ensure correct data division before processing.
To avoid data leakage:
- Use diverse datasets with various environments.
- Carefully review data for hidden biases.
- Ensure no overlapping information between training and testing sets.
For detailed strategies on preventing data leakage, refer to our section on [Data Leakage in Computer Vision](#data-leakage-in-computer-vision-and-how-to-avoid-it).
### What steps should I take after testing my computer vision model?
Post-testing, if the model performance meets the project goals, proceed with deployment. If the results are unsatisfactory, consider:
- Error analysis.
- Gathering more diverse and high-quality data.
- Hyperparameter tuning.
- Retraining the model.
Gain insights from the [Model Testing Vs. Model Evaluation](#model-testing-vs-model-evaluation) section to refine and enhance model effectiveness in real-world applications.
### How do I run YOLOv8 predictions without custom training?
You can run predictions using the pre-trained YOLOv8 model on your dataset to see if it suits your application needs. Utilize the [prediction mode](../modes/predict.md) to get a quick sense of performance results without diving into custom training.
@ -35,7 +35,7 @@ There are a few different aspects to think about when you are planning on using
When training models on large datasets, efficiently utilizing your GPU is key. Batch size is an important factor. It is the number of data samples that a machine learning model processes in a single training iteration.
Using the maximum batch size supported by your GPU, you can fully take advantage of its capabilities and reduce the time model training takes. However, you want to avoid running out of GPU memory. If you encounter memory errors, reduce the batch size incrementally until the model trains smoothly.
With respect to YOLOv8, you can set the `batch_size` parameter in the [training configuration](../modes/train.md) to match your GPU capacity. Also, setting `batch=-1` in your training script will automatically determine the batch size that can be efficiently processed based on your device's capabilities. By fine-tuning the batch size, you can make the most of your GPU resources and improve the overall training process.
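For example, letting YOLOv8 pick the batch size automatically looks like this:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# batch=-1 asks Ultralytics to find the largest batch size that fits in GPU memory
model.train(data="coco8.yaml", epochs=10, batch=-1)
```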
### Subset Training
@ -73,7 +73,7 @@ Mixed precision training is straightforward when working with YOLOv8. You can us
### Pre-trained Weights
Using pretrained weights is a smart way to speed up your model's training process. Pretrained weights come from models already trained on large datasets, giving your model a head start. Transfer learning adapts pretrained models to new, related tasks. Fine-tuning a pre-trained model involves starting with these weights and then continuing training on your specific dataset. This method of training results in faster training times and often better performance because the model starts with a solid understanding of basic features.
The `pretrained` parameter makes transfer learning easy with YOLOv8. Setting `pretrained=True` will use default pre-trained weights, or you can specify a path to a custom pre-trained model. Using pre-trained weights and transfer learning effectively boosts your model's capabilities and reduces training costs.
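A short sketch of both options (the custom weights path and dataset file are placeholders):

```python
from ultralytics import YOLO

# Use the default pretrained weights as the starting point
model = YOLO("yolov8n.pt")
model.train(data="custom.yaml", epochs=50, pretrained=True)

# Or load weights from a previous run of your own
model = YOLO("path/to/custom_weights.pt")
model.train(data="custom.yaml", epochs=50)
```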
@ -96,7 +96,7 @@ However, the ideal number of epochs can vary based on your dataset's size and pr
Early stopping is a valuable technique for optimizing model training. By monitoring validation performance, you can halt training once the model stops improving. You can save computational resources and prevent overfitting.
The process involves setting a patience parameter that determines how many epochs to wait for an improvement in validation metrics before stopping training. If the model's performance does not improve within these epochs, training is stopped to avoid wasting time and resources.
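For instance, with a patience of 50 epochs:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Training stops early if validation metrics do not improve for 50 consecutive epochs
model.train(data="coco8.yaml", epochs=300, patience=50)
```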
<p align="center">
<img width="100%" src="https://miro.medium.com/v2/resize:fit:1400/format:webp/1*06sTlOC3AYeZAjzUDwbaMw@2x.jpeg" alt="Early Stopping Overview">
@ -367,3 +367,31 @@ When using NVIDIA Jetson, there are a couple of best practices to follow in orde
## Next Steps
Congratulations on successfully setting up YOLOv8 on your NVIDIA Jetson! For further learning and support, explore more guides at [Ultralytics YOLOv8 Docs](../index.md)!
## FAQ
### How do I deploy Ultralytics YOLOv8 on NVIDIA Jetson devices?
Deploying Ultralytics YOLOv8 on NVIDIA Jetson devices is a straightforward process. First, flash your Jetson device with the NVIDIA JetPack SDK. Then, either use a pre-built Docker image for quick setup or manually install the required packages. Detailed steps for each approach can be found in sections [Start with Docker](#start-with-docker) and [Start without Docker](#start-without-docker).
### What performance benchmarks can I expect from YOLOv8 models on NVIDIA Jetson devices?
YOLOv8 models have been benchmarked on various NVIDIA Jetson devices showing significant performance improvements. For example, the TensorRT format delivers the best inference performance. The table in the [Detailed Comparison Table](#detailed-comparison-table) section provides a comprehensive view of performance metrics like mAP50-95 and inference time across different model formats.
### Why should I use TensorRT for deploying YOLOv8 on NVIDIA Jetson?
TensorRT is highly recommended for deploying YOLOv8 models on NVIDIA Jetson due to its optimal performance. It accelerates inference by leveraging the Jetson's GPU capabilities, ensuring maximum efficiency and speed. Learn more about how to convert to TensorRT and run inference in the [Use TensorRT on NVIDIA Jetson](#use-tensorrt-on-nvidia-jetson) section.
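A minimal conversion-and-inference sketch looks like this:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Export a TensorRT engine (FP16 halves memory use on Jetson-class GPUs)
model.export(format="engine", half=True)

# Load the exported engine and run inference
trt_model = YOLO("yolov8n.engine")
results = trt_model("https://ultralytics.com/images/bus.jpg")
```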
### How can I install PyTorch and Torchvision on NVIDIA Jetson?
To install PyTorch and Torchvision on NVIDIA Jetson, first uninstall any existing versions that may have been installed via pip. Then, manually install the compatible PyTorch and Torchvision versions for the Jetson's ARM64 architecture. Detailed instructions for this process are provided in the [Install PyTorch and Torchvision](#install-pytorch-and-torchvision) section.
### What are the best practices for maximizing performance on NVIDIA Jetson when using YOLOv8?
To maximize performance on NVIDIA Jetson with YOLOv8, follow these best practices:
1. Enable MAX Power Mode to utilize all CPU and GPU cores.
2. Enable Jetson Clocks to run all cores at their maximum frequency.
3. Install the Jetson Stats application for monitoring system metrics.
For commands and additional details, refer to the [Best Practices when using NVIDIA Jetson](#best-practices-when-using-nvidia-jetson) section.
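For reference, these steps typically map to the following commands; verify the exact power-mode index for your Jetson model before running them:

```bash
# 1. Enable MAX power mode (all CPU and GPU cores active)
sudo nvpmodel -m 0

# 2. Lock all cores to their maximum frequency
sudo jetson_clocks

# 3. Install jetson-stats, reboot, then launch the monitoring UI
sudo pip3 install jetson-stats
sudo reboot
jtop
```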
@ -99,3 +99,57 @@ Object blurring with [Ultralytics YOLOv8](https://github.com/ultralytics/ultraly
| `classes` | `list[int]` | `None` | filter results by class, i.e. classes=0, or classes=[0,2,3] |
| `retina_masks` | `bool` | `False` | use high-resolution segmentation masks |
| `embed` | `list[int]` | `None` | return feature vectors/embeddings from given layers |
## FAQ
### What is object blurring with Ultralytics YOLOv8?
Object blurring with [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics/) involves automatically detecting and applying a blurring effect to specific objects in images or videos. This technique enhances privacy by concealing sensitive information while retaining relevant visual data. YOLOv8's real-time processing capabilities make it suitable for applications requiring immediate privacy protection and selective focus adjustments.
### How can I implement real-time object blurring using YOLOv8?
To implement real-time object blurring with YOLOv8, follow the provided Python example. This involves using YOLOv8 for object detection and OpenCV for applying the blur effect. Here's a simplified version:
```python
import cv2

from ultralytics import YOLO

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        break

    results = model.predict(im0, show=False)
    for box in results[0].boxes.xyxy.cpu().tolist():
        obj = im0[int(box[1]) : int(box[3]), int(box[0]) : int(box[2])]
        im0[int(box[1]) : int(box[3]), int(box[0]) : int(box[2])] = cv2.blur(obj, (50, 50))

    cv2.imshow("YOLOv8 Blurring", im0)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```
### What are the benefits of using Ultralytics YOLOv8 for object blurring?
Ultralytics YOLOv8 offers several advantages for object blurring:
- **Privacy Protection**: Effectively obscure sensitive or identifiable information.
- **Selective Focus**: Target specific objects for blurring, maintaining essential visual content.
- **Real-time Processing**: Execute object blurring efficiently in dynamic environments, suitable for instant privacy enhancements.
For more detailed applications, check the [advantages of object blurring section](#advantages-of-object-blurring).
### Can I use Ultralytics YOLOv8 to blur faces in a video for privacy reasons?
Yes, Ultralytics YOLOv8 can be configured to detect and blur faces in videos to protect privacy. By training or using a pre-trained model to specifically recognize faces, the detection results can be processed with OpenCV to apply a blur effect. Refer to our guide on [object detection with YOLOv8](https://docs.ultralytics.com/models/yolov8) and modify the code to target face detection.
### How does YOLOv8 compare to other object detection models like Faster R-CNN for object blurring?
Ultralytics YOLOv8 typically outperforms models like Faster R-CNN in terms of speed, making it more suitable for real-time applications. While both models offer accurate detection, YOLOv8's architecture is optimized for rapid inference, which is critical for tasks like real-time object blurring. Learn more about the technical differences and performance metrics in our [YOLOv8 documentation](https://docs.ultralytics.com/models/yolov8).
@ -4,7 +4,7 @@ description: Learn to accurately identify and count objects in real-time using U
keywords: object counting, YOLOv8, Ultralytics, real-time object detection, AI, deep learning, object tracking, crowd analysis, surveillance, resource optimization
---
# Object Counting using Ultralytics YOLOv8
## What is Object Counting?
@ -253,3 +253,125 @@ Here's a table with the `ObjectCounter` arguments:
| `iou` | `float` | `0.5` | IOU Threshold |
| `classes` | `list` | `None` | filter results by class, i.e. classes=0, or classes=[0,2,3] |
| `verbose` | `bool` | `True` | Display the object tracking results |
## FAQ
### How do I count objects in a video using Ultralytics YOLOv8?
To count objects in a video using Ultralytics YOLOv8, you can follow these steps:
1. Import the necessary libraries (`cv2`, `ultralytics`).
2. Load a pretrained YOLOv8 model.
3. Define the counting region (e.g., a polygon, line, etc.).
4. Set up the video capture and initialize the object counter.
5. Process each frame to track objects and count them within the defined region.
Here's a simple example for counting in a region:
```python
import cv2

from ultralytics import YOLO, solutions


def count_objects_in_region(video_path, output_video_path, model_path):
    """Count objects in a specific region within a video."""
    model = YOLO(model_path)
    cap = cv2.VideoCapture(video_path)
    assert cap.isOpened(), "Error reading video file"
    w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

    region_points = [(20, 400), (1080, 404), (1080, 360), (20, 360)]
    video_writer = cv2.VideoWriter(output_video_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    counter = solutions.ObjectCounter(
        view_img=True, reg_pts=region_points, classes_names=model.names, draw_tracks=True, line_thickness=2
    )

    while cap.isOpened():
        success, im0 = cap.read()
        if not success:
            print("Video frame is empty or video processing has been successfully completed.")
            break
        tracks = model.track(im0, persist=True, show=False)
        im0 = counter.start_counting(im0, tracks)
        video_writer.write(im0)

    cap.release()
    video_writer.release()
    cv2.destroyAllWindows()


count_objects_in_region("path/to/video.mp4", "output_video.avi", "yolov8n.pt")
```
Explore more configurations and options in the [Object Counting](#object-counting-using-ultralytics-yolov8) section.
### What are the advantages of using Ultralytics YOLOv8 for object counting?
Using Ultralytics YOLOv8 for object counting offers several advantages:
1. **Resource Optimization:** It facilitates efficient resource management by providing accurate counts, helping optimize resource allocation in industries like inventory management.
2. **Enhanced Security:** It enhances security and surveillance by accurately tracking and counting entities, aiding in proactive threat detection.
3. **Informed Decision-Making:** It offers valuable insights for decision-making, optimizing processes in domains like retail, traffic management, and more.
For real-world applications and code examples, visit the [Advantages of Object Counting](#advantages-of-object-counting) section.
### How can I count specific classes of objects using Ultralytics YOLOv8?
To count specific classes of objects using Ultralytics YOLOv8, you need to specify the classes you are interested in during the tracking phase. Below is a Python example:
```python
import cv2

from ultralytics import YOLO, solutions


def count_specific_classes(video_path, output_video_path, model_path, classes_to_count):
    """Count specific classes of objects in a video."""
    model = YOLO(model_path)
    cap = cv2.VideoCapture(video_path)
    assert cap.isOpened(), "Error reading video file"
    w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

    line_points = [(20, 400), (1080, 400)]
    video_writer = cv2.VideoWriter(output_video_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    counter = solutions.ObjectCounter(
        view_img=True, reg_pts=line_points, classes_names=model.names, draw_tracks=True, line_thickness=2
    )

    while cap.isOpened():
        success, im0 = cap.read()
        if not success:
            print("Video frame is empty or video processing has been successfully completed.")
            break
        tracks = model.track(im0, persist=True, show=False, classes=classes_to_count)
        im0 = counter.start_counting(im0, tracks)
        video_writer.write(im0)

    cap.release()
    video_writer.release()
    cv2.destroyAllWindows()


count_specific_classes("path/to/video.mp4", "output_specific_classes.avi", "yolov8n.pt", [0, 2])
```
In this example, `classes_to_count=[0, 2]`, which means it counts objects of class `0` and `2` (e.g., person and car).
### Why should I use YOLOv8 over other object detection models for real-time applications?
Ultralytics YOLOv8 provides several advantages over other object detection models like Faster R-CNN, SSD, and previous YOLO versions:
1. **Speed and Efficiency:** YOLOv8 offers real-time processing capabilities, making it ideal for applications requiring high-speed inference, such as surveillance and autonomous driving.
2. **Accuracy:** It provides state-of-the-art accuracy for object detection and tracking tasks, reducing the number of false positives and improving overall system reliability.
3. **Ease of Integration:** YOLOv8 offers seamless integration with various platforms and devices, including mobile and edge devices, which is crucial for modern AI applications.
4. **Flexibility:** Supports various tasks like object detection, segmentation, and tracking with configurable models to meet specific use-case requirements.
Check out Ultralytics [YOLOv8 Documentation](https://docs.ultralytics.com/models/yolov8) for a deeper dive into its features and performance comparisons.
### Can I use YOLOv8 for advanced applications like crowd analysis and traffic management?
Yes, Ultralytics YOLOv8 is perfectly suited for advanced applications like crowd analysis and traffic management due to its real-time detection capabilities, scalability, and integration flexibility. Its advanced features allow for high-accuracy object tracking, counting, and classification in dynamic environments. Example use cases include:
- **Crowd Analysis:** Monitor and manage large gatherings, ensuring safety and optimizing crowd flow.
- **Traffic Management:** Track and count vehicles, analyze traffic patterns, and manage congestion in real-time.
For more information and implementation details, refer to the guide on [Real World Applications](#real-world-applications) of object counting with YOLOv8.
@ -4,7 +4,7 @@ description: Learn how to crop and extract objects using Ultralytics YOLOv8 for
keywords: Ultralytics, YOLOv8, object cropping, object detection, image processing, video analysis, AI, machine learning
---
# Object Cropping using Ultralytics YOLOv8
## What is Object Cropping?
@ -111,3 +111,25 @@ Object cropping with [Ultralytics YOLOv8](https://github.com/ultralytics/ultraly
| `classes` | `list[int]` | `None` | Filters predictions to a set of class IDs. Only detections belonging to the specified classes will be returned. Useful for focusing on relevant objects in multi-class detection tasks. |
| `retina_masks` | `bool` | `False` | Uses high-resolution segmentation masks if available in the model. This can enhance mask quality for segmentation tasks, providing finer detail. |
| `embed` | `list[int]` | `None` | Specifies the layers from which to extract feature vectors or embeddings. Useful for downstream tasks like clustering or similarity search. |
## FAQ
### What is object cropping in Ultralytics YOLOv8 and how does it work?
Object cropping using [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) involves isolating and extracting specific objects from an image or video based on YOLOv8's detection capabilities. This process allows for focused analysis, reduced data volume, and enhanced precision by leveraging YOLOv8 to identify objects with high accuracy and crop them accordingly. For an in-depth tutorial, refer to the [object cropping example](#object-cropping-using-ultralytics-yolov8).
### Why should I use Ultralytics YOLOv8 for object cropping over other solutions?
Ultralytics YOLOv8 stands out due to its precision, speed, and ease of use. It allows detailed and accurate object detection and cropping, essential for [focused analysis](#advantages-of-object-cropping) and applications needing high data integrity. Moreover, YOLOv8 integrates seamlessly with tools like OpenVINO and TensorRT for deployments requiring real-time capabilities and optimization on diverse hardware. Explore the benefits in the [guide on model export](../modes/export.md).
### How can I reduce the data volume of my dataset using object cropping?
By using Ultralytics YOLOv8 to crop only relevant objects from your images or videos, you can significantly reduce the data size, making it more efficient for storage and processing. This process involves training the model to detect specific objects and then using the results to crop and save these portions only. For more information on exploiting Ultralytics YOLOv8's capabilities, visit our [quickstart guide](../quickstart.md).
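One lightweight way to do this is the `save_crop` prediction argument, sketched below; output locations may differ depending on your run settings:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# save_crop=True writes each detection as its own image, typically under 'runs/detect/predict*/crops'
model.predict(source="path/to/images", save_crop=True, classes=[0])  # e.g. keep only the 'person' class
```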
### Can I use Ultralytics YOLOv8 for real-time video analysis and object cropping?
Yes, Ultralytics YOLOv8 can process real-time video feeds to detect and crop objects dynamically. The model's high-speed inference capabilities make it ideal for real-time applications such as surveillance, sports analysis, and automated inspection systems. Check out the [tracking and prediction modes](../modes/predict.md) to understand how to implement real-time processing.
### What are the hardware requirements for efficiently running YOLOv8 for object cropping?
Ultralytics YOLOv8 is optimized for both CPU and GPU environments, but to achieve optimal performance, especially for real-time or high-volume inference, a dedicated GPU (e.g., NVIDIA Tesla, RTX series) is recommended. For deployment on lightweight devices, consider using CoreML for iOS or TFLite for Android. More details on supported devices and formats can be found in our [model deployment options](../guides/model-deployment-options.md).
@ -66,3 +66,63 @@ For more detailed technical information and the latest updates, refer to the [Op
---
Ensuring your models achieve optimal performance is not just about tweaking configurations; it's about understanding your application's needs and making informed decisions. Whether you're optimizing for real-time responses or maximizing throughput for large-scale processing, the combination of Ultralytics YOLO models and OpenVINO offers a powerful toolkit for developers to deploy high-performance AI solutions.
## FAQ
### How do I optimize Ultralytics YOLO models for low latency using OpenVINO?
Optimizing Ultralytics YOLO models for low latency involves several key strategies:
1. **Single Inference per Device:** Limit inferences to one at a time per device to minimize delays.
2. **Leveraging Sub-Devices:** Utilize devices like multi-socket CPUs or multi-tile GPUs which can handle multiple requests with minimal latency increase.
3. **OpenVINO Performance Hints:** Use OpenVINO's `ov::hint::PerformanceMode::LATENCY` during model compilation for simplified, device-agnostic tuning.
For more practical tips on optimizing latency, check out the [Latency Optimization section](#optimizing-for-latency) of our guide.
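For comparison, a latency-oriented compile can mirror the throughput configuration shown in the next answer. This sketch assumes a prior OpenVINO export of a YOLOv8 model at the indicated path:

```python
import openvino as ov
import openvino.properties.hint as hints

core = ov.Core()
ov_model = core.read_model("yolov8n_openvino_model/yolov8n.xml")  # assumed export path

# LATENCY hint prioritizes fast single-request response over total throughput
config = {hints.performance_mode: hints.PerformanceMode.LATENCY}
compiled_model = core.compile_model(ov_model, "CPU", config)
```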
### Why should I use OpenVINO for optimizing Ultralytics YOLO throughput?
OpenVINO enhances Ultralytics YOLO model throughput by maximizing device resource utilization without sacrificing performance. Key benefits include:
- **Performance Hints:** Simple, high-level performance tuning across devices.
- **Explicit Batching and Streams:** Fine-tuning for advanced performance.
- **Multi-Device Execution:** Automated inference load balancing, easing application-level management.
Example configuration:
```python
import openvino.properties.hint as hints
config = {hints.performance_mode: hints.PerformanceMode.THROUGHPUT}
compiled_model = core.compile_model(model, "GPU", config)
```
Learn more about throughput optimization in the [Throughput Optimization section](#optimizing-for-throughput) of our detailed guide.
### What is the best practice for reducing first-inference latency in OpenVINO?
To reduce first-inference latency, consider these practices:
1. **Model Caching:** Use model caching to decrease load and compile times.
2. **Model Mapping vs. Reading:** Use mapping (`ov::enable_mmap(true)`) by default but switch to reading (`ov::enable_mmap(false)`) if the model is on a removable or network drive.
3. **AUTO Device Selection:** Utilize AUTO mode to start with CPU inference and transition to an accelerator seamlessly.
For detailed strategies on managing first-inference latency, refer to the [Managing First-Inference Latency section](#managing-first-inference-latency).
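A short sketch combining these ideas (paths are placeholders):

```python
import openvino as ov

core = ov.Core()

# Cache compiled blobs so later startups skip recompilation
core.set_property({"CACHE_DIR": "./ov_cache"})

# Read the model (memory-mapped by default; consider disabling mmap for network or removable drives)
ov_model = core.read_model("yolov8n_openvino_model/yolov8n.xml")  # assumed export path

# AUTO begins inference on the CPU while an accelerator finishes compiling
compiled_model = core.compile_model(ov_model, "AUTO")
```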
### How do I balance optimizing for latency and throughput with Ultralytics YOLO and OpenVINO?
Balancing latency and throughput optimization requires understanding your application needs:
- **Latency Optimization:** Ideal for real-time applications requiring immediate responses (e.g., consumer-grade apps).
- **Throughput Optimization:** Best for scenarios with many concurrent inferences, maximizing resource use (e.g., large-scale deployments).
Using OpenVINO's high-level performance hints and multi-device modes can help strike the right balance. Choose the appropriate [OpenVINO Performance hints](https://docs.ultralytics.com/integrations/openvino#openvino-performance-hints) based on your specific requirements.
### Can I use Ultralytics YOLO models with other AI frameworks besides OpenVINO?
Yes, Ultralytics YOLO models are highly versatile and can be integrated with various AI frameworks. Options include:
- **TensorRT:** For NVIDIA GPU optimization, follow the [TensorRT integration guide](https://docs.ultralytics.com/integrations/tensorrt).
- **CoreML:** For Apple devices, refer to our [CoreML export instructions](https://docs.ultralytics.com/integrations/coreml).
- **TensorFlow.js:** For web and Node.js apps, see the [TF.js conversion guide](https://docs.ultralytics.com/integrations/tfjs).
Explore more integrations on the [Ultralytics Integrations page](https://docs.ultralytics.com/integrations).
@ -120,3 +120,37 @@ Parking management with [Ultralytics YOLOv8](https://github.com/ultralytics/ultr
| `iou` | `float` | `0.5` | IOU Threshold |
| `classes` | `list` | `None` | filter results by class, i.e. classes=0, or classes=[0,2,3] |
| `verbose` | `bool` | `True` | Display the object tracking results |
## FAQ
### How does Ultralytics YOLOv8 enhance parking management systems?
Ultralytics YOLOv8 greatly enhances parking management systems by providing **real-time vehicle detection** and monitoring. This results in optimized usage of parking spaces, reduced congestion, and improved safety through continuous surveillance. The [Parking Management System](https://github.com/ultralytics/ultralytics) enables efficient traffic flow, minimizing idle times and emissions in parking lots, thereby contributing to environmental sustainability. For further details, refer to the [parking management code workflow](#python-code-for-parking-management).
### What are the benefits of using Ultralytics YOLOv8 for smart parking?
Using Ultralytics YOLOv8 for smart parking yields numerous benefits:
- **Efficiency**: Optimizes the use of parking spaces and decreases congestion.
- **Safety and Security**: Enhances surveillance and ensures the safety of vehicles and pedestrians.
- **Environmental Impact**: Helps in reducing emissions by minimizing vehicle idle times. More details on the advantages can be seen [here](#advantages-of-parking-management-system).
### How can I define parking spaces using Ultralytics YOLOv8?
Defining parking spaces is straightforward with Ultralytics YOLOv8:
1. Capture a frame from a video or camera stream.
2. Use the provided code to launch a GUI for selecting an image and drawing polygons to define parking spaces.
3. Save the labeled data in JSON format for further processing. For comprehensive instructions, check the [selection of points](#selection-of-points) section.
### Can I customize the YOLOv8 model for specific parking management needs?
Yes, Ultralytics YOLOv8 allows customization for specific parking management needs. You can adjust parameters such as the **occupied and available region colors**, margins for text display, and much more. Utilizing the `ParkingManagement` class's [optional arguments](#optional-arguments-parkingmanagement), you can tailor the model to suit your particular requirements, ensuring maximum efficiency and effectiveness.
### What are some real-world applications of Ultralytics YOLOv8 in parking lot management?
Ultralytics YOLOv8 is utilized in various real-world applications for parking lot management, including:
- **Parking Space Detection**: Accurately identifying available and occupied spaces.
- **Surveillance**: Enhancing security through real-time monitoring.
- **Traffic Flow Management**: Reducing idle times and congestion with efficient traffic handling. Images showcasing these applications can be found in [real-world applications](#real-world-applications).
@ -140,3 +140,100 @@ Queue management using [Ultralytics YOLOv8](https://github.com/ultralytics/ultra
| `iou` | `float` | `0.5` | IOU Threshold |
| `classes` | `list` | `None` | filter results by class, i.e. classes=0, or classes=[0,2,3] |
| `verbose` | `bool` | `True` | Display the object tracking results |
## FAQ
### How can I use Ultralytics YOLOv8 for real-time queue management?
To use Ultralytics YOLOv8 for real-time queue management, you can follow these steps:
1. Load the YOLOv8 model with `YOLO("yolov8n.pt")`.
2. Capture the video feed using `cv2.VideoCapture`.
3. Define the region of interest (ROI) for queue management.
4. Process frames to detect objects and manage queues.
Here's a minimal example:
```python
import cv2

from ultralytics import YOLO, solutions

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video.mp4")

queue_region = [(20, 400), (1080, 404), (1080, 360), (20, 360)]

queue = solutions.QueueManager(
    classes_names=model.names,
    reg_pts=queue_region,
    line_thickness=3,
    fontsize=1.0,
    region_color=(255, 144, 31),
)

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        break

    tracks = model.track(im0, show=False, persist=True, verbose=False)
    out = queue.process_queue(im0, tracks)

    cv2.imshow("Queue Management", im0)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```
Leveraging Ultralytics [HUB](https://docs.ultralytics.com/hub/) can streamline this process by providing a user-friendly platform for deploying and managing your queue management solution.
### What are the key advantages of using Ultralytics YOLOv8 for queue management?
Using Ultralytics YOLOv8 for queue management offers several benefits:
- **Plummeting Waiting Times:** Efficiently organizes queues, reducing customer wait times and boosting satisfaction.
- **Enhancing Efficiency:** Analyzes queue data to optimize staff deployment and operations, thereby reducing costs.
- **Real-time Alerts:** Provides real-time notifications for long queues, enabling quick intervention.
- **Scalability:** Easily scalable across different environments like retail, airports, and healthcare.
For more details, explore our [Queue Management](https://docs.ultralytics.com/reference/solutions/queue_management/) solutions.
### Why should I choose Ultralytics YOLOv8 over competitors like TensorFlow or Detectron2 for queue management?
Ultralytics YOLOv8 has several advantages over TensorFlow and Detectron2 for queue management:
- **Real-time Performance:** YOLOv8 is known for its real-time detection capabilities, offering faster processing speeds.
- **Ease of Use:** Ultralytics provides a user-friendly experience, from training to deployment, via [Ultralytics HUB](https://docs.ultralytics.com/hub/).
- **Pretrained Models:** Access to a range of pretrained models, minimizing the time needed for setup.
- **Community Support:** Extensive documentation and active community support make problem-solving easier.
Learn how to get started with [Ultralytics YOLO](https://docs.ultralytics.com/quickstart/).
### Can Ultralytics YOLOv8 handle multiple types of queues, such as in airports and retail?
Yes, Ultralytics YOLOv8 can manage various types of queues, including those in airports and retail environments. By configuring the QueueManager with specific regions and settings, YOLOv8 can adapt to different queue layouts and densities.
Example for airports:
```python
queue_region_airport = [(50, 600), (1200, 600), (1200, 550), (50, 550)]
queue_airport = solutions.QueueManager(
    classes_names=model.names,
    reg_pts=queue_region_airport,
    line_thickness=3,
    fontsize=1.0,
    region_color=(0, 255, 0),
)
```
For more information on diverse applications, check out our [Real World Applications](#real-world-applications) section.
### What are some real-world applications of Ultralytics YOLOv8 in queue management?
Ultralytics YOLOv8 is used in various real-world applications for queue management:
- **Retail:** Monitors checkout lines to reduce wait times and improve customer satisfaction.
- **Airports:** Manages queues at ticket counters and security checkpoints for a smoother passenger experience.
- **Healthcare:** Optimizes patient flow in clinics and hospitals.
- **Banks:** Enhances customer service by managing queues efficiently in banks.
Check our [blog on real-world queue management](https://www.ultralytics.com/blog/revolutionizing-queue-management-with-ultralytics-yolov8-and-openvino) to learn more.
@ -378,3 +378,124 @@ Congratulations on successfully setting up YOLO on your Raspberry Pi! For furthe
This guide was initially created by Daan Eeltink for Kashmir World Foundation, an organization dedicated to the use of YOLO for the conservation of endangered species. We acknowledge their pioneering work and educational focus in the realm of object detection technologies.
For more information about Kashmir World Foundation's activities, you can visit their [website](https://www.kashmirworldfoundation.org/).
## FAQ
### How do I set up Ultralytics YOLOv8 on a Raspberry Pi without using Docker?
To set up Ultralytics YOLOv8 on a Raspberry Pi without Docker, follow these steps:
1. Update the package list and install `pip`:
```bash
sudo apt update
sudo apt install python3-pip -y
pip install -U pip
```
2. Install the Ultralytics package with optional dependencies:
```bash
pip install ultralytics[export]
```
3. Reboot the device to apply changes:
```bash
sudo reboot
```
For detailed instructions, refer to the [Start without Docker](#start-without-docker) section.
### Why should I use Ultralytics YOLOv8's NCNN format on Raspberry Pi for AI tasks?
Ultralytics YOLOv8's NCNN format is highly optimized for mobile and embedded platforms, making it ideal for running AI tasks on Raspberry Pi devices. NCNN maximizes inference performance by leveraging ARM architecture, providing faster and more efficient processing compared to other formats. For more details on supported export options, visit the [Ultralytics documentation page on deployment options](../modes/export.md).
### How can I convert a YOLOv8 model to NCNN format for use on Raspberry Pi?
You can convert a PyTorch YOLOv8 model to NCNN format using either Python or CLI commands:
!!! Example
=== "Python"
```python
from ultralytics import YOLO
# Load a YOLOv8n PyTorch model
model = YOLO("yolov8n.pt")
# Export the model to NCNN format
model.export(format="ncnn") # creates 'yolov8n_ncnn_model'
# Load the exported NCNN model
ncnn_model = YOLO("yolov8n_ncnn_model")
# Run inference
results = ncnn_model("https://ultralytics.com/images/bus.jpg")
```
=== "CLI"
```bash
# Export a YOLOv8n PyTorch model to NCNN format
yolo export model=yolov8n.pt format=ncnn # creates 'yolov8n_ncnn_model'
# Run inference with the exported model
yolo predict model='yolov8n_ncnn_model' source='https://ultralytics.com/images/bus.jpg'
```
For more details, see the [Use NCNN on Raspberry Pi](#use-ncnn-on-raspberry-pi) section.
### What are the hardware differences between Raspberry Pi 4 and Raspberry Pi 5 relevant to running YOLOv8?
Key differences include:
- **CPU**: Raspberry Pi 4 uses Broadcom BCM2711, Cortex-A72 64-bit SoC, while Raspberry Pi 5 uses Broadcom BCM2712, Cortex-A76 64-bit SoC.
- **Max CPU Frequency**: Raspberry Pi 4 has a max frequency of 1.8GHz, whereas Raspberry Pi 5 reaches 2.4GHz.
- **Memory**: Raspberry Pi 4 offers up to 8GB of LPDDR4-3200 SDRAM, while Raspberry Pi 5 features LPDDR4X-4267 SDRAM, available in 4GB and 8GB variants.
These enhancements contribute to better performance benchmarks for YOLOv8 models on Raspberry Pi 5 compared to Raspberry Pi 4. Refer to the [Raspberry Pi Series Comparison](#raspberry-pi-series-comparison) table for more details.
### How can I set up a Raspberry Pi Camera Module to work with Ultralytics YOLOv8?
There are two methods to set up a Raspberry Pi Camera for YOLOv8 inference:
1. **Using `picamera2`**:
```python
import cv2
from picamera2 import Picamera2

from ultralytics import YOLO

picam2 = Picamera2()
picam2.preview_configuration.main.size = (1280, 720)
picam2.preview_configuration.main.format = "RGB888"
picam2.preview_configuration.align()
picam2.configure("preview")
picam2.start()

model = YOLO("yolov8n.pt")

while True:
    frame = picam2.capture_array()
    results = model(frame)
    annotated_frame = results[0].plot()
    cv2.imshow("Camera", annotated_frame)

    if cv2.waitKey(1) == ord("q"):
        break

cv2.destroyAllWindows()
```
2. **Using a TCP Stream**:
```bash
rpicam-vid -n -t 0 --inline --listen -o tcp://127.0.0.1:8888
```
```python
from ultralytics import YOLO
model = YOLO("yolov8n.pt")
results = model("tcp://127.0.0.1:8888")
```
For detailed setup instructions, visit the [Inference with Camera](#inference-with-camera) section.
@ -84,3 +84,50 @@ python yolov8_region_counter.py --source "path/to/video.mp4" --view-img
| `--classes` | `list` | `None` | Detect specific classes i.e. --classes 0 2 |
| `--region-thickness` | `int` | `2` | Region Box thickness |
| `--track-thickness` | `int` | `2` | Tracking line thickness |
## FAQ
### What is object counting in specified regions using Ultralytics YOLOv8?
Object counting in specified regions with [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) involves detecting and tallying the number of objects within defined areas using advanced computer vision. This precise method enhances efficiency and accuracy across various applications like manufacturing, surveillance, and traffic monitoring.
### How do I run the object counting script with Ultralytics YOLOv8?
Follow these steps to run object counting in Ultralytics YOLOv8:
1. Clone the Ultralytics repository and navigate to the directory:
```bash
git clone https://github.com/ultralytics/ultralytics
cd ultralytics/examples/YOLOv8-Region-Counter
```
2. Execute the region counting script:
```bash
python yolov8_region_counter.py --source "path/to/video.mp4" --save-img
```
For more options, visit the [Run Region Counting](#steps-to-run) section.
### Why should I use Ultralytics YOLOv8 for object counting in regions?
Using Ultralytics YOLOv8 for object counting in regions offers several advantages:
- **Precision and Accuracy:** Minimizes errors often seen in manual counting.
- **Efficiency Improvement:** Provides real-time results and streamlines processes.
- **Versatility and Application:** Applies to various domains, enhancing its utility.
Explore deeper benefits in the [Advantages](#advantages-of-object-counting-in-regions) section.
### Can the defined regions be adjusted during video playback?
Yes, with Ultralytics YOLOv8, regions can be interactively moved during video playback. Simply click and drag with the left mouse button to reposition the region. This feature enhances flexibility for dynamic environments. Learn more in the tip section for [movable regions](#step-2-run-region-counting-using-ultralytics-yolov8).
### What are some real-world applications of object counting in regions?
Object counting with Ultralytics YOLOv8 can be applied to numerous real-world scenarios:
- **Retail:** Counting people for foot traffic analysis.
- **Market Streets:** Crowd density management.
Explore more examples in the [Real World Applications](#real-world-applications) section.

View file

@ -512,3 +512,115 @@ for index, class_id in enumerate(classes):
<p align="center">
<img width="100%" src="https://github.com/ultralytics/ultralytics/assets/3855193/3caafc4a-0edd-4e5f-8dd1-37e30be70123" alt="Point Cloud Segmentation with Ultralytics ">
</p>
## FAQ
### What is the Robot Operating System (ROS)?
The [Robot Operating System (ROS)](https://www.ros.org/) is an open-source framework commonly used in robotics to help developers create robust robot applications. It provides a collection of [libraries and tools](https://www.ros.org/blog/ecosystem/) for building and interfacing with robotic systems, enabling easier development of complex applications. ROS supports communication between nodes using messages over [topics](https://wiki.ros.org/ROS/Tutorials/UnderstandingTopics) or [services](https://wiki.ros.org/ROS/Tutorials/UnderstandingServicesParams).
### How do I integrate Ultralytics YOLO with ROS for real-time object detection?
Integrating Ultralytics YOLO with ROS involves setting up a ROS environment and using YOLO for processing sensor data. Begin by installing the required dependencies like `ros_numpy` and Ultralytics YOLO:
```bash
pip install ros_numpy ultralytics
```
Next, create a ROS node and subscribe to an [image topic](../tasks/detect.md) to process the incoming data. Here is a minimal example:
```python
import ros_numpy
import rospy
from sensor_msgs.msg import Image
from ultralytics import YOLO
detection_model = YOLO("yolov8m.pt")
rospy.init_node("ultralytics")
det_image_pub = rospy.Publisher("/ultralytics/detection/image", Image, queue_size=5)
def callback(data):
array = ros_numpy.numpify(data)
det_result = detection_model(array)
det_annotated = det_result[0].plot(show=False)
det_image_pub.publish(ros_numpy.msgify(Image, det_annotated, encoding="rgb8"))
rospy.Subscriber("/camera/color/image_raw", Image, callback)
rospy.spin()
```
### What are ROS topics and how are they used in Ultralytics YOLO?
ROS topics facilitate communication between nodes in a ROS network by using a publish-subscribe model. A topic is a named channel that nodes use to send and receive messages asynchronously. In the context of Ultralytics YOLO, you can make a node subscribe to an image topic, process the images using YOLO for tasks like detection or segmentation, and publish outcomes to new topics.
For example, subscribe to a camera topic and process the incoming image for detection:
```python
rospy.Subscriber("/camera/color/image_raw", Image, callback)
```
### Why use depth images with Ultralytics YOLO in ROS?
Depth images in ROS, represented by `sensor_msgs/Image`, provide the distance of objects from the camera, crucial for tasks like obstacle avoidance, 3D mapping, and localization. By [using depth information](https://en.wikipedia.org/wiki/Depth_map) along with RGB images, robots can better understand their 3D environment.
With YOLO, you can extract segmentation masks from RGB images and apply these masks to depth images to obtain precise 3D object information, improving the robot's ability to navigate and interact with its surroundings.
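A condensed sketch of that idea is shown below, assuming a depth topic that is registered and aligned to the RGB image and a mask resolution that matches the depth frame (the topic names are placeholders for your camera driver):

```python
import numpy as np
import ros_numpy
import rospy
from sensor_msgs.msg import Image
from ultralytics import YOLO

rospy.init_node("ultralytics_depth")
segmentation_model = YOLO("yolov8m-seg.pt")


def callback(data):
    """Print an approximate distance for every segmented object in the frame."""
    depth = ros_numpy.numpify(data)  # depth image assumed aligned with the RGB frame
    rgb = ros_numpy.numpify(rospy.wait_for_message("/camera/color/image_raw", Image))
    result = segmentation_model(rgb)

    for index, cls in enumerate(result[0].boxes.cls):
        mask = result[0].masks.data.cpu().numpy()[index, :, :].astype(bool)
        obj_depth = depth[mask]  # depth values covered by this object's mask
        valid = obj_depth[np.isfinite(obj_depth) & (obj_depth > 0)]
        if valid.size:
            print(f"{segmentation_model.names[int(cls)]}: ~{valid.mean():.2f} (depth units)")


rospy.Subscriber("/camera/depth/image_raw", Image, callback)
rospy.spin()
```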
### How can I visualize 3D point clouds with YOLO in ROS?
To visualize 3D point clouds in ROS with YOLO:
1. Convert `sensor_msgs/PointCloud2` messages to numpy arrays.
2. Use YOLO to segment RGB images.
3. Apply the segmentation mask to the point cloud.
Here's an example using Open3D for visualization:
```python
import sys
import numpy as np
import open3d as o3d
import ros_numpy
import rospy
from sensor_msgs.msg import PointCloud2
from ultralytics import YOLO
rospy.init_node("ultralytics")
segmentation_model = YOLO("yolov8m-seg.pt")
def pointcloud2_to_array(pointcloud2):
pc_array = ros_numpy.point_cloud2.pointcloud2_to_array(pointcloud2)
split = ros_numpy.point_cloud2.split_rgb_field(pc_array)
rgb = np.stack([split["b"], split["g"], split["r"]], axis=2)
xyz = ros_numpy.point_cloud2.get_xyz_points(pc_array, remove_nans=False)
xyz = np.array(xyz).reshape((pointcloud2.height, pointcloud2.width, 3))
return xyz, rgb
ros_cloud = rospy.wait_for_message("/camera/depth/points", PointCloud2)
xyz, rgb = pointcloud2_to_array(ros_cloud)
result = segmentation_model(rgb)
if not len(result[0].boxes.cls):
print("No objects detected")
sys.exit()
classes = result[0].boxes.cls.cpu().numpy().astype(int)
for index, class_id in enumerate(classes):
mask = result[0].masks.data.cpu().numpy()[index, :, :].astype(int)
mask_expanded = np.stack([mask, mask, mask], axis=2)
obj_rgb = rgb * mask_expanded
obj_xyz = xyz * mask_expanded
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(obj_xyz.reshape((-1, 3)))
pcd.colors = o3d.utility.Vector3dVector(obj_rgb.reshape((-1, 3)) / 255)
o3d.visualization.draw_geometries([pcd])
```
This approach provides a 3D visualization of segmented objects, useful for tasks like navigation and manipulation.

View file

@ -203,3 +203,93 @@ If you use SAHI in your research or development work, please cite the original S
```
We extend our thanks to the SAHI research group for creating and maintaining this invaluable resource for the computer vision community. For more information about SAHI and its creators, visit the [SAHI GitHub repository](https://github.com/obss/sahi).
## FAQ
### How can I integrate YOLOv8 with SAHI for sliced inference in object detection?
Integrating Ultralytics YOLOv8 with SAHI (Slicing Aided Hyper Inference) for sliced inference optimizes your object detection tasks on high-resolution images by partitioning them into manageable slices. This approach improves memory usage and ensures high detection accuracy. To get started, you need to install the ultralytics and sahi libraries:
```bash
pip install -U ultralytics sahi
```
Then, download a YOLOv8 model and test images:
```python
from sahi.utils.file import download_from_url
from sahi.utils.yolov8 import download_yolov8s_model
# Download YOLOv8 model
yolov8_model_path = "models/yolov8s.pt"
download_yolov8s_model(yolov8_model_path)
# Download test images
download_from_url(
"https://raw.githubusercontent.com/obss/sahi/main/demo/demo_data/small-vehicles1.jpeg",
"demo_data/small-vehicles1.jpeg",
)
```
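With the weights and demo image downloaded, sliced inference itself only takes a few lines. Here is a minimal sketch, assuming the paths from the snippet above and illustrative slice sizes that you would tune for your own data:

```python
from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction

# Wrap the downloaded YOLOv8 weights in a SAHI detection model
detection_model = AutoDetectionModel.from_pretrained(
    model_type="yolov8",
    model_path=yolov8_model_path,
    confidence_threshold=0.3,
    device="cpu",  # or "cuda:0"
)

# Slice the image, run detection on each slice, and merge the overlapping results
result = get_sliced_prediction(
    "demo_data/small-vehicles1.jpeg",
    detection_model,
    slice_height=256,
    slice_width=256,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)
```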
For more detailed instructions, refer to our [Sliced Inference guide](#sliced-inference-with-yolov8).
### Why should I use SAHI with YOLOv8 for object detection on large images?
Using SAHI with Ultralytics YOLOv8 for object detection on large images offers several benefits:
- **Reduced Computational Burden**: Smaller slices are faster to process and consume less memory, making it feasible to run high-quality detections on hardware with limited resources.
- **Maintained Detection Accuracy**: SAHI uses intelligent algorithms to merge overlapping boxes, preserving the detection quality.
- **Enhanced Scalability**: By scaling object detection tasks across different image sizes and resolutions, SAHI becomes ideal for various applications, such as satellite imagery analysis and medical diagnostics.
Learn more about the [benefits of sliced inference](#benefits-of-sliced-inference) in our documentation.
### Can I visualize prediction results when using YOLOv8 with SAHI?
Yes, you can visualize prediction results when using YOLOv8 with SAHI. Here's how you can export and visualize the results:
```python
from IPython.display import Image
# `result` is the prediction object returned by SAHI's get_sliced_prediction or get_prediction
result.export_visuals(export_dir="demo_data/")
Image("demo_data/prediction_visual.png")
```
This command will save the visualized predictions to the specified directory and you can then load the image to view it in your notebook or application. For a detailed guide, check out the [Standard Inference section](#visualize-results).
### What features does SAHI offer for improving YOLOv8 object detection?
SAHI (Slicing Aided Hyper Inference) offers several features that complement Ultralytics YOLOv8 for object detection:
- **Seamless Integration**: SAHI easily integrates with YOLO models, requiring minimal code adjustments.
- **Resource Efficiency**: It partitions large images into smaller slices, which optimizes memory usage and speed.
- **High Accuracy**: By effectively merging overlapping detection boxes during the stitching process, SAHI maintains high detection accuracy.
For a deeper understanding, read about SAHI's [key features](#key-features-of-sahi).
### How do I handle large-scale inference projects using YOLOv8 and SAHI?
To handle large-scale inference projects using YOLOv8 and SAHI, follow these best practices:
1. **Install Required Libraries**: Ensure that you have the latest versions of ultralytics and sahi.
2. **Configure Sliced Inference**: Determine the optimal slice dimensions and overlap ratios for your specific project.
3. **Run Batch Predictions**: Use SAHI's capabilities to perform batch predictions on a directory of images, which improves efficiency.
Example for batch prediction:
```python
from sahi.predict import predict
predict(
model_type="yolov8",
model_path="path/to/yolov8n.pt",
model_device="cpu", # or 'cuda:0'
model_confidence_threshold=0.4,
source="path/to/dir",
slice_height=256,
slice_width=256,
overlap_height_ratio=0.2,
overlap_width_ratio=0.2,
)
```
For more detailed steps, visit our section on [Batch Prediction](#batch-prediction).

View file

@ -176,3 +176,25 @@ That's it! When you execute the code, you'll receive a single notification on yo
#### Email Received Sample
<img width="256" src="https://github.com/RizwanMunawar/ultralytics/assets/62513924/db79ccc6-aabd-4566-a825-b34e679c90f9" alt="Email Received Sample">
## FAQ
### How does Ultralytics YOLOv8 improve the accuracy of a security alarm system?
Ultralytics YOLOv8 enhances security alarm systems by delivering high-accuracy, real-time object detection. Its advanced algorithms significantly reduce false positives, ensuring that the system only responds to genuine threats. This increased reliability can be seamlessly integrated with existing security infrastructure, upgrading the overall surveillance quality.
### Can I integrate Ultralytics YOLOv8 with my existing security infrastructure?
Yes, Ultralytics YOLOv8 can be seamlessly integrated with your existing security infrastructure. The system supports various modes and provides flexibility for customization, allowing you to enhance your existing setup with advanced object detection capabilities. For detailed instructions on integrating YOLOv8 in your projects, visit the [integration section](https://docs.ultralytics.com/integrations/).
### What are the storage requirements for running Ultralytics YOLOv8?
Running Ultralytics YOLOv8 on a standard setup typically requires around 5GB of free disk space. This includes space for storing the YOLOv8 model and any additional dependencies. For cloud-based solutions, Ultralytics HUB offers efficient project management and dataset handling, which can optimize storage needs. Learn more about the [Pro Plan](../hub/pro.md) for enhanced features including extended storage.
### What makes Ultralytics YOLOv8 different from other object detection models like Faster R-CNN or SSD?
Ultralytics YOLOv8 provides an edge over models like Faster R-CNN or SSD with its real-time detection capabilities and higher accuracy. Its unique architecture allows it to process images much faster without compromising on precision, making it ideal for time-sensitive applications like security alarm systems. For a comprehensive comparison of object detection models, you can explore our [guide](https://docs.ultralytics.com/models).
### How can I reduce the frequency of false positives in my security system using Ultralytics YOLOv8?
To reduce false positives, ensure your Ultralytics YOLOv8 model is adequately trained with a diverse and well-annotated dataset. Fine-tuning hyperparameters and regularly updating the model with new data can significantly improve detection accuracy. Detailed hyperparameter tuning techniques can be found in our [hyperparameter tuning guide](../guides/hyperparameter-tuning.md).

View file

@ -108,3 +108,86 @@ keywords: Ultralytics YOLOv8, speed estimation, object tracking, computer vision
| `iou` | `float` | `0.5` | IOU Threshold |
| `classes` | `list` | `None` | filter results by class, i.e. classes=0, or classes=[0,2,3] |
| `verbose` | `bool` | `True` | Display the object tracking results |
## FAQ
### How do I estimate object speed using Ultralytics YOLOv8?
Estimating object speed with Ultralytics YOLOv8 involves combining object detection and tracking techniques. First, you need to detect objects in each frame using the YOLOv8 model. Then, track these objects across frames to calculate their movement over time. Finally, use the distance traveled by the object between frames and the frame rate to estimate its speed.
**Example**:
```python
import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolov8n.pt")
names = model.model.names
cap = cv2.VideoCapture("path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
video_writer = cv2.VideoWriter("speed_estimation.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
# Initialize SpeedEstimator
speed_obj = solutions.SpeedEstimator(
reg_pts=[(0, 360), (1280, 360)],
names=names,
view_img=True,
)
while cap.isOpened():
success, im0 = cap.read()
if not success:
break
tracks = model.track(im0, persist=True, show=False)
im0 = speed_obj.estimate_speed(im0, tracks)
video_writer.write(im0)
cap.release()
video_writer.release()
cv2.destroyAllWindows()
```
For more details, refer to our [official blog post](https://www.ultralytics.com/blog/ultralytics-yolov8-for-speed-estimation-in-computer-vision-projects).
### What are the benefits of using Ultralytics YOLOv8 for speed estimation in traffic management?
Using Ultralytics YOLOv8 for speed estimation offers significant advantages in traffic management:
- **Enhanced Safety**: Accurately estimate vehicle speeds to detect over-speeding and improve road safety.
- **Real-Time Monitoring**: Benefit from YOLOv8's real-time object detection capability to monitor traffic flow and congestion effectively.
- **Scalability**: Deploy the model on various hardware setups, from edge devices to servers, ensuring flexible and scalable solutions for large-scale implementations.
For more applications, see [advantages of speed estimation](#advantages-of-speed-estimation).
### Can YOLOv8 be integrated with other AI frameworks like TensorFlow or PyTorch?
Yes, YOLOv8 can be integrated with other AI frameworks like TensorFlow and PyTorch. Ultralytics provides support for exporting YOLOv8 models to various formats like ONNX, TensorRT, and CoreML, ensuring smooth interoperability with other ML frameworks.
To export a YOLOv8 model to ONNX format:
```bash
yolo export model=yolov8n.pt format=onnx
```
Learn more about exporting models in our [guide on export](../modes/export.md).
### How accurate is the speed estimation using Ultralytics YOLOv8?
The accuracy of speed estimation using Ultralytics YOLOv8 depends on several factors, including the quality of the object tracking, the resolution and frame rate of the video, and environmental variables. While the speed estimator provides reliable estimates, it may not be 100% accurate due to variances in frame processing speed and object occlusion.
**Note**: Always account for a margin of error and validate the estimates against ground-truth data when possible.
For further accuracy improvement tips, check the [Arguments `SpeedEstimator` section](#arguments-speedestimator).
### Why choose Ultralytics YOLOv8 over other object detection models like TensorFlow Object Detection API?
Ultralytics YOLOv8 offers several advantages over other object detection models, such as the TensorFlow Object Detection API:
- **Real-Time Performance**: YOLOv8 is optimized for real-time detection, providing high speed and accuracy.
- **Ease of Use**: Designed with a user-friendly interface, YOLOv8 simplifies model training and deployment.
- **Versatility**: Supports multiple tasks, including object detection, segmentation, and pose estimation.
- **Community and Support**: YOLOv8 is backed by an active community and extensive documentation, ensuring developers have the resources they need.
For more information on the benefits of YOLOv8, explore our detailed [model page](../models/yolov8.md).

View file

@ -142,3 +142,126 @@ subprocess.call(f"docker kill {container_id}", shell=True)
---
By following the above steps, you can deploy and run Ultralytics YOLOv8 models efficiently on Triton Inference Server, providing a scalable and high-performance solution for deep learning inference tasks. If you face any issues or have further queries, refer to the [official Triton documentation](https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/index.html) or reach out to the Ultralytics community for support.
## FAQ
### How do I set up Ultralytics YOLOv8 with NVIDIA Triton Inference Server?
Setting up [Ultralytics YOLOv8](https://docs.ultralytics.com/models/yolov8) with [NVIDIA Triton Inference Server](https://developer.nvidia.com/nvidia-triton-inference-server) involves a few key steps:
1. **Export YOLOv8 to ONNX format**:
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load an official model
# Export the model to ONNX format
onnx_file = model.export(format="onnx", dynamic=True)
```
2. **Set up Triton Model Repository**:
```python
from pathlib import Path
# Define paths
model_name = "yolo"
triton_repo_path = Path("tmp") / "triton_repo"
triton_model_path = triton_repo_path / model_name
# Create directories
(triton_model_path / "1").mkdir(parents=True, exist_ok=True)
Path(onnx_file).rename(triton_model_path / "1" / "model.onnx")
(triton_model_path / "config.pbtxt").touch()
```
3. **Run the Triton Server**:
```python
import contextlib
import subprocess
import time
from tritonclient.http import InferenceServerClient
# Define image https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tritonserver
tag = "nvcr.io/nvidia/tritonserver:23.09-py3"
subprocess.call(f"docker pull {tag}", shell=True)
container_id = (
subprocess.check_output(
f"docker run -d --rm -v {triton_repo_path}/models -p 8000:8000 {tag} tritonserver --model-repository=/models",
shell=True,
)
.decode("utf-8")
.strip()
)
triton_client = InferenceServerClient(url="localhost:8000", verbose=False, ssl=False)
for _ in range(10):
with contextlib.suppress(Exception):
assert triton_client.is_model_ready(model_name)
break
time.sleep(1)
```
This setup can help you efficiently deploy YOLOv8 models at scale on Triton Inference Server for high-performance AI model inference.
### What benefits does using Ultralytics YOLOv8 with NVIDIA Triton Inference Server offer?
Integrating [Ultralytics YOLOv8](../models/yolov8.md) with [NVIDIA Triton Inference Server](https://developer.nvidia.com/nvidia-triton-inference-server) provides several advantages:
- **Scalable AI Inference**: Triton allows serving multiple models from a single server instance, supporting dynamic model loading and unloading, making it highly scalable for diverse AI workloads.
- **High Performance**: Optimized for NVIDIA GPUs, Triton Inference Server ensures high-speed inference operations, perfect for real-time applications such as object detection.
- **Ensemble and Model Versioning**: Triton's ensemble mode enables combining multiple models to improve results, and its model versioning supports A/B testing and rolling updates.
For detailed instructions on setting up and running YOLOv8 with Triton, you can refer to the [setup guide](#setting-up-triton-model-repository).
### Why should I export my YOLOv8 model to ONNX format before using Triton Inference Server?
Using ONNX (Open Neural Network Exchange) format for your [Ultralytics YOLOv8](../models/yolov8.md) model before deploying it on [NVIDIA Triton Inference Server](https://developer.nvidia.com/nvidia-triton-inference-server) offers several key benefits:
- **Interoperability**: ONNX format supports transfer between different deep learning frameworks (such as PyTorch, TensorFlow), ensuring broader compatibility.
- **Optimization**: Many deployment environments, including Triton, optimize for ONNX, enabling faster inference and better performance.
- **Ease of Deployment**: ONNX is widely supported across frameworks and platforms, simplifying the deployment process in various operating systems and hardware configurations.
To export your model, use:
```python
from ultralytics import YOLO
model = YOLO("yolov8n.pt")
onnx_file = model.export(format="onnx", dynamic=True)
```
You can follow the steps in the [exporting guide](../modes/export.md) to complete the process.
### Can I run inference using the Ultralytics YOLOv8 model on Triton Inference Server?
Yes, you can run inference using the [Ultralytics YOLOv8](../models/yolov8.md) model on [NVIDIA Triton Inference Server](https://developer.nvidia.com/nvidia-triton-inference-server). Once your model is set up in the Triton Model Repository and the server is running, you can load and run inference on your model as follows:
```python
from ultralytics import YOLO
# Load the Triton Server model
model = YOLO("http://localhost:8000/yolo", task="detect")
# Run inference on the server
results = model("path/to/image.jpg")
```
For an in-depth guide on setting up and running Triton Server with YOLOv8, refer to the [running triton inference server](#running-triton-inference-server) section.
### How does Ultralytics YOLOv8 compare to TensorFlow and PyTorch models for deployment?
[Ultralytics YOLOv8](https://docs.ultralytics.com/models/yolov8) offers several unique advantages compared to TensorFlow and PyTorch models for deployment:
- **Real-time Performance**: Optimized for real-time object detection tasks, YOLOv8 provides state-of-the-art accuracy and speed, making it ideal for applications requiring live video analytics.
- **Ease of Use**: YOLOv8 integrates seamlessly with Triton Inference Server and supports diverse export formats (ONNX, TensorRT, CoreML), making it flexible for various deployment scenarios.
- **Advanced Features**: YOLOv8 includes features like dynamic model loading, model versioning, and ensemble inference, which are crucial for scalable and reliable AI deployments.
For more details, compare the deployment options in the [model deployment guide](../modes/export.md).

View file

@ -139,3 +139,105 @@ w.draw(mem_file)
!!! tip
You may need to use `clear` to "erase" the view of the image in the terminal.
## FAQ
### How can I view YOLO inference results in a VSCode terminal on macOS or Linux?
To view YOLO inference results in a VSCode terminal on macOS or Linux, follow these steps:
1. Enable the necessary VSCode settings:
```yaml
"terminal.integrated.enableImages": true
"terminal.integrated.gpuAcceleration": "auto"
```
2. Install the sixel library:
```bash
pip install sixel
```
3. Load your YOLO model and run inference:
```python
from ultralytics import YOLO
model = YOLO("yolov8n.pt")
results = model.predict(source="path_to_image")
plot = results[0].plot()
```
4. Convert the inference result image to bytes and display it in the terminal:
```python
import io
import cv2
from sixel import SixelWriter
im_bytes = cv2.imencode(".png", plot)[1].tobytes()
mem_file = io.BytesIO(im_bytes)
SixelWriter().draw(mem_file)
```
For further details, visit the [predict mode](../modes/predict.md) page.
### Why does the sixel protocol only work on Linux and macOS?
The sixel protocol is currently only supported on Linux and macOS because these platforms have native terminal capabilities compatible with sixel graphics. Windows support for terminal graphics using sixel is still under development. For updates on Windows compatibility, check the [VSCode Issue status](https://github.com/microsoft/vscode/issues/198622) and [documentation](https://code.visualstudio.com/docs).
### What if I encounter issues with displaying images in the VSCode terminal?
If you encounter issues displaying images in the VSCode terminal using sixel:
1. Ensure the necessary settings in VSCode are enabled:
```yaml
"terminal.integrated.enableImages": true
"terminal.integrated.gpuAcceleration": "auto"
```
2. Verify the sixel library installation:
```bash
pip install sixel
```
3. Check your image data conversion and plotting code for errors. For example:
```python
import io
import cv2
from sixel import SixelWriter
im_bytes = cv2.imencode(".png", plot)[1].tobytes()
mem_file = io.BytesIO(im_bytes)
SixelWriter().draw(mem_file)
```
If problems persist, consult the [VSCode repository](https://github.com/microsoft/vscode), and visit the [plot method parameters](../modes/predict.md#plot-method-parameters) section for additional guidance.
### Can YOLO display video inference results in the terminal using sixel?
Displaying video inference results or animated GIF frames using sixel in the terminal is currently untested and may not be supported. We recommend starting with static images and verifying compatibility. Attempt video results at your own risk, keeping in mind performance constraints. For more information on plotting inference results, visit the [predict mode](../modes/predict.md) page.
### How can I troubleshoot issues with the `python-sixel` library?
To troubleshoot issues with the `python-sixel` library:
1. Ensure the library is correctly installed in your virtual environment:
```bash
pip install sixel
```
2. Verify that you have the necessary Python and system dependencies.
3. Refer to the [python-sixel GitHub repository](https://github.com/lubosz/python-sixel) for additional documentation and community support.
4. Double-check your code for potential errors, specifically the usage of `SixelWriter` and image data conversion steps.
For further assistance on working with YOLO models and sixel integration, see the [export](../modes/export.md) and [predict mode](../modes/predict.md) documentation pages.

View file

@ -177,3 +177,132 @@ keywords: VisionEye, YOLOv8, Ultralytics, object mapping, object tracking, dista
## Note
For any inquiries, feel free to post your questions in the [Ultralytics Issue Section](https://github.com/ultralytics/ultralytics/issues/new/choose) or the discussion section mentioned below.
## FAQ
### How do I start using VisionEye Object Mapping with Ultralytics YOLOv8?
To start using VisionEye Object Mapping with Ultralytics YOLOv8, first, you'll need to install the Ultralytics YOLO package via pip. Then, you can use the sample code provided in the documentation to set up object detection with VisionEye. Here's a simple example to get you started:
```python
import cv2
from ultralytics import YOLO
model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
while True:
ret, frame = cap.read()
if not ret:
break
results = model.predict(frame)
for result in results:
# Perform custom logic with result
pass
cv2.imshow("visioneye", frame)
if cv2.waitKey(1) & 0xFF == ord("q"):
break
cap.release()
cv2.destroyAllWindows()
```
### What are the key features of VisionEye's object tracking capability using Ultralytics YOLOv8?
VisionEye's object tracking with Ultralytics YOLOv8 allows users to follow the movement of objects within a video frame. Key features include:
1. **Real-Time Object Tracking**: Keeps up with objects as they move.
2. **Object Identification**: Utilizes YOLOv8's powerful detection algorithms.
3. **Distance Calculation**: Calculates distances between objects and specified points.
4. **Annotation and Visualization**: Provides visual markers for tracked objects.
Here's a brief code snippet demonstrating tracking with VisionEye:
```python
import cv2
from ultralytics import YOLO
model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
while True:
ret, frame = cap.read()
if not ret:
break
results = model.track(frame, persist=True)
for result in results:
# Annotate and visualize tracking
pass
cv2.imshow("visioneye-tracking", frame)
if cv2.waitKey(1) & 0xFF == ord("q"):
break
cap.release()
cv2.destroyAllWindows()
```
For a comprehensive guide, visit the [VisionEye Object Mapping with Object Tracking](#samples).
### How can I calculate distances with VisionEye's YOLOv8 model?
Distance calculation with VisionEye and Ultralytics YOLOv8 involves determining the distance of detected objects from a specified point in the frame. It enhances spatial analysis capabilities, useful in applications such as autonomous driving and surveillance.
Here's a simplified example:
```python
import math
import cv2
from ultralytics import YOLO
model = YOLO("yolov8s.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
center_point = (0, 480) # Example center point
pixel_per_meter = 10
while True:
ret, frame = cap.read()
if not ret:
break
results = model.track(frame, persist=True)
    for result in results:
        # Distance (in meters) from each detected box center to the reference point
        distances = [
            math.sqrt((box[0] - center_point[0]) ** 2 + (box[1] - center_point[1]) ** 2) / pixel_per_meter
            for box in result.boxes.xywh  # box[0], box[1] are the box center coordinates
        ]
cv2.imshow("visioneye-distance", frame)
if cv2.waitKey(1) & 0xFF == ord("q"):
break
cap.release()
cv2.destroyAllWindows()
```
For detailed instructions, refer to the [VisionEye with Distance Calculation](#samples).
### Why should I use Ultralytics YOLOv8 for object mapping and tracking?
Ultralytics YOLOv8 is renowned for its speed, accuracy, and ease of integration, making it a top choice for object mapping and tracking. Key advantages include:
1. **State-of-the-art Performance**: Delivers high accuracy in real-time object detection.
2. **Flexibility**: Supports various tasks such as detection, tracking, and distance calculation.
3. **Community and Support**: Extensive documentation and active GitHub community for troubleshooting and enhancements.
4. **Ease of Use**: Intuitive API simplifies complex tasks, allowing for rapid deployment and iteration.
For more information on applications and benefits, check out the [Ultralytics YOLOv8 documentation](https://docs.ultralytics.com/models/yolov8/).
### How can I integrate VisionEye with other machine learning tools like Comet or ClearML?
Ultralytics YOLOv8 can integrate seamlessly with various machine learning tools like Comet and ClearML, enhancing experiment tracking, collaboration, and reproducibility. Follow the detailed guides on [how to use YOLOv5 with Comet](https://www.ultralytics.com/blog/how-to-use-yolov5-with-comet) and [integrate YOLOv8 with ClearML](https://docs.ultralytics.com/integrations/clearml/) to get started.
For further exploration and integration examples, check our [Ultralytics Integrations Guide](https://docs.ultralytics.com/integrations/).

View file

@ -4,7 +4,7 @@ description: Optimize your fitness routine with real-time workouts monitoring us
keywords: workouts monitoring, Ultralytics YOLOv8, pose estimation, fitness tracking, exercise assessment, real-time feedback, exercise form, performance metrics
---
# Workouts Monitoring using Ultralytics YOLOv8 🚀
# Workouts Monitoring using Ultralytics YOLOv8
Monitoring workouts through pose estimation with [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics/) enhances exercise assessment by accurately tracking key body landmarks and joints in real-time. This technology provides instant feedback on exercise form, tracks workout routines, and measures performance metrics, optimizing training sessions for users and trainers alike.
@ -152,3 +152,110 @@ Monitoring workouts through pose estimation with [Ultralytics YOLOv8](https://gi
| `iou` | `float` | `0.5` | IOU Threshold |
| `classes` | `list` | `None` | filter results by class, i.e. classes=0, or classes=[0,2,3] |
| `verbose` | `bool` | `True` | Display the object tracking results |
## FAQ
### How do I monitor my workouts using Ultralytics YOLOv8?
To monitor your workouts using Ultralytics YOLOv8, you can utilize the pose estimation capabilities to track and analyze key body landmarks and joints in real-time. This allows you to receive instant feedback on your exercise form, count repetitions, and measure performance metrics. You can start by using the provided example code for pushups, pullups, or ab workouts as shown:
```python
import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolov8n-pose.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
gym_object = solutions.AIGym(
line_thickness=2,
view_img=True,
pose_type="pushup",
kpts_to_check=[6, 8, 10],
)
while cap.isOpened():
success, im0 = cap.read()
if not success:
print("Video frame is empty or video processing has been successfully completed.")
break
results = model.track(im0, verbose=False)
im0 = gym_object.start_counting(im0, results)
cap.release()
cv2.destroyAllWindows()
```
For further customization and settings, you can refer to the [AIGym](#arguments-aigym) section in the documentation.
### What are the benefits of using Ultralytics YOLOv8 for workout monitoring?
Using Ultralytics YOLOv8 for workout monitoring provides several key benefits:
- **Optimized Performance:** By tailoring workouts based on monitoring data, you can achieve better results.
- **Goal Achievement:** Easily track and adjust fitness goals for measurable progress.
- **Personalization:** Get customized workout plans based on your individual data for optimal effectiveness.
- **Health Awareness:** Early detection of patterns that indicate potential health issues or over-training.
- **Informed Decisions:** Make data-driven decisions to adjust routines and set realistic goals.
You can watch a [YouTube video demonstration](https://www.youtube.com/embed/LGGxqLZtvuw) to see these benefits in action.
### How accurate is Ultralytics YOLOv8 in detecting and tracking exercises?
Ultralytics YOLOv8 is highly accurate in detecting and tracking exercises due to its state-of-the-art pose estimation capabilities. It can accurately track key body landmarks and joints, providing real-time feedback on exercise form and performance metrics. The model's pretrained weights and robust architecture ensure high precision and reliability. For real-world examples, check out the [real-world applications](#real-world-applications) section in the documentation, which showcases pushups and pullups counting.
### Can I use Ultralytics YOLOv8 for custom workout routines?
Yes, Ultralytics YOLOv8 can be adapted for custom workout routines. The `AIGym` class supports pose types such as "pushup", "pullup", and "abworkout", and you can specify the keypoints and angles to monitor in order to track other exercises. Here is an example setup:
```python
from ultralytics import solutions
gym_object = solutions.AIGym(
line_thickness=2,
view_img=True,
pose_type="squat",
kpts_to_check=[6, 8, 10],
)
```
For more details on setting arguments, refer to the [Arguments `AIGym`](#arguments-aigym) section. This flexibility allows you to monitor various exercises and customize routines based on your needs.
### How can I save the workout monitoring output using Ultralytics YOLOv8?
To save the workout monitoring output, you can modify the code to include a video writer that saves the processed frames. Here's an example:
```python
import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolov8n-pose.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
video_writer = cv2.VideoWriter("workouts.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
gym_object = solutions.AIGym(
line_thickness=2,
view_img=True,
pose_type="pushup",
kpts_to_check=[6, 8, 10],
)
while cap.isOpened():
success, im0 = cap.read()
if not success:
print("Video frame is empty or video processing has been successfully completed.")
break
results = model.track(im0, verbose=False)
im0 = gym_object.start_counting(im0, results)
video_writer.write(im0)
cv2.destroyAllWindows()
video_writer.release()
```
This setup writes the monitored video to an output file. For more details, refer to the [Workouts Monitoring with Save Output](#workouts-monitoring-using-ultralytics-yolov8) section.

View file

@ -285,3 +285,35 @@ Troubleshooting is an integral part of any development process, and being equipp
Remember, the Ultralytics community is a valuable resource. Engaging with fellow developers and experts can provide additional insights and solutions that might not be covered in standard documentation. Always keep learning, experimenting, and sharing your experiences to contribute to the collective knowledge of the community.
Happy troubleshooting!
## FAQ
### How do I resolve installation errors with YOLOv8?
Installation errors can often be due to compatibility issues or missing dependencies. Ensure you use Python 3.8 or later and have PyTorch 1.8 or later installed. It's beneficial to use virtual environments to avoid conflicts. For a step-by-step installation guide, follow our [official installation guide](../quickstart.md). If you encounter import errors, try a fresh installation or update the library to the latest version.
### Why is my YOLOv8 model training slow on a single GPU?
Training on a single GPU might be slow due to large batch sizes or insufficient memory. To speed up training, use multiple GPUs. Ensure your system has multiple GPUs available and pass them with the `device` argument, e.g., `device=[0, 1, 2, 3]`. Increase the batch size accordingly to fully utilize the GPUs without exceeding memory limits. Example command:
```python
model.train(data="/path/to/your/data.yaml", batch=32, multi_scale=True, device=[0, 1, 2, 3])
```
### How can I ensure my YOLOv8 model is training on the GPU?
If the 'device' value shows 'null' in the training logs, it generally means the training process is set to automatically use an available GPU. To explicitly assign a specific GPU, set the 'device' value in your `.yaml` configuration file. For instance:
```yaml
device: 0
```
This sets the training process to the first GPU. Consult the `nvidia-smi` command to confirm your CUDA setup.
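Alternatively, and more commonly, you can pass the device directly when launching training from Python; the dataset path below is only a placeholder:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Train explicitly on GPU 0 (use device="cpu" to force CPU, or a list such as [0, 1] for multi-GPU)
model.train(data="path/to/data.yaml", epochs=100, device=0)
```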
### How can I monitor and track my YOLOv8 model training progress?
Tracking and visualizing training progress can be efficiently managed through tools like [TensorBoard](https://www.tensorflow.org/tensorboard), [Comet](https://bit.ly/yolov8-readme-comet), and [Ultralytics HUB](https://hub.ultralytics.com). These tools allow you to log and visualize metrics such as loss, precision, recall, and mAP. Implementing [early stopping](#continuous-monitoring-parameters) based on these metrics can also help achieve better training outcomes.
### What should I do if YOLOv8 is not recognizing my dataset format?
Ensure your dataset and labels conform to the expected format. Verify that annotations are accurate and of high quality. If you face any issues, refer to the [Data Collection and Annotation](https://docs.ultralytics.com/guides/data-collection-and-annotation/) guide for best practices. For more dataset-specific guidance, check the [Datasets](https://docs.ultralytics.com/datasets/) section in the documentation.

View file

@ -174,3 +174,39 @@ In this guide, we've taken a close look at the essential performance metrics for
Remember, the YOLOv8 and Ultralytics community is an invaluable asset. Engaging with fellow developers and experts can open doors to insights and solutions not found in standard documentation. As you journey through object detection, keep the spirit of learning alive, experiment with new strategies, and share your findings. By doing so, you contribute to the community's collective wisdom and ensure its growth.
Happy object detecting!
## FAQ
### What is the significance of Mean Average Precision (mAP) in evaluating YOLOv8 model performance?
Mean Average Precision (mAP) is crucial for evaluating YOLOv8 models as it provides a single metric encapsulating precision and recall across multiple classes. mAP@0.50 measures precision at an IoU threshold of 0.50, focusing on the model's ability to detect objects correctly. mAP@0.50:0.95 averages precision across a range of IoU thresholds, offering a comprehensive assessment of detection performance. High mAP scores indicate that the model effectively balances precision and recall, essential for applications like autonomous driving and surveillance.
### How do I interpret the Intersection over Union (IoU) value for YOLOv8 object detection?
Intersection over Union (IoU) measures the overlap between the predicted and ground truth bounding boxes. IoU values range from 0 to 1, where higher values indicate better localization accuracy. An IoU of 1.0 means perfect alignment. Typically, an IoU threshold of 0.50 is used to define true positives in metrics like mAP. Lower IoU values suggest that the model struggles with precise object localization, which can be improved by refining bounding box regression or increasing annotation accuracy.
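As a quick illustration of the computation (boxes given as `[x1, y1, x2, y2]` pixel coordinates, with arbitrary example values), IoU is the intersection area divided by the union area:

```python
def box_iou(box1, box2):
    """Compute IoU between two axis-aligned boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    x2, y2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter / (area1 + area2 - inter)


print(box_iou([50, 50, 150, 150], [100, 100, 200, 200]))  # ~0.14, a poor localization
```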
### Why is the F1 Score important for evaluating YOLOv8 models in object detection?
The F1 Score is important for evaluating YOLOv8 models because it provides a harmonic mean of precision and recall, balancing both false positives and false negatives. It is particularly valuable when dealing with imbalanced datasets or applications where either precision or recall alone is insufficient. A high F1 Score indicates that the model effectively detects objects while minimizing both missed detections and false alarms, making it suitable for critical applications like security systems and medical imaging.
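For reference, the F1 Score is the harmonic mean of precision and recall, so one weak component pulls the score down sharply; with illustrative values:

```python
precision, recall = 0.90, 0.60  # example values

f1 = 2 * precision * recall / (precision + recall)
print(f"F1 = {f1:.2f}")  # F1 = 0.72
```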
### What are the key advantages of using Ultralytics YOLOv8 for real-time object detection?
Ultralytics YOLOv8 offers multiple advantages for real-time object detection:
- **Speed and Efficiency**: Optimized for high-speed inference, suitable for applications requiring low latency.
- **High Accuracy**: Advanced algorithm ensures high mAP and IoU scores, balancing precision and recall.
- **Flexibility**: Supports various tasks including object detection, segmentation, and classification.
- **Ease of Use**: User-friendly interfaces, extensive documentation, and seamless integration with platforms like Ultralytics HUB ([HUB Quickstart](../hub/quickstart.md)).
This makes YOLOv8 ideal for diverse applications from autonomous vehicles to smart city solutions.
### How can validation metrics from YOLOv8 help improve model performance?
Validation metrics from YOLOv8 like precision, recall, mAP, and IoU help diagnose and improve model performance by providing insights into different aspects of detection:
- **Precision**: Helps identify and minimize false positives.
- **Recall**: Ensures all relevant objects are detected.
- **mAP**: Offers an overall performance snapshot, guiding general improvements.
- **IoU**: Helps fine-tune object localization accuracy.
By analyzing these metrics, specific weaknesses can be targeted, such as adjusting confidence thresholds to improve precision or gathering more diverse data to enhance recall. For detailed explanations of these metrics and how to interpret them, check [Object Detection Metrics](#object-detection-metrics).
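In practice, a straightforward way to obtain these metrics is to run validation and inspect the returned object; the dataset below is only an example:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
metrics = model.val(data="coco8.yaml")  # validate on the dataset's val split

print(metrics.box.map)  # mAP@0.50:0.95
print(metrics.box.map50)  # mAP@0.50
print(metrics.box.map75)  # mAP@0.75
print(metrics.box.maps)  # per-class mAP@0.50:0.95
```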

View file

@ -111,3 +111,78 @@ In this example, each thread creates its own `YOLO` instance. This prevents any
When using YOLO models with Python's `threading`, always instantiate your models within the thread that will use them to ensure thread safety. This practice avoids race conditions and makes sure that your inference tasks run reliably.
For more advanced scenarios and to further optimize your multi-threaded inference performance, consider using process-based parallelism with `multiprocessing` or leveraging a task queue with dedicated worker processes.
## FAQ
### How can I avoid race conditions when using YOLO models in a multi-threaded Python environment?
To prevent race conditions when using Ultralytics YOLO models in a multi-threaded Python environment, instantiate a separate YOLO model within each thread. This ensures that each thread has its own isolated model instance, avoiding concurrent modification of the model state.
Example:
```python
from threading import Thread
from ultralytics import YOLO
def thread_safe_predict(image_path):
"""Predict on an image in a thread-safe manner."""
local_model = YOLO("yolov8n.pt")
results = local_model.predict(image_path)
# Process results
Thread(target=thread_safe_predict, args=("image1.jpg",)).start()
Thread(target=thread_safe_predict, args=("image2.jpg",)).start()
```
For more information on ensuring thread safety, visit the [Thread-Safe Inference with YOLO Models](#thread-safe-inference).
### What are the best practices for running multi-threaded YOLO model inference in Python?
To run multi-threaded YOLO model inference safely in Python, follow these best practices:
1. Instantiate YOLO models within each thread rather than sharing a single model instance across threads.
2. Use Python's `multiprocessing` module for parallel processing to avoid issues related to Global Interpreter Lock (GIL).
3. Take advantage of operations that release the GIL, such as the heavy numerical work performed inside YOLO's underlying C and CUDA libraries.
Example for thread-safe model instantiation:
```python
from threading import Thread
from ultralytics import YOLO
def thread_safe_predict(image_path):
"""Runs inference in a thread-safe manner with a new YOLO model instance."""
model = YOLO("yolov8n.pt")
results = model.predict(image_path)
# Process results
# Initiate multiple threads
Thread(target=thread_safe_predict, args=("image1.jpg",)).start()
Thread(target=thread_safe_predict, args=("image2.jpg",)).start()
```
For additional context, refer to the section on [Thread-Safe Inference](#thread-safe-inference).
### Why should each thread have its own YOLO model instance?
Each thread should have its own YOLO model instance to prevent race conditions. When a single model instance is shared among multiple threads, concurrent accesses can lead to unpredictable behavior and modifications of the model's internal state. By using separate instances, you ensure thread isolation, making your multi-threaded tasks reliable and safe.
For detailed guidance, check the [Non-Thread-Safe Example: Single Model Instance](#non-thread-safe-example-single-model-instance) and [Thread-Safe Example](#thread-safe-example) sections.
### How does Python's Global Interpreter Lock (GIL) affect YOLO model inference?
Python's Global Interpreter Lock (GIL) allows only one thread to execute Python bytecode at a time, which can limit the performance of CPU-bound multi-threading tasks. However, for I/O-bound operations or processes that use libraries releasing the GIL, like YOLO's C libraries, you can still achieve concurrency. For enhanced performance, consider using process-based parallelism with Python's `multiprocessing` module.
For more about threading in Python, see the [Understanding Python Threading](#understanding-python-threading) section.
### Is it safer to use process-based parallelism instead of threading for YOLO model inference?
Yes, using Python's `multiprocessing` module is safer and often more efficient for running YOLO model inference in parallel. Process-based parallelism creates separate memory spaces, avoiding the Global Interpreter Lock (GIL) and reducing the risk of concurrency issues. Each process will operate independently with its own YOLO model instance.
For further details on process-based parallelism with YOLO models, refer to the page on [Thread-Safe Inference](#thread-safe-inference).
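A minimal sketch of this pattern is shown below; the image paths are placeholders, and each worker process loads its own model so nothing is shared between processes:

```python
from multiprocessing import Pool

from ultralytics import YOLO


def predict(image_path):
    """Each worker process builds its own YOLO instance, so no state is shared across processes."""
    model = YOLO("yolov8n.pt")
    results = model.predict(image_path)
    return len(results[0].boxes)  # return something small and picklable, not the raw results


if __name__ == "__main__":
    with Pool(processes=2) as pool:
        counts = pool.map(predict, ["image1.jpg", "image2.jpg"])
    print(counts)
```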