Reformat Markdown code blocks (#12795)

Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: UltralyticsAssistant <web@ultralytics.com>

parent 2af71d15a6
commit fceea033ad

128 changed files with 1067 additions and 1018 deletions

CONTRIBUTING.md (168 lines changed)
@@ -1,96 +1,132 @@
-# Contributing to YOLOv8 🚀
-
-We love your input! We want to make contributing to YOLOv8 as easy and transparent as possible, whether it's:
-
-- Reporting a bug
-- Discussing the current state of the code
-- Submitting a fix
-- Proposing a new feature
-- Becoming a maintainer
-
-YOLOv8 works so well due to our combined community effort, and for every small improvement you contribute you will be helping push the frontiers of what's possible in AI 😃!
-
-## Submitting a Pull Request (PR) 🛠️
-
-Submitting a PR is easy! This example shows how to submit a PR for updating `requirements.txt` in 4 steps:
-
-### 1. Select File to Update
-
-Select `requirements.txt` to update by clicking on it in GitHub.
-
-<p align="center"><img width="800" alt="PR_step1" src="https://user-images.githubusercontent.com/26833433/122260847-08be2600-ced4-11eb-828b-8287ace4136c.png"></p>
-
-### 2. Click 'Edit this file'
-
-Button is in top-right corner.
-
-<p align="center"><img width="800" alt="PR_step2" src="https://user-images.githubusercontent.com/26833433/122260844-06f46280-ced4-11eb-9eec-b8a24be519ca.png"></p>
-
-### 3. Make Changes
-
-Change `matplotlib` version from `3.2.2` to `3.3`.
-
-<p align="center"><img width="800" alt="PR_step3" src="https://user-images.githubusercontent.com/26833433/122260853-0a87e980-ced4-11eb-9fd2-3650fb6e0842.png"></p>
-
-### 4. Preview Changes and Submit PR
-
-Click on the **Preview changes** tab to verify your updates. At the bottom of the screen select 'Create a **new branch** for this commit', assign your branch a descriptive name such as `fix/matplotlib_version` and click the green **Propose changes** button. All done, your PR is now submitted to YOLOv8 for review and approval 😃!
-
-<p align="center"><img width="800" alt="PR_step4" src="https://user-images.githubusercontent.com/26833433/122260856-0b208000-ced4-11eb-8e8e-77b6151cbcc3.png"></p>
-
-### PR recommendations
-
-To allow your work to be integrated as seamlessly as possible, we advise you to:
-
-- ✅ Verify your PR is **up-to-date** with `ultralytics/ultralytics` `main` branch. If your PR is behind you can update your code by clicking the 'Update branch' button or by running `git pull` and `git merge main` locally.
-
-<p align="center"><img width="751" alt="PR recommendation 1" src="https://user-images.githubusercontent.com/26833433/187295893-50ed9f44-b2c9-4138-a614-de69bd1753d7.png"></p>
-
-- ✅ Verify all YOLOv8 Continuous Integration (CI) **checks are passing**.
-
-<p align="center"><img width="751" alt="PR recommendation 2" src="https://user-images.githubusercontent.com/26833433/187296922-545c5498-f64a-4d8c-8300-5fa764360da6.png"></p>
-
-- ✅ Reduce changes to the absolute **minimum** required for your bug fix or feature addition. _"It is not daily increase but daily decrease, hack away the unessential. The closer to the source, the less wastage there is."_ — Bruce Lee
-
-### Docstrings
-
-Not all functions or classes require docstrings but when they do, we follow [google-style docstrings format](https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings). Here is an example:
-
-```python
-"""
-What the function does. Performs NMS on given detection predictions.
-
-Args:
-    arg1: The description of the 1st argument
-    arg2: The description of the 2nd argument
-
-Returns:
-    What the function returns. Empty if nothing is returned.
-
-Raises:
-    Exception Class: When and why this exception can be raised by the function.
-"""
-```
-
-## Submitting a Bug Report 🐛
-
-If you spot a problem with YOLOv8 please submit a Bug Report!
-
-For us to start investigating a possible problem we need to be able to reproduce it ourselves first. We've created a few short guidelines below to help users provide what we need in order to get started.
-
-When asking a question, people will be better able to provide help if you provide **code** that they can easily understand and use to **reproduce** the problem. This is referred to by community members as creating a [minimum reproducible example](https://docs.ultralytics.com/help/minimum_reproducible_example/). Your code that reproduces the problem should be:
-
-- ✅ **Minimal** – Use as little code as possible that still produces the same problem
-- ✅ **Complete** – Provide **all** parts someone else needs to reproduce your problem in the question itself
-- ✅ **Reproducible** – Test the code you're about to provide to make sure it reproduces the problem
-
-In addition to the above requirements, for [Ultralytics](https://ultralytics.com/) to provide assistance your code should be:
-
-- ✅ **Current** – Verify that your code is up-to-date with current GitHub [main](https://github.com/ultralytics/ultralytics/tree/main) branch, and if necessary `git pull` or `git clone` a new copy to ensure your problem has not already been resolved by previous commits.
-- ✅ **Unmodified** – Your problem must be reproducible without any modifications to the codebase in this repository. [Ultralytics](https://ultralytics.com/) does not provide support for custom code ⚠️.
-
-If you believe your problem meets all of the above criteria, please close this issue and raise a new one using the 🐛 **Bug Report** [template](https://github.com/ultralytics/ultralytics/issues/new/choose) and providing a [minimum reproducible example](https://docs.ultralytics.com/help/minimum_reproducible_example/) to help us better understand and diagnose your problem.
-
-## License
-
-By contributing, you agree that your contributions will be licensed under the [AGPL-3.0 license](https://choosealicense.com/licenses/agpl-3.0/)
+---
+comments: true
+description: Learn how to contribute to Ultralytics YOLO projects – guidelines for pull requests, reporting bugs, code conduct and CLA signing.
+keywords: Ultralytics, YOLO, open-source, contribute, pull request, bug report, coding guidelines, CLA, code of conduct, GitHub
+---
+
+# Contributing to Ultralytics Open-Source YOLO Repositories
+
+First of all, thank you for your interest in contributing to Ultralytics open-source YOLO repositories! Your contributions will help improve the project and benefit the community. This document provides guidelines and best practices to get you started.
+
+## Table of Contents
+
+1. [Code of Conduct](#code-of-conduct)
+2. [Contributing via Pull Requests](#contributing-via-pull-requests)
+    - [CLA Signing](#cla-signing)
+    - [Google-Style Docstrings](#google-style-docstrings)
+    - [GitHub Actions CI Tests](#github-actions-ci-tests)
+3. [Reporting Bugs](#reporting-bugs)
+4. [License](#license)
+5. [Conclusion](#conclusion)
+
+## Code of Conduct
+
+All contributors are expected to adhere to the [Code of Conduct](https://docs.ultralytics.com/help/code_of_conduct/) to ensure a welcoming and inclusive environment for everyone.
+
+## Contributing via Pull Requests
+
+We welcome contributions in the form of pull requests. To make the review process smoother, please follow these guidelines:
+
+1. **[Fork the repository](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/fork-a-repo)**: Fork the Ultralytics YOLO repository to your own GitHub account.
+2. **[Create a branch](https://docs.github.com/en/desktop/making-changes-in-a-branch/managing-branches-in-github-desktop)**: Create a new branch in your forked repository with a descriptive name for your changes.
+3. **Make your changes**: Make the changes you want to contribute. Ensure that your changes follow the coding style of the project and do not introduce new errors or warnings.
+4. **[Test your changes](https://github.com/ultralytics/ultralytics/tree/main/tests)**: Test your changes locally to ensure that they work as expected and do not introduce new issues.
+5. **[Commit your changes](https://docs.github.com/en/desktop/making-changes-in-a-branch/committing-and-reviewing-changes-to-your-project-in-github-desktop)**: Commit your changes with a descriptive commit message. Make sure to include any relevant issue numbers in your commit message.
+6. **[Create a pull request](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request)**: Create a pull request from your forked repository to the main Ultralytics YOLO repository. In the pull request description, provide a clear explanation of your changes and how they improve the project.
+
+### CLA Signing
+
+Before we can accept your pull request, you need to sign a [Contributor License Agreement (CLA)](https://docs.ultralytics.com/help/CLA/). This is a legal document stating that you agree to the terms of contributing to the Ultralytics YOLO repositories. The CLA ensures that your contributions are properly licensed and that the project can continue to be distributed under the AGPL-3.0 license.
+
+To sign the CLA, follow the instructions provided by the CLA bot after you submit your PR and add a comment in your PR saying:
+
+```
+I have read the CLA Document and I sign the CLA
+```
+
+### Google-Style Docstrings
+
+When adding new functions or classes, please include a [Google-style docstring](https://google.github.io/styleguide/pyguide.html) to provide clear and concise documentation for other developers. This will help ensure that your contributions are easy to understand and maintain.
+
+#### Google-style
+
+This example shows a Google-style docstring. Note that both input and output `types` must always be enclosed by parentheses, i.e. `(bool)`.
+
+```python
+def example_function(arg1, arg2=4):
+    """
+    Example function that demonstrates Google-style docstrings.
+
+    Args:
+        arg1 (int): The first argument.
+        arg2 (int): The second argument. Default value is 4.
+
+    Returns:
+        (bool): True if successful, False otherwise.
+
+    Examples:
+        >>> result = example_function(1, 2)  # returns False
+    """
+    if arg1 == arg2:
+        return True
+    return False
+```
+
+#### Google-style with type hints
+
+This example shows both a Google-style docstring and argument and return type hints, though both are not required, one can be used without the other.
+
+```python
+def example_function(arg1: int, arg2: int = 4) -> bool:
+    """
+    Example function that demonstrates Google-style docstrings.
+
+    Args:
+        arg1: The first argument.
+        arg2: The second argument. Default value is 4.
+
+    Returns:
+        True if successful, False otherwise.
+
+    Examples:
+        >>> result = example_function(1, 2)  # returns False
+    """
+    if arg1 == arg2:
+        return True
+    return False
+```
+
+#### Single-line
+
+Smaller or simpler functions can utilize a single-line docstring. Note the docstring must use 3 double-quotes, and be a complete sentence starting with a capital letter and ending with a period.
+
+```python
+def example_small_function(arg1: int, arg2: int = 4) -> bool:
+    """Example function that demonstrates a single-line docstring."""
+    return arg1 == arg2
+```
+
+### GitHub Actions CI Tests
+
+Before your pull request can be merged, all GitHub Actions [Continuous Integration](https://docs.ultralytics.com/help/CI/) (CI) tests must pass. These tests include linting, unit tests, and other checks to ensure that your changes meet the quality standards of the project. Make sure to review the output of the GitHub Actions and fix any issues
+
+## Reporting Bugs
+
+We appreciate bug reports as they play a crucial role in maintaining the project's quality. When reporting bugs it is important to provide a [Minimum Reproducible Example](https://docs.ultralytics.com/help/minimum_reproducible_example/): a clear, concise code example that replicates the issue. This helps in quick identification and resolution of the bug.
+
+## License
+
+Ultralytics embraces the [GNU Affero General Public License v3.0 (AGPL-3.0)](https://github.com/ultralytics/ultralytics/blob/main/LICENSE) for its repositories, promoting openness, transparency, and collaborative enhancement in software development. This strong copyleft license ensures that all users and developers retain the freedom to use, modify, and share the software. It fosters community collaboration, ensuring that any improvements remain accessible to all.
+
+Users and developers are encouraged to familiarize themselves with the terms of AGPL-3.0 to contribute effectively and ethically to the Ultralytics open-source community.
+
+## Conclusion
+
+Thank you for your interest in contributing to [Ultralytics open-source](https://github.com/ultralytics) YOLO projects. Your participation is crucial in shaping the future of our software and fostering a community of innovation and collaboration. Whether you're improving code, reporting bugs, or suggesting features, your contributions make a significant impact.
+
+We're eager to see your ideas in action and appreciate your commitment to advancing object detection technology. Let's continue to grow and innovate together in this exciting open-source journey. Happy coding! 🚀🌟
@@ -36,10 +36,10 @@ To train a YOLO model on the Caltech-101 dataset for 100 epochs, you can use the
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n-cls.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n-cls.pt")  # load a pretrained model (recommended for training)

 # Train the model
-results = model.train(data='caltech101', epochs=100, imgsz=416)
+results = model.train(data="caltech101", epochs=100, imgsz=416)
 ```

 === "CLI"

@@ -36,10 +36,10 @@ To train a YOLO model on the Caltech-256 dataset for 100 epochs, you can use the
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n-cls.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n-cls.pt")  # load a pretrained model (recommended for training)

 # Train the model
-results = model.train(data='caltech256', epochs=100, imgsz=416)
+results = model.train(data="caltech256", epochs=100, imgsz=416)
 ```

 === "CLI"

@@ -39,10 +39,10 @@ To train a YOLO model on the CIFAR-10 dataset for 100 epochs with an image size
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n-cls.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n-cls.pt")  # load a pretrained model (recommended for training)

 # Train the model
-results = model.train(data='cifar10', epochs=100, imgsz=32)
+results = model.train(data="cifar10", epochs=100, imgsz=32)
 ```

 === "CLI"

@@ -39,10 +39,10 @@ To train a YOLO model on the CIFAR-100 dataset for 100 epochs with an image size
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n-cls.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n-cls.pt")  # load a pretrained model (recommended for training)

 # Train the model
-results = model.train(data='cifar100', epochs=100, imgsz=32)
+results = model.train(data="cifar100", epochs=100, imgsz=32)
 ```

 === "CLI"

@@ -53,10 +53,10 @@ To train a CNN model on the Fashion-MNIST dataset for 100 epochs with an image s
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n-cls.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n-cls.pt")  # load a pretrained model (recommended for training)

 # Train the model
-results = model.train(data='fashion-mnist', epochs=100, imgsz=28)
+results = model.train(data="fashion-mnist", epochs=100, imgsz=28)
 ```

 === "CLI"

@@ -49,10 +49,10 @@ To train a deep learning model on the ImageNet dataset for 100 epochs with an im
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n-cls.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n-cls.pt")  # load a pretrained model (recommended for training)

 # Train the model
-results = model.train(data='imagenet', epochs=100, imgsz=224)
+results = model.train(data="imagenet", epochs=100, imgsz=224)
 ```

 === "CLI"

@@ -35,10 +35,10 @@ To test a deep learning model on the ImageNet10 dataset with an image size of 22
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n-cls.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n-cls.pt")  # load a pretrained model (recommended for training)

 # Train the model
-results = model.train(data='imagenet10', epochs=5, imgsz=224)
+results = model.train(data="imagenet10", epochs=5, imgsz=224)
 ```

 === "CLI"

@@ -37,10 +37,10 @@ To train a model on the ImageNette dataset for 100 epochs with a standard image
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n-cls.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n-cls.pt")  # load a pretrained model (recommended for training)

 # Train the model
-results = model.train(data='imagenette', epochs=100, imgsz=224)
+results = model.train(data="imagenette", epochs=100, imgsz=224)
 ```

 === "CLI"

@@ -72,10 +72,10 @@ To use these datasets, simply replace 'imagenette' with 'imagenette160' or 'imag
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n-cls.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n-cls.pt")  # load a pretrained model (recommended for training)

 # Train the model with ImageNette160
-results = model.train(data='imagenette160', epochs=100, imgsz=160)
+results = model.train(data="imagenette160", epochs=100, imgsz=160)
 ```

 === "CLI"

@@ -93,10 +93,10 @@ To use these datasets, simply replace 'imagenette' with 'imagenette160' or 'imag
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n-cls.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n-cls.pt")  # load a pretrained model (recommended for training)

 # Train the model with ImageNette320
-results = model.train(data='imagenette320', epochs=100, imgsz=320)
+results = model.train(data="imagenette320", epochs=100, imgsz=320)
 ```

 === "CLI"
@@ -34,10 +34,10 @@ To train a CNN model on the ImageWoof dataset for 100 epochs with an image size
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n-cls.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n-cls.pt")  # load a pretrained model (recommended for training)

 # Train the model
-results = model.train(data='imagewoof', epochs=100, imgsz=224)
+results = model.train(data="imagewoof", epochs=100, imgsz=224)
 ```

 === "CLI"

@@ -63,13 +63,13 @@ To use these variants in your training, simply replace 'imagewoof' in the datase
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n-cls.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n-cls.pt")  # load a pretrained model (recommended for training)

 # For medium-sized dataset
-model.train(data='imagewoof320', epochs=100, imgsz=224)
+model.train(data="imagewoof320", epochs=100, imgsz=224)

 # For small-sized dataset
-model.train(data='imagewoof160', epochs=100, imgsz=224)
+model.train(data="imagewoof160", epochs=100, imgsz=224)
 ```

 It's important to note that using smaller images will likely yield lower performance in terms of classification accuracy. However, it's an excellent way to iterate quickly in the early stages of model development and prototyping.

@@ -86,10 +86,10 @@ This structured approach ensures that the model can effectively learn from well-
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n-cls.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n-cls.pt")  # load a pretrained model (recommended for training)

 # Train the model
-results = model.train(data='path/to/dataset', epochs=100, imgsz=640)
+results = model.train(data="path/to/dataset", epochs=100, imgsz=640)
 ```
 === "CLI"

@@ -42,10 +42,10 @@ To train a CNN model on the MNIST dataset for 100 epochs with an image size of 3
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n-cls.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n-cls.pt")  # load a pretrained model (recommended for training)

 # Train the model
-results = model.train(data='mnist', epochs=100, imgsz=32)
+results = model.train(data="mnist", epochs=100, imgsz=32)
 ```

 === "CLI"

@@ -42,10 +42,10 @@ To train a YOLOv8n model on the African wildlife dataset for 100 epochs with an
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n.pt")  # load a pretrained model (recommended for training)

 # Train the model
-results = model.train(data='african-wildlife.yaml', epochs=100, imgsz=640)
+results = model.train(data="african-wildlife.yaml", epochs=100, imgsz=640)
 ```

 === "CLI"

@@ -63,7 +63,7 @@ To train a YOLOv8n model on the African wildlife dataset for 100 epochs with an
 from ultralytics import YOLO

 # Load a model
-model = YOLO('path/to/best.pt')  # load a brain-tumor fine-tuned model
+model = YOLO("path/to/best.pt")  # load a brain-tumor fine-tuned model

 # Inference using the model
 results = model.predict("https://ultralytics.com/assets/african-wildlife-sample.jpg")
@@ -53,10 +53,10 @@ To train a YOLOv8n model on the Argoverse dataset for 100 epochs with an image s
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n.pt")  # load a pretrained model (recommended for training)

 # Train the model
-results = model.train(data='Argoverse.yaml', epochs=100, imgsz=640)
+results = model.train(data="Argoverse.yaml", epochs=100, imgsz=640)
 ```

 === "CLI"

@@ -52,10 +52,10 @@ To train a YOLOv8n model on the brain tumor dataset for 100 epochs with an image
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n.pt")  # load a pretrained model (recommended for training)

 # Train the model
-results = model.train(data='brain-tumor.yaml', epochs=100, imgsz=640)
+results = model.train(data="brain-tumor.yaml", epochs=100, imgsz=640)
 ```

 === "CLI"

@@ -73,7 +73,7 @@ To train a YOLOv8n model on the brain tumor dataset for 100 epochs with an image
 from ultralytics import YOLO

 # Load a model
-model = YOLO('path/to/best.pt')  # load a brain-tumor fine-tuned model
+model = YOLO("path/to/best.pt")  # load a brain-tumor fine-tuned model

 # Inference using the model
 results = model.predict("https://ultralytics.com/assets/brain-tumor-sample.jpg")

@@ -70,10 +70,10 @@ To train a YOLOv8n model on the COCO dataset for 100 epochs with an image size o
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n.pt")  # load a pretrained model (recommended for training)

 # Train the model
-results = model.train(data='coco.yaml', epochs=100, imgsz=640)
+results = model.train(data="coco.yaml", epochs=100, imgsz=640)
 ```

 === "CLI"

@@ -45,10 +45,10 @@ To train a YOLOv8n model on the COCO8 dataset for 100 epochs with an image size
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n.pt")  # load a pretrained model (recommended for training)

 # Train the model
-results = model.train(data='coco8.yaml', epochs=100, imgsz=640)
+results = model.train(data="coco8.yaml", epochs=100, imgsz=640)
 ```

 === "CLI"

@@ -48,10 +48,10 @@ To train a YOLOv8n model on the Global Wheat Head Dataset for 100 epochs with an
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n.pt")  # load a pretrained model (recommended for training)

 # Train the model
-results = model.train(data='GlobalWheat2020.yaml', epochs=100, imgsz=640)
+results = model.train(data="GlobalWheat2020.yaml", epochs=100, imgsz=640)
 ```

 === "CLI"

@@ -56,10 +56,10 @@ Here's how you can use these formats to train your model:
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n.pt")  # load a pretrained model (recommended for training)

 # Train the model
-results = model.train(data='coco8.yaml', epochs=100, imgsz=640)
+results = model.train(data="coco8.yaml", epochs=100, imgsz=640)
 ```
 === "CLI"

@@ -103,7 +103,7 @@ You can easily convert labels from the popular COCO dataset format to the YOLO f
 ```python
 from ultralytics.data.converter import convert_coco

-convert_coco(labels_dir='path/to/coco/annotations/')
+convert_coco(labels_dir="path/to/coco/annotations/")
 ```

 This conversion tool can be used to convert the COCO dataset or any dataset in the COCO format to the Ultralytics YOLO format.

@@ -66,10 +66,10 @@ To train a YOLOv8n model on the LVIS dataset for 100 epochs with an image size o
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n.pt")  # load a pretrained model (recommended for training)

 # Train the model
-results = model.train(data='lvis.yaml', epochs=100, imgsz=640)
+results = model.train(data="lvis.yaml", epochs=100, imgsz=640)
 ```

 === "CLI"

@@ -48,10 +48,10 @@ To train a YOLOv8n model on the Objects365 dataset for 100 epochs with an image
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n.pt")  # load a pretrained model (recommended for training)

 # Train the model
-results = model.train(data='Objects365.yaml', epochs=100, imgsz=640)
+results = model.train(data="Objects365.yaml", epochs=100, imgsz=640)
 ```

 === "CLI"

@@ -88,10 +88,10 @@ To train a YOLOv8n model on the Open Images V7 dataset for 100 epochs with an im
 from ultralytics import YOLO

 # Load a COCO-pretrained YOLOv8n model
-model = YOLO('yolov8n.pt')
+model = YOLO("yolov8n.pt")

 # Train the model on the Open Images V7 dataset
-results = model.train(data='open-images-v7.yaml', epochs=100, imgsz=640)
+results = model.train(data="open-images-v7.yaml", epochs=100, imgsz=640)
 ```

 === "CLI"
@@ -46,9 +46,10 @@ Dataset benchmarking evaluates machine learning model performance on specific da
 === "Python"

 ```python
-from pathlib import Path
-import shutil
 import os
+import shutil
+from pathlib import Path

 from ultralytics.utils.benchmarks import RF100Benchmark

 # Initialize RF100Benchmark and set API key

@@ -66,10 +67,10 @@ Dataset benchmarking evaluates machine learning model performance on specific da
 if path.exists():
     # Fix YAML file and run training
     benchmark.fix_yaml(str(path))
-    os.system(f'yolo detect train data={path} model=yolov8s.pt epochs=1 batch=16')
+    os.system(f"yolo detect train data={path} model=yolov8s.pt epochs=1 batch=16")

     # Run validation and evaluate
-    os.system(f'yolo detect val data={path} model=runs/detect/train/weights/best.pt > {val_log_file} 2>&1')
+    os.system(f"yolo detect val data={path} model=runs/detect/train/weights/best.pt > {val_log_file} 2>&1")
     benchmark.evaluate(str(path), str(val_log_file), str(eval_log_file), ind)

 # Remove the 'runs' directory
@@ -50,10 +50,10 @@ To train a YOLOv8n model on the SKU-110K dataset for 100 epochs with an image si
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n.pt")  # load a pretrained model (recommended for training)

 # Train the model
-results = model.train(data='SKU-110K.yaml', epochs=100, imgsz=640)
+results = model.train(data="SKU-110K.yaml", epochs=100, imgsz=640)
 ```

 === "CLI"

@@ -46,10 +46,10 @@ To train a YOLOv8n model on the VisDrone dataset for 100 epochs with an image si
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n.pt")  # load a pretrained model (recommended for training)

 # Train the model
-results = model.train(data='VisDrone.yaml', epochs=100, imgsz=640)
+results = model.train(data="VisDrone.yaml", epochs=100, imgsz=640)
 ```

 === "CLI"

@@ -49,10 +49,10 @@ To train a YOLOv8n model on the VOC dataset for 100 epochs with an image size of
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n.pt")  # load a pretrained model (recommended for training)

 # Train the model
-results = model.train(data='VOC.yaml', epochs=100, imgsz=640)
+results = model.train(data="VOC.yaml", epochs=100, imgsz=640)
 ```

 === "CLI"

@@ -52,10 +52,10 @@ To train a model on the xView dataset for 100 epochs with an image size of 640,
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n.pt")  # load a pretrained model (recommended for training)

 # Train the model
-results = model.train(data='xView.yaml', epochs=100, imgsz=640)
+results = model.train(data="xView.yaml", epochs=100, imgsz=640)
 ```

 === "CLI"
@@ -36,13 +36,13 @@ pip install ultralytics[explorer]
 from ultralytics import Explorer

 # Create an Explorer object
-explorer = Explorer(data='coco128.yaml', model='yolov8n.pt')
+explorer = Explorer(data="coco128.yaml", model="yolov8n.pt")

 # Create embeddings for your dataset
 explorer.create_embeddings_table()

 # Search for similar images to a given image/images
-dataframe = explorer.get_similar(img='path/to/image.jpg')
+dataframe = explorer.get_similar(img="path/to/image.jpg")

 # Or search for similar images to a given index/indices
 dataframe = explorer.get_similar(idx=0)

@@ -75,18 +75,17 @@ You get a pandas dataframe with the `limit` number of most similar data points t
 from ultralytics import Explorer

 # create an Explorer object
-exp = Explorer(data='coco128.yaml', model='yolov8n.pt')
+exp = Explorer(data="coco128.yaml", model="yolov8n.pt")
 exp.create_embeddings_table()

-similar = exp.get_similar(img='https://ultralytics.com/images/bus.jpg', limit=10)
+similar = exp.get_similar(img="https://ultralytics.com/images/bus.jpg", limit=10)
 print(similar.head())

 # Search using multiple indices
 similar = exp.get_similar(
-    img=['https://ultralytics.com/images/bus.jpg',
-         'https://ultralytics.com/images/bus.jpg'],
-    limit=10
+    img=["https://ultralytics.com/images/bus.jpg", "https://ultralytics.com/images/bus.jpg"],
+    limit=10,
 )
 print(similar.head())
 ```

@@ -96,14 +95,14 @@ You get a pandas dataframe with the `limit` number of most similar data points t
 from ultralytics import Explorer

 # create an Explorer object
-exp = Explorer(data='coco128.yaml', model='yolov8n.pt')
+exp = Explorer(data="coco128.yaml", model="yolov8n.pt")
 exp.create_embeddings_table()

 similar = exp.get_similar(idx=1, limit=10)
 print(similar.head())

 # Search using multiple indices
-similar = exp.get_similar(idx=[1,10], limit=10)
+similar = exp.get_similar(idx=[1, 10], limit=10)
 print(similar.head())
 ```

@@ -119,10 +118,10 @@ You can also plot the similar images using the `plot_similar` method. This metho
 from ultralytics import Explorer

 # create an Explorer object
-exp = Explorer(data='coco128.yaml', model='yolov8n.pt')
+exp = Explorer(data="coco128.yaml", model="yolov8n.pt")
 exp.create_embeddings_table()

-plt = exp.plot_similar(img='https://ultralytics.com/images/bus.jpg', limit=10)
+plt = exp.plot_similar(img="https://ultralytics.com/images/bus.jpg", limit=10)
 plt.show()
 ```

@@ -132,7 +131,7 @@ You can also plot the similar images using the `plot_similar` method. This metho
 from ultralytics import Explorer

 # create an Explorer object
-exp = Explorer(data='coco128.yaml', model='yolov8n.pt')
+exp = Explorer(data="coco128.yaml", model="yolov8n.pt")
 exp.create_embeddings_table()

 plt = exp.plot_similar(idx=1, limit=10)

@@ -150,9 +149,8 @@ Note: This works using LLMs under the hood so the results are probabilistic and
 from ultralytics import Explorer
 from ultralytics.data.explorer import plot_query_result

-
 # create an Explorer object
-exp = Explorer(data='coco128.yaml', model='yolov8n.pt')
+exp = Explorer(data="coco128.yaml", model="yolov8n.pt")
 exp.create_embeddings_table()

 df = exp.ask_ai("show me 100 images with exactly one person and 2 dogs. There can be other objects too")

@@ -173,7 +171,7 @@ You can run SQL queries on your dataset using the `sql_query` method. This metho
 from ultralytics import Explorer

 # create an Explorer object
-exp = Explorer(data='coco128.yaml', model='yolov8n.pt')
+exp = Explorer(data="coco128.yaml", model="yolov8n.pt")
 exp.create_embeddings_table()

 df = exp.sql_query("WHERE labels LIKE '%person%' AND labels LIKE '%dog%'")

@@ -190,7 +188,7 @@ You can also plot the results of a SQL query using the `plot_sql_query` method.
 from ultralytics import Explorer

 # create an Explorer object
-exp = Explorer(data='coco128.yaml', model='yolov8n.pt')
+exp = Explorer(data="coco128.yaml", model="yolov8n.pt")
 exp.create_embeddings_table()

 # plot the SQL Query

@@ -293,7 +291,7 @@ You can use similarity index to build custom conditions to filter out the datase
 import numpy as np

 sim_count = np.array(sim_idx["count"])
-sim_idx['im_file'][sim_count > 30]
+sim_idx["im_file"][sim_count > 30]
 ```

 ### Visualize Embedding Space

@@ -301,10 +299,10 @@ sim_idx['im_file'][sim_count > 30]
 You can also visualize the embedding space using the plotting tool of your choice. For example here is a simple example using matplotlib:

 ```python
-import numpy as np
-from sklearn.decomposition import PCA
 import matplotlib.pyplot as plt
+import numpy as np
 from mpl_toolkits.mplot3d import Axes3D
+from sklearn.decomposition import PCA

 # Reduce dimensions using PCA to 3 components for visualization in 3D
 pca = PCA(n_components=3)

@@ -312,14 +310,14 @@ reduced_data = pca.fit_transform(embeddings)

 # Create a 3D scatter plot using Matplotlib Axes3D
 fig = plt.figure(figsize=(8, 6))
-ax = fig.add_subplot(111, projection='3d')
+ax = fig.add_subplot(111, projection="3d")

 # Scatter plot
 ax.scatter(reduced_data[:, 0], reduced_data[:, 1], reduced_data[:, 2], alpha=0.5)
-ax.set_title('3D Scatter Plot of Reduced 256-Dimensional Data (PCA)')
-ax.set_xlabel('Component 1')
-ax.set_ylabel('Component 2')
-ax.set_zlabel('Component 3')
+ax.set_title("3D Scatter Plot of Reduced 256-Dimensional Data (PCA)")
+ax.set_xlabel("Component 1")
+ax.set_ylabel("Component 2")
+ax.set_zlabel("Component 3")

 plt.show()
 ```
@@ -135,14 +135,15 @@ Contributing a new dataset involves several steps to ensure that it aligns well
 ```python
 from pathlib import Path

 from ultralytics.data.utils import compress_one_image
 from ultralytics.utils.downloads import zip_directory

 # Define dataset directory
-path = Path('path/to/dataset')
+path = Path("path/to/dataset")

 # Optimize images in dataset (optional)
-for f in path.rglob('*.jpg'):
+for f in path.rglob("*.jpg"):
     compress_one_image(f)

 # Zip dataset into 'path/to/dataset.zip'
@@ -75,21 +75,21 @@ To train DOTA dataset, we split original DOTA images with high-resolution into i
 === "Python"

 ```python
-from ultralytics.data.split_dota import split_trainval, split_test
+from ultralytics.data.split_dota import split_test, split_trainval

 # split train and val set, with labels.
 split_trainval(
-    data_root='path/to/DOTAv1.0/',
-    save_dir='path/to/DOTAv1.0-split/',
+    data_root="path/to/DOTAv1.0/",
+    save_dir="path/to/DOTAv1.0-split/",
     rates=[0.5, 1.0, 1.5],  # multiscale
-    gap=500
+    gap=500,
 )
 # split test set, without labels.
 split_test(
-    data_root='path/to/DOTAv1.0/',
-    save_dir='path/to/DOTAv1.0-split/',
+    data_root="path/to/DOTAv1.0/",
+    save_dir="path/to/DOTAv1.0-split/",
     rates=[0.5, 1.0, 1.5],  # multiscale
-    gap=500
+    gap=500,
 )
 ```

@@ -109,10 +109,10 @@ To train a model on the DOTA v1 dataset, you can utilize the following code snip
 from ultralytics import YOLO

 # Create a new YOLOv8n-OBB model from scratch
-model = YOLO('yolov8n-obb.yaml')
+model = YOLO("yolov8n-obb.yaml")

 # Train the model on the DOTAv2 dataset
-results = model.train(data='DOTAv1.yaml', epochs=100, imgsz=640)
+results = model.train(data="DOTAv1.yaml", epochs=100, imgsz=640)
 ```

 === "CLI"

@@ -34,10 +34,10 @@ To train a YOLOv8n-obb model on the DOTA8 dataset for 100 epochs with an image s
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n-obb.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n-obb.pt")  # load a pretrained model (recommended for training)

 # Train the model
-results = model.train(data='dota8.yaml', epochs=100, imgsz=640)
+results = model.train(data="dota8.yaml", epochs=100, imgsz=640)
 ```

 === "CLI"

@@ -40,10 +40,10 @@ To train a model using these OBB formats:
 from ultralytics import YOLO

 # Create a new YOLOv8n-OBB model from scratch
-model = YOLO('yolov8n-obb.yaml')
+model = YOLO("yolov8n-obb.yaml")

 # Train the model on the DOTAv2 dataset
-results = model.train(data='DOTAv1.yaml', epochs=100, imgsz=640)
+results = model.train(data="DOTAv1.yaml", epochs=100, imgsz=640)
 ```

 === "CLI"

@@ -78,7 +78,7 @@ Transitioning labels from the DOTA dataset format to the YOLO OBB format can be
 ```python
 from ultralytics.data.converter import convert_dota_to_yolo_obb

-convert_dota_to_yolo_obb('path/to/DOTA')
+convert_dota_to_yolo_obb("path/to/DOTA")
 ```

 This conversion mechanism is instrumental for datasets in the DOTA format, ensuring alignment with the Ultralytics YOLO OBB format.
@@ -61,10 +61,10 @@ To train a YOLOv8n-pose model on the COCO-Pose dataset for 100 epochs with an im
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n-pose.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n-pose.pt")  # load a pretrained model (recommended for training)

 # Train the model
-results = model.train(data='coco-pose.yaml', epochs=100, imgsz=640)
+results = model.train(data="coco-pose.yaml", epochs=100, imgsz=640)
 ```

 === "CLI"

@@ -34,10 +34,10 @@ To train a YOLOv8n-pose model on the COCO8-Pose dataset for 100 epochs with an i
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n-pose.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n-pose.pt")  # load a pretrained model (recommended for training)

 # Train the model
-results = model.train(data='coco8-pose.yaml', epochs=100, imgsz=640)
+results = model.train(data="coco8-pose.yaml", epochs=100, imgsz=640)
 ```

 === "CLI"

@@ -72,10 +72,10 @@ The `train` and `val` fields specify the paths to the directories containing the
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n-pose.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n-pose.pt")  # load a pretrained model (recommended for training)

 # Train the model
-results = model.train(data='coco8-pose.yaml', epochs=100, imgsz=640)
+results = model.train(data="coco8-pose.yaml", epochs=100, imgsz=640)
 ```
 === "CLI"

@@ -132,7 +132,7 @@ Ultralytics provides a convenient conversion tool to convert labels from the pop
 ```python
 from ultralytics.data.converter import convert_coco

-convert_coco(labels_dir='path/to/coco/annotations/', use_keypoints=True)
+convert_coco(labels_dir="path/to/coco/annotations/", use_keypoints=True)
 ```

 This conversion tool can be used to convert the COCO dataset or any dataset in the COCO format to the Ultralytics YOLO format. The `use_keypoints` parameter specifies whether to include keypoints (for pose estimation) in the converted labels.

@@ -47,10 +47,10 @@ To train a YOLOv8n-pose model on the Tiger-Pose dataset for 100 epochs with an i
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n-pose.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n-pose.pt")  # load a pretrained model (recommended for training)

 # Train the model
-results = model.train(data='tiger-pose.yaml', epochs=100, imgsz=640)
+results = model.train(data="tiger-pose.yaml", epochs=100, imgsz=640)
 ```

 === "CLI"
@ -55,10 +55,10 @@ To train Ultralytics YOLOv8n model on the Carparts Segmentation dataset for 100
|
||||||
from ultralytics import YOLO
|
from ultralytics import YOLO
|
||||||
|
|
||||||
# Load a model
|
# Load a model
|
||||||
model = YOLO('yolov8n-seg.pt') # load a pretrained model (recommended for training)
|
model = YOLO("yolov8n-seg.pt") # load a pretrained model (recommended for training)
|
||||||
|
|
||||||
# Train the model
|
# Train the model
|
||||||
results = model.train(data='carparts-seg.yaml', epochs=100, imgsz=640)
|
results = model.train(data="carparts-seg.yaml", epochs=100, imgsz=640)
|
||||||
```
|
```
|
||||||
|
|
||||||
=== "CLI"
|
=== "CLI"
|
||||||
|
|
|
||||||
|
|
@ -59,10 +59,10 @@ To train a YOLOv8n-seg model on the COCO-Seg dataset for 100 epochs with an imag
|
||||||
from ultralytics import YOLO
|
from ultralytics import YOLO
|
||||||
|
|
||||||
# Load a model
|
# Load a model
|
||||||
model = YOLO('yolov8n-seg.pt') # load a pretrained model (recommended for training)
|
model = YOLO("yolov8n-seg.pt") # load a pretrained model (recommended for training)
|
||||||
|
|
||||||
# Train the model
|
# Train the model
|
||||||
results = model.train(data='coco-seg.yaml', epochs=100, imgsz=640)
|
results = model.train(data="coco-seg.yaml", epochs=100, imgsz=640)
|
||||||
```
|
```
|
||||||
|
|
||||||
=== "CLI"
|
=== "CLI"
|
||||||
|
|
|
||||||
|
|
@@ -34,10 +34,10 @@ To train a YOLOv8n-seg model on the COCO8-Seg dataset for 100 epochs with an ima
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n-seg.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n-seg.pt")  # load a pretrained model (recommended for training)

 # Train the model
-results = model.train(data='coco8-seg.yaml', epochs=100, imgsz=640)
+results = model.train(data="coco8-seg.yaml", epochs=100, imgsz=640)
 ```

 === "CLI"
@@ -44,10 +44,10 @@ To train Ultralytics YOLOv8n model on the Crack Segmentation dataset for 100 epo
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n-seg.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n-seg.pt")  # load a pretrained model (recommended for training)

 # Train the model
-results = model.train(data='crack-seg.yaml', epochs=100, imgsz=640)
+results = model.train(data="crack-seg.yaml", epochs=100, imgsz=640)
 ```

 === "CLI"
@@ -74,10 +74,10 @@ The `train` and `val` fields specify the paths to the directories containing the
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n-seg.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n-seg.pt")  # load a pretrained model (recommended for training)

 # Train the model
-results = model.train(data='coco8-seg.yaml', epochs=100, imgsz=640)
+results = model.train(data="coco8-seg.yaml", epochs=100, imgsz=640)
 ```
 === "CLI"
@@ -117,7 +117,7 @@ You can easily convert labels from the popular COCO dataset format to the YOLO f
 ```python
 from ultralytics.data.converter import convert_coco

-convert_coco(labels_dir='path/to/coco/annotations/', use_segments=True)
+convert_coco(labels_dir="path/to/coco/annotations/", use_segments=True)
 ```

 This conversion tool can be used to convert the COCO dataset or any dataset in the COCO format to the Ultralytics YOLO format.
@@ -139,7 +139,7 @@ To auto-annotate your dataset using the Ultralytics framework, you can use the `
 ```python
 from ultralytics.data.annotator import auto_annotate

-auto_annotate(data="path/to/images", det_model="yolov8x.pt", sam_model='sam_b.pt')
+auto_annotate(data="path/to/images", det_model="yolov8x.pt", sam_model="sam_b.pt")
 ```

 Certainly, here is the table updated with code snippets:
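As a usage note on the `auto_annotate` call reformatted above, a short sketch showing the optional `output_dir` argument; all paths here are placeholders:

```python
from ultralytics.data.annotator import auto_annotate

# Detections from the YOLOv8 model prompt SAM, which writes YOLO-format segmentation
# labels as .txt files; output_dir is optional and defaults to a folder created
# next to the source images
auto_annotate(data="path/to/images", det_model="yolov8x.pt", sam_model="sam_b.pt", output_dir="path/to/labels")
```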
@@ -44,10 +44,10 @@ To train Ultralytics YOLOv8n model on the Package Segmentation dataset for 100 e
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n-seg.pt')  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n-seg.pt")  # load a pretrained model (recommended for training)

 # Train the model
-results = model.train(data='package-seg.yaml', epochs=100, imgsz=640)
+results = model.train(data="package-seg.yaml", epochs=100, imgsz=640)
 ```

 === "CLI"
@@ -19,7 +19,7 @@ Multi-Object Detector doesn't need standalone training and directly supports pre
 ```python
 from ultralytics import YOLO

-model = YOLO('yolov8n.pt')
+model = YOLO("yolov8n.pt")
 results = model.track(source="https://youtu.be/LNwODJXcvt4", conf=0.3, iou=0.5, show=True)
 ```
 === "CLI"
@@ -70,8 +70,8 @@ With Ultralytics installed, you can now start using its robust features for obje
 ```python
 from ultralytics import YOLO

-model = YOLO('yolov8n.pt')  # initialize model
-results = model('path/to/image.jpg')  # perform inference
+model = YOLO("yolov8n.pt")  # initialize model
+results = model("path/to/image.jpg")  # perform inference
 results[0].show()  # display results for the first image
 ```
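The prediction snippet above only displays results; for reference, a short sketch of reading the same `Results` objects programmatically (the image path is a placeholder):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
results = model("path/to/image.jpg")

# Each Results object exposes the detections for one image
for r in results:
    for box in r.boxes:
        print(int(box.cls), float(box.conf), box.xyxy.tolist())  # class id, confidence, box corners
```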
@@ -82,10 +82,10 @@ To use the Edge TPU, you need to convert your model into a compatible format. It
 from ultralytics import YOLO

 # Load a model
-model = YOLO('path/to/model.pt')  # Load an official model or custom model
+model = YOLO("path/to/model.pt")  # Load an official model or custom model

 # Export the model
-model.export(format='edgetpu')
+model.export(format="edgetpu")
 ```

 === "CLI"

@@ -108,7 +108,7 @@ After exporting your model, you can run inference with it using the following co
 from ultralytics import YOLO

 # Load a model
-model = YOLO('path/to/edgetpu_model.tflite')  # Load an official model or custom model
+model = YOLO("path/to/edgetpu_model.tflite")  # Load an official model or custom model

 # Run Prediction
 model.predict("path/to/source.png")
@@ -42,8 +42,8 @@ Measuring the gap between two objects is known as distance calculation within a
 === "Video Stream"

 ```python
-from ultralytics import YOLO, solutions
 import cv2
+from ultralytics import YOLO, solutions

 model = YOLO("yolov8n.pt")
 names = model.model.names

@@ -53,7 +53,7 @@ Measuring the gap between two objects is known as distance calculation within a
 w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

 # Video writer
-video_writer = cv2.VideoWriter("distance_calculation.avi", cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
+video_writer = cv2.VideoWriter("distance_calculation.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

 # Init distance-calculation obj
 dist_obj = solutions.DistanceCalculation(names=names, view_img=True)

@@ -71,7 +71,6 @@ Measuring the gap between two objects is known as distance calculation within a
 cap.release()
 video_writer.release()
 cv2.destroyAllWindows()
-
 ```

 ???+ tip "Note"
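The hunks above show only the setup and teardown of this example; for orientation, a minimal sketch of the frame loop that sits between them. The `start_process` method name is assumed here from the `DistanceCalculation` solution this guide configures; treat it as an assumption rather than a guaranteed API:

```python
# Illustrative frame loop between the setup and the release calls shown above
while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        break
    tracks = model.track(im0, persist=True, show=False)  # track objects across frames
    im0 = dist_obj.start_process(im0, tracks)  # measure and annotate distances (assumed method name)
    video_writer.write(im0)
```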
@@ -44,8 +44,8 @@ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ult
 === "Heatmap"

 ```python
-from ultralytics import YOLO, solutions
 import cv2
+from ultralytics import YOLO, solutions

 model = YOLO("yolov8n.pt")
 cap = cv2.VideoCapture("path/to/video/file.mp4")

@@ -53,13 +53,15 @@ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ult
 w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

 # Video writer
-video_writer = cv2.VideoWriter("heatmap_output.avi", cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
+video_writer = cv2.VideoWriter("heatmap_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

 # Init heatmap
-heatmap_obj = solutions.Heatmap(colormap=cv2.COLORMAP_PARULA,
-                                view_img=True,
-                                shape="circle",
-                                classes_names=model.names)
+heatmap_obj = solutions.Heatmap(
+    colormap=cv2.COLORMAP_PARULA,
+    view_img=True,
+    shape="circle",
+    classes_names=model.names,
+)

 while cap.isOpened():
     success, im0 = cap.read()

@@ -74,14 +74,13 @@ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ult
 cap.release()
 video_writer.release()
 cv2.destroyAllWindows()
-
 ```

 === "Line Counting"

 ```python
-from ultralytics import YOLO, solutions
 import cv2
+from ultralytics import YOLO, solutions

 model = YOLO("yolov8n.pt")
 cap = cv2.VideoCapture("path/to/video/file.mp4")

@@ -89,16 +89,18 @@ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ult
 w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

 # Video writer
-video_writer = cv2.VideoWriter("heatmap_output.avi", cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
+video_writer = cv2.VideoWriter("heatmap_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

 line_points = [(20, 400), (1080, 404)]  # line for object counting

 # Init heatmap
-heatmap_obj = solutions.Heatmap(colormap=cv2.COLORMAP_PARULA,
-                                view_img=True,
-                                shape="circle",
-                                count_reg_pts=line_points,
-                                classes_names=model.names)
+heatmap_obj = solutions.Heatmap(
+    colormap=cv2.COLORMAP_PARULA,
+    view_img=True,
+    shape="circle",
+    count_reg_pts=line_points,
+    classes_names=model.names,
+)

 while cap.isOpened():
     success, im0 = cap.read()

@@ -117,8 +120,8 @@ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ult

 === "Polygon Counting"
 ```python
-from ultralytics import YOLO, solutions
 import cv2
+from ultralytics import YOLO, solutions

 model = YOLO("yolov8n.pt")
 cap = cv2.VideoCapture("path/to/video/file.mp4")

@@ -126,20 +129,19 @@ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ult
 w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

 # Video writer
-video_writer = cv2.VideoWriter("heatmap_output.avi",
-                               cv2.VideoWriter_fourcc(*'mp4v'),
-                               fps,
-                               (w, h))
+video_writer = cv2.VideoWriter("heatmap_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

 # Define polygon points
 region_points = [(20, 400), (1080, 404), (1080, 360), (20, 360), (20, 400)]

 # Init heatmap
-heatmap_obj = solutions.Heatmap(colormap=cv2.COLORMAP_PARULA,
-                                view_img=True,
-                                shape="circle",
-                                count_reg_pts=region_points,
-                                classes_names=model.names)
+heatmap_obj = solutions.Heatmap(
+    colormap=cv2.COLORMAP_PARULA,
+    view_img=True,
+    shape="circle",
+    count_reg_pts=region_points,
+    classes_names=model.names,
+)

 while cap.isOpened():
     success, im0 = cap.read()

@@ -159,8 +161,8 @@ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ult
 === "Region Counting"

 ```python
-from ultralytics import YOLO, solutions
 import cv2
+from ultralytics import YOLO, solutions

 model = YOLO("yolov8n.pt")
 cap = cv2.VideoCapture("path/to/video/file.mp4")

@@ -168,17 +170,19 @@ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ult
 w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

 # Video writer
-video_writer = cv2.VideoWriter("heatmap_output.avi", cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
+video_writer = cv2.VideoWriter("heatmap_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

 # Define region points
 region_points = [(20, 400), (1080, 404), (1080, 360), (20, 360)]

 # Init heatmap
-heatmap_obj = solutions.Heatmap(colormap=cv2.COLORMAP_PARULA,
-                                view_img=True,
-                                shape="circle",
-                                count_reg_pts=region_points,
-                                classes_names=model.names)
+heatmap_obj = solutions.Heatmap(
+    colormap=cv2.COLORMAP_PARULA,
+    view_img=True,
+    shape="circle",
+    count_reg_pts=region_points,
+    classes_names=model.names,
+)

 while cap.isOpened():
     success, im0 = cap.read()

@@ -198,19 +202,21 @@ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ult
 === "Im0"

 ```python
-from ultralytics import YOLO, solutions
 import cv2
+from ultralytics import YOLO, solutions

 model = YOLO("yolov8s.pt")  # YOLOv8 custom/pretrained model

 im0 = cv2.imread("path/to/image.png")  # path to image file
 h, w = im0.shape[:2]  # image height and width

 # Heatmap Init
-heatmap_obj = solutions.Heatmap(colormap=cv2.COLORMAP_PARULA,
-                                view_img=True,
-                                shape="circle",
-                                classes_names=model.names)
+heatmap_obj = solutions.Heatmap(
+    colormap=cv2.COLORMAP_PARULA,
+    view_img=True,
+    shape="circle",
+    classes_names=model.names,
+)

 results = model.track(im0, persist=True)
 im0 = heatmap_obj.generate_heatmap(im0, tracks=results)

@@ -220,8 +226,8 @@ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ult
 === "Specific Classes"

 ```python
-from ultralytics import YOLO, solutions
 import cv2
+from ultralytics import YOLO, solutions

 model = YOLO("yolov8n.pt")
 cap = cv2.VideoCapture("path/to/video/file.mp4")

@@ -229,23 +235,24 @@ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ult
 w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

 # Video writer
-video_writer = cv2.VideoWriter("heatmap_output.avi", cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
+video_writer = cv2.VideoWriter("heatmap_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

 classes_for_heatmap = [0, 2]  # classes for heatmap

 # Init heatmap
-heatmap_obj = solutions.Heatmap(colormap=cv2.COLORMAP_PARULA,
-                                view_img=True,
-                                shape="circle",
-                                classes_names=model.names)
+heatmap_obj = solutions.Heatmap(
+    colormap=cv2.COLORMAP_PARULA,
+    view_img=True,
+    shape="circle",
+    classes_names=model.names,
+)

 while cap.isOpened():
     success, im0 = cap.read()
     if not success:
         print("Video frame is empty or video processing has been successfully completed.")
         break
-    tracks = model.track(im0, persist=True, show=False,
-                         classes=classes_for_heatmap)
+    tracks = model.track(im0, persist=True, show=False, classes=classes_for_heatmap)

     im0 = heatmap_obj.generate_heatmap(im0, tracks)
     video_writer.write(im0)
@@ -77,10 +77,10 @@ Here's how to use the `model.tune()` method to utilize the `Tuner` class for hyp
 from ultralytics import YOLO

 # Initialize the YOLO model
-model = YOLO('yolov8n.pt')
+model = YOLO("yolov8n.pt")

 # Tune hyperparameters on COCO8 for 30 epochs
-model.tune(data='coco8.yaml', epochs=30, iterations=300, optimizer='AdamW', plots=False, save=False, val=False)
+model.tune(data="coco8.yaml", epochs=30, iterations=300, optimizer="AdamW", plots=False, save=False, val=False)
 ```

 ## Results
@@ -48,7 +48,7 @@ There are two types of instance segmentation tracking available in the Ultralyti
 cap = cv2.VideoCapture("path/to/video/file.mp4")
 w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

-out = cv2.VideoWriter('instance-segmentation.avi', cv2.VideoWriter_fourcc(*'MJPG'), fps, (w, h))
+out = cv2.VideoWriter("instance-segmentation.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))

 while True:
     ret, im0 = cap.read()

@@ -63,38 +63,35 @@ There are two types of instance segmentation tracking available in the Ultralyti
     clss = results[0].boxes.cls.cpu().tolist()
     masks = results[0].masks.xy
     for mask, cls in zip(masks, clss):
-        annotator.seg_bbox(mask=mask,
-                           mask_color=colors(int(cls), True),
-                           det_label=names[int(cls)])
+        annotator.seg_bbox(mask=mask, mask_color=colors(int(cls), True), det_label=names[int(cls)])

     out.write(im0)
     cv2.imshow("instance-segmentation", im0)

-    if cv2.waitKey(1) & 0xFF == ord('q'):
+    if cv2.waitKey(1) & 0xFF == ord("q"):
         break

 out.release()
 cap.release()
 cv2.destroyAllWindows()
-
 ```

 === "Instance Segmentation with Object Tracking"

 ```python
+from collections import defaultdict
+
 import cv2
 from ultralytics import YOLO
 from ultralytics.utils.plotting import Annotator, colors

-from collections import defaultdict
-
 track_history = defaultdict(lambda: [])

 model = YOLO("yolov8n-seg.pt")  # segmentation model
 cap = cv2.VideoCapture("path/to/video/file.mp4")
 w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

-out = cv2.VideoWriter('instance-segmentation-object-tracking.avi', cv2.VideoWriter_fourcc(*'MJPG'), fps, (w, h))
+out = cv2.VideoWriter("instance-segmentation-object-tracking.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))

 while True:
     ret, im0 = cap.read()

@@ -111,14 +108,12 @@ There are two types of instance segmentation tracking available in the Ultralyti
         track_ids = results[0].boxes.id.int().cpu().tolist()

         for mask, track_id in zip(masks, track_ids):
-            annotator.seg_bbox(mask=mask,
-                               mask_color=colors(track_id, True),
-                               track_label=str(track_id))
+            annotator.seg_bbox(mask=mask, mask_color=colors(track_id, True), track_label=str(track_id))

     out.write(im0)
     cv2.imshow("instance-segmentation-object-tracking", im0)

-    if cv2.waitKey(1) & 0xFF == ord('q'):
+    if cv2.waitKey(1) & 0xFF == ord("q"):
         break

 out.release()
@@ -36,7 +36,7 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n-seg.pt')
+model = YOLO("yolov8n-seg.pt")

 # Run inference
 results = model.predict()

@@ -159,7 +159,6 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab

 # Isolate object with binary mask
 isolated = cv2.bitwise_and(mask3ch, img)
-
 ```

 ??? question "How does this work?"

@@ -209,7 +208,6 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
 ```py
 # Isolate object with transparent background (when saved as PNG)
 isolated = np.dstack([img, b_mask])
-
 ```

 ??? question "How does this work?"

@@ -266,7 +264,7 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab

 ```py
 # Save isolated object to file
-_ = cv2.imwrite(f'{img_name}_{label}-{ci}.png', iso_crop)
+_ = cv2.imwrite(f"{img_name}_{label}-{ci}.png", iso_crop)
 ```

 - In this example, the `img_name` is the base-name of the source image file, `label` is the detected class-name, and `ci` is the index of the object detection (in case of multiple instances with the same class name).
@@ -62,36 +62,36 @@ Without further ado, let's dive in!
 ```python
 import datetime
 import shutil
-from pathlib import Path
 from collections import Counter
+from pathlib import Path

-import yaml
 import numpy as np
 import pandas as pd
-from ultralytics import YOLO
+import yaml
 from sklearn.model_selection import KFold
+from ultralytics import YOLO
 ```

 2. Proceed to retrieve all label files for your dataset.

 ```python
-dataset_path = Path('./Fruit-detection')  # replace with 'path/to/dataset' for your custom data
+dataset_path = Path("./Fruit-detection")  # replace with 'path/to/dataset' for your custom data
 labels = sorted(dataset_path.rglob("*labels/*.txt"))  # all data in 'labels'
 ```

 3. Now, read the contents of the dataset YAML file and extract the indices of the class labels.

 ```python
-yaml_file = 'path/to/data.yaml'  # your data YAML with data directories and names dictionary
-with open(yaml_file, 'r', encoding="utf8") as y:
-    classes = yaml.safe_load(y)['names']
+yaml_file = "path/to/data.yaml"  # your data YAML with data directories and names dictionary
+with open(yaml_file, "r", encoding="utf8") as y:
+    classes = yaml.safe_load(y)["names"]
 cls_idx = sorted(classes.keys())
 ```

 4. Initialize an empty `pandas` DataFrame.

 ```python
 indx = [l.stem for l in labels]  # uses base filename as ID (no extension)
 labels_df = pd.DataFrame([], columns=cls_idx, index=indx)
 ```

@@ -101,16 +101,16 @@ Without further ado, let's dive in!
 for label in labels:
     lbl_counter = Counter()

-    with open(label,'r') as lf:
+    with open(label, "r") as lf:
         lines = lf.readlines()

     for l in lines:
         # classes for YOLO label uses integer at first position of each line
-        lbl_counter[int(l.split(' ')[0])] += 1
+        lbl_counter[int(l.split(" ")[0])] += 1

     labels_df.loc[label.stem] = lbl_counter

 labels_df = labels_df.fillna(0.0)  # replace `nan` values with `0.0`
 ```

 6. The following is a sample view of the populated DataFrame:

@@ -142,7 +142,7 @@ The rows index the label files, each corresponding to an image in your dataset,

 ```python
 ksplit = 5
 kf = KFold(n_splits=ksplit, shuffle=True, random_state=20)  # setting random_state for repeatable results

 kfolds = list(kf.split(labels_df))
 ```

@@ -150,12 +150,12 @@ The rows index the label files, each corresponding to an image in your dataset,
 2. The dataset has now been split into `k` folds, each having a list of `train` and `val` indices. We will construct a DataFrame to display these results more clearly.

 ```python
-folds = [f'split_{n}' for n in range(1, ksplit + 1)]
+folds = [f"split_{n}" for n in range(1, ksplit + 1)]
 folds_df = pd.DataFrame(index=indx, columns=folds)

 for idx, (train, val) in enumerate(kfolds, start=1):
-    folds_df[f'split_{idx}'].loc[labels_df.iloc[train].index] = 'train'
-    folds_df[f'split_{idx}'].loc[labels_df.iloc[val].index] = 'val'
+    folds_df[f"split_{idx}"].loc[labels_df.iloc[train].index] = "train"
+    folds_df[f"split_{idx}"].loc[labels_df.iloc[val].index] = "val"
 ```

 3. Now we will calculate the distribution of class labels for each fold as a ratio of the classes present in `val` to those present in `train`.

@@ -168,8 +168,8 @@ The rows index the label files, each corresponding to an image in your dataset,
     val_totals = labels_df.iloc[val_indices].sum()

     # To avoid division by zero, we add a small value (1E-7) to the denominator
-    ratio = val_totals / (train_totals + 1E-7)
-    fold_lbl_distrb.loc[f'split_{n}'] = ratio
+    ratio = val_totals / (train_totals + 1e-7)
+    fold_lbl_distrb.loc[f"split_{n}"] = ratio
 ```

 The ideal scenario is for all class ratios to be reasonably similar for each split and across classes. This, however, will be subject to the specifics of your dataset.
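To make the "reasonably similar ratios" criterion concrete, here is a small illustrative check over the `fold_lbl_distrb` DataFrame built above; the print format and the notion of "deviation from the mean" are arbitrary choices, not part of the original guide:

```python
# Flag folds whose per-class val/train ratio strays far from the overall mean
balance = fold_lbl_distrb.astype(float)
mean_ratio = balance.mean().mean()
for split in balance.index:
    worst = (balance.loc[split] - mean_ratio).abs().max()
    print(f"{split}: max deviation from mean ratio = {worst:.3f}")
```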
@@ -177,17 +177,17 @@ The rows index the label files, each corresponding to an image in your dataset,
 4. Next, we create the directories and dataset YAML files for each split.

 ```python
-supported_extensions = ['.jpg', '.jpeg', '.png']
+supported_extensions = [".jpg", ".jpeg", ".png"]

 # Initialize an empty list to store image file paths
 images = []

 # Loop through supported extensions and gather image files
 for ext in supported_extensions:
-    images.extend(sorted((dataset_path / 'images').rglob(f"*{ext}")))
+    images.extend(sorted((dataset_path / "images").rglob(f"*{ext}")))

 # Create the necessary directories and dataset YAML files (unchanged)
-save_path = Path(dataset_path / f'{datetime.date.today().isoformat()}_{ksplit}-Fold_Cross-val')
+save_path = Path(dataset_path / f"{datetime.date.today().isoformat()}_{ksplit}-Fold_Cross-val")
 save_path.mkdir(parents=True, exist_ok=True)
 ds_yamls = []

@@ -195,22 +195,25 @@ The rows index the label files, each corresponding to an image in your dataset,
     # Create directories
     split_dir = save_path / split
     split_dir.mkdir(parents=True, exist_ok=True)
-    (split_dir / 'train' / 'images').mkdir(parents=True, exist_ok=True)
-    (split_dir / 'train' / 'labels').mkdir(parents=True, exist_ok=True)
-    (split_dir / 'val' / 'images').mkdir(parents=True, exist_ok=True)
-    (split_dir / 'val' / 'labels').mkdir(parents=True, exist_ok=True)
+    (split_dir / "train" / "images").mkdir(parents=True, exist_ok=True)
+    (split_dir / "train" / "labels").mkdir(parents=True, exist_ok=True)
+    (split_dir / "val" / "images").mkdir(parents=True, exist_ok=True)
+    (split_dir / "val" / "labels").mkdir(parents=True, exist_ok=True)

     # Create dataset YAML files
-    dataset_yaml = split_dir / f'{split}_dataset.yaml'
+    dataset_yaml = split_dir / f"{split}_dataset.yaml"
     ds_yamls.append(dataset_yaml)

-    with open(dataset_yaml, 'w') as ds_y:
-        yaml.safe_dump({
-            'path': split_dir.as_posix(),
-            'train': 'train',
-            'val': 'val',
-            'names': classes
-        }, ds_y)
+    with open(dataset_yaml, "w") as ds_y:
+        yaml.safe_dump(
+            {
+                "path": split_dir.as_posix(),
+                "train": "train",
+                "val": "val",
+                "names": classes,
+            },
+            ds_y,
+        )
 ```

 5. Lastly, copy images and labels into the respective directory ('train' or 'val') for each split.

@@ -221,8 +224,8 @@ The rows index the label files, each corresponding to an image in your dataset,
 for image, label in zip(images, labels):
     for split, k_split in folds_df.loc[image.stem].items():
         # Destination directory
-        img_to_path = save_path / split / k_split / 'images'
-        lbl_to_path = save_path / split / k_split / 'labels'
+        img_to_path = save_path / split / k_split / "images"
+        lbl_to_path = save_path / split / k_split / "labels"

         # Copy image and label files to new directory (SamefileError if file already exists)
         shutil.copy(image, img_to_path / image.name)

@@ -243,8 +246,8 @@ fold_lbl_distrb.to_csv(save_path / "kfold_label_distribution.csv")
 1. First, load the YOLO model.

 ```python
-weights_path = 'path/to/weights.pt'
-model = YOLO(weights_path, task='detect')
+weights_path = "path/to/weights.pt"
+model = YOLO(weights_path, task="detect")
 ```

 2. Next, iterate over the dataset YAML files to run training. The results will be saved to a directory specified by the `project` and `name` arguments. By default, this directory is 'exp/runs#' where # is an integer index.

@@ -254,12 +257,12 @@ fold_lbl_distrb.to_csv(save_path / "kfold_label_distribution.csv")

 # Define your additional arguments here
 batch = 16
-project = 'kfold_demo'
+project = "kfold_demo"
 epochs = 100

 for k in range(ksplit):
     dataset_yaml = ds_yamls[k]
-    model.train(data=dataset_yaml,epochs=epochs, batch=batch, project=project)  # include any train arguments
+    model.train(data=dataset_yaml, epochs=epochs, batch=batch, project=project)  # include any train arguments
     results[k] = model.metrics  # save output metrics for further analysis
 ```
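One practical point the training loop above leaves implicit: `results` must be initialized before the loop, and re-loading the weights each fold keeps the runs independent. A hedged sketch of that variant, plus a simple aggregation; the metric key follows the usual Ultralytics `results_dict` naming and is an assumption here:

```python
results = {}
for k in range(ksplit):
    model = YOLO(weights_path, task="detect")  # fresh weights per fold so runs don't leak into each other
    model.train(data=ds_yamls[k], epochs=epochs, batch=batch, project=project)
    results[k] = model.metrics

# Aggregate one headline number across folds (assumed results_dict key)
maps = [m.results_dict["metrics/mAP50-95(B)"] for m in results.values()]
print(f"mean mAP50-95 over {ksplit} folds: {sum(maps) / len(maps):.3f}")
```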
@@ -158,16 +158,16 @@ The YOLOv8n model in PyTorch format is converted to TensorRT to run inference wi
 from ultralytics import YOLO

 # Load a YOLOv8n PyTorch model
-model = YOLO('yolov8n.pt')
+model = YOLO("yolov8n.pt")

 # Export the model
-model.export(format='engine')  # creates 'yolov8n.engine'
+model.export(format="engine")  # creates 'yolov8n.engine'

 # Load the exported TensorRT model
-trt_model = YOLO('yolov8n.engine')
+trt_model = YOLO("yolov8n.engine")

 # Run inference
-results = trt_model('https://ultralytics.com/images/bus.jpg')
+results = trt_model("https://ultralytics.com/images/bus.jpg")
 ```
 === "CLI"
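A small optional variation on the export shown above: TensorRT engines are often exported in FP16 for extra speed on NVIDIA GPUs. `half=True` is a standard export argument; the accuracy impact is usually minor, but verify on your own data:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.export(format="engine", half=True)  # FP16 engine, creates 'yolov8n.engine'

trt_model = YOLO("yolov8n.engine")
results = trt_model("https://ultralytics.com/images/bus.jpg")
```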
@@ -290,10 +290,10 @@ To reproduce the above Ultralytics benchmarks on all export [formats](../modes/e
 from ultralytics import YOLO

 # Load a YOLOv8n PyTorch model
-model = YOLO('yolov8n.pt')
+model = YOLO("yolov8n.pt")

 # Benchmark YOLOv8n speed and accuracy on the COCO8 dataset for all export formats
-results = model.benchmarks(data='coco8.yaml', imgsz=640)
+results = model.benchmarks(data="coco8.yaml", imgsz=640)
 ```
 === "CLI"
@@ -21,9 +21,9 @@ Object blurring with [Ultralytics YOLOv8](https://github.com/ultralytics/ultraly
 === "Object Blurring"

 ```python
+import cv2
 from ultralytics import YOLO
 from ultralytics.utils.plotting import Annotator, colors
-import cv2

 model = YOLO("yolov8n.pt")
 names = model.names

@@ -36,9 +36,7 @@ Object blurring with [Ultralytics YOLOv8](https://github.com/ultralytics/ultraly
 blur_ratio = 50

 # Video writer
-video_writer = cv2.VideoWriter("object_blurring_output.avi",
-                               cv2.VideoWriter_fourcc(*'mp4v'),
-                               fps, (w, h))
+video_writer = cv2.VideoWriter("object_blurring_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

 while cap.isOpened():
     success, im0 = cap.read()

@@ -55,14 +53,14 @@ Object blurring with [Ultralytics YOLOv8](https://github.com/ultralytics/ultraly
     for box, cls in zip(boxes, clss):
         annotator.box_label(box, color=colors(int(cls), True), label=names[int(cls)])

-        obj = im0[int(box[1]):int(box[3]), int(box[0]):int(box[2])]
+        obj = im0[int(box[1]) : int(box[3]), int(box[0]) : int(box[2])]
         blur_obj = cv2.blur(obj, (blur_ratio, blur_ratio))

-        im0[int(box[1]):int(box[3]), int(box[0]):int(box[2])] = blur_obj
+        im0[int(box[1]) : int(box[3]), int(box[0]) : int(box[2])] = blur_obj

     cv2.imshow("ultralytics", im0)
     video_writer.write(im0)
-    if cv2.waitKey(1) & 0xFF == ord('q'):
+    if cv2.waitKey(1) & 0xFF == ord("q"):
         break

 cap.release()
@@ -28,10 +28,11 @@ Object cropping with [Ultralytics YOLOv8](https://github.com/ultralytics/ultraly
 === "Object Cropping"

 ```python
+import os
+
+import cv2
 from ultralytics import YOLO
 from ultralytics.utils.plotting import Annotator, colors
-import cv2
-import os

 model = YOLO("yolov8n.pt")
 names = model.names

@@ -45,9 +46,7 @@ Object cropping with [Ultralytics YOLOv8](https://github.com/ultralytics/ultraly
     os.mkdir(crop_dir_name)

 # Video writer
-video_writer = cv2.VideoWriter("object_cropping_output.avi",
-                               cv2.VideoWriter_fourcc(*'mp4v'),
-                               fps, (w, h))
+video_writer = cv2.VideoWriter("object_cropping_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

 idx = 0
 while cap.isOpened():

@@ -66,14 +65,14 @@ Object cropping with [Ultralytics YOLOv8](https://github.com/ultralytics/ultraly
         idx += 1
         annotator.box_label(box, color=colors(int(cls), True), label=names[int(cls)])

-        crop_obj = im0[int(box[1]):int(box[3]), int(box[0]):int(box[2])]
+        crop_obj = im0[int(box[1]) : int(box[3]), int(box[0]) : int(box[2])]

-        cv2.imwrite(os.path.join(crop_dir_name, str(idx)+".png"), crop_obj)
+        cv2.imwrite(os.path.join(crop_dir_name, str(idx) + ".png"), crop_obj)

     cv2.imshow("ultralytics", im0)
     video_writer.write(im0)

-    if cv2.waitKey(1) & 0xFF == ord('q'):
+    if cv2.waitKey(1) & 0xFF == ord("q"):
         break

 cap.release()
@@ -66,12 +66,10 @@ root.mainloop()
 # Video capture
 cap = cv2.VideoCapture("Path/to/video/file.mp4")
 assert cap.isOpened(), "Error reading video file"
-w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH,
-                                       cv2.CAP_PROP_FRAME_HEIGHT,
-                                       cv2.CAP_PROP_FPS))
+w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

 # Video writer
-video_writer = cv2.VideoWriter("parking management.avi", cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
+video_writer = cv2.VideoWriter("parking management.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

 # Initialize parking management object
 management = solutions.ParkingManagement(model_path="yolov8n.pt")
@@ -36,26 +36,27 @@ Queue management using [Ultralytics YOLOv8](https://github.com/ultralytics/ultra
 assert cap.isOpened(), "Error reading video file"
 w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

-video_writer = cv2.VideoWriter("queue_management.avi", cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
+video_writer = cv2.VideoWriter("queue_management.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

 queue_region = [(20, 400), (1080, 404), (1080, 360), (20, 360)]

-queue = solutions.QueueManager(classes_names=model.names,
-                               reg_pts=queue_region,
-                               line_thickness=3,
-                               fontsize=1.0,
-                               region_color=(255, 144, 31))
+queue = solutions.QueueManager(
+    classes_names=model.names,
+    reg_pts=queue_region,
+    line_thickness=3,
+    fontsize=1.0,
+    region_color=(255, 144, 31),
+)

 while cap.isOpened():
     success, im0 = cap.read()

     if success:
-        tracks = model.track(im0, show=False, persist=True,
-                             verbose=False)
+        tracks = model.track(im0, show=False, persist=True, verbose=False)
         out = queue.process_queue(im0, tracks)

         video_writer.write(im0)
-        if cv2.waitKey(1) & 0xFF == ord('q'):
+        if cv2.waitKey(1) & 0xFF == ord("q"):
             break
         continue

@@ -78,26 +79,27 @@ Queue management using [Ultralytics YOLOv8](https://github.com/ultralytics/ultra
 assert cap.isOpened(), "Error reading video file"
 w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

-video_writer = cv2.VideoWriter("queue_management.avi", cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
+video_writer = cv2.VideoWriter("queue_management.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

 queue_region = [(20, 400), (1080, 404), (1080, 360), (20, 360)]

-queue = solutions.QueueManager(classes_names=model.names,
-                               reg_pts=queue_region,
-                               line_thickness=3,
-                               fontsize=1.0,
-                               region_color=(255, 144, 31))
+queue = solutions.QueueManager(
+    classes_names=model.names,
+    reg_pts=queue_region,
+    line_thickness=3,
+    fontsize=1.0,
+    region_color=(255, 144, 31),
+)

 while cap.isOpened():
     success, im0 = cap.read()

     if success:
-        tracks = model.track(im0, show=False, persist=True,
-                             verbose=False, classes=0)  # Only person class
+        tracks = model.track(im0, show=False, persist=True, verbose=False, classes=0)  # Only person class
         out = queue.process_queue(im0, tracks)

         video_writer.write(im0)
-        if cv2.waitKey(1) & 0xFF == ord('q'):
+        if cv2.waitKey(1) & 0xFF == ord("q"):
             break
         continue
@@ -108,16 +108,16 @@ The YOLOv8n model in PyTorch format is converted to NCNN to run inference with t
 from ultralytics import YOLO

 # Load a YOLOv8n PyTorch model
-model = YOLO('yolov8n.pt')
+model = YOLO("yolov8n.pt")

 # Export the model to NCNN format
-model.export(format='ncnn')  # creates 'yolov8n_ncnn_model'
+model.export(format="ncnn")  # creates 'yolov8n_ncnn_model'

 # Load the exported NCNN model
-ncnn_model = YOLO('yolov8n_ncnn_model')
+ncnn_model = YOLO("yolov8n_ncnn_model")

 # Run inference
-results = ncnn_model('https://ultralytics.com/images/bus.jpg')
+results = ncnn_model("https://ultralytics.com/images/bus.jpg")
 ```
 === "CLI"

@@ -231,10 +231,10 @@ To reproduce the above Ultralytics benchmarks on all [export formats](../modes/e
 from ultralytics import YOLO

 # Load a YOLOv8n PyTorch model
-model = YOLO('yolov8n.pt')
+model = YOLO("yolov8n.pt")

 # Benchmark YOLOv8n speed and accuracy on the COCO8 dataset for all export formats
-results = model.benchmarks(data='coco8.yaml', imgsz=640)
+results = model.benchmarks(data="coco8.yaml", imgsz=640)
 ```
 === "CLI"

@@ -293,10 +293,10 @@ With the TCP stream initiated, you can perform YOLOv8 inference.
 from ultralytics import YOLO

 # Load a YOLOv8n PyTorch model
-model = YOLO('yolov8n.pt')
+model = YOLO("yolov8n.pt")

 # Run inference
-results = model('tcp://127.0.0.1:8888')
+results = model("tcp://127.0.0.1:8888")
 ```
 === "CLI"
@@ -60,21 +60,28 @@ pip install -U ultralytics sahi
 Here's how to import the necessary modules and download a YOLOv8 model and some test images:

 ```python
-from sahi.utils.yolov8 import download_yolov8s_model
+from pathlib import Path
+
+from IPython.display import Image
 from sahi import AutoDetectionModel
+from sahi.predict import get_prediction, get_sliced_prediction, predict
 from sahi.utils.cv import read_image
 from sahi.utils.file import download_from_url
-from sahi.predict import get_prediction, get_sliced_prediction, predict
-from pathlib import Path
-from IPython.display import Image
+from sahi.utils.yolov8 import download_yolov8s_model

 # Download YOLOv8 model
 yolov8_model_path = "models/yolov8s.pt"
 download_yolov8s_model(yolov8_model_path)

 # Download test images
-download_from_url('https://raw.githubusercontent.com/obss/sahi/main/demo/demo_data/small-vehicles1.jpeg', 'demo_data/small-vehicles1.jpeg')
-download_from_url('https://raw.githubusercontent.com/obss/sahi/main/demo/demo_data/terrain2.png', 'demo_data/terrain2.png')
+download_from_url(
+    "https://raw.githubusercontent.com/obss/sahi/main/demo/demo_data/small-vehicles1.jpeg",
+    "demo_data/small-vehicles1.jpeg",
+)
+download_from_url(
+    "https://raw.githubusercontent.com/obss/sahi/main/demo/demo_data/terrain2.png",
+    "demo_data/terrain2.png",
+)
 ```

 ## Standard Inference with YOLOv8

@@ -85,7 +92,7 @@ You can instantiate a YOLOv8 model for object detection like this:

 ```python
 detection_model = AutoDetectionModel.from_pretrained(
-    model_type='yolov8',
+    model_type="yolov8",
     model_path=yolov8_model_path,
     confidence_threshold=0.3,
     device="cpu",  # or 'cuda:0'

@@ -124,7 +131,7 @@ result = get_sliced_prediction(
     slice_height=256,
     slice_width=256,
     overlap_height_ratio=0.2,
-    overlap_width_ratio=0.2
+    overlap_width_ratio=0.2,
 )
 ```
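Following the sliced prediction above, the resulting `PredictionResult` can be exported or converted with SAHI's standard helpers; a brief sketch, with an arbitrary output directory:

```python
# Save an annotated copy of the image and inspect detections in COCO annotation form
result.export_visuals(export_dir="demo_data/")
coco_annotations = result.to_coco_annotations()
print(coco_annotations[:3])  # first few detections as plain dicts
```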
@@ -30,15 +30,16 @@ The Security Alarm System Project utilizing Ultralytics YOLOv8 integrates advanc
 #### Import Libraries

 ```python
-import torch
-import numpy as np
-import cv2
-from time import time
-from ultralytics import YOLO
-from ultralytics.utils.plotting import Annotator, colors
 import smtplib
 from email.mime.multipart import MIMEMultipart
 from email.mime.text import MIMEText
+from time import time
+
+import cv2
+import numpy as np
+import torch
+from ultralytics import YOLO
+from ultralytics.utils.plotting import Annotator, colors
 ```

 #### Set up the parameters of the message

@@ -58,7 +59,7 @@ to_email = ""  # receiver email
 #### Server creation and authentication

 ```python
-server = smtplib.SMTP('smtp.gmail.com: 587')
+server = smtplib.SMTP("smtp.gmail.com: 587")
 server.starttls()
 server.login(from_email, password)
 ```
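One note on the line reformatted above: `smtplib.SMTP("smtp.gmail.com: 587")` embeds the port inside the host string, which relies on smtplib's host-string parsing. The conventional form passes host and port separately; a minimal equivalent sketch, where `from_email` and `password` come from the surrounding example:

```python
import smtplib

server = smtplib.SMTP("smtp.gmail.com", 587)  # host and port as separate arguments
server.starttls()  # upgrade the connection to TLS before authenticating
server.login(from_email, password)
```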
@ -69,13 +70,13 @@ server.login(from_email, password)
|
||||||
def send_email(to_email, from_email, object_detected=1):
|
def send_email(to_email, from_email, object_detected=1):
|
||||||
"""Sends an email notification indicating the number of objects detected; defaults to 1 object."""
|
"""Sends an email notification indicating the number of objects detected; defaults to 1 object."""
|
||||||
message = MIMEMultipart()
|
message = MIMEMultipart()
|
||||||
message['From'] = from_email
|
message["From"] = from_email
|
||||||
message['To'] = to_email
|
message["To"] = to_email
|
||||||
message['Subject'] = "Security Alert"
|
message["Subject"] = "Security Alert"
|
||||||
# Add in the message body
|
# Add in the message body
|
||||||
message_body = f'ALERT - {object_detected} objects has been detected!!'
|
message_body = f"ALERT - {object_detected} objects has been detected!!"
|
||||||
|
|
||||||
message.attach(MIMEText(message_body, 'plain'))
|
message.attach(MIMEText(message_body, "plain"))
|
||||||
server.sendmail(from_email, to_email, message.as_string())
|
server.sendmail(from_email, to_email, message.as_string())
|
||||||
```
|
```
|
||||||
|
|
||||||
|
|
@ -97,7 +98,7 @@ class ObjectDetection:
|
||||||
self.end_time = 0
|
self.end_time = 0
|
||||||
|
|
||||||
# device information
|
# device information
|
||||||
self.device = 'cuda' if torch.cuda.is_available() else 'cpu'
|
self.device = "cuda" if torch.cuda.is_available() else "cpu"
|
||||||
|
|
||||||
def predict(self, im0):
|
def predict(self, im0):
|
||||||
"""Run prediction using a YOLO model for the input image `im0`."""
|
"""Run prediction using a YOLO model for the input image `im0`."""
|
||||||
|
|
@ -108,10 +109,16 @@ class ObjectDetection:
|
||||||
"""Displays the FPS on an image `im0` by calculating and overlaying as white text on a black rectangle."""
|
"""Displays the FPS on an image `im0` by calculating and overlaying as white text on a black rectangle."""
|
||||||
self.end_time = time()
|
self.end_time = time()
|
||||||
fps = 1 / np.round(self.end_time - self.start_time, 2)
|
fps = 1 / np.round(self.end_time - self.start_time, 2)
|
||||||
text = f'FPS: {int(fps)}'
|
text = f"FPS: {int(fps)}"
|
||||||
text_size = cv2.getTextSize(text, cv2.FONT_HERSHEY_SIMPLEX, 1.0, 2)[0]
|
text_size = cv2.getTextSize(text, cv2.FONT_HERSHEY_SIMPLEX, 1.0, 2)[0]
|
||||||
gap = 10
|
gap = 10
|
||||||
cv2.rectangle(im0, (20 - gap, 70 - text_size[1] - gap), (20 + text_size[0] + gap, 70 + gap), (255, 255, 255), -1)
|
cv2.rectangle(
|
||||||
|
im0,
|
||||||
|
(20 - gap, 70 - text_size[1] - gap),
|
||||||
|
(20 + text_size[0] + gap, 70 + gap),
|
||||||
|
(255, 255, 255),
|
||||||
|
-1,
|
||||||
|
)
|
||||||
cv2.putText(im0, text, (20, 70), cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 0, 0), 2)
|
cv2.putText(im0, text, (20, 70), cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 0, 0), 2)
|
||||||
|
|
||||||
def plot_bboxes(self, results, im0):
|
def plot_bboxes(self, results, im0):
|
||||||
|
|
@ -148,7 +155,7 @@ class ObjectDetection:
|
||||||
self.email_sent = False
|
self.email_sent = False
|
||||||
|
|
||||||
self.display_fps(im0)
|
self.display_fps(im0)
|
||||||
cv2.imshow('YOLOv8 Detection', im0)
|
cv2.imshow("YOLOv8 Detection", im0)
|
||||||
frame_count += 1
|
frame_count += 1
|
||||||
if cv2.waitKey(5) & 0xFF == 27:
|
if cv2.waitKey(5) & 0xFF == 27:
|
||||||
break
|
break
|
||||||
|
|
|
||||||
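The SMTP hunk above keeps the single-string `smtplib.SMTP("smtp.gmail.com: 587")` form, which works because `smtplib` splits the host on the colon. For reference, a self-contained sketch of the same flow with host and port passed separately; all addresses and the password are placeholders, and Gmail requires an app password here.

```python
# Standalone sketch of the notification flow from the hunks above.
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

from_email = "sender@gmail.com"  # placeholder sender address
password = "app-password"  # placeholder: a Gmail app password, not the account password
to_email = "receiver@gmail.com"  # placeholder receiver address

server = smtplib.SMTP("smtp.gmail.com", 587)  # host and port as separate arguments
server.starttls()
server.login(from_email, password)

message = MIMEMultipart()
message["From"] = from_email
message["To"] = to_email
message["Subject"] = "Security Alert"
message.attach(MIMEText("ALERT - 2 objects have been detected!!", "plain"))

server.sendmail(from_email, to_email, message.as_string())
server.quit()  # close the SMTP session when done
```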
@@ -39,8 +39,8 @@ Speed estimation is the process of calculating the rate of movement of an object
 === "Speed Estimation"

     ```python
-    from ultralytics import YOLO, solutions
     import cv2
+    from ultralytics import YOLO, solutions

     model = YOLO("yolov8n.pt")
     names = model.model.names
@@ -50,17 +50,18 @@ Speed estimation is the process of calculating the rate of movement of an object
     w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

     # Video writer
-    video_writer = cv2.VideoWriter("speed_estimation.avi", cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
+    video_writer = cv2.VideoWriter("speed_estimation.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

     line_pts = [(0, 360), (1280, 360)]

     # Init speed-estimation obj
-    speed_obj = solutions.SpeedEstimator(reg_pts=line_pts,
-                                         names=names,
-                                         view_img=True)
+    speed_obj = solutions.SpeedEstimator(
+        reg_pts=line_pts,
+        names=names,
+        view_img=True,
+    )

     while cap.isOpened():

         success, im0 = cap.read()
         if not success:
             print("Video frame is empty or video processing has been successfully completed.")
@@ -74,7 +75,6 @@ Speed estimation is the process of calculating the rate of movement of an object
     cap.release()
     video_writer.release()
     cv2.destroyAllWindows()
-
     ```

     ???+ warning "Speed is Estimate"
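The two hunks above skip the middle of the processing loop. A sketch of how that loop plausibly reads end to end is below; `estimate_speed(im0, tracks)` is assumed to be the `SpeedEstimator` entry point of this docs version, and the video path is a placeholder.

```python
# Sketch of the full speed-estimation loop bridging the hunks above.
import cv2

from ultralytics import YOLO, solutions

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
video_writer = cv2.VideoWriter("speed_estimation.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

speed_obj = solutions.SpeedEstimator(reg_pts=[(0, 360), (1280, 360)], names=model.model.names, view_img=True)

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        break
    tracks = model.track(im0, persist=True, show=False)  # assumption: tracker output feeds the estimator
    im0 = speed_obj.estimate_speed(im0, tracks)  # assumption: API name from this docs generation
    video_writer.write(im0)

cap.release()
video_writer.release()
cv2.destroyAllWindows()
```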
@@ -46,10 +46,10 @@ Before deploying the model on Triton, it must be exported to the ONNX format. ON
 from ultralytics import YOLO

 # Load a model
-model = YOLO('yolov8n.pt')  # load an official model
+model = YOLO("yolov8n.pt")  # load an official model

 # Export the model
-onnx_file = model.export(format='onnx', dynamic=True)
+onnx_file = model.export(format="onnx", dynamic=True)
 ```

 ## Setting Up Triton Model Repository
@@ -62,11 +62,11 @@ The Triton Model Repository is a storage location where Triton can access and lo
 from pathlib import Path

 # Define paths
-triton_repo_path = Path('tmp') / 'triton_repo'
-triton_model_path = triton_repo_path / 'yolo'
+triton_repo_path = Path("tmp") / "triton_repo"
+triton_model_path = triton_repo_path / "yolo"

 # Create directories
-(triton_model_path / '1').mkdir(parents=True, exist_ok=True)
+(triton_model_path / "1").mkdir(parents=True, exist_ok=True)
 ```

 2. Move the exported ONNX model to the Triton repository:
@@ -75,10 +75,10 @@ The Triton Model Repository is a storage location where Triton can access and lo
 from pathlib import Path

 # Move ONNX model to Triton Model path
-Path(onnx_file).rename(triton_model_path / '1' / 'model.onnx')
+Path(onnx_file).rename(triton_model_path / "1" / "model.onnx")

 # Create config file
-(triton_model_path / 'config.pbtxt').touch()
+(triton_model_path / "config.pbtxt").touch()
 ```

 ## Running Triton Inference Server
@@ -92,18 +92,23 @@ import time
 from tritonclient.http import InferenceServerClient

 # Define image https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tritonserver
-tag = 'nvcr.io/nvidia/tritonserver:23.09-py3'  # 6.4 GB
+tag = "nvcr.io/nvidia/tritonserver:23.09-py3"  # 6.4 GB

 # Pull the image
-subprocess.call(f'docker pull {tag}', shell=True)
+subprocess.call(f"docker pull {tag}", shell=True)

 # Run the Triton server and capture the container ID
-container_id = subprocess.check_output(
-    f'docker run -d --rm -v {triton_repo_path}:/models -p 8000:8000 {tag} tritonserver --model-repository=/models',
-    shell=True).decode('utf-8').strip()
+container_id = (
+    subprocess.check_output(
+        f"docker run -d --rm -v {triton_repo_path}:/models -p 8000:8000 {tag} tritonserver --model-repository=/models",
+        shell=True,
+    )
+    .decode("utf-8")
+    .strip()
+)

 # Wait for the Triton server to start
-triton_client = InferenceServerClient(url='localhost:8000', verbose=False, ssl=False)
+triton_client = InferenceServerClient(url="localhost:8000", verbose=False, ssl=False)

 # Wait until model is ready
 for _ in range(10):
@@ -119,17 +124,17 @@ Then run inference using the Triton Server model:
 from ultralytics import YOLO

 # Load the Triton Server model
-model = YOLO(f'http://localhost:8000/yolo', task='detect')
+model = YOLO(f"http://localhost:8000/yolo", task="detect")

 # Run inference on the server
-results = model('path/to/image.jpg')
+results = model("path/to/image.jpg")
 ```

 Cleanup the container:

 ```python
 # Kill and remove the container at the end of the test
-subprocess.call(f'docker kill {container_id}', shell=True)
+subprocess.call(f"docker kill {container_id}", shell=True)
 ```

 ---
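The hunk above breaks off right at `for _ in range(10):`, the start of the readiness wait. A plausible completion is sketched below; it assumes `tritonclient.http.InferenceServerClient.is_model_ready()` and that the model name matches the `yolo` repository directory created earlier.

```python
# Sketch of the readiness polling loop cut off in the hunk above.
import time

from tritonclient.http import InferenceServerClient

triton_client = InferenceServerClient(url="localhost:8000", verbose=False, ssl=False)

model_name = "yolo"  # assumption: matches the repository directory created above
for _ in range(10):
    try:
        if triton_client.is_model_ready(model_name):  # assumption: tritonclient HTTP API
            break
    except Exception:
        pass  # the server may not be accepting connections yet
    time.sleep(1)
```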
@@ -47,9 +47,8 @@ The VSCode compatible protocols for viewing images using the integrated terminal
 import io

 import cv2 as cv
-
-from ultralytics import YOLO
 from sixel import SixelWriter
+
+from ultralytics import YOLO
 ```

 1. Load a model and execute inference, then plot the results and store in a variable. See more about inference arguments and working with results on the [predict mode](../modes/predict.md) page.
@@ -24,14 +24,14 @@ keywords: Ultralytics, YOLOv8, Object Detection, Object Tracking, IDetection, Vi
 ```python
 import cv2
 from ultralytics import YOLO
-from ultralytics.utils.plotting import colors, Annotator
+from ultralytics.utils.plotting import Annotator, colors

 model = YOLO("yolov8n.pt")
 names = model.model.names
 cap = cv2.VideoCapture("path/to/video/file.mp4")
 w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

-out = cv2.VideoWriter('visioneye-pinpoint.avi', cv2.VideoWriter_fourcc(*'MJPG'), fps, (w, h))
+out = cv2.VideoWriter("visioneye-pinpoint.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))

 center_point = (-10, h)
@@ -54,7 +54,7 @@ keywords: Ultralytics, YOLOv8, Object Detection, Object Tracking, IDetection, Vi
     out.write(im0)
     cv2.imshow("visioneye-pinpoint", im0)

-    if cv2.waitKey(1) & 0xFF == ord('q'):
+    if cv2.waitKey(1) & 0xFF == ord("q"):
         break

 out.release()
@@ -67,13 +67,13 @@ keywords: Ultralytics, YOLOv8, Object Detection, Object Tracking, IDetection, Vi
 ```python
 import cv2
 from ultralytics import YOLO
-from ultralytics.utils.plotting import colors, Annotator
+from ultralytics.utils.plotting import Annotator, colors

 model = YOLO("yolov8n.pt")
 cap = cv2.VideoCapture("path/to/video/file.mp4")
 w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

-out = cv2.VideoWriter('visioneye-pinpoint.avi', cv2.VideoWriter_fourcc(*'MJPG'), fps, (w, h))
+out = cv2.VideoWriter("visioneye-pinpoint.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))

 center_point = (-10, h)
@@ -98,7 +98,7 @@ keywords: Ultralytics, YOLOv8, Object Detection, Object Tracking, IDetection, Vi
     out.write(im0)
     cv2.imshow("visioneye-pinpoint", im0)

-    if cv2.waitKey(1) & 0xFF == ord('q'):
+    if cv2.waitKey(1) & 0xFF == ord("q"):
         break

 out.release()
@@ -109,8 +109,9 @@ keywords: Ultralytics, YOLOv8, Object Detection, Object Tracking, IDetection, Vi
 === "VisionEye with Distance Calculation"

     ```python
-    import cv2
     import math

+    import cv2
     from ultralytics import YOLO
     from ultralytics.utils.plotting import Annotator, colors
@@ -119,7 +120,7 @@ keywords: Ultralytics, YOLOv8, Object Detection, Object Tracking, IDetection, Vi

     w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

-    out = cv2.VideoWriter('visioneye-distance-calculation.avi', cv2.VideoWriter_fourcc(*'MJPG'), fps, (w, h))
+    out = cv2.VideoWriter("visioneye-distance-calculation.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))

     center_point = (0, h)
     pixel_per_meter = 10
@@ -144,18 +145,18 @@ keywords: Ultralytics, YOLOv8, Object Detection, Object Tracking, IDetection, Vi
             annotator.box_label(box, label=str(track_id), color=bbox_clr)
             annotator.visioneye(box, center_point)

             x1, y1 = int((box[0] + box[2]) // 2), int((box[1] + box[3]) // 2)  # Bounding box centroid

-            distance = (math.sqrt((x1 - center_point[0]) ** 2 + (y1 - center_point[1]) ** 2))/pixel_per_meter
+            distance = (math.sqrt((x1 - center_point[0]) ** 2 + (y1 - center_point[1]) ** 2)) / pixel_per_meter

-            text_size, _ = cv2.getTextSize(f"Distance: {distance:.2f} m", cv2.FONT_HERSHEY_SIMPLEX,1.2, 3)
-            cv2.rectangle(im0, (x1, y1 - text_size[1] - 10),(x1 + text_size[0] + 10, y1), txt_background, -1)
-            cv2.putText(im0, f"Distance: {distance:.2f} m",(x1, y1 - 5), cv2.FONT_HERSHEY_SIMPLEX, 1.2,txt_color, 3)
+            text_size, _ = cv2.getTextSize(f"Distance: {distance:.2f} m", cv2.FONT_HERSHEY_SIMPLEX, 1.2, 3)
+            cv2.rectangle(im0, (x1, y1 - text_size[1] - 10), (x1 + text_size[0] + 10, y1), txt_background, -1)
+            cv2.putText(im0, f"Distance: {distance:.2f} m", (x1, y1 - 5), cv2.FONT_HERSHEY_SIMPLEX, 1.2, txt_color, 3)

             out.write(im0)
             cv2.imshow("visioneye-distance-calculation", im0)

-            if cv2.waitKey(1) & 0xFF == ord('q'):
+            if cv2.waitKey(1) & 0xFF == ord("q"):
                 break

     out.release()
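The distance line in the last hunk is plain Euclidean distance from the anchor point, scaled by a pixels-per-meter calibration constant. A tiny standalone check of the same arithmetic:

```python
# Standalone check of the distance formula from the hunk above.
import math

center_point = (0, 720)  # assumption: bottom-left anchor for a 720p frame
pixel_per_meter = 10  # calibration constant, as in the snippet

x1, y1 = 400, 420  # assumption: a sample bounding-box centroid
distance = math.sqrt((x1 - center_point[0]) ** 2 + (y1 - center_point[1]) ** 2) / pixel_per_meter
print(f"Distance: {distance:.2f} m")  # sqrt(400**2 + 300**2) / 10 -> Distance: 50.00 m
```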
@@ -39,28 +39,30 @@ Monitoring workouts through pose estimation with [Ultralytics YOLOv8](https://gi
 === "Workouts Monitoring"

     ```python
-    from ultralytics import YOLO, solutions
     import cv2
+    from ultralytics import YOLO, solutions

     model = YOLO("yolov8n-pose.pt")
     cap = cv2.VideoCapture("path/to/video/file.mp4")
     assert cap.isOpened(), "Error reading video file"
     w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

-    gym_object = solutions.AIGym(line_thickness=2,
-                                 view_img=True,
-                                 pose_type="pushup",
-                                 kpts_to_check=[6, 8, 10])
+    gym_object = solutions.AIGym(
+        line_thickness=2,
+        view_img=True,
+        pose_type="pushup",
+        kpts_to_check=[6, 8, 10],
+    )

     frame_count = 0
     while cap.isOpened():
         success, im0 = cap.read()
         if not success:
             print("Video frame is empty or video processing has been successfully completed.")
             break
         frame_count += 1
         results = model.track(im0, verbose=False)  # Tracking recommended
-        #results = model.predict(im0)  # Prediction also supported
+        # results = model.predict(im0)  # Prediction also supported
         im0 = gym_object.start_counting(im0, results, frame_count)

     cv2.destroyAllWindows()
@@ -69,30 +71,32 @@ Monitoring workouts through pose estimation with [Ultralytics YOLOv8](https://gi
 === "Workouts Monitoring with Save Output"

     ```python
-    from ultralytics import YOLO, solutions
     import cv2
+    from ultralytics import YOLO, solutions

     model = YOLO("yolov8n-pose.pt")
     cap = cv2.VideoCapture("path/to/video/file.mp4")
     assert cap.isOpened(), "Error reading video file"
     w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

-    video_writer = cv2.VideoWriter("workouts.avi", cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
+    video_writer = cv2.VideoWriter("workouts.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

-    gym_object = solutions.AIGym(line_thickness=2,
-                                 view_img=True,
-                                 pose_type="pushup",
-                                 kpts_to_check=[6, 8, 10])
+    gym_object = solutions.AIGym(
+        line_thickness=2,
+        view_img=True,
+        pose_type="pushup",
+        kpts_to_check=[6, 8, 10],
+    )

     frame_count = 0
     while cap.isOpened():
         success, im0 = cap.read()
         if not success:
             print("Video frame is empty or video processing has been successfully completed.")
             break
         frame_count += 1
         results = model.track(im0, verbose=False)  # Tracking recommended
-        #results = model.predict(im0)  # Prediction also supported
+        # results = model.predict(im0)  # Prediction also supported
         im0 = gym_object.start_counting(im0, results, frame_count)
         video_writer.write(im0)
@@ -79,7 +79,7 @@ This section will address common issues faced while training and their respectiv
 - Make sure you pass the path to your `.yaml` file as the `data` argument when calling `model.train()`, as shown below:

 ```python
-model.train(data='/path/to/your/data.yaml', batch=4)
+model.train(data="/path/to/your/data.yaml", batch=4)
 ```

 #### Accelerating Training with Multiple GPUs
@@ -98,7 +98,7 @@ model.train(data='/path/to/your/data.yaml', batch=4)

 ```python
 # Adjust the batch size and other settings as needed to optimize training speed
-model.train(data='/path/to/your/data.yaml', batch=32, multi_scale=True)
+model.train(data="/path/to/your/data.yaml", batch=32, multi_scale=True)
 ```

 #### Continuous Monitoring Parameters
@@ -221,10 +221,10 @@ yolo task=detect mode=segment model=yolov8n-seg.pt source='path/to/car.mp4' show
 from ultralytics import YOLO

 # Load a pre-trained YOLOv8 model
-model = YOLO('yolov8n.pt')
+model = YOLO("yolov8n.pt")

 # Specify the source image
-source = 'https://ultralytics.com/images/bus.jpg'
+source = "https://ultralytics.com/images/bus.jpg"

 # Make predictions
 results = model.predict(source, save=True, imgsz=320, conf=0.5)
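The "Accelerating Training with Multiple GPUs" heading above is followed only by a batch-size snippet in this diff. For completeness, a sketch of the `device` argument that spreads training across GPUs; the two-GPU setup is an assumption.

```python
# Sketch of multi-GPU training via the device argument, assuming GPUs 0 and 1.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.train(data="/path/to/your/data.yaml", batch=32, device=[0, 1])  # list of GPU indices
```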
@@ -28,9 +28,10 @@ When using threads in Python, it's important to recognize patterns that can lead

 ```python
 # Unsafe: Sharing a single model instance across threads
-from ultralytics import YOLO
 from threading import Thread

+from ultralytics import YOLO
+
 # Instantiate the model outside the thread
 shared_model = YOLO("yolov8n.pt")
@@ -54,9 +55,10 @@ Similarly, here is an unsafe pattern with multiple YOLO model instances:

 ```python
 # Unsafe: Sharing multiple model instances across threads can still lead to issues
-from ultralytics import YOLO
 from threading import Thread

+from ultralytics import YOLO
+
 # Instantiate multiple models outside the thread
 shared_model_1 = YOLO("yolov8n_1.pt")
 shared_model_2 = YOLO("yolov8n_2.pt")
@@ -85,9 +87,10 @@ Here's how to instantiate a YOLO model inside each thread for safe parallel infe

 ```python
 # Safe: Instantiating a single model inside each thread
-from ultralytics import YOLO
 from threading import Thread

+from ultralytics import YOLO
+

 def thread_safe_predict(image_path):
     """Predict on an image using a new YOLO model instance in a thread-safe manner; takes image path as input."""
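The safe pattern above is cut off at the function signature. Completing it for reference: each thread constructs its own model, so no mutable state is shared; the image paths are placeholders.

```python
# Completing the thread-safe pattern from the hunk above.
from threading import Thread

from ultralytics import YOLO


def thread_safe_predict(image_path):
    """Predict on an image using a new YOLO model instance in a thread-safe manner."""
    local_model = YOLO("yolov8n.pt")  # a fresh instance per thread, never shared
    return local_model.predict(image_path)


# Start two threads, each with its own model instance
t1 = Thread(target=thread_safe_predict, args=("image1.jpg",))
t2 = Thread(target=thread_safe_predict, args=("image2.jpg",))
t1.start()
t2.start()
t1.join()
t2.join()
```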
@@ -57,7 +57,7 @@ When adding new functions or classes, please include a [Google-style docstring](

 === "Google-style"

-    This example shows both Google-style docstrings. Note that both input and output `types` must always be enclosed by parentheses, i.e. `(bool)`.
+    This example shows a Google-style docstring. Note that both input and output `types` must always be enclosed by parentheses, i.e. `(bool)`.
     ```python
     def example_function(arg1, arg2=4):
         """
@@ -80,7 +80,7 @@ When adding new functions or classes, please include a [Google-style docstring](

 === "Google-style with type hints"

-    This example shows both Google-style docstrings and argument and return type hints, though both are not required, one can be used without the other.
+    This example shows both a Google-style docstring and argument and return type hints, though both are not required, one can be used without the other.
     ```python
     def example_function(arg1: int, arg2: int = 4) -> bool:
         """
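Both hunks above break off right after the opening `"""`. For reference, a complete docstring in the style those guidelines describe, with types enclosed in parentheses as the corrected sentence requires:

```python
def example_function(arg1, arg2=4):
    """
    Example function demonstrating a complete Google-style docstring.

    Args:
        arg1 (int): The first argument.
        arg2 (int): The second argument, with a default value of 4.

    Returns:
        (bool): True if the arguments are equal, False otherwise.

    Examples:
        >>> example_function(1, 1)
        True
    """
    return arg1 == arg2
```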
@@ -85,7 +85,7 @@ To gain insight into the current configuration of your settings, you can view th
 print(settings)

 # Return analytics and crash reporting setting
-value = settings['sync']
+value = settings["sync"]
 ```

 === "CLI"
@@ -106,7 +106,7 @@ Ultralytics allows users to easily modify their settings. Changes can be perform
 from ultralytics import settings

 # Disable analytics and crash reporting
-settings.update({'sync': False})
+settings.update({"sync": False})

 # Reset settings to default values
 settings.reset()
@@ -117,21 +117,22 @@ After creating the AWS CloudFormation Stack, the next step is to deploy YOLOv8.
 ```python
 import json

+
 def output_fn(prediction_output, content_type):
     """Formats model outputs as JSON string according to content_type, extracting attributes like boxes, masks, keypoints."""
     print("Executing output_fn from inference.py ...")
     infer = {}
     for result in prediction_output:
         if result.boxes is not None:
-            infer['boxes'] = result.boxes.numpy().data.tolist()
+            infer["boxes"] = result.boxes.numpy().data.tolist()
         if result.masks is not None:
-            infer['masks'] = result.masks.numpy().data.tolist()
+            infer["masks"] = result.masks.numpy().data.tolist()
         if result.keypoints is not None:
-            infer['keypoints'] = result.keypoints.numpy().data.tolist()
+            infer["keypoints"] = result.keypoints.numpy().data.tolist()
         if result.obb is not None:
-            infer['obb'] = result.obb.numpy().data.tolist()
+            infer["obb"] = result.obb.numpy().data.tolist()
         if result.probs is not None:
-            infer['probs'] = result.probs.numpy().data.tolist()
+            infer["probs"] = result.probs.numpy().data.tolist()
     return json.dumps(infer)
 ```
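A quick local smoke test of the handler above can be useful before deploying. The sketch below assumes the `output_fn` definition from the hunk is in scope and that `yolov8n.pt` weights are available; `content_type` is accepted but unused by this handler.

```python
# Local smoke test of output_fn; assumes the definition above is importable or in scope.
import json

from ultralytics import YOLO

model = YOLO("yolov8n.pt")
prediction_output = model("https://ultralytics.com/images/bus.jpg")

payload = output_fn(prediction_output, "application/json")
boxes = json.loads(payload)["boxes"]  # rows of [x1, y1, x2, y2, conf, cls]
print(len(boxes), "detections")
```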
@@ -67,17 +67,14 @@ Before diving into the usage instructions, be sure to check out the range of [YO
 from ultralytics import YOLO

 # Step 1: Creating a ClearML Task
-task = Task.init(
-    project_name="my_project",
-    task_name="my_yolov8_task"
-)
+task = Task.init(project_name="my_project", task_name="my_yolov8_task")

 # Step 2: Selecting the YOLOv8 Model
 model_variant = "yolov8n"
 task.set_parameter("model_variant", model_variant)

 # Step 3: Loading the YOLOv8 Model
-model = YOLO(f'{model_variant}.pt')
+model = YOLO(f"{model_variant}.pt")

 # Step 4: Setting Up Training Arguments
 args = dict(data="coco8.yaml", epochs=16)
@@ -74,12 +74,12 @@ Before diving into the usage instructions, be sure to check out the range of [YO

 # train the model
 results = model.train(
     data="coco8.yaml",
     project="comet-example-yolov8-coco128",
     batch=32,
     save_period=1,
     save_json=True,
-    epochs=3
+    epochs=3,
 )
 ```
@@ -144,7 +144,7 @@ Comet ML allows you to specify how often batches of image predictions are logged
 ```python
 import os

-os.environ['COMET_EVAL_BATCH_LOGGING_INTERVAL'] = "4"
+os.environ["COMET_EVAL_BATCH_LOGGING_INTERVAL"] = "4"
 ```

 ### Disabling Confusion Matrix Logging
@@ -83,16 +83,16 @@ Before diving into the usage instructions, be sure to check out the range of [YO
 from ultralytics import YOLO

 # Load the YOLOv8 model
-model = YOLO('yolov8n.pt')
+model = YOLO("yolov8n.pt")

 # Export the model to CoreML format
-model.export(format='coreml')  # creates 'yolov8n.mlpackage'
+model.export(format="coreml")  # creates 'yolov8n.mlpackage'

 # Load the exported CoreML model
-coreml_model = YOLO('yolov8n.mlpackage')
+coreml_model = YOLO("yolov8n.mlpackage")

 # Run inference
-results = coreml_model('https://ultralytics.com/images/bus.jpg')
+results = coreml_model("https://ultralytics.com/images/bus.jpg")
 ```

 === "CLI"
@@ -149,7 +149,7 @@ If you are using a Jupyter Notebook and you want to display the generated DVC pl
 from IPython.display import HTML

 # Display the DVC plots as HTML
-HTML(filename='./dvc_plots/index.html')
+HTML(filename="./dvc_plots/index.html")
 ```

 This code will render the HTML file containing the DVC plots directly in your Jupyter Notebook, providing an easy and convenient way to analyze the visualized experiment data.
@@ -73,16 +73,16 @@ Before diving into the usage instructions, it's important to note that while all
 from ultralytics import YOLO

 # Load the YOLOv8 model
-model = YOLO('yolov8n.pt')
+model = YOLO("yolov8n.pt")

 # Export the model to TFLite Edge TPU format
-model.export(format='edgetpu')  # creates 'yolov8n_full_integer_quant_edgetpu.tflite’
+model.export(format="edgetpu")  # creates 'yolov8n_full_integer_quant_edgetpu.tflite’

 # Load the exported TFLite Edge TPU model
-edgetpu_model = YOLO('yolov8n_full_integer_quant_edgetpu.tflite')
+edgetpu_model = YOLO("yolov8n_full_integer_quant_edgetpu.tflite")

 # Run inference
-results = edgetpu_model('https://ultralytics.com/images/bus.jpg')
+results = edgetpu_model("https://ultralytics.com/images/bus.jpg")
 ```

 === "CLI"
@@ -44,9 +44,8 @@ pip install gradio
 This section provides the Python code used to create the Gradio interface with the Ultralytics YOLOv8 model. Supports classification tasks, detection tasks, segmentation tasks, and key point tasks.

 ```python
-import PIL.Image as Image
 import gradio as gr
+import PIL.Image as Image

 from ultralytics import ASSETS, YOLO

 model = YOLO("yolov8n.pt")
@@ -75,7 +74,7 @@ iface = gr.Interface(
     inputs=[
         gr.Image(type="pil", label="Upload Image"),
         gr.Slider(minimum=0, maximum=1, value=0.25, label="Confidence threshold"),
-        gr.Slider(minimum=0, maximum=1, value=0.45, label="IoU threshold")
+        gr.Slider(minimum=0, maximum=1, value=0.45, label="IoU threshold"),
     ],
     outputs=gr.Image(type="pil", label="Result"),
     title="Ultralytics Gradio",
@@ -83,10 +82,10 @@ iface = gr.Interface(
     examples=[
         [ASSETS / "bus.jpg", 0.25, 0.45],
         [ASSETS / "zidane.jpg", 0.25, 0.45],
-    ]
+    ],
 )

-if __name__ == '__main__':
+if __name__ == "__main__":
     iface.launch()
 ```
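The `gr.Interface` in the hunks above needs a prediction function for its `fn` argument, which this diff never shows. A sketch of one is below; it assumes `results[0].plot()` returns a BGR array that needs reversing for PIL, and that `ASSETS` ships with the package.

```python
# Sketch of the prediction function the Interface above would wire up via fn=.
import gradio as gr
import PIL.Image as Image

from ultralytics import ASSETS, YOLO

model = YOLO("yolov8n.pt")


def predict_image(img, conf_threshold, iou_threshold):
    """Run YOLOv8 inference on a PIL image and return the annotated result."""
    results = model.predict(source=img, conf=conf_threshold, iou=iou_threshold)
    return Image.fromarray(results[0].plot()[..., ::-1])  # BGR -> RGB for PIL


iface = gr.Interface(
    fn=predict_image,
    inputs=[
        gr.Image(type="pil", label="Upload Image"),
        gr.Slider(minimum=0, maximum=1, value=0.25, label="Confidence threshold"),
        gr.Slider(minimum=0, maximum=1, value=0.45, label="IoU threshold"),
    ],
    outputs=gr.Image(type="pil", label="Result"),
    title="Ultralytics Gradio",
    examples=[[ASSETS / "bus.jpg", 0.25, 0.45]],
)

if __name__ == "__main__":
    iface.launch()
```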
@@ -42,7 +42,7 @@ Make sure that MLflow logging is enabled in Ultralytics settings. Usually, this
 from ultralytics import settings

 # Update a setting
-settings.update({'mlflow': True})
+settings.update({"mlflow": True})

 # Reset settings to default values
 settings.reset()
@@ -75,16 +75,16 @@ Before diving into the usage instructions, it's important to note that while all
 from ultralytics import YOLO

 # Load the YOLOv8 model
-model = YOLO('yolov8n.pt')
+model = YOLO("yolov8n.pt")

 # Export the model to NCNN format
-model.export(format='ncnn')  # creates '/yolov8n_ncnn_model'
+model.export(format="ncnn")  # creates '/yolov8n_ncnn_model'

 # Load the exported NCNN model
-ncnn_model = YOLO('./yolov8n_ncnn_model')
+ncnn_model = YOLO("./yolov8n_ncnn_model")

 # Run inference
-results = ncnn_model('https://ultralytics.com/images/bus.jpg')
+results = ncnn_model("https://ultralytics.com/images/bus.jpg")
 ```

 === "CLI"
@@ -109,10 +109,7 @@ With your YOLOv8 model in ONNX format, you can deploy and run inferences using D
 model_path = "path/to/yolov8n.onnx"

 # Set up the DeepSparse Pipeline
-yolo_pipeline = Pipeline.create(
-    task="yolov8",
-    model_path=model_path
-)
+yolo_pipeline = Pipeline.create(task="yolov8", model_path=model_path)

 # Run the model on your images
 images = ["path/to/image.jpg"]
@@ -91,16 +91,16 @@ Before diving into the usage instructions, be sure to check out the range of [YO
 from ultralytics import YOLO

 # Load the YOLOv8 model
-model = YOLO('yolov8n.pt')
+model = YOLO("yolov8n.pt")

 # Export the model to ONNX format
-model.export(format='onnx')  # creates 'yolov8n.onnx'
+model.export(format="onnx")  # creates 'yolov8n.onnx'

 # Load the exported ONNX model
-onnx_model = YOLO('yolov8n.onnx')
+onnx_model = YOLO("yolov8n.onnx")

 # Run inference
-results = onnx_model('https://ultralytics.com/images/bus.jpg')
+results = onnx_model("https://ultralytics.com/images/bus.jpg")
 ```

 === "CLI"
@@ -35,16 +35,16 @@ Export a YOLOv8n model to OpenVINO format and run inference with the exported mo
 from ultralytics import YOLO

 # Load a YOLOv8n PyTorch model
-model = YOLO('yolov8n.pt')
+model = YOLO("yolov8n.pt")

 # Export the model
-model.export(format='openvino')  # creates 'yolov8n_openvino_model/'
+model.export(format="openvino")  # creates 'yolov8n_openvino_model/'

 # Load the exported OpenVINO model
-ov_model = YOLO('yolov8n_openvino_model/')
+ov_model = YOLO("yolov8n_openvino_model/")

 # Run inference
-results = ov_model('https://ultralytics.com/images/bus.jpg')
+results = ov_model("https://ultralytics.com/images/bus.jpg")
 ```
 === "CLI"
@@ -259,10 +259,10 @@ To reproduce the Ultralytics benchmarks above on all export [formats](../modes/e
 from ultralytics import YOLO

 # Load a YOLOv8n PyTorch model
-model = YOLO('yolov8n.pt')
+model = YOLO("yolov8n.pt")

 # Benchmark YOLOv8n speed and accuracy on the COCO8 dataset for all all export formats
-results= model.benchmarks(data='coco8.yaml')
+results = model.benchmarks(data="coco8.yaml")
 ```
 === "CLI"
@@ -77,16 +77,16 @@ Before diving into the usage instructions, it's important to note that while all
 from ultralytics import YOLO

 # Load the YOLOv8 model
-model = YOLO('yolov8n.pt')
+model = YOLO("yolov8n.pt")

 # Export the model to PaddlePaddle format
-model.export(format='paddle')  # creates '/yolov8n_paddle_model'
+model.export(format="paddle")  # creates '/yolov8n_paddle_model'

 # Load the exported PaddlePaddle model
-paddle_model = YOLO('./yolov8n_paddle_model')
+paddle_model = YOLO("./yolov8n_paddle_model")

 # Run inference
-results = paddle_model('https://ultralytics.com/images/bus.jpg')
+results = paddle_model("https://ultralytics.com/images/bus.jpg")
 ```

 === "CLI"
@@ -50,10 +50,10 @@ To install the required packages, run:
 from ultralytics import YOLO

 # Load a YOLOv8n model
-model = YOLO('yolov8n.pt')
+model = YOLO("yolov8n.pt")

 # Start tuning hyperparameters for YOLOv8n training on the COCO8 dataset
-result_grid = model.tune(data='coco8.yaml', use_ray=True)
+result_grid = model.tune(data="coco8.yaml", use_ray=True)
 ```

 ## `tune()` Method Parameters
@@ -112,10 +112,12 @@ In this example, we demonstrate how to use a custom search space for hyperparame
 model = YOLO("yolov8n.pt")

 # Run Ray Tune on the model
-result_grid = model.tune(data="coco8.yaml",
-                         space={"lr0": tune.uniform(1e-5, 1e-1)},
-                         epochs=50,
-                         use_ray=True)
+result_grid = model.tune(
+    data="coco8.yaml",
+    space={"lr0": tune.uniform(1e-5, 1e-1)},
+    epochs=50,
+    use_ray=True,
+)
 ```

 In the code snippet above, we create a YOLO model with the "yolov8n.pt" pretrained weights. Then, we call the `tune()` method, specifying the dataset configuration with "coco8.yaml". We provide a custom search space for the initial learning rate `lr0` using a dictionary with the key "lr0" and the value `tune.uniform(1e-5, 1e-1)`. Finally, we pass additional training arguments, such as the number of epochs directly to the tune method as `epochs=50`.
@@ -164,10 +166,14 @@ You can plot the history of reported metrics for each trial to see how the metri
 import matplotlib.pyplot as plt

 for result in result_grid:
-    plt.plot(result.metrics_dataframe["training_iteration"], result.metrics_dataframe["mean_accuracy"], label=f"Trial {i}")
+    plt.plot(
+        result.metrics_dataframe["training_iteration"],
+        result.metrics_dataframe["mean_accuracy"],
+        label=f"Trial {i}",
+    )

-plt.xlabel('Training Iterations')
-plt.ylabel('Mean Accuracy')
+plt.xlabel("Training Iterations")
+plt.ylabel("Mean Accuracy")
 plt.legend()
 plt.show()
 ```
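One wrinkle worth noting in the plotting hunk: the loop labels each curve with `f"Trial {i}"` but never defines `i`, before or after the reformat, so it raises a `NameError` as written. A variant using `enumerate`, assuming `result_grid` from the tuning snippet above, avoids that:

```python
# Corrected plotting loop: enumerate supplies the trial index i.
import matplotlib.pyplot as plt

for i, result in enumerate(result_grid):
    plt.plot(
        result.metrics_dataframe["training_iteration"],
        result.metrics_dataframe["mean_accuracy"],
        label=f"Trial {i}",
    )

plt.xlabel("Training Iterations")
plt.ylabel("Mean Accuracy")
plt.legend()
plt.show()
```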
@@ -85,16 +85,16 @@ Before diving into the usage instructions, be sure to check out the range of [YO
 from ultralytics import YOLO

 # Load the YOLOv8 model
-model = YOLO('yolov8n.pt')
+model = YOLO("yolov8n.pt")

 # Export the model to TensorRT format
-model.export(format='engine')  # creates 'yolov8n.engine'
+model.export(format="engine")  # creates 'yolov8n.engine'

 # Load the exported TensorRT model
-tensorrt_model = YOLO('yolov8n.engine')
+tensorrt_model = YOLO("yolov8n.engine")

 # Run inference
-results = tensorrt_model('https://ultralytics.com/images/bus.jpg')
+results = tensorrt_model("https://ultralytics.com/images/bus.jpg")
 ```

 === "CLI"
@@ -434,7 +434,7 @@ Expand sections below for information on how these models were exported and test
 result = model.predict(
     [img] * 8,  # batch=8 of the same image
     verbose=False,
-    device="cuda"
+    device="cuda",
 )
 ```
@@ -451,7 +451,7 @@ Expand sections below for information on how these models were exported and test
     batch=1,
     imgsz=640,
     verbose=False,
-    device="cuda"
+    device="cuda",
 )
 ```
@@ -81,16 +81,16 @@ Before diving into the usage instructions, it's important to note that while all
 from ultralytics import YOLO

 # Load the YOLOv8 model
-model = YOLO('yolov8n.pt')
+model = YOLO("yolov8n.pt")

 # Export the model to TF GraphDef format
-model.export(format='pb')  # creates 'yolov8n.pb'
+model.export(format="pb")  # creates 'yolov8n.pb'

 # Load the exported TF GraphDef model
-tf_graphdef_model = YOLO('yolov8n.pb')
+tf_graphdef_model = YOLO("yolov8n.pb")

 # Run inference
-results = tf_graphdef_model('https://ultralytics.com/images/bus.jpg')
+results = tf_graphdef_model("https://ultralytics.com/images/bus.jpg")
 ```

 === "CLI"
@@ -75,16 +75,16 @@ Before diving into the usage instructions, it's important to note that while all
 from ultralytics import YOLO

 # Load the YOLOv8 model
-model = YOLO('yolov8n.pt')
+model = YOLO("yolov8n.pt")

 # Export the model to TF SavedModel format
-model.export(format='saved_model')  # creates '/yolov8n_saved_model'
+model.export(format="saved_model")  # creates '/yolov8n_saved_model'

 # Load the exported TF SavedModel model
-tf_savedmodel_model = YOLO('./yolov8n_saved_model')
+tf_savedmodel_model = YOLO("./yolov8n_saved_model")

 # Run inference
-results = tf_savedmodel_model('https://ultralytics.com/images/bus.jpg')
+results = tf_savedmodel_model("https://ultralytics.com/images/bus.jpg")
 ```

 === "CLI"
@@ -73,16 +73,16 @@ Before diving into the usage instructions, it's important to note that while all
 from ultralytics import YOLO

 # Load the YOLOv8 model
-model = YOLO('yolov8n.pt')
+model = YOLO("yolov8n.pt")

 # Export the model to TF.js format
-model.export(format='tfjs')  # creates '/yolov8n_web_model'
+model.export(format="tfjs")  # creates '/yolov8n_web_model'

 # Load the exported TF.js model
-tfjs_model = YOLO('./yolov8n_web_model')
+tfjs_model = YOLO("./yolov8n_web_model")

 # Run inference
-results = tfjs_model('https://ultralytics.com/images/bus.jpg')
+results = tfjs_model("https://ultralytics.com/images/bus.jpg")
 ```

 === "CLI"
@@ -79,16 +79,16 @@ Before diving into the usage instructions, it's important to note that while all
 from ultralytics import YOLO

 # Load the YOLOv8 model
-model = YOLO('yolov8n.pt')
+model = YOLO("yolov8n.pt")

 # Export the model to TFLite format
-model.export(format='tflite')  # creates 'yolov8n_float32.tflite'
+model.export(format="tflite")  # creates 'yolov8n_float32.tflite'

 # Load the exported TFLite model
-tflite_model = YOLO('yolov8n_float32.tflite')
+tflite_model = YOLO("yolov8n_float32.tflite")

 # Run inference
-results = tflite_model('https://ultralytics.com/images/bus.jpg')
+results = tflite_model("https://ultralytics.com/images/bus.jpg")
 ```

 === "CLI"
@@ -83,16 +83,16 @@ Before diving into the usage instructions, it's important to note that while all
 from ultralytics import YOLO

 # Load the YOLOv8 model
-model = YOLO('yolov8n.pt')
+model = YOLO("yolov8n.pt")

 # Export the model to TorchScript format
-model.export(format='torchscript')  # creates 'yolov8n.torchscript'
+model.export(format="torchscript")  # creates 'yolov8n.torchscript'

 # Load the exported TorchScript model
-torchscript_model = YOLO('yolov8n.torchscript')
+torchscript_model = YOLO("yolov8n.torchscript")

 # Run inference
-results = torchscript_model('https://ultralytics.com/images/bus.jpg')
+results = torchscript_model("https://ultralytics.com/images/bus.jpg")
 ```

 === "CLI"
@@ -63,9 +63,9 @@ Before diving into the usage instructions for YOLOv8 model training with Weights

 === "Python"
     ```python
+    import wandb
     from ultralytics import YOLO
     from wandb.integration.ultralytics import add_wandb_callback
-    import wandb

     # Step 1: Initialize a Weights & Biases run
     wandb.init(project="ultralytics", job_type="training")
@@ -56,16 +56,16 @@ To perform object detection on an image, use the `predict` method as shown below
 from ultralytics.models.fastsam import FastSAMPrompt

 # Define an inference source
-source = 'path/to/bus.jpg'
+source = "path/to/bus.jpg"

 # Create a FastSAM model
-model = FastSAM('FastSAM-s.pt')  # or FastSAM-x.pt
+model = FastSAM("FastSAM-s.pt")  # or FastSAM-x.pt

 # Run inference on an image
-everything_results = model(source, device='cpu', retina_masks=True, imgsz=1024, conf=0.4, iou=0.9)
+everything_results = model(source, device="cpu", retina_masks=True, imgsz=1024, conf=0.4, iou=0.9)

 # Prepare a Prompt Process object
-prompt_process = FastSAMPrompt(source, everything_results, device='cpu')
+prompt_process = FastSAMPrompt(source, everything_results, device="cpu")

 # Everything prompt
 ann = prompt_process.everything_prompt()
@@ -74,13 +74,13 @@ To perform object detection on an image, use the `predict` method as shown below
 ann = prompt_process.box_prompt(bbox=[200, 200, 300, 300])

 # Text prompt
-ann = prompt_process.text_prompt(text='a photo of a dog')
+ann = prompt_process.text_prompt(text="a photo of a dog")

 # Point prompt
 # points default [[0,0]] [[x1,y1],[x2,y2]]
 # point_label default [0] [1,0] 0:background, 1:foreground
 ann = prompt_process.point_prompt(points=[[200, 200]], pointlabel=[1])
-prompt_process.plot(annotations=ann, output='./')
+prompt_process.plot(annotations=ann, output="./")
 ```

 === "CLI"
@@ -104,10 +104,10 @@ Validation of the model on a dataset can be done as follows:
 from ultralytics import FastSAM

 # Create a FastSAM model
-model = FastSAM('FastSAM-s.pt')  # or FastSAM-x.pt
+model = FastSAM("FastSAM-s.pt")  # or FastSAM-x.pt

 # Validate the model
-results = model.val(data='coco8-seg.yaml')
+results = model.val(data="coco8-seg.yaml")
 ```

 === "CLI"
@@ -131,7 +131,7 @@ To perform object tracking on an image, use the `track` method as shown below:
 from ultralytics import FastSAM

 # Create a FastSAM model
-model = FastSAM('FastSAM-s.pt')  # or FastSAM-x.pt
+model = FastSAM("FastSAM-s.pt")  # or FastSAM-x.pt

 # Track with a FastSAM model on a video
 results = model.track(source="path/to/video.mp4", imgsz=640)
@@ -53,16 +53,16 @@ Note the below example is for YOLOv8 [Detect](../tasks/detect.md) models for obj
 from ultralytics import YOLO

 # Load a COCO-pretrained YOLOv8n model
-model = YOLO('yolov8n.pt')
+model = YOLO("yolov8n.pt")

 # Display model information (optional)
 model.info()

 # Train the model on the COCO8 example dataset for 100 epochs
-results = model.train(data='coco8.yaml', epochs=100, imgsz=640)
+results = model.train(data="coco8.yaml", epochs=100, imgsz=640)

 # Run inference with the YOLOv8n model on the 'bus.jpg' image
-results = model('path/to/bus.jpg')
+results = model("path/to/bus.jpg")
 ```

 === "CLI"
|
|
@@ -77,10 +77,10 @@ You can download the model [here](https://github.com/ChaoningZhang/MobileSAM/blo
 from ultralytics import SAM

 # Load the model
-model = SAM('mobile_sam.pt')
+model = SAM("mobile_sam.pt")

 # Predict a segment based on a point prompt
-model.predict('ultralytics/assets/zidane.jpg', points=[900, 370], labels=[1])
+model.predict("ultralytics/assets/zidane.jpg", points=[900, 370], labels=[1])
 ```

 ### Box Prompt
@@ -93,10 +93,10 @@ You can download the model [here](https://github.com/ChaoningZhang/MobileSAM/blo
 from ultralytics import SAM

 # Load the model
-model = SAM('mobile_sam.pt')
+model = SAM("mobile_sam.pt")

 # Predict a segment based on a box prompt
-model.predict('ultralytics/assets/zidane.jpg', bboxes=[439, 437, 524, 709])
+model.predict("ultralytics/assets/zidane.jpg", bboxes=[439, 437, 524, 709])
 ```

 We have implemented `MobileSAM` and `SAM` using the same API. For more usage information, please see the [SAM page](sam.md).

@@ -48,16 +48,16 @@ This example provides simple RT-DETR training and inference examples. For full d
 from ultralytics import RTDETR

 # Load a COCO-pretrained RT-DETR-l model
-model = RTDETR('rtdetr-l.pt')
+model = RTDETR("rtdetr-l.pt")

 # Display model information (optional)
 model.info()

 # Train the model on the COCO8 example dataset for 100 epochs
-results = model.train(data='coco8.yaml', epochs=100, imgsz=640)
+results = model.train(data="coco8.yaml", epochs=100, imgsz=640)

 # Run inference with the RT-DETR-l model on the 'bus.jpg' image
-results = model('path/to/bus.jpg')
+results = model("path/to/bus.jpg")
 ```

 === "CLI"

@@ -50,16 +50,16 @@ The Segment Anything Model can be employed for a multitude of downstream tasks t
 from ultralytics import SAM

 # Load a model
-model = SAM('sam_b.pt')
+model = SAM("sam_b.pt")

 # Display model information (optional)
 model.info()

 # Run inference with bboxes prompt
-model('ultralytics/assets/zidane.jpg', bboxes=[439, 437, 524, 709])
+model("ultralytics/assets/zidane.jpg", bboxes=[439, 437, 524, 709])

 # Run inference with points prompt
-model('ultralytics/assets/zidane.jpg', points=[900, 370], labels=[1])
+model("ultralytics/assets/zidane.jpg", points=[900, 370], labels=[1])
 ```

 !!! Example "Segment everything"
@@ -72,13 +72,13 @@ The Segment Anything Model can be employed for a multitude of downstream tasks t
 from ultralytics import SAM

 # Load a model
-model = SAM('sam_b.pt')
+model = SAM("sam_b.pt")

 # Display model information (optional)
 model.info()

 # Run inference
-model('path/to/image.jpg')
+model("path/to/image.jpg")
 ```

 === "CLI"
@@ -100,7 +100,7 @@ The Segment Anything Model can be employed for a multitude of downstream tasks t
 from ultralytics.models.sam import Predictor as SAMPredictor

 # Create SAMPredictor
-overrides = dict(conf=0.25, task='segment', mode='predict', imgsz=1024, model="mobile_sam.pt")
+overrides = dict(conf=0.25, task="segment", mode="predict", imgsz=1024, model="mobile_sam.pt")
 predictor = SAMPredictor(overrides=overrides)

 # Set image
@@ -121,7 +121,7 @@ The Segment Anything Model can be employed for a multitude of downstream tasks t
 from ultralytics.models.sam import Predictor as SAMPredictor

 # Create SAMPredictor
-overrides = dict(conf=0.25, task='segment', mode='predict', imgsz=1024, model="mobile_sam.pt")
+overrides = dict(conf=0.25, task="segment", mode="predict", imgsz=1024, model="mobile_sam.pt")
 predictor = SAMPredictor(overrides=overrides)

 # Segment with additional args
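Both `SAMPredictor` hunks stop at the comment just before the elided steps. A rough sketch of how the workflow continues, assuming the `set_image`/`reset_image` methods documented alongside this API (image path and prompt values are illustrative):

```python
from ultralytics.models.sam import Predictor as SAMPredictor

# Create SAMPredictor with the same overrides as in the hunks above
overrides = dict(conf=0.25, task="segment", mode="predict", imgsz=1024, model="mobile_sam.pt")
predictor = SAMPredictor(overrides=overrides)

# Set the image once, then prompt it repeatedly without re-encoding
predictor.set_image("ultralytics/assets/zidane.jpg")
results = predictor(bboxes=[439, 437, 524, 709])  # box prompt
results = predictor(points=[900, 370], labels=[1])  # point prompt

# Release the cached image features when finished
predictor.reset_image()
```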
@@ -150,27 +150,27 @@ Tests run on a 2023 Apple M2 Macbook with 16GB of RAM. To reproduce this test:
 === "Python"

 ```python
-from ultralytics import FastSAM, SAM, YOLO
+from ultralytics import SAM, YOLO, FastSAM

 # Profile SAM-b
-model = SAM('sam_b.pt')
+model = SAM("sam_b.pt")
 model.info()
-model('ultralytics/assets')
+model("ultralytics/assets")

 # Profile MobileSAM
-model = SAM('mobile_sam.pt')
+model = SAM("mobile_sam.pt")
 model.info()
-model('ultralytics/assets')
+model("ultralytics/assets")

 # Profile FastSAM-s
-model = FastSAM('FastSAM-s.pt')
+model = FastSAM("FastSAM-s.pt")
 model.info()
-model('ultralytics/assets')
+model("ultralytics/assets")

 # Profile YOLOv8n-seg
-model = YOLO('yolov8n-seg.pt')
+model = YOLO("yolov8n-seg.pt")
 model.info()
-model('ultralytics/assets')
+model("ultralytics/assets")
 ```

 ## Auto-Annotation: A Quick Path to Segmentation Datasets
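The profiling hunk above runs each model over `ultralytics/assets` but leaves the timing itself implicit. One way to reproduce comparable numbers is to wrap the inference call with `time.perf_counter`; a minimal sketch using the SAM-b entry:

```python
import time

from ultralytics import SAM

model = SAM("sam_b.pt")
model("ultralytics/assets")  # warm-up pass, excluded from the measurement

start = time.perf_counter()
results = model("ultralytics/assets")
elapsed = time.perf_counter() - start
print(f"{len(results)} images in {elapsed:.2f} s ({elapsed / len(results):.3f} s/im)")
```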
@@ -188,7 +188,7 @@ To auto-annotate your dataset with the Ultralytics framework, use the `auto_anno
 ```python
 from ultralytics.data.annotator import auto_annotate

-auto_annotate(data="path/to/images", det_model="yolov8x.pt", sam_model='sam_b.pt')
+auto_annotate(data="path/to/images", det_model="yolov8x.pt", sam_model="sam_b.pt")
 ```

 | Argument | Type | Description | Default |

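`auto_annotate` writes the generated labels to a folder derived from the image path when no destination is given; a sketch passing an explicit destination, assuming the function accepts an `output_dir` keyword:

```python
from ultralytics.data.annotator import auto_annotate

# det_model proposes boxes, sam_model converts them to segmentation masks;
# output_dir (assumed keyword) is where the YOLO-format .txt labels land
auto_annotate(
    data="path/to/images",
    det_model="yolov8x.pt",
    sam_model="sam_b.pt",
    output_dir="path/to/labels",
)
```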
@@ -55,16 +55,16 @@ In this example we validate YOLO-NAS-s on the COCO8 dataset.
 from ultralytics import NAS

 # Load a COCO-pretrained YOLO-NAS-s model
-model = NAS('yolo_nas_s.pt')
+model = NAS("yolo_nas_s.pt")

 # Display model information (optional)
 model.info()

 # Validate the model on the COCO8 example dataset
-results = model.val(data='coco8.yaml')
+results = model.val(data="coco8.yaml")

 # Run inference with the YOLO-NAS-s model on the 'bus.jpg' image
-results = model('path/to/bus.jpg')
+results = model("path/to/bus.jpg")
 ```

 === "CLI"

@@ -92,13 +92,13 @@ Object detection is straightforward with the `train` method, as illustrated belo
 from ultralytics import YOLOWorld

 # Load a pretrained YOLOv8s-worldv2 model
-model = YOLOWorld('yolov8s-worldv2.pt')
+model = YOLOWorld("yolov8s-worldv2.pt")

 # Train the model on the COCO8 example dataset for 100 epochs
-results = model.train(data='coco8.yaml', epochs=100, imgsz=640)
+results = model.train(data="coco8.yaml", epochs=100, imgsz=640)

 # Run inference with the YOLOv8n model on the 'bus.jpg' image
-results = model('path/to/bus.jpg')
+results = model("path/to/bus.jpg")
 ```

 === "CLI"
@@ -120,10 +120,10 @@ Object detection is straightforward with the `predict` method, as illustrated be
 from ultralytics import YOLOWorld

 # Initialize a YOLO-World model
-model = YOLOWorld('yolov8s-world.pt')  # or select yolov8m/l-world.pt for different sizes
+model = YOLOWorld("yolov8s-world.pt")  # or select yolov8m/l-world.pt for different sizes

 # Execute inference with the YOLOv8s-world model on the specified image
-results = model.predict('path/to/image.jpg')
+results = model.predict("path/to/image.jpg")

 # Show results
 results[0].show()
@@ -150,10 +150,10 @@ Model validation on a dataset is streamlined as follows:
 from ultralytics import YOLO

 # Create a YOLO-World model
-model = YOLO('yolov8s-world.pt')  # or select yolov8m/l-world.pt for different sizes
+model = YOLO("yolov8s-world.pt")  # or select yolov8m/l-world.pt for different sizes

 # Conduct model validation on the COCO8 example dataset
-metrics = model.val(data='coco8.yaml')
+metrics = model.val(data="coco8.yaml")
 ```

 === "CLI"
@@ -175,7 +175,7 @@ Object tracking with YOLO-World model on a video/images is streamlined as follow
 from ultralytics import YOLO

 # Create a YOLO-World model
-model = YOLO('yolov8s-world.pt')  # or select yolov8m/l-world.pt for different sizes
+model = YOLO("yolov8s-world.pt")  # or select yolov8m/l-world.pt for different sizes

 # Track with a YOLO-World model on a video
 results = model.track(source="path/to/video.mp4")
@@ -208,13 +208,13 @@ For instance, if your application only requires detecting 'person' and 'bus' obj
 from ultralytics import YOLO

 # Initialize a YOLO-World model
-model = YOLO('yolov8s-world.pt')  # or choose yolov8m/l-world.pt
+model = YOLO("yolov8s-world.pt")  # or choose yolov8m/l-world.pt

 # Define custom classes
 model.set_classes(["person", "bus"])

 # Execute prediction for specified categories on an image
-results = model.predict('path/to/image.jpg')
+results = model.predict("path/to/image.jpg")

 # Show results
 results[0].show()
@@ -232,7 +232,7 @@ You can also save a model after setting custom classes. By doing this you create
 from ultralytics import YOLO

 # Initialize a YOLO-World model
-model = YOLO('yolov8s-world.pt')  # or select yolov8m/l-world.pt
+model = YOLO("yolov8s-world.pt")  # or select yolov8m/l-world.pt

 # Define custom classes
 model.set_classes(["person", "bus"])
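The hunk above ends at `set_classes`; the save step described in the surrounding text would plausibly follow as below (the output filename is illustrative, matching the `custom_yolov8s.pt` loaded in the next hunk):

```python
from ultralytics import YOLO

model = YOLO("yolov8s-world.pt")
model.set_classes(["person", "bus"])

# Persist a model with the custom vocabulary baked in
model.save("custom_yolov8s.pt")
```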
@@ -247,10 +247,10 @@ You can also save a model after setting custom classes. By doing this you create
 from ultralytics import YOLO

 # Load your custom model
-model = YOLO('custom_yolov8s.pt')
+model = YOLO("custom_yolov8s.pt")

 # Run inference to detect your custom classes
-results = model.predict('path/to/image.jpg')
+results = model.predict("path/to/image.jpg")

 # Show results
 results[0].show()
@@ -294,8 +294,8 @@ This approach provides a powerful means of customizing state-of-the-art object d
 === "Python"

 ```python
-from ultralytics.models.yolo.world.train_world import WorldTrainerFromScratch
 from ultralytics import YOLOWorld
+from ultralytics.models.yolo.world.train_world import WorldTrainerFromScratch

 data = dict(
     train=dict(
@@ -315,7 +315,6 @@ This approach provides a powerful means of customizing state-of-the-art object d
 )
 model = YOLOWorld("yolov8s-worldv2.yaml")
 model.train(data=data, batch=128, epochs=100, trainer=WorldTrainerFromScratch)
-
 ```

 ## Citations and Acknowledgements

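The `data` dict in the two hunks above is truncated between `train=dict(` and the closing `)`. A hypothetical minimal shape, with every dataset name and path a placeholder rather than a value taken from the original file:

```python
from ultralytics import YOLOWorld
from ultralytics.models.yolo.world.train_world import WorldTrainerFromScratch

# Placeholder config: detection YAMLs for train/val plus optional grounding data
data = dict(
    train=dict(
        yolo_data=["coco8.yaml"],
        grounding_data=[
            dict(
                img_path="path/to/grounding/images",
                json_file="path/to/grounding/annotations.json",
            ),
        ],
    ),
    val=dict(yolo_data=["coco8.yaml"]),
)

model = YOLOWorld("yolov8s-worldv2.yaml")
model.train(data=data, batch=128, epochs=100, trainer=WorldTrainerFromScratch)
```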
@@ -54,16 +54,16 @@ This example provides simple YOLOv3 training and inference examples. For full do
 from ultralytics import YOLO

 # Load a COCO-pretrained YOLOv3n model
-model = YOLO('yolov3n.pt')
+model = YOLO("yolov3n.pt")

 # Display model information (optional)
 model.info()

 # Train the model on the COCO8 example dataset for 100 epochs
-results = model.train(data='coco8.yaml', epochs=100, imgsz=640)
+results = model.train(data="coco8.yaml", epochs=100, imgsz=640)

 # Run inference with the YOLOv3n model on the 'bus.jpg' image
-results = model('path/to/bus.jpg')
+results = model("path/to/bus.jpg")
 ```

 === "CLI"

@@ -66,16 +66,16 @@ This example provides simple YOLOv5 training and inference examples. For full do
 from ultralytics import YOLO

 # Load a COCO-pretrained YOLOv5n model
-model = YOLO('yolov5n.pt')
+model = YOLO("yolov5n.pt")

 # Display model information (optional)
 model.info()

 # Train the model on the COCO8 example dataset for 100 epochs
-results = model.train(data='coco8.yaml', epochs=100, imgsz=640)
+results = model.train(data="coco8.yaml", epochs=100, imgsz=640)

 # Run inference with the YOLOv5n model on the 'bus.jpg' image
-results = model('path/to/bus.jpg')
+results = model("path/to/bus.jpg")
 ```

 === "CLI"

@@ -46,16 +46,16 @@ This example provides simple YOLOv6 training and inference examples. For full do
 from ultralytics import YOLO

 # Build a YOLOv6n model from scratch
-model = YOLO('yolov6n.yaml')
+model = YOLO("yolov6n.yaml")

 # Display model information (optional)
 model.info()

 # Train the model on the COCO8 example dataset for 100 epochs
-results = model.train(data='coco8.yaml', epochs=100, imgsz=640)
+results = model.train(data="coco8.yaml", epochs=100, imgsz=640)

 # Run inference with the YOLOv6n model on the 'bus.jpg' image
-results = model('path/to/bus.jpg')
+results = model("path/to/bus.jpg")
 ```

 === "CLI"

@@ -139,16 +139,16 @@ Note the below example is for YOLOv8 [Detect](../tasks/detect.md) models for obj
 from ultralytics import YOLO

 # Load a COCO-pretrained YOLOv8n model
-model = YOLO('yolov8n.pt')
+model = YOLO("yolov8n.pt")

 # Display model information (optional)
 model.info()

 # Train the model on the COCO8 example dataset for 100 epochs
-results = model.train(data='coco8.yaml', epochs=100, imgsz=640)
+results = model.train(data="coco8.yaml", epochs=100, imgsz=640)

 # Run inference with the YOLOv8n model on the 'bus.jpg' image
-results = model('path/to/bus.jpg')
+results = model("path/to/bus.jpg")
 ```

 === "CLI"
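Every training hunk in this diff ends with `results = model("path/to/bus.jpg")`; a short sketch of reading detections out of the returned list, assuming the standard `Results`/`Boxes` attributes:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
results = model("path/to/bus.jpg")

for r in results:
    # Each box exposes xyxy coordinates, a confidence score and a class index
    for box in r.boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        name = model.names[int(box.cls)]
        print(f"{name}: {float(box.conf):.2f} at ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
```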