PyCharm Code Inspect fixes (#18392)
Signed-off-by: UltralyticsAssistant <web@ultralytics.com>
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
Parent: d35860d4a1
Commit: e5e91967d9

31 changed files with 72 additions and 72 deletions
````diff
@@ -31,7 +31,7 @@ keywords: DOTA dataset, object detection, aerial images, oriented bounding boxes
 - Very small instances (less than 10 pixels) are also annotated.
 - Addition of a new category: "container crane".
 - A total of 403,318 instances.
-- Released for the DOAI Challenge 2019 on Object Detection in Aerial Images.
+- Released for the [DOAI Challenge 2019 on Object Detection in Aerial Images](https://captain-whu.github.io/DOAI2019/challenge.html).

 ### DOTA-v2.0
````
````diff
@@ -81,7 +81,7 @@ After installing the runtime, you need to plug in your Coral Edge TPU into a USB
 sudo apt remove libedgetpu1-max
 ```

-## Export your model to a Edge TPU compatible model
+## Export to Edge TPU

 To use the Edge TPU, you need to convert your model into a compatible format. It is recommended that you run export on Google Colab, x86_64 Linux machine, using the official [Ultralytics Docker container](docker-quickstart.md), or using [Ultralytics HUB](../hub/quickstart.md), since the Edge TPU compiler is not available on ARM. See the [Export Mode](../modes/export.md) for the available arguments.
````
````diff
@@ -105,7 +105,7 @@ To use the Edge TPU, you need to convert your model into a compatible format. It
 yolo export model=path/to/model.pt format=edgetpu # Export an official model or custom model
 ```

-The exported model will be saved in the `<model_name>_saved_model/` folder with the name `<model_name>_full_integer_quant_edgetpu.tflite`. It is important that your model ends with the suffix `_edgetpu.tflite`, otherwise ultralytics doesn't know that you're using a Edge TPU model.
+The exported model will be saved in the `<model_name>_saved_model/` folder with the name `<model_name>_full_integer_quant_edgetpu.tflite`. It is important that your model ends with the suffix `_edgetpu.tflite`, otherwise ultralytics doesn't know that you're using an Edge TPU model.

 ## Running the model
````
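For readers who prefer the Python API over the `yolo` CLI shown in the hunk above, the same export can be run programmatically. A minimal sketch (the model path is illustrative):

```python
from ultralytics import YOLO

# Load a PyTorch model and export it for the Edge TPU
model = YOLO("path/to/model.pt")
model.export(format="edgetpu")  # writes <model_name>_saved_model/<model_name>_full_integer_quant_edgetpu.tflite
```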
````diff
@@ -280,7 +280,7 @@ The following table provides a snapshot of the various deployment options availa
 | TF Edge TPU | Optimized for Google's Edge TPU hardware | Exclusive to Edge TPU devices | Growing with Google and third-party resources | IoT devices requiring real-time processing | Improvements for new Edge TPU hardware | Google's robust IoT security | Custom-designed for Google Coral |
 | TF.js | Reasonable in-browser performance | High with web technologies | Web and Node.js developers support | Interactive web applications | TensorFlow team and community contributions | Web platform security model | Enhanced with WebGL and other APIs |
 | PaddlePaddle | Competitive, easy to use and scalable | Baidu ecosystem, wide application support | Rapidly growing, especially in China | Chinese market and language processing | Focus on Chinese AI applications | Emphasizes data privacy and security | Including Baidu's Kunlun chips |
-| MNN | High-performance for mobile devices. | Mobile and embedded ARM systems and X86-64 CPU | Mobile/embedded ML community | Moblile systems efficiency | High performance maintenance on Mobile Devices | On-device security advantages | ARM CPUs and GPUs optimizations |
+| MNN | High-performance for mobile devices. | Mobile and embedded ARM systems and X86-64 CPU | Mobile/embedded ML community | Mobile systems efficiency | High performance maintenance on Mobile Devices | On-device security advantages | ARM CPUs and GPUs optimizations |
 | NCNN | Optimized for mobile ARM-based devices | Mobile and embedded ARM systems | Niche but active mobile/embedded ML community | Android and ARM systems efficiency | High performance maintenance on ARM | On-device security advantages | ARM CPUs and GPUs optimizations |

 This comparative analysis gives you a high-level overview. For deployment, it's essential to consider the specific requirements and constraints of your project, and consult the detailed documentation and resources available for each option.
````
````diff
@@ -81,7 +81,7 @@ Underfitting occurs when your model can't capture the underlying patterns in the
 #### Signs of Underfitting

 - **Low Training Accuracy:** If your model can't achieve high accuracy on the training set, it might be underfitting.
-- **Visual Misclassification:** Consistent failure to recognize obvious features or objects suggests underfitting.
+- **Visual Mis-classification:** Consistent failure to recognize obvious features or objects suggests underfitting.

 ### Balancing Overfitting and Underfitting
````
````diff
@@ -16,7 +16,7 @@ Monitoring workouts through pose estimation with [Ultralytics YOLO11](https://gi
 allowfullscreen>
 </iframe>
 <br>
-<strong>Watch:</strong> Workouts Monitoring using Ultralytics YOLO11 | Pushups, Pullups, Ab Workouts
+<strong>Watch:</strong> Workouts Monitoring using Ultralytics YOLO11 | Push-ups, Pull-ups, Ab Workouts
 </p>

 ## Advantages of Workouts Monitoring?
````
````diff
@@ -111,7 +111,7 @@ Monitoring workouts through pose estimation with [Ultralytics YOLO11](https://gi

 ### How do I monitor my workouts using Ultralytics YOLO11?

-To monitor your workouts using Ultralytics YOLO11, you can utilize the pose estimation capabilities to track and analyze key body landmarks and joints in real-time. This allows you to receive instant feedback on your exercise form, count repetitions, and measure performance metrics. You can start by using the provided example code for pushups, pullups, or ab workouts as shown:
+To monitor your workouts using Ultralytics YOLO11, you can utilize the pose estimation capabilities to track and analyze key body landmarks and joints in real-time. This allows you to receive instant feedback on your exercise form, count repetitions, and measure performance metrics. You can start by using the provided example code for push-ups, pull-ups, or ab workouts as shown:

 ```python
 import cv2
````
````diff
@@ -154,11 +154,11 @@ You can watch a [YouTube video demonstration](https://www.youtube.com/watch?v=LG

 ### How accurate is Ultralytics YOLO11 in detecting and tracking exercises?

-Ultralytics YOLO11 is highly accurate in detecting and tracking exercises due to its state-of-the-art pose estimation capabilities. It can accurately track key body landmarks and joints, providing real-time feedback on exercise form and performance metrics. The model's pretrained weights and robust architecture ensure high [precision](https://www.ultralytics.com/glossary/precision) and reliability. For real-world examples, check out the [real-world applications](#real-world-applications) section in the documentation, which showcases pushups and pullups counting.
+Ultralytics YOLO11 is highly accurate in detecting and tracking exercises due to its state-of-the-art pose estimation capabilities. It can accurately track key body landmarks and joints, providing real-time feedback on exercise form and performance metrics. The model's pretrained weights and robust architecture ensure high [precision](https://www.ultralytics.com/glossary/precision) and reliability. For real-world examples, check out the [real-world applications](#real-world-applications) section in the documentation, which showcases push-ups and pull-ups counting.

 ### Can I use Ultralytics YOLO11 for custom workout routines?

-Yes, Ultralytics YOLO11 can be adapted for custom workout routines. The `AIGym` class supports different pose types such as "pushup", "pullup", and "abworkout." You can specify keypoints and angles to detect specific exercises. Here is an example setup:
+Yes, Ultralytics YOLO11 can be adapted for custom workout routines. The `AIGym` class supports different pose types such as `pushup`, `pullup`, and `abworkout`. You can specify keypoints and angles to detect specific exercises. Here is an example setup:

 ```python
 from ultralytics import solutions
````
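The `AIGym` snippet above is truncated in this diff view. A minimal sketch of such a setup, with the constructor arguments taken from the solutions config keys that appear later in this commit (`kpts`, `up_angle`, `down_angle`); the model choice and video path are illustrative, and the processing method name may vary by version:

```python
import cv2

from ultralytics import solutions

# Configure push-up counting: keypoints 6, 8 and 10 are the shoulder, elbow and wrist
gym = solutions.AIGym(
    model="yolo11n-pose.pt",  # pose-estimation model (illustrative choice)
    kpts=[6, 8, 10],  # keypoints to monitor
    up_angle=145.0,  # arm angle treated as the "up" position
    down_angle=90.0,  # arm angle treated as the "down" position
)

cap = cv2.VideoCapture("workout.mp4")  # hypothetical input video
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = gym.monitor(frame)  # annotates the frame and updates the rep count (API of this period)
cap.release()
```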
````diff
@@ -55,7 +55,7 @@ Explore the Ultralytics Docs, a comprehensive resource designed to help you unde

 ## Where to Start

-<div class="grid cards" markdown>
+<div class="grid cards">

 - :material-clock-fast:{ .lg .middle } **Getting Started**
````
````diff
@@ -161,7 +161,7 @@ You are now ready to train YOLO11 on a custom dataset. Follow this [written guid

 ## Upload Custom YOLO11 Model Weights for Testing and Deployment

-Roboflow offers an infinitely scalable API for deployed models and SDKs for use with NVIDIA Jetsons, Luxonis OAKs, Raspberry Pis, GPU-based devices, and more.
+Roboflow offers a scalable API for deployed models and SDKs for use with NVIDIA Jetson, Luxonis OAK, Raspberry Pi, GPU-based devices, and more.

 You can deploy YOLO11 models by uploading YOLO11 weights to Roboflow. You can do this in a few lines of Python code. Create a new Python file and add the following code:
````
````diff
@@ -259,7 +259,7 @@ Like any other VS Code extension, you can uninstall it by navigating to the Exte
 [working with inference results]: ../modes/predict.md#working-with-results
 [inference arguments]: ../modes/predict.md#inference-arguments
 [Simple Utilities page]: ../usage/simple-utilities.md
-[Ultralytics Settings]: ../quickstart.md/#ultralytics-settings
+[Ultralytics Settings]: ../quickstart.md#ultralytics-settings
 [quickstart]: ../quickstart.md
 [Discord]: https://ultralytics.com/discord
 [Discourse]: https://community.ultralytics.com
````
````diff
@@ -1,20 +1,20 @@
 | Argument | Type | Default | Description |
-| --------------- | ---------------- | ---------------------- | ------------------------------------------------------------------------------------------------------------ |
+| --------------- | ---------------- | ---------------------- | -------------------------------------------------------------------------------------------------------------- |
 | `source` | `str` | `'ultralytics/assets'` | Specifies the data source for inference. Can be an image path, video file, directory, URL, or device ID for live feeds. Supports a wide range of formats and sources, enabling flexible application across [different types of input](/modes/predict.md/#inference-sources). |
 | `conf` | `float` | `0.25` | Sets the minimum confidence threshold for detections. Objects detected with confidence below this threshold will be disregarded. Adjusting this value can help reduce false positives. |
 | `iou` | `float` | `0.7` | [Intersection Over Union](https://www.ultralytics.com/glossary/intersection-over-union-iou) (IoU) threshold for Non-Maximum Suppression (NMS). Lower values result in fewer detections by eliminating overlapping boxes, useful for reducing duplicates. |
 | `imgsz` | `int` or `tuple` | `640` | Defines the image size for inference. Can be a single integer `640` for square resizing or a (height, width) tuple. Proper sizing can improve detection [accuracy](https://www.ultralytics.com/glossary/accuracy) and processing speed. |
 | `half` | `bool` | `False` | Enables half-[precision](https://www.ultralytics.com/glossary/precision) (FP16) inference, which can speed up model inference on supported GPUs with minimal impact on accuracy. |
 | `device` | `str` | `None` | Specifies the device for inference (e.g., `cpu`, `cuda:0` or `0`). Allows users to select between CPU, a specific GPU, or other compute devices for model execution. |
 | `batch` | `int` | `1` | Specifies the batch size for inference (only works when the source is [a directory, video file or `.txt` file](/modes/predict.md/#inference-sources)). A larger batch size can provide higher throughput, shortening the total amount of time required for inference. |
 | `max_det` | `int` | `300` | Maximum number of detections allowed per image. Limits the total number of objects the model can detect in a single inference, preventing excessive outputs in dense scenes. |
 | `vid_stride` | `int` | `1` | Frame stride for video inputs. Allows skipping frames in videos to speed up processing at the cost of temporal resolution. A value of 1 processes every frame, higher values skip frames. |
-| `stream_buffer` | `bool` | `False` | Determines whether to queue incoming frames for video streams. If `False`, old frames get dropped to accomodate new frames (optimized for real-time applications). If `True', queues new frames in a buffer, ensuring no frames get skipped, but will cause latency if inference FPS is lower than stream FPS. |
+| `stream_buffer` | `bool` | `False` | Determines whether to queue incoming frames for video streams. If `False`, old frames get dropped to accommodate new frames (optimized for real-time applications). If `True', queues new frames in a buffer, ensuring no frames get skipped, but will cause latency if inference FPS is lower than stream FPS. |
 | `visualize` | `bool` | `False` | Activates visualization of model features during inference, providing insights into what the model is "seeing". Useful for debugging and model interpretation. |
 | `augment` | `bool` | `False` | Enables test-time augmentation (TTA) for predictions, potentially improving detection robustness at the cost of inference speed. |
 | `agnostic_nms` | `bool` | `False` | Enables class-agnostic Non-Maximum Suppression (NMS), which merges overlapping boxes of different classes. Useful in multi-class detection scenarios where class overlap is common. |
 | `classes` | `list[int]` | `None` | Filters predictions to a set of class IDs. Only detections belonging to the specified classes will be returned. Useful for focusing on relevant objects in multi-class detection tasks. |
 | `retina_masks` | `bool` | `False` | Returns high-resolution segmentation masks. The returned masks (`masks.data`) will match the original image size if enabled. If disabled, they have the image size used during inference. |
 | `embed` | `list[int]` | `None` | Specifies the layers from which to extract feature vectors or [embeddings](https://www.ultralytics.com/glossary/embeddings). Useful for downstream tasks like clustering or similarity search. |
 | `project` | `str` | `None` | Name of the project directory where prediction outputs are saved if `save` is enabled. |
 | `name` | `str` | `None` | Name of the prediction run. Used for creating a subdirectory within the project folder, where prediction outputs are stored if `save` is enabled. |
````
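These arguments pass straight through to `model.predict()`. A minimal sketch using a few of them (the source path is illustrative):

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")

# A handful of the arguments documented in the table above
results = model.predict(source="path/to/image.jpg", conf=0.25, iou=0.7, imgsz=640, max_det=300)
for r in results:
    print(r.boxes.xyxy)  # detected boxes in xyxy format
```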
````diff
@@ -15,4 +15,4 @@
 | `rect` | `bool` | `True` | If `True`, uses rectangular inference for batching, reducing padding and potentially increasing speed and efficiency. |
 | `split` | `str` | `val` | Determines the dataset split to use for validation (`val`, `test`, or `train`). Allows flexibility in choosing the data segment for performance evaluation. |
 | `project` | `str` | `None` | Name of the project directory where validation outputs are saved. |
-| `name` | `str` | `None` | Name of the validation run. Used for creating a subdirectory within the project folder, where valdiation logs and outputs are stored. |
+| `name` | `str` | `None` | Name of the validation run. Used for creating a subdirectory within the project folder, where validation logs and outputs are stored. |
````
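The validation arguments work the same way through `model.val()`. A minimal sketch, assuming the model's dataset config points at a dataset with a `val` split:

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")

# Validate on the "val" split with rectangular batching (see the table above)
metrics = model.val(split="val", rect=True)
print(metrics.box.map)  # mAP50-95
```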
||||||
```python
|
```python
|
||||||
from ultralytics import YOLO
|
from ultralytics import YOLO
|
||||||
|
|
||||||
|
|
||||||
# Add a callback to put the frozen layers in eval mode to prevent BN values from changing
|
# Add a callback to put the frozen layers in eval mode to prevent BN values from changing
|
||||||
def put_in_eval_mode(trainer):
|
def put_in_eval_mode(trainer):
|
||||||
n_layers = trainer.args.freeze
|
n_layers = trainer.args.freeze
|
||||||
if not isinstance(n_layers, int): return
|
if not isinstance(n_layers, int):
|
||||||
for i, (name, module) in enumerate(trainer.model.named_modules()):
|
return
|
||||||
if name.endswith("bn") and int(name.split('.')[1]) < n_layers:
|
|
||||||
module.eval()
|
for i, (name, module) in enumerate(trainer.model.named_modules()):
|
||||||
module.track_running_stats = False
|
if name.endswith("bn") and int(name.split(".")[1]) < n_layers:
|
||||||
|
module.eval()
|
||||||
|
module.track_running_stats = False
|
||||||
|
|
||||||
|
|
||||||
model = YOLO("yolo11n.pt")
|
model = YOLO("yolo11n.pt")
|
||||||
model.add_callback("on_train_epoch_start", put_in_eval_mode)
|
model.add_callback("on_train_epoch_start", put_in_eval_mode)
|
||||||
|
|
|
||||||
|
|
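Note that the callback in the hunk above reads `trainer.args.freeze`, so it only activates when training passes an integer `freeze` count. Continuing that snippet, with the dataset name and epoch count purely illustrative:

```python
# Train with the first 10 layers frozen; the callback keeps their BatchNorm stats fixed
model.train(data="coco8.yaml", epochs=10, freeze=10)
```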
````diff
@@ -87,7 +87,7 @@ This badge indicates that all [YOLOv5 GitHub Actions](https://github.com/ultraly

 Your journey with YOLOv5 doesn't have to be a solitary one. Join our vibrant community on [GitHub](https://github.com/ultralytics/yolov5), connect with professionals on [LinkedIn](https://www.linkedin.com/company/ultralytics/), share your results on [Twitter](https://twitter.com/ultralytics), and find educational resources on [YouTube](https://www.youtube.com/ultralytics?sub_confirmation=1). Follow us on [TikTok](https://www.tiktok.com/@ultralytics) and [BiliBili](https://ultralytics.com/bilibili) for more engaging content.

-Interested in contributing? We welcome contributions of all forms; from code improvements and bug reports to documentation updates. Check out our [contributing guidelines](../help/contributing.md/) for more information.
+Interested in contributing? We welcome contributions of all forms; from code improvements and bug reports to documentation updates. Check out our [contributing guidelines](../help/contributing.md) for more information.

 We're excited to see the innovative ways you'll use YOLOv5. Dive in, experiment, and revolutionize your computer vision projects! 🚀
````
````diff
@@ -29,7 +29,7 @@ After uploading data to Roboflow, you can label your data and review previous la

 ## Versioning

-You can make versions of your dataset with different preprocessing and offline augmentation options. YOLOv5 does online augmentations natively, so be intentional when layering Roboflow's offline augmentations on top.
+You can make versions of your dataset with different preprocessing and offline augmentation options. YOLOv5 does online augmentations natively, so be intentional when layering Roboflow offline augmentations on top.

 
````
````diff
@@ -17,7 +17,6 @@ __all__ = (
     "SOURCE",
     "SOURCES_LIST",
     "TMP",
-    "IS_TMP_WRITEABLE",
     "CUDA_IS_AVAILABLE",
     "CUDA_DEVICE_COUNT",
 )
````
````diff
@@ -86,7 +86,7 @@ SOLUTIONS_HELP_MSG = f"""
        yolo solutions count source="path/to/video/file.mp4" region=[(20, 400), (1080, 400), (1080, 360), (20, 360)]

    2. Call heatmaps solution
-       yolo solutions heatmap colormap=cv2.COLORMAP_PARAULA model=yolo11n.pt
+       yolo solutions heatmap colormap=cv2.COLORMAP_PARULA model=yolo11n.pt

    3. Call queue management solution
        yolo solutions queue region=[(20, 400), (1080, 400), (1080, 360), (20, 360)] model=yolo11n.pt
````
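The fix above matters because `cv2.COLORMAP_PARAULA` does not exist in OpenCV; the valid constant is `cv2.COLORMAP_PARULA`. A quick sanity check:

```python
import cv2
import numpy as np

# Apply the corrected colormap to a grayscale ramp
gray = np.tile(np.arange(256, dtype=np.uint8), (32, 1))
colored = cv2.applyColorMap(gray, cv2.COLORMAP_PARULA)
print(colored.shape)  # (32, 256, 3)
```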
````diff
@@ -11,7 +11,7 @@
 path: ../datasets/lvis # dataset root dir
 train: train.txt # train images (relative to 'path') 100170 images
 val: val.txt # val images (relative to 'path') 19809 images
-minival: minival.txt # minval images (relative to 'path') 5000 images
+minival: minival.txt # minival images (relative to 'path') 5000 images

 names:
   0: aerosol can/spray can
````
````diff
@@ -12,7 +12,7 @@ colormap: # (int | str) colormap for heatmap, Only OPENCV supported colormaps c
 # Workouts monitoring settings -----------------------------------------------------------------------------------------
 up_angle: 145.0 # (float) Workouts up_angle for counts, 145.0 is default value.
 down_angle: 90 # (float) Workouts down_angle for counts, 90 is default value. Y
-kpts: [6, 8, 10] # (list[int]) keypoints for workouts monitoring, i.e. for pushups kpts have values of [6, 8, 10].
+kpts: [6, 8, 10] # (list[int]) keypoints for workouts monitoring, i.e. for push-ups kpts have values of [6, 8, 10].

 # Analytics settings ---------------------------------------------------------------------------------------------------
 analytics_type: "line" # (str) analytics type i.e "line", "pie", "bar" or "area" charts.
````
````diff
@@ -441,7 +441,8 @@ class BaseMixTransform:
         """
         raise NotImplementedError

-    def _update_label_text(self, labels):
+    @staticmethod
+    def _update_label_text(labels):
         """
         Updates label text and class IDs for mixed labels in image augmentation.
````
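This hunk and several below apply the same PyCharm inspection: a method that never touches `self` can be declared `@staticmethod`, which drops the implicit instance argument. A self-contained illustration with hypothetical names:

```python
class Transform:
    @staticmethod
    def clip(value, low=0.0, high=1.0):
        """Clamp a value to [low, high]; no instance state is needed, hence @staticmethod."""
        return max(low, min(high, value))


print(Transform.clip(1.7))  # callable on the class: 1.0
print(Transform().clip(-0.2))  # and on an instance: 0.0
```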
````diff
@@ -1259,7 +1260,8 @@ class RandomPerspective:
         labels["resized_shape"] = img.shape[:2]
         return labels

-    def box_candidates(self, box1, box2, wh_thr=2, ar_thr=100, area_thr=0.1, eps=1e-16):
+    @staticmethod
+    def box_candidates(box1, box2, wh_thr=2, ar_thr=100, area_thr=0.1, eps=1e-16):
         """
         Compute candidate boxes for further processing based on size and aspect ratio criteria.
````
````diff
@@ -1598,7 +1600,8 @@ class LetterBox:
         else:
             return img

-    def _update_labels(self, labels, ratio, padw, padh):
+    @staticmethod
+    def _update_labels(labels, ratio, padw, padh):
         """
         Updates labels after applying letterboxing to an image.
````
````diff
@@ -68,7 +68,7 @@ class YOLODataset(BaseDataset):
         Cache dataset labels, check images and read shapes.

         Args:
-            path (Path): Path where to save the cache file. Default is Path('./labels.cache').
+            path (Path): Path where to save the cache file. Default is Path("./labels.cache").

         Returns:
             (dict): labels.
````
````diff
@@ -219,7 +219,7 @@ class YOLODataset(BaseDataset):
         segment_resamples = 100 if self.use_obb else 1000
         if len(segments) > 0:
             # make sure segments interpolate correctly if original length is greater than segment_resamples
-            max_len = max([len(s) for s in segments])
+            max_len = max(len(s) for s in segments)
             segment_resamples = (max_len + 1) if segment_resamples < max_len else segment_resamples
             # list[np.array(segment_resamples, 2)] * num_samples
             segments = np.stack(resample_segments(segments, n=segment_resamples), axis=0)
````
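The change above works because `max()` accepts a generator expression directly, so no temporary list is built. For example:

```python
segments = [[(0, 0), (1, 1)], [(0, 0), (1, 0), (1, 1)]]
max_len = max(len(s) for s in segments)  # 3, without allocating an intermediate list
```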
````diff
@@ -11,8 +11,8 @@
 python - <<EOF
 from ultralytics.utils.downloads import attempt_download_asset

-assets = [f'yolov8{size}{suffix}.pt' for size in 'nsmlx' for suffix in ('', '-cls', '-seg', '-pose')]
+assets = [f"yolov8{size}{suffix}.pt" for size in "nsmlx" for suffix in ("", "-cls", "-seg", "-pose")]
 for x in assets:
-    attempt_download_asset(f'weights/{x}')
+    attempt_download_asset(f"weights/{x}")

 EOF
````
````diff
@@ -813,7 +813,7 @@ class Exporter:
         workspace = int(self.args.workspace * (1 << 30)) if self.args.workspace is not None else 0
         if is_trt10 and workspace > 0:
             config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, workspace)
-        elif workspace > 0 and not is_trt10:  # TensorRT versions 7, 8
+        elif workspace > 0:  # TensorRT versions 7, 8
             config.max_workspace_size = workspace
         flag = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
         network = builder.create_network(flag)
````
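For context on the unchanged first line of this hunk: `1 << 30` converts the `workspace` argument from GiB into the byte count TensorRT expects:

```python
workspace_gib = 4
workspace_bytes = int(workspace_gib * (1 << 30))
print(workspace_bytes)  # 4294967296, i.e. 4 GiB in bytes
```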
````diff
@@ -1170,6 +1170,4 @@ class Model(nn.Module):
             >>> print(model.stride)
             >>> print(model.task)
         """
-        if name == "model":
-            return self._modules["model"]
-        return getattr(self.model, name)
+        return self._modules["model"] if name == "model" else getattr(self.model, name)
````
````diff
@@ -245,7 +245,7 @@ class BaseValidator:

         cost_matrix = iou * (iou >= threshold)
         if cost_matrix.any():
-            labels_idx, detections_idx = scipy.optimize.linear_sum_assignment(cost_matrix, maximize=True)
+            labels_idx, detections_idx = scipy.optimize.linear_sum_assignment(cost_matrix)
             valid = cost_matrix[labels_idx, detections_idx] > 0
             if valid.any():
                 correct[detections_idx[valid], i] = True
````
````diff
@@ -955,7 +955,8 @@ class TinyViT(nn.Module):

         self.apply(_check_lr_scale)

-    def _init_weights(self, m):
+    @staticmethod
+    def _init_weights(m):
         """Initializes weights for linear and normalization layers in the TinyViT model."""
         if isinstance(m, nn.Linear):
             # NOTE: This initialization is needed only for training.
````
````diff
@@ -1377,7 +1377,7 @@ class SAM2VideoPredictor(SAM2Predictor):
         if "maskmem_pos_enc" not in model_constants:
             assert isinstance(out_maskmem_pos_enc, list)
             # only take the slice for one object, since it's same across objects
-            maskmem_pos_enc = [x[0:1].clone() for x in out_maskmem_pos_enc]
+            maskmem_pos_enc = [x[:1].clone() for x in out_maskmem_pos_enc]
             model_constants["maskmem_pos_enc"] = maskmem_pos_enc
         else:
             maskmem_pos_enc = model_constants["maskmem_pos_enc"]
````
````diff
@@ -429,10 +429,7 @@ class AutoBackend(nn.Module):

             import MNN

-            config = {}
-            config["precision"] = "low"
-            config["backend"] = "CPU"
-            config["numThread"] = (os.cpu_count() + 1) // 2
+            config = {"precision": "low", "backend": "CPU", "numThread": (os.cpu_count() + 1) // 2}
             rt = MNN.nn.create_runtime_manager((config,))
             net = MNN.nn.load_module_from_file(w, [], [], runtime_manager=rt, rearrange=True)
````
````diff
@@ -181,12 +181,8 @@ class Inference:
 if __name__ == "__main__":
     import sys  # Import the sys module for accessing command-line arguments

-    model = None  # Initialize the model variable as None
-
     # Check if a model name is provided as a command-line argument
     args = len(sys.argv)
-    if args > 1:
-        model = args  # Assign the first argument as the model name
+    model = args if args > 1 else None

     # Create an instance of the Inference class and run inference
     Inference(model=model).inference()
````
````diff
@@ -440,7 +440,8 @@ class ProfileModels:
         print(f"Profiling: {sorted(files)}")
         return [Path(file) for file in sorted(files)]

-    def get_onnx_model_info(self, onnx_file: str):
+    @staticmethod
+    def get_onnx_model_info(onnx_file: str):
         """Extracts metadata from an ONNX model file including parameters, GFLOPs, and input shape."""
         return 0.0, 0.0, 0.0, 0.0  # return (num_layers, num_params, num_gradients, num_flops)
````
````diff
@@ -138,7 +138,7 @@ def unzip_file(file, path=None, exclude=(".DS_Store", "__MACOSX"), exist_ok=Fals
     If a path is not provided, the function will use the parent directory of the zipfile as the default path.

     Args:
-        file (str): The path to the zipfile to be extracted.
+        file (str | Path): The path to the zipfile to be extracted.
         path (str, optional): The path to extract the zipfile to. Defaults to None.
         exclude (tuple, optional): A tuple of filename strings to be excluded. Defaults to ('.DS_Store', '__MACOSX').
         exist_ok (bool, optional): Whether to overwrite existing contents if they exist. Defaults to False.
````
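Given the signature in the hunk header, a typical call looks like the sketch below; the archive path is illustrative, and the module path assumes `unzip_file` lives in `ultralytics.utils.downloads`:

```python
from ultralytics.utils.downloads import unzip_file

# Extract next to the archive (path=None defaults to the zipfile's parent directory)
extracted = unzip_file("path/to/dataset.zip", exist_ok=True)
print(extracted)
```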
````diff
@@ -28,7 +28,7 @@ to_4tuple = _ntuple(4)
 # `ltwh` means left top and width, height(COCO format)
 _formats = ["xyxy", "xywh", "ltwh"]

-__all__ = ("Bboxes",)  # tuple or list
+__all__ = ("Bboxes", "Instances")  # tuple or list


 class Bboxes:
````
````diff
@@ -545,7 +545,8 @@ class Annotator:
         """Save the annotated image to 'filename'."""
         cv2.imwrite(filename, np.asarray(self.im))

-    def get_bbox_dimension(self, bbox=None):
+    @staticmethod
+    def get_bbox_dimension(bbox=None):
         """
         Calculate the area of a bounding box.
````