PyCharm Code Inspect fixes (#18392)
Signed-off-by: UltralyticsAssistant <web@ultralytics.com>
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
parent d35860d4a1
commit e5e91967d9
31 changed files with 72 additions and 72 deletions
@@ -31,7 +31,7 @@ keywords: DOTA dataset, object detection, aerial images, oriented bounding boxes
 - Very small instances (less than 10 pixels) are also annotated.
 - Addition of a new category: "container crane".
 - A total of 403,318 instances.
-- Released for the DOAI Challenge 2019 on Object Detection in Aerial Images.
+- Released for the [DOAI Challenge 2019 on Object Detection in Aerial Images](https://captain-whu.github.io/DOAI2019/challenge.html).

 ### DOTA-v2.0
@@ -81,7 +81,7 @@ After installing the runtime, you need to plug in your Coral Edge TPU into a USB
 sudo apt remove libedgetpu1-max
 ```

-## Export your model to a Edge TPU compatible model
+## Export to Edge TPU

 To use the Edge TPU, you need to convert your model into a compatible format. It is recommended that you run export on Google Colab, x86_64 Linux machine, using the official [Ultralytics Docker container](docker-quickstart.md), or using [Ultralytics HUB](../hub/quickstart.md), since the Edge TPU compiler is not available on ARM. See the [Export Mode](../modes/export.md) for the available arguments.
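
For reference, the same conversion can be driven from Python instead of the CLI shown in the next hunk. A minimal sketch, assuming the standard Ultralytics `export()` API; `yolo11n.pt` is a placeholder model name:

```python
from ultralytics import YOLO

# Load an official or custom-trained PyTorch model (placeholder name)
model = YOLO("yolo11n.pt")

# Convert to an Edge TPU compatible TFLite model; this step invokes the
# Edge TPU compiler, which is why an x86_64 environment is required
model.export(format="edgetpu")
```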
@@ -105,7 +105,7 @@ To use the Edge TPU, you need to convert your model into a compatible format. It
 yolo export model=path/to/model.pt format=edgetpu # Export an official model or custom model
 ```

-The exported model will be saved in the `<model_name>_saved_model/` folder with the name `<model_name>_full_integer_quant_edgetpu.tflite`. It is important that your model ends with the suffix `_edgetpu.tflite`, otherwise ultralytics doesn't know that you're using a Edge TPU model.
+The exported model will be saved in the `<model_name>_saved_model/` folder with the name `<model_name>_full_integer_quant_edgetpu.tflite`. It is important that your model ends with the suffix `_edgetpu.tflite`, otherwise ultralytics doesn't know that you're using an Edge TPU model.

 ## Running the model
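
A minimal sketch of that run step, assuming the export above produced `yolo11n_full_integer_quant_edgetpu.tflite` (the image path is a placeholder):

```python
from ultralytics import YOLO

# The `_edgetpu.tflite` suffix is what tells Ultralytics to target the Edge TPU
model = YOLO("yolo11n_full_integer_quant_edgetpu.tflite")

# Run inference on the Coral Edge TPU
results = model("path/to/image.jpg")
```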
@@ -280,7 +280,7 @@ The following table provides a snapshot of the various deployment options availa
 | TF Edge TPU | Optimized for Google's Edge TPU hardware | Exclusive to Edge TPU devices | Growing with Google and third-party resources | IoT devices requiring real-time processing | Improvements for new Edge TPU hardware | Google's robust IoT security | Custom-designed for Google Coral |
 | TF.js | Reasonable in-browser performance | High with web technologies | Web and Node.js developers support | Interactive web applications | TensorFlow team and community contributions | Web platform security model | Enhanced with WebGL and other APIs |
 | PaddlePaddle | Competitive, easy to use and scalable | Baidu ecosystem, wide application support | Rapidly growing, especially in China | Chinese market and language processing | Focus on Chinese AI applications | Emphasizes data privacy and security | Including Baidu's Kunlun chips |
-| MNN | High-performance for mobile devices. | Mobile and embedded ARM systems and X86-64 CPU | Mobile/embedded ML community | Moblile systems efficiency | High performance maintenance on Mobile Devices | On-device security advantages | ARM CPUs and GPUs optimizations |
+| MNN | High-performance for mobile devices. | Mobile and embedded ARM systems and X86-64 CPU | Mobile/embedded ML community | Mobile systems efficiency | High performance maintenance on Mobile Devices | On-device security advantages | ARM CPUs and GPUs optimizations |
 | NCNN | Optimized for mobile ARM-based devices | Mobile and embedded ARM systems | Niche but active mobile/embedded ML community | Android and ARM systems efficiency | High performance maintenance on ARM | On-device security advantages | ARM CPUs and GPUs optimizations |

 This comparative analysis gives you a high-level overview. For deployment, it's essential to consider the specific requirements and constraints of your project, and consult the detailed documentation and resources available for each option.
@@ -81,7 +81,7 @@ Underfitting occurs when your model can't capture the underlying patterns in the
 #### Signs of Underfitting

 - **Low Training Accuracy:** If your model can't achieve high accuracy on the training set, it might be underfitting.
-- **Visual Misclassification:** Consistent failure to recognize obvious features or objects suggests underfitting.
+- **Visual Mis-classification:** Consistent failure to recognize obvious features or objects suggests underfitting.

 ### Balancing Overfitting and Underfitting
@@ -16,7 +16,7 @@ Monitoring workouts through pose estimation with [Ultralytics YOLO11](https://gi
 allowfullscreen>
 </iframe>
 <br>
-<strong>Watch:</strong> Workouts Monitoring using Ultralytics YOLO11 | Pushups, Pullups, Ab Workouts
+<strong>Watch:</strong> Workouts Monitoring using Ultralytics YOLO11 | Push-ups, Pull-ups, Ab Workouts
 </p>

 ## Advantages of Workouts Monitoring?
@@ -111,7 +111,7 @@ Monitoring workouts through pose estimation with [Ultralytics YOLO11](https://gi

 ### How do I monitor my workouts using Ultralytics YOLO11?

-To monitor your workouts using Ultralytics YOLO11, you can utilize the pose estimation capabilities to track and analyze key body landmarks and joints in real-time. This allows you to receive instant feedback on your exercise form, count repetitions, and measure performance metrics. You can start by using the provided example code for pushups, pullups, or ab workouts as shown:
+To monitor your workouts using Ultralytics YOLO11, you can utilize the pose estimation capabilities to track and analyze key body landmarks and joints in real-time. This allows you to receive instant feedback on your exercise form, count repetitions, and measure performance metrics. You can start by using the provided example code for push-ups, pull-ups, or ab workouts as shown:

 ```python
 import cv2
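
The hunk cuts the example off after `import cv2`. A sketch of how such a monitoring loop typically continues, assuming the `solutions.AIGym` interface with `kpts=[6, 8, 10]` (a shoulder-elbow-wrist chain); the video path is a placeholder, and newer releases also allow calling the solution directly as `gym(frame)`:

```python
import cv2

from ultralytics import solutions

cap = cv2.VideoCapture("path/to/video.mp4")  # placeholder video source

# Configure the workouts monitor with a pose model and the joint chain to track
gym = solutions.AIGym(model="yolo11n-pose.pt", kpts=[6, 8, 10], show=True)

while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    frame = gym.monitor(frame)  # counts repetitions and returns the annotated frame

cap.release()
cv2.destroyAllWindows()
```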
@@ -154,11 +154,11 @@ You can watch a [YouTube video demonstration](https://www.youtube.com/watch?v=LG

 ### How accurate is Ultralytics YOLO11 in detecting and tracking exercises?

-Ultralytics YOLO11 is highly accurate in detecting and tracking exercises due to its state-of-the-art pose estimation capabilities. It can accurately track key body landmarks and joints, providing real-time feedback on exercise form and performance metrics. The model's pretrained weights and robust architecture ensure high [precision](https://www.ultralytics.com/glossary/precision) and reliability. For real-world examples, check out the [real-world applications](#real-world-applications) section in the documentation, which showcases pushups and pullups counting.
+Ultralytics YOLO11 is highly accurate in detecting and tracking exercises due to its state-of-the-art pose estimation capabilities. It can accurately track key body landmarks and joints, providing real-time feedback on exercise form and performance metrics. The model's pretrained weights and robust architecture ensure high [precision](https://www.ultralytics.com/glossary/precision) and reliability. For real-world examples, check out the [real-world applications](#real-world-applications) section in the documentation, which showcases push-ups and pull-ups counting.

 ### Can I use Ultralytics YOLO11 for custom workout routines?

-Yes, Ultralytics YOLO11 can be adapted for custom workout routines. The `AIGym` class supports different pose types such as "pushup", "pullup", and "abworkout." You can specify keypoints and angles to detect specific exercises. Here is an example setup:
+Yes, Ultralytics YOLO11 can be adapted for custom workout routines. The `AIGym` class supports different pose types such as `pushup`, `pullup`, and `abworkout`. You can specify keypoints and angles to detect specific exercises. Here is an example setup:

 ```python
 from ultralytics import solutions
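
The setup example is likewise truncated after the import. A sketch of a custom configuration, assuming the `kpts`, `up_angle`, and `down_angle` keyword arguments (threshold values are illustrative):

```python
from ultralytics import solutions

# Custom exercise: pick the keypoints that define the joint angle and the rep thresholds
gym = solutions.AIGym(
    model="yolo11n-pose.pt",
    kpts=[6, 8, 10],  # keypoint indices forming the tracked angle
    up_angle=145.0,  # angle above which the movement counts as "up"
    down_angle=90.0,  # angle below which the movement counts as "down"
)
```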
@@ -55,7 +55,7 @@ Explore the Ultralytics Docs, a comprehensive resource designed to help you unde

 ## Where to Start

-<div class="grid cards" markdown>
+<div class="grid cards">

 - :material-clock-fast:{ .lg .middle } **Getting Started**
@@ -161,7 +161,7 @@ You are now ready to train YOLO11 on a custom dataset. Follow this [written guid

 ## Upload Custom YOLO11 Model Weights for Testing and Deployment

-Roboflow offers an infinitely scalable API for deployed models and SDKs for use with NVIDIA Jetsons, Luxonis OAKs, Raspberry Pis, GPU-based devices, and more.
+Roboflow offers a scalable API for deployed models and SDKs for use with NVIDIA Jetson, Luxonis OAK, Raspberry Pi, GPU-based devices, and more.

 You can deploy YOLO11 models by uploading YOLO11 weights to Roboflow. You can do this in a few lines of Python code. Create a new Python file and add the following code:
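
The file itself is not part of this diff. A sketch of what it typically contains, assuming the `roboflow` package's `version.deploy()` flow; `YOUR_API_KEY`, `PROJECT_ID`, the version number, and the `model_type` string are all placeholders:

```python
import roboflow

# Authenticate and select the project/version the weights belong to
rf = roboflow.Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace().project("PROJECT_ID")

# Upload trained weights for hosted deployment; check Roboflow's docs for
# the exact model_type string matching your YOLO version
project.version(1).deploy(model_type="yolov8", model_path="runs/detect/train/")
```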
@@ -259,7 +259,7 @@ Like any other VS Code extension, you can uninstall it by navigating to the Exte
 [working with inference results]: ../modes/predict.md#working-with-results
 [inference arguments]: ../modes/predict.md#inference-arguments
 [Simple Utilities page]: ../usage/simple-utilities.md
-[Ultralytics Settings]: ../quickstart.md/#ultralytics-settings
+[Ultralytics Settings]: ../quickstart.md#ultralytics-settings
 [quickstart]: ../quickstart.md
 [Discord]: https://ultralytics.com/discord
 [Discourse]: https://community.ultralytics.com
@@ -1,20 +1,20 @@
-| Argument | Type | Default | Description |
-| --------------- | ---------------- | ---------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `source` | `str` | `'ultralytics/assets'` | Specifies the data source for inference. Can be an image path, video file, directory, URL, or device ID for live feeds. Supports a wide range of formats and sources, enabling flexible application across [different types of input](/modes/predict.md/#inference-sources). |
-| `conf` | `float` | `0.25` | Sets the minimum confidence threshold for detections. Objects detected with confidence below this threshold will be disregarded. Adjusting this value can help reduce false positives. |
-| `iou` | `float` | `0.7` | [Intersection Over Union](https://www.ultralytics.com/glossary/intersection-over-union-iou) (IoU) threshold for Non-Maximum Suppression (NMS). Lower values result in fewer detections by eliminating overlapping boxes, useful for reducing duplicates. |
-| `imgsz` | `int` or `tuple` | `640` | Defines the image size for inference. Can be a single integer `640` for square resizing or a (height, width) tuple. Proper sizing can improve detection [accuracy](https://www.ultralytics.com/glossary/accuracy) and processing speed. |
-| `half` | `bool` | `False` | Enables half-[precision](https://www.ultralytics.com/glossary/precision) (FP16) inference, which can speed up model inference on supported GPUs with minimal impact on accuracy. |
-| `device` | `str` | `None` | Specifies the device for inference (e.g., `cpu`, `cuda:0` or `0`). Allows users to select between CPU, a specific GPU, or other compute devices for model execution. |
-| `batch` | `int` | `1` | Specifies the batch size for inference (only works when the source is [a directory, video file or `.txt` file](/modes/predict.md/#inference-sources)). A larger batch size can provide higher throughput, shortening the total amount of time required for inference. |
-| `max_det` | `int` | `300` | Maximum number of detections allowed per image. Limits the total number of objects the model can detect in a single inference, preventing excessive outputs in dense scenes. |
-| `vid_stride` | `int` | `1` | Frame stride for video inputs. Allows skipping frames in videos to speed up processing at the cost of temporal resolution. A value of 1 processes every frame, higher values skip frames. |
-| `stream_buffer` | `bool` | `False` | Determines whether to queue incoming frames for video streams. If `False`, old frames get dropped to accomodate new frames (optimized for real-time applications). If `True', queues new frames in a buffer, ensuring no frames get skipped, but will cause latency if inference FPS is lower than stream FPS. |
-| `visualize` | `bool` | `False` | Activates visualization of model features during inference, providing insights into what the model is "seeing". Useful for debugging and model interpretation. |
-| `augment` | `bool` | `False` | Enables test-time augmentation (TTA) for predictions, potentially improving detection robustness at the cost of inference speed. |
-| `agnostic_nms` | `bool` | `False` | Enables class-agnostic Non-Maximum Suppression (NMS), which merges overlapping boxes of different classes. Useful in multi-class detection scenarios where class overlap is common. |
-| `classes` | `list[int]` | `None` | Filters predictions to a set of class IDs. Only detections belonging to the specified classes will be returned. Useful for focusing on relevant objects in multi-class detection tasks. |
-| `retina_masks` | `bool` | `False` | Returns high-resolution segmentation masks. The returned masks (`masks.data`) will match the original image size if enabled. If disabled, they have the image size used during inference. |
-| `embed` | `list[int]` | `None` | Specifies the layers from which to extract feature vectors or [embeddings](https://www.ultralytics.com/glossary/embeddings). Useful for downstream tasks like clustering or similarity search. |
-| `project` | `str` | `None` | Name of the project directory where prediction outputs are saved if `save` is enabled. |
-| `name` | `str` | `None` | Name of the prediction run. Used for creating a subdirectory within the project folder, where prediction outputs are stored if `save` is enabled. |
+| Argument | Type | Default | Description |
+| --------------- | ---------------- | ---------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `source` | `str` | `'ultralytics/assets'` | Specifies the data source for inference. Can be an image path, video file, directory, URL, or device ID for live feeds. Supports a wide range of formats and sources, enabling flexible application across [different types of input](/modes/predict.md/#inference-sources). |
+| `conf` | `float` | `0.25` | Sets the minimum confidence threshold for detections. Objects detected with confidence below this threshold will be disregarded. Adjusting this value can help reduce false positives. |
+| `iou` | `float` | `0.7` | [Intersection Over Union](https://www.ultralytics.com/glossary/intersection-over-union-iou) (IoU) threshold for Non-Maximum Suppression (NMS). Lower values result in fewer detections by eliminating overlapping boxes, useful for reducing duplicates. |
+| `imgsz` | `int` or `tuple` | `640` | Defines the image size for inference. Can be a single integer `640` for square resizing or a (height, width) tuple. Proper sizing can improve detection [accuracy](https://www.ultralytics.com/glossary/accuracy) and processing speed. |
+| `half` | `bool` | `False` | Enables half-[precision](https://www.ultralytics.com/glossary/precision) (FP16) inference, which can speed up model inference on supported GPUs with minimal impact on accuracy. |
+| `device` | `str` | `None` | Specifies the device for inference (e.g., `cpu`, `cuda:0` or `0`). Allows users to select between CPU, a specific GPU, or other compute devices for model execution. |
+| `batch` | `int` | `1` | Specifies the batch size for inference (only works when the source is [a directory, video file or `.txt` file](/modes/predict.md/#inference-sources)). A larger batch size can provide higher throughput, shortening the total amount of time required for inference. |
+| `max_det` | `int` | `300` | Maximum number of detections allowed per image. Limits the total number of objects the model can detect in a single inference, preventing excessive outputs in dense scenes. |
+| `vid_stride` | `int` | `1` | Frame stride for video inputs. Allows skipping frames in videos to speed up processing at the cost of temporal resolution. A value of 1 processes every frame, higher values skip frames. |
+| `stream_buffer` | `bool` | `False` | Determines whether to queue incoming frames for video streams. If `False`, old frames get dropped to accommodate new frames (optimized for real-time applications). If `True', queues new frames in a buffer, ensuring no frames get skipped, but will cause latency if inference FPS is lower than stream FPS. |
+| `visualize` | `bool` | `False` | Activates visualization of model features during inference, providing insights into what the model is "seeing". Useful for debugging and model interpretation. |
+| `augment` | `bool` | `False` | Enables test-time augmentation (TTA) for predictions, potentially improving detection robustness at the cost of inference speed. |
+| `agnostic_nms` | `bool` | `False` | Enables class-agnostic Non-Maximum Suppression (NMS), which merges overlapping boxes of different classes. Useful in multi-class detection scenarios where class overlap is common. |
+| `classes` | `list[int]` | `None` | Filters predictions to a set of class IDs. Only detections belonging to the specified classes will be returned. Useful for focusing on relevant objects in multi-class detection tasks. |
+| `retina_masks` | `bool` | `False` | Returns high-resolution segmentation masks. The returned masks (`masks.data`) will match the original image size if enabled. If disabled, they have the image size used during inference. |
+| `embed` | `list[int]` | `None` | Specifies the layers from which to extract feature vectors or [embeddings](https://www.ultralytics.com/glossary/embeddings). Useful for downstream tasks like clustering or similarity search. |
+| `project` | `str` | `None` | Name of the project directory where prediction outputs are saved if `save` is enabled. |
+| `name` | `str` | `None` | Name of the prediction run. Used for creating a subdirectory within the project folder, where prediction outputs are stored if `save` is enabled. |
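
As a usage sketch, several of the arguments above combined in a single predict call (model name, source, and values are illustrative):

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")

results = model.predict(
    source="path/to/video.mp4",
    conf=0.25,  # minimum confidence threshold
    iou=0.7,  # NMS IoU threshold
    imgsz=640,  # square inference size
    half=True,  # FP16 on supported GPUs
    classes=[0],  # keep only detections of class ID 0
    vid_stride=2,  # process every second frame
)
```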
@@ -15,4 +15,4 @@
 | `rect` | `bool` | `True` | If `True`, uses rectangular inference for batching, reducing padding and potentially increasing speed and efficiency. |
 | `split` | `str` | `val` | Determines the dataset split to use for validation (`val`, `test`, or `train`). Allows flexibility in choosing the data segment for performance evaluation. |
 | `project` | `str` | `None` | Name of the project directory where validation outputs are saved. |
-| `name` | `str` | `None` | Name of the validation run. Used for creating a subdirectory within the project folder, where valdiation logs and outputs are stored. |
+| `name` | `str` | `None` | Name of the validation run. Used for creating a subdirectory within the project folder, where validation logs and outputs are stored. |
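
A usage sketch for these validation arguments (model, dataset, and run names are placeholders):

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")

# Validate on the test split with rectangular batching; outputs land in runs/val/exp
metrics = model.val(data="coco8.yaml", split="test", rect=True, project="runs/val", name="exp")
print(metrics.box.map)  # mAP50-95
```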
@@ -134,14 +134,18 @@ Here's an example of how to freeze BatchNorm statistics when freezing layers wit
 ```python
 from ultralytics import YOLO


 # Add a callback to put the frozen layers in eval mode to prevent BN values from changing
 def put_in_eval_mode(trainer):
-    n_layers = trainer.args.freeze
-    if not isinstance(n_layers, int): return
-    for i, (name, module) in enumerate(trainer.model.named_modules()):
-        if name.endswith("bn") and int(name.split('.')[1]) < n_layers:
-            module.eval()
-            module.track_running_stats = False
+    n_layers = trainer.args.freeze
+    if not isinstance(n_layers, int):
+        return
+
+    for i, (name, module) in enumerate(trainer.model.named_modules()):
+        if name.endswith("bn") and int(name.split(".")[1]) < n_layers:
+            module.eval()
+            module.track_running_stats = False


 model = YOLO("yolo11n.pt")
 model.add_callback("on_train_epoch_start", put_in_eval_mode)
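
The snippet ends with the callback registration. Training would then be started with an integer `freeze` argument, which is what the callback reads from `trainer.args.freeze`; a sketch with illustrative values:

```python
# Train with the first 10 layers frozen; the callback keeps their BatchNorm statistics fixed
model.train(data="coco8.yaml", epochs=100, freeze=10)
```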
@@ -87,7 +87,7 @@ This badge indicates that all [YOLOv5 GitHub Actions](https://github.com/ultraly

 Your journey with YOLOv5 doesn't have to be a solitary one. Join our vibrant community on [GitHub](https://github.com/ultralytics/yolov5), connect with professionals on [LinkedIn](https://www.linkedin.com/company/ultralytics/), share your results on [Twitter](https://twitter.com/ultralytics), and find educational resources on [YouTube](https://www.youtube.com/ultralytics?sub_confirmation=1). Follow us on [TikTok](https://www.tiktok.com/@ultralytics) and [BiliBili](https://ultralytics.com/bilibili) for more engaging content.

-Interested in contributing? We welcome contributions of all forms; from code improvements and bug reports to documentation updates. Check out our [contributing guidelines](../help/contributing.md/) for more information.
+Interested in contributing? We welcome contributions of all forms; from code improvements and bug reports to documentation updates. Check out our [contributing guidelines](../help/contributing.md) for more information.

 We're excited to see the innovative ways you'll use YOLOv5. Dive in, experiment, and revolutionize your computer vision projects! 🚀
@@ -29,7 +29,7 @@ After uploading data to Roboflow, you can label your data and review previous la

 ## Versioning

-You can make versions of your dataset with different preprocessing and offline augmentation options. YOLOv5 does online augmentations natively, so be intentional when layering Roboflow's offline augmentations on top.
+You can make versions of your dataset with different preprocessing and offline augmentation options. YOLOv5 does online augmentations natively, so be intentional when layering Roboflow offline augmentations on top.

 ![Roboflow augmentation](https://media.roboflow.com/augmentation.png?updatedAt=1717432132444)