diff --git a/docs/en/datasets/obb/dota-v2.md b/docs/en/datasets/obb/dota-v2.md
index c2f64e1a..a2c73947 100644
--- a/docs/en/datasets/obb/dota-v2.md
+++ b/docs/en/datasets/obb/dota-v2.md
@@ -31,7 +31,7 @@ keywords: DOTA dataset, object detection, aerial images, oriented bounding boxes
- Very small instances (less than 10 pixels) are also annotated.
- Addition of a new category: "container crane".
- A total of 403,318 instances.
-- Released for the DOAI Challenge 2019 on Object Detection in Aerial Images.
+- Released for the [DOAI Challenge 2019 on Object Detection in Aerial Images](https://captain-whu.github.io/DOAI2019/challenge.html).
### DOTA-v2.0
diff --git a/docs/en/guides/coral-edge-tpu-on-raspberry-pi.md b/docs/en/guides/coral-edge-tpu-on-raspberry-pi.md
index 87154838..a6f91db7 100644
--- a/docs/en/guides/coral-edge-tpu-on-raspberry-pi.md
+++ b/docs/en/guides/coral-edge-tpu-on-raspberry-pi.md
@@ -81,7 +81,7 @@ After installing the runtime, you need to plug in your Coral Edge TPU into a USB
sudo apt remove libedgetpu1-max
```
-## Export your model to a Edge TPU compatible model
+## Export to Edge TPU
To use the Edge TPU, you need to convert your model into a compatible format. It is recommended that you run the export on Google Colab, an x86_64 Linux machine, the official [Ultralytics Docker container](docker-quickstart.md), or [Ultralytics HUB](../hub/quickstart.md), since the Edge TPU compiler is not available on ARM. See [Export Mode](../modes/export.md) for the available arguments.
@@ -105,7 +105,7 @@ To use the Edge TPU, you need to convert your model into a compatible format. It
yolo export model=path/to/model.pt format=edgetpu # Export an official model or custom model
```
-The exported model will be saved in the `_saved_model/` folder with the name `_full_integer_quant_edgetpu.tflite`. It is important that your model ends with the suffix `_edgetpu.tflite`, otherwise ultralytics doesn't know that you're using a Edge TPU model.
+The exported model will be saved in the `_saved_model/` folder with the name `_full_integer_quant_edgetpu.tflite`. It is important that your model ends with the suffix `_edgetpu.tflite`; otherwise, Ultralytics won't know that you're using an Edge TPU model.
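+
+If you prefer Python over the CLI, a minimal export sketch looks like this (the model path below is a placeholder for your own weights):
+
+```python
+from ultralytics import YOLO
+
+# Load an official or custom-trained model
+model = YOLO("path/to/model.pt")
+
+# Export to an Edge TPU compatible TFLite model
+model.export(format="edgetpu")  # produces a '_full_integer_quant_edgetpu.tflite' file
+```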
## Running the model
diff --git a/docs/en/guides/model-deployment-options.md b/docs/en/guides/model-deployment-options.md
index 1b97e31e..2e7e9830 100644
--- a/docs/en/guides/model-deployment-options.md
+++ b/docs/en/guides/model-deployment-options.md
@@ -280,7 +280,7 @@ The following table provides a snapshot of the various deployment options availa
| TF Edge TPU | Optimized for Google's Edge TPU hardware | Exclusive to Edge TPU devices | Growing with Google and third-party resources | IoT devices requiring real-time processing | Improvements for new Edge TPU hardware | Google's robust IoT security | Custom-designed for Google Coral |
| TF.js | Reasonable in-browser performance | High with web technologies | Web and Node.js developers support | Interactive web applications | TensorFlow team and community contributions | Web platform security model | Enhanced with WebGL and other APIs |
| PaddlePaddle | Competitive, easy to use and scalable | Baidu ecosystem, wide application support | Rapidly growing, especially in China | Chinese market and language processing | Focus on Chinese AI applications | Emphasizes data privacy and security | Including Baidu's Kunlun chips |
-| MNN | High-performance for mobile devices. | Mobile and embedded ARM systems and X86-64 CPU | Mobile/embedded ML community | Moblile systems efficiency | High performance maintenance on Mobile Devices | On-device security advantages | ARM CPUs and GPUs optimizations |
+| MNN | High-performance for mobile devices | Mobile and embedded ARM systems and X86-64 CPU | Mobile/embedded ML community | Mobile systems efficiency | High performance maintenance on mobile devices | On-device security advantages | ARM CPUs and GPUs optimizations |
| NCNN | Optimized for mobile ARM-based devices | Mobile and embedded ARM systems | Niche but active mobile/embedded ML community | Android and ARM systems efficiency | High performance maintenance on ARM | On-device security advantages | ARM CPUs and GPUs optimizations |
This comparative analysis gives you a high-level overview. For deployment, it's essential to consider the specific requirements and constraints of your project, and consult the detailed documentation and resources available for each option.
diff --git a/docs/en/guides/model-testing.md b/docs/en/guides/model-testing.md
index b8bcb913..6f4a7795 100644
--- a/docs/en/guides/model-testing.md
+++ b/docs/en/guides/model-testing.md
@@ -81,7 +81,7 @@ Underfitting occurs when your model can't capture the underlying patterns in the
#### Signs of Underfitting
- **Low Training Accuracy:** If your model can't achieve high accuracy on the training set, it might be underfitting.
-- **Visual Mis-classification:** Consistent failure to recognize obvious features or objects suggests underfitting.
+- **Visual Misclassification:** Consistent failure to recognize obvious features or objects suggests underfitting.
### Balancing Overfitting and Underfitting
diff --git a/docs/en/guides/workouts-monitoring.md b/docs/en/guides/workouts-monitoring.md
index 02816e9c..e2ec839d 100644
--- a/docs/en/guides/workouts-monitoring.md
+++ b/docs/en/guides/workouts-monitoring.md
@@ -16,7 +16,7 @@ Monitoring workouts through pose estimation with [Ultralytics YOLO11](https://gi
allowfullscreen>
- Watch: Workouts Monitoring using Ultralytics YOLO11 | Pushups, Pullups, Ab Workouts
+ Watch: Workouts Monitoring using Ultralytics YOLO11 | Push-ups, Pull-ups, Ab Workouts
## Advantages of Workouts Monitoring?
@@ -111,7 +111,7 @@ Monitoring workouts through pose estimation with [Ultralytics YOLO11](https://gi
### How do I monitor my workouts using Ultralytics YOLO11?
-To monitor your workouts using Ultralytics YOLO11, you can utilize the pose estimation capabilities to track and analyze key body landmarks and joints in real-time. This allows you to receive instant feedback on your exercise form, count repetitions, and measure performance metrics. You can start by using the provided example code for pushups, pullups, or ab workouts as shown:
+To monitor your workouts using Ultralytics YOLO11, you can utilize the pose estimation capabilities to track and analyze key body landmarks and joints in real-time. This allows you to receive instant feedback on your exercise form, count repetitions, and measure performance metrics. You can start by using the provided example code for push-ups, pull-ups, or ab workouts as shown:
```python
import cv2
@@ -154,11 +154,11 @@ You can watch a [YouTube video demonstration](https://www.youtube.com/watch?v=LG
### How accurate is Ultralytics YOLO11 in detecting and tracking exercises?
-Ultralytics YOLO11 is highly accurate in detecting and tracking exercises due to its state-of-the-art pose estimation capabilities. It can accurately track key body landmarks and joints, providing real-time feedback on exercise form and performance metrics. The model's pretrained weights and robust architecture ensure high [precision](https://www.ultralytics.com/glossary/precision) and reliability. For real-world examples, check out the [real-world applications](#real-world-applications) section in the documentation, which showcases pushups and pullups counting.
+Ultralytics YOLO11 is highly accurate in detecting and tracking exercises due to its state-of-the-art pose estimation capabilities. It can accurately track key body landmarks and joints, providing real-time feedback on exercise form and performance metrics. The model's pretrained weights and robust architecture ensure high [precision](https://www.ultralytics.com/glossary/precision) and reliability. For real-world examples, check out the [real-world applications](#real-world-applications) section in the documentation, which showcases push-up and pull-up counting.
### Can I use Ultralytics YOLO11 for custom workout routines?
-Yes, Ultralytics YOLO11 can be adapted for custom workout routines. The `AIGym` class supports different pose types such as "pushup", "pullup", and "abworkout." You can specify keypoints and angles to detect specific exercises. Here is an example setup:
+Yes, Ultralytics YOLO11 can be adapted for custom workout routines. The `AIGym` class supports different pose types such as `pushup`, `pullup`, and `abworkout`. You can specify keypoints and angles to detect specific exercises. Here is an example setup:
```python
from ultralytics import solutions
diff --git a/docs/en/index.md b/docs/en/index.md
index 1d52a315..b31fa886 100644
--- a/docs/en/index.md
+++ b/docs/en/index.md
@@ -55,7 +55,7 @@ Explore the Ultralytics Docs, a comprehensive resource designed to help you unde
## Where to Start
-
+
- :material-clock-fast:{ .lg .middle } **Getting Started**
diff --git a/docs/en/integrations/roboflow.md b/docs/en/integrations/roboflow.md
index 92153885..5a9d5e31 100644
--- a/docs/en/integrations/roboflow.md
+++ b/docs/en/integrations/roboflow.md
@@ -161,7 +161,7 @@ You are now ready to train YOLO11 on a custom dataset. Follow this [written guid
## Upload Custom YOLO11 Model Weights for Testing and Deployment
-Roboflow offers an infinitely scalable API for deployed models and SDKs for use with NVIDIA Jetsons, Luxonis OAKs, Raspberry Pis, GPU-based devices, and more.
+Roboflow offers a scalable API for deployed models and SDKs for use with NVIDIA Jetson, Luxonis OAK, Raspberry Pi, GPU-based devices, and more.
You can deploy YOLO11 models by uploading YOLO11 weights to Roboflow. You can do this in a few lines of Python code. Create a new Python file and add the following code:
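+
+As a rough sketch of what that file can look like (the API key, workspace, project ID, version number, and `model_type` string below are placeholders; consult Roboflow's documentation for the exact values for your account):
+
+```python
+import roboflow
+
+# Authenticate and select the project version the weights were trained on
+rf = roboflow.Roboflow(api_key="YOUR_API_KEY")
+project = rf.workspace().project("PROJECT_ID")
+version = project.version(1)
+
+# Upload the trained weights for hosted deployment
+version.deploy(model_type="yolov11", model_path="runs/detect/train/")
+```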
diff --git a/docs/en/integrations/vscode.md b/docs/en/integrations/vscode.md
index faf8c893..aab7970f 100644
--- a/docs/en/integrations/vscode.md
+++ b/docs/en/integrations/vscode.md
@@ -259,7 +259,7 @@ Like any other VS Code extension, you can uninstall it by navigating to the Exte
[working with inference results]: ../modes/predict.md#working-with-results
[inference arguments]: ../modes/predict.md#inference-arguments
[Simple Utilities page]: ../usage/simple-utilities.md
-[Ultralytics Settings]: ../quickstart.md/#ultralytics-settings
+[Ultralytics Settings]: ../quickstart.md#ultralytics-settings
[quickstart]: ../quickstart.md
[Discord]: https://ultralytics.com/discord
[Discourse]: https://community.ultralytics.com
diff --git a/docs/en/macros/predict-args.md b/docs/en/macros/predict-args.md
index d9470259..8491f1ac 100644
--- a/docs/en/macros/predict-args.md
+++ b/docs/en/macros/predict-args.md
@@ -1,20 +1,20 @@
-| Argument | Type | Default | Description |
-| --------------- | ---------------- | ---------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `source` | `str` | `'ultralytics/assets'` | Specifies the data source for inference. Can be an image path, video file, directory, URL, or device ID for live feeds. Supports a wide range of formats and sources, enabling flexible application across [different types of input](/modes/predict.md/#inference-sources). |
-| `conf` | `float` | `0.25` | Sets the minimum confidence threshold for detections. Objects detected with confidence below this threshold will be disregarded. Adjusting this value can help reduce false positives. |
-| `iou` | `float` | `0.7` | [Intersection Over Union](https://www.ultralytics.com/glossary/intersection-over-union-iou) (IoU) threshold for Non-Maximum Suppression (NMS). Lower values result in fewer detections by eliminating overlapping boxes, useful for reducing duplicates. |
-| `imgsz` | `int` or `tuple` | `640` | Defines the image size for inference. Can be a single integer `640` for square resizing or a (height, width) tuple. Proper sizing can improve detection [accuracy](https://www.ultralytics.com/glossary/accuracy) and processing speed. |
-| `half` | `bool` | `False` | Enables half-[precision](https://www.ultralytics.com/glossary/precision) (FP16) inference, which can speed up model inference on supported GPUs with minimal impact on accuracy. |
-| `device` | `str` | `None` | Specifies the device for inference (e.g., `cpu`, `cuda:0` or `0`). Allows users to select between CPU, a specific GPU, or other compute devices for model execution. |
-| `batch` | `int` | `1` | Specifies the batch size for inference (only works when the source is [a directory, video file or `.txt` file](/modes/predict.md/#inference-sources)). A larger batch size can provide higher throughput, shortening the total amount of time required for inference. |
-| `max_det` | `int` | `300` | Maximum number of detections allowed per image. Limits the total number of objects the model can detect in a single inference, preventing excessive outputs in dense scenes. |
-| `vid_stride` | `int` | `1` | Frame stride for video inputs. Allows skipping frames in videos to speed up processing at the cost of temporal resolution. A value of 1 processes every frame, higher values skip frames. |
-| `stream_buffer` | `bool` | `False` | Determines whether to queue incoming frames for video streams. If `False`, old frames get dropped to accomodate new frames (optimized for real-time applications). If `True', queues new frames in a buffer, ensuring no frames get skipped, but will cause latency if inference FPS is lower than stream FPS. |
-| `visualize` | `bool` | `False` | Activates visualization of model features during inference, providing insights into what the model is "seeing". Useful for debugging and model interpretation. |
-| `augment` | `bool` | `False` | Enables test-time augmentation (TTA) for predictions, potentially improving detection robustness at the cost of inference speed. |
-| `agnostic_nms` | `bool` | `False` | Enables class-agnostic Non-Maximum Suppression (NMS), which merges overlapping boxes of different classes. Useful in multi-class detection scenarios where class overlap is common. |
-| `classes` | `list[int]` | `None` | Filters predictions to a set of class IDs. Only detections belonging to the specified classes will be returned. Useful for focusing on relevant objects in multi-class detection tasks. |
-| `retina_masks` | `bool` | `False` | Returns high-resolution segmentation masks. The returned masks (`masks.data`) will match the original image size if enabled. If disabled, they have the image size used during inference. |
-| `embed` | `list[int]` | `None` | Specifies the layers from which to extract feature vectors or [embeddings](https://www.ultralytics.com/glossary/embeddings). Useful for downstream tasks like clustering or similarity search. |
-| `project` | `str` | `None` | Name of the project directory where prediction outputs are saved if `save` is enabled. |
-| `name` | `str` | `None` | Name of the prediction run. Used for creating a subdirectory within the project folder, where prediction outputs are stored if `save` is enabled. |
+| Argument | Type | Default | Description |
+| --------------- | ---------------- | ---------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `source`        | `str`            | `'ultralytics/assets'` | Specifies the data source for inference. Can be an image path, video file, directory, URL, or device ID for live feeds. Supports a wide range of formats and sources, enabling flexible application across [different types of input](/modes/predict.md#inference-sources). |
+| `conf` | `float` | `0.25` | Sets the minimum confidence threshold for detections. Objects detected with confidence below this threshold will be disregarded. Adjusting this value can help reduce false positives. |
+| `iou` | `float` | `0.7` | [Intersection Over Union](https://www.ultralytics.com/glossary/intersection-over-union-iou) (IoU) threshold for Non-Maximum Suppression (NMS). Lower values result in fewer detections by eliminating overlapping boxes, useful for reducing duplicates. |
+| `imgsz` | `int` or `tuple` | `640` | Defines the image size for inference. Can be a single integer `640` for square resizing or a (height, width) tuple. Proper sizing can improve detection [accuracy](https://www.ultralytics.com/glossary/accuracy) and processing speed. |
+| `half` | `bool` | `False` | Enables half-[precision](https://www.ultralytics.com/glossary/precision) (FP16) inference, which can speed up model inference on supported GPUs with minimal impact on accuracy. |
+| `device` | `str` | `None` | Specifies the device for inference (e.g., `cpu`, `cuda:0` or `0`). Allows users to select between CPU, a specific GPU, or other compute devices for model execution. |
| `batch`         | `int`            | `1`                    | Specifies the batch size for inference (only works when the source is [a directory, video file or `.txt` file](/modes/predict.md#inference-sources)). A larger batch size can provide higher throughput, shortening the total amount of time required for inference. |
+| `max_det` | `int` | `300` | Maximum number of detections allowed per image. Limits the total number of objects the model can detect in a single inference, preventing excessive outputs in dense scenes. |
+| `vid_stride` | `int` | `1` | Frame stride for video inputs. Allows skipping frames in videos to speed up processing at the cost of temporal resolution. A value of 1 processes every frame, higher values skip frames. |
| `stream_buffer` | `bool`           | `False`                | Determines whether to queue incoming frames for video streams. If `False`, old frames get dropped to accommodate new frames (optimized for real-time applications). If `True`, queues new frames in a buffer, ensuring no frames get skipped, but will cause latency if inference FPS is lower than stream FPS. |
+| `visualize` | `bool` | `False` | Activates visualization of model features during inference, providing insights into what the model is "seeing". Useful for debugging and model interpretation. |
+| `augment` | `bool` | `False` | Enables test-time augmentation (TTA) for predictions, potentially improving detection robustness at the cost of inference speed. |
+| `agnostic_nms` | `bool` | `False` | Enables class-agnostic Non-Maximum Suppression (NMS), which merges overlapping boxes of different classes. Useful in multi-class detection scenarios where class overlap is common. |
+| `classes` | `list[int]` | `None` | Filters predictions to a set of class IDs. Only detections belonging to the specified classes will be returned. Useful for focusing on relevant objects in multi-class detection tasks. |
+| `retina_masks` | `bool` | `False` | Returns high-resolution segmentation masks. The returned masks (`masks.data`) will match the original image size if enabled. If disabled, they have the image size used during inference. |
+| `embed` | `list[int]` | `None` | Specifies the layers from which to extract feature vectors or [embeddings](https://www.ultralytics.com/glossary/embeddings). Useful for downstream tasks like clustering or similarity search. |
+| `project` | `str` | `None` | Name of the project directory where prediction outputs are saved if `save` is enabled. |
+| `name` | `str` | `None` | Name of the prediction run. Used for creating a subdirectory within the project folder, where prediction outputs are stored if `save` is enabled. |
diff --git a/docs/en/macros/validation-args.md b/docs/en/macros/validation-args.md
index c28a8e47..ab5014c0 100644
--- a/docs/en/macros/validation-args.md
+++ b/docs/en/macros/validation-args.md
@@ -15,4 +15,4 @@
| `rect` | `bool` | `True` | If `True`, uses rectangular inference for batching, reducing padding and potentially increasing speed and efficiency. |
| `split` | `str` | `val` | Determines the dataset split to use for validation (`val`, `test`, or `train`). Allows flexibility in choosing the data segment for performance evaluation. |
| `project` | `str` | `None` | Name of the project directory where validation outputs are saved. |
-| `name` | `str` | `None` | Name of the validation run. Used for creating a subdirectory within the project folder, where valdiation logs and outputs are stored. |
+| `name` | `str` | `None` | Name of the validation run. Used for creating a subdirectory within the project folder, where validation logs and outputs are stored. |
diff --git a/docs/en/usage/callbacks.md b/docs/en/usage/callbacks.md
index 57472143..12056395 100644
--- a/docs/en/usage/callbacks.md
+++ b/docs/en/usage/callbacks.md
@@ -134,14 +134,18 @@ Here's an example of how to freeze BatchNorm statistics when freezing layers wit
```python
from ultralytics import YOLO
+
# Add a callback to put the frozen layers in eval mode to prevent BN values from changing
def put_in_eval_mode(trainer):
- n_layers = trainer.args.freeze
- if not isinstance(n_layers, int): return
- for i, (name, module) in enumerate(trainer.model.named_modules()):
- if name.endswith("bn") and int(name.split('.')[1]) < n_layers:
- module.eval()
- module.track_running_stats = False
+ n_layers = trainer.args.freeze
+ if not isinstance(n_layers, int):
+ return
+
+ for name, module in trainer.model.named_modules():
+ if name.endswith("bn") and int(name.split(".")[1]) < n_layers:
+ module.eval()
+ module.track_running_stats = False
+
model = YOLO("yolo11n.pt")
model.add_callback("on_train_epoch_start", put_in_eval_mode)
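+
+# Train with an integer `freeze` count so the callback above takes effect (dataset and epoch values are illustrative)
+model.train(data="coco8.yaml", epochs=10, freeze=5)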
diff --git a/docs/en/yolov5/index.md b/docs/en/yolov5/index.md
index 0e071299..180bd25e 100644
--- a/docs/en/yolov5/index.md
+++ b/docs/en/yolov5/index.md
@@ -87,7 +87,7 @@ This badge indicates that all [YOLOv5 GitHub Actions](https://github.com/ultraly
Your journey with YOLOv5 doesn't have to be a solitary one. Join our vibrant community on [GitHub](https://github.com/ultralytics/yolov5), connect with professionals on [LinkedIn](https://www.linkedin.com/company/ultralytics/), share your results on [Twitter](https://twitter.com/ultralytics), and find educational resources on [YouTube](https://www.youtube.com/ultralytics?sub_confirmation=1). Follow us on [TikTok](https://www.tiktok.com/@ultralytics) and [BiliBili](https://ultralytics.com/bilibili) for more engaging content.
-Interested in contributing? We welcome contributions of all forms; from code improvements and bug reports to documentation updates. Check out our [contributing guidelines](../help/contributing.md/) for more information.
+Interested in contributing? We welcome contributions of all forms, from code improvements and bug reports to documentation updates. Check out our [contributing guidelines](../help/contributing.md) for more information.
We're excited to see the innovative ways you'll use YOLOv5. Dive in, experiment, and revolutionize your computer vision projects! 🚀
diff --git a/docs/en/yolov5/tutorials/roboflow_datasets_integration.md b/docs/en/yolov5/tutorials/roboflow_datasets_integration.md
index a6f70069..53f29d6f 100644
--- a/docs/en/yolov5/tutorials/roboflow_datasets_integration.md
+++ b/docs/en/yolov5/tutorials/roboflow_datasets_integration.md
@@ -29,7 +29,7 @@ After uploading data to Roboflow, you can label your data and review previous la
## Versioning
-You can make versions of your dataset with different preprocessing and offline augmentation options. YOLOv5 does online augmentations natively, so be intentional when layering Roboflow's offline augmentations on top.
+You can make versions of your dataset with different preprocessing and offline augmentation options. YOLOv5 does online augmentations natively, so be intentional when layering Roboflow offline augmentations on top.

diff --git a/tests/__init__.py b/tests/__init__.py
index ea8afff5..9ff563de 100644
--- a/tests/__init__.py
+++ b/tests/__init__.py
@@ -17,7 +17,6 @@ __all__ = (
"SOURCE",
"SOURCES_LIST",
"TMP",
- "IS_TMP_WRITEABLE",
"CUDA_IS_AVAILABLE",
"CUDA_DEVICE_COUNT",
)
diff --git a/ultralytics/cfg/__init__.py b/ultralytics/cfg/__init__.py
index ca35aff0..b69dfe8c 100644
--- a/ultralytics/cfg/__init__.py
+++ b/ultralytics/cfg/__init__.py
@@ -86,7 +86,7 @@ SOLUTIONS_HELP_MSG = f"""
yolo solutions count source="path/to/video/file.mp4" region=[(20, 400), (1080, 400), (1080, 360), (20, 360)]
2. Call heatmaps solution
- yolo solutions heatmap colormap=cv2.COLORMAP_PARAULA model=yolo11n.pt
+ yolo solutions heatmap colormap=cv2.COLORMAP_PARULA model=yolo11n.pt
3. Call queue management solution
yolo solutions queue region=[(20, 400), (1080, 400), (1080, 360), (20, 360)] model=yolo11n.pt
diff --git a/ultralytics/cfg/datasets/lvis.yaml b/ultralytics/cfg/datasets/lvis.yaml
index 9a79bde6..2c6851dd 100644
--- a/ultralytics/cfg/datasets/lvis.yaml
+++ b/ultralytics/cfg/datasets/lvis.yaml
@@ -11,7 +11,7 @@
path: ../datasets/lvis # dataset root dir
train: train.txt # train images (relative to 'path') 100170 images
val: val.txt # val images (relative to 'path') 19809 images
-minival: minival.txt # minval images (relative to 'path') 5000 images
+minival: minival.txt # minival images (relative to 'path') 5000 images
names:
0: aerosol can/spray can
diff --git a/ultralytics/cfg/solutions/default.yaml b/ultralytics/cfg/solutions/default.yaml
index 63d7dd77..165a07e3 100644
--- a/ultralytics/cfg/solutions/default.yaml
+++ b/ultralytics/cfg/solutions/default.yaml
@@ -12,7 +12,7 @@ colormap: # (int | str) colormap for heatmap, Only OPENCV supported colormaps c
# Workouts monitoring settings -----------------------------------------------------------------------------------------
up_angle: 145.0 # (float) Workouts up_angle for counts, 145.0 is default value.
down_angle: 90 # (float) Workouts down_angle for counts, 90 is default value.
-kpts: [6, 8, 10] # (list[int]) keypoints for workouts monitoring, i.e. for pushups kpts have values of [6, 8, 10].
+kpts: [6, 8, 10] # (list[int]) keypoints for workouts monitoring, i.e. for push-ups kpts have values of [6, 8, 10].
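+# Keypoint indices follow the COCO order, so [6, 8, 10] are the right shoulder, elbow and wrist.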
# Analytics settings ---------------------------------------------------------------------------------------------------
analytics_type: "line" # (str) analytics type i.e "line", "pie", "bar" or "area" charts.
diff --git a/ultralytics/data/augment.py b/ultralytics/data/augment.py
index 5ec011d8..705b5c46 100644
--- a/ultralytics/data/augment.py
+++ b/ultralytics/data/augment.py
@@ -441,7 +441,8 @@ class BaseMixTransform:
"""
raise NotImplementedError
- def _update_label_text(self, labels):
+ @staticmethod
+ def _update_label_text(labels):
"""
Updates label text and class IDs for mixed labels in image augmentation.
@@ -1259,7 +1260,8 @@ class RandomPerspective:
labels["resized_shape"] = img.shape[:2]
return labels
- def box_candidates(self, box1, box2, wh_thr=2, ar_thr=100, area_thr=0.1, eps=1e-16):
+ @staticmethod
+ def box_candidates(box1, box2, wh_thr=2, ar_thr=100, area_thr=0.1, eps=1e-16):
"""
Compute candidate boxes for further processing based on size and aspect ratio criteria.
@@ -1598,7 +1600,8 @@ class LetterBox:
else:
return img
- def _update_labels(self, labels, ratio, padw, padh):
+ @staticmethod
+ def _update_labels(labels, ratio, padw, padh):
"""
Updates labels after applying letterboxing to an image.
diff --git a/ultralytics/data/dataset.py b/ultralytics/data/dataset.py
index 50477a4c..f3be2305 100644
--- a/ultralytics/data/dataset.py
+++ b/ultralytics/data/dataset.py
@@ -68,7 +68,7 @@ class YOLODataset(BaseDataset):
Cache dataset labels, check images and read shapes.
Args:
- path (Path): Path where to save the cache file. Default is Path('./labels.cache').
+ path (Path): Path where to save the cache file. Default is Path("./labels.cache").
Returns:
(dict): labels.
@@ -219,7 +219,7 @@ class YOLODataset(BaseDataset):
segment_resamples = 100 if self.use_obb else 1000
if len(segments) > 0:
# make sure segments interpolate correctly if original length is greater than segment_resamples
- max_len = max([len(s) for s in segments])
+ max_len = max(len(s) for s in segments)
segment_resamples = (max_len + 1) if segment_resamples < max_len else segment_resamples
# list[np.array(segment_resamples, 2)] * num_samples
segments = np.stack(resample_segments(segments, n=segment_resamples), axis=0)
diff --git a/ultralytics/data/scripts/download_weights.sh b/ultralytics/data/scripts/download_weights.sh
index 87db31fe..f8a739f6 100755
--- a/ultralytics/data/scripts/download_weights.sh
+++ b/ultralytics/data/scripts/download_weights.sh
@@ -11,8 +11,8 @@
python - <<EOF
diff --git a/ultralytics/engine/exporter.py b/ultralytics/engine/exporter.py
--- a/ultralytics/engine/exporter.py
+++ b/ultralytics/engine/exporter.py
if is_trt10 and workspace > 0:
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, workspace)
- elif workspace > 0 and not is_trt10: # TensorRT versions 7, 8
+ elif workspace > 0: # TensorRT versions 7, 8
config.max_workspace_size = workspace
flag = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
network = builder.create_network(flag)
diff --git a/ultralytics/engine/model.py b/ultralytics/engine/model.py
index d5d4db26..a357c5d6 100644
--- a/ultralytics/engine/model.py
+++ b/ultralytics/engine/model.py
@@ -1170,6 +1170,4 @@ class Model(nn.Module):
>>> print(model.stride)
>>> print(model.task)
"""
- if name == "model":
- return self._modules["model"]
- return getattr(self.model, name)
+ return self._modules["model"] if name == "model" else getattr(self.model, name)
diff --git a/ultralytics/engine/validator.py b/ultralytics/engine/validator.py
index 5f5268ea..1f8b32b0 100644
--- a/ultralytics/engine/validator.py
+++ b/ultralytics/engine/validator.py
@@ -245,7 +245,7 @@ class BaseValidator:
cost_matrix = iou * (iou >= threshold)
if cost_matrix.any():
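+ # Hungarian assignment: maximize total IoU across matched label-detection pairs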
- labels_idx, detections_idx = scipy.optimize.linear_sum_assignment(cost_matrix)
+ labels_idx, detections_idx = scipy.optimize.linear_sum_assignment(cost_matrix, maximize=True)
valid = cost_matrix[labels_idx, detections_idx] > 0
if valid.any():
correct[detections_idx[valid], i] = True
diff --git a/ultralytics/models/sam/modules/tiny_encoder.py b/ultralytics/models/sam/modules/tiny_encoder.py
index d036ab98..b347c328 100644
--- a/ultralytics/models/sam/modules/tiny_encoder.py
+++ b/ultralytics/models/sam/modules/tiny_encoder.py
@@ -955,7 +955,8 @@ class TinyViT(nn.Module):
self.apply(_check_lr_scale)
- def _init_weights(self, m):
+ @staticmethod
+ def _init_weights(m):
"""Initializes weights for linear and normalization layers in the TinyViT model."""
if isinstance(m, nn.Linear):
# NOTE: This initialization is needed only for training.
diff --git a/ultralytics/models/sam/predict.py b/ultralytics/models/sam/predict.py
index b657ef70..4f237751 100644
--- a/ultralytics/models/sam/predict.py
+++ b/ultralytics/models/sam/predict.py
@@ -1377,7 +1377,7 @@ class SAM2VideoPredictor(SAM2Predictor):
if "maskmem_pos_enc" not in model_constants:
assert isinstance(out_maskmem_pos_enc, list)
# only take the slice for one object, since it's same across objects
- maskmem_pos_enc = [x[0:1].clone() for x in out_maskmem_pos_enc]
+ maskmem_pos_enc = [x[:1].clone() for x in out_maskmem_pos_enc]
model_constants["maskmem_pos_enc"] = maskmem_pos_enc
else:
maskmem_pos_enc = model_constants["maskmem_pos_enc"]
diff --git a/ultralytics/nn/autobackend.py b/ultralytics/nn/autobackend.py
index b6df3753..a2a7816a 100644
--- a/ultralytics/nn/autobackend.py
+++ b/ultralytics/nn/autobackend.py
@@ -429,10 +429,7 @@ class AutoBackend(nn.Module):
import MNN
- config = {}
- config["precision"] = "low"
- config["backend"] = "CPU"
- config["numThread"] = (os.cpu_count() + 1) // 2
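+ # Configure the MNN runtime: CPU backend, low precision, and roughly half of the available CPU threads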
+ config = {"precision": "low", "backend": "CPU", "numThread": (os.cpu_count() + 1) // 2}
rt = MNN.nn.create_runtime_manager((config,))
net = MNN.nn.load_module_from_file(w, [], [], runtime_manager=rt, rearrange=True)
diff --git a/ultralytics/solutions/streamlit_inference.py b/ultralytics/solutions/streamlit_inference.py
index bc452f53..e07059bd 100644
--- a/ultralytics/solutions/streamlit_inference.py
+++ b/ultralytics/solutions/streamlit_inference.py
@@ -181,12 +181,8 @@ class Inference:
if __name__ == "__main__":
import sys # Import the sys module for accessing command-line arguments
- model = None # Initialize the model variable as None
-
# Check if a model name is provided as a command-line argument
args = len(sys.argv)
- if args > 1:
- model = sys.argv[1] # Assign the first argument as the model name
-
+ model = sys.argv[1] if args > 1 else None
# Create an instance of the Inference class and run inference
Inference(model=model).inference()
diff --git a/ultralytics/utils/benchmarks.py b/ultralytics/utils/benchmarks.py
index e5a6c22a..2d33704a 100644
--- a/ultralytics/utils/benchmarks.py
+++ b/ultralytics/utils/benchmarks.py
@@ -440,7 +440,8 @@ class ProfileModels:
print(f"Profiling: {sorted(files)}")
return [Path(file) for file in sorted(files)]
- def get_onnx_model_info(self, onnx_file: str):
+ @staticmethod
+ def get_onnx_model_info(onnx_file: str):
"""Extracts metadata from an ONNX model file including parameters, GFLOPs, and input shape."""
return 0.0, 0.0, 0.0, 0.0 # return (num_layers, num_params, num_gradients, num_flops)
diff --git a/ultralytics/utils/downloads.py b/ultralytics/utils/downloads.py
index be182f40..555fbaf5 100644
--- a/ultralytics/utils/downloads.py
+++ b/ultralytics/utils/downloads.py
@@ -138,7 +138,7 @@ def unzip_file(file, path=None, exclude=(".DS_Store", "__MACOSX"), exist_ok=Fals
If a path is not provided, the function will use the parent directory of the zipfile as the default path.
Args:
- file (str): The path to the zipfile to be extracted.
+ file (str | Path): The path to the zipfile to be extracted.
path (str, optional): The path to extract the zipfile to. Defaults to None.
exclude (tuple, optional): A tuple of filename strings to be excluded. Defaults to ('.DS_Store', '__MACOSX').
exist_ok (bool, optional): Whether to overwrite existing contents if they exist. Defaults to False.
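+
+ Examples:
+ ```python
+ from ultralytics.utils.downloads import unzip_file
+
+ # Illustrative usage; the path is a placeholder
+ directory = unzip_file("path/to/archive.zip")  # extracts into the zipfile's parent directory by default
+ ```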
diff --git a/ultralytics/utils/instance.py b/ultralytics/utils/instance.py
index 58fc2db4..aac23d23 100644
--- a/ultralytics/utils/instance.py
+++ b/ultralytics/utils/instance.py
@@ -28,7 +28,7 @@ to_4tuple = _ntuple(4)
# `ltwh` means left top and width, height(COCO format)
_formats = ["xyxy", "xywh", "ltwh"]
-__all__ = ("Bboxes",) # tuple or list
+__all__ = ("Bboxes", "Instances") # tuple or list
class Bboxes:
diff --git a/ultralytics/utils/plotting.py b/ultralytics/utils/plotting.py
index 3943a87e..2eab071b 100644
--- a/ultralytics/utils/plotting.py
+++ b/ultralytics/utils/plotting.py
@@ -545,7 +545,8 @@ class Annotator:
"""Save the annotated image to 'filename'."""
cv2.imwrite(filename, np.asarray(self.im))
- def get_bbox_dimension(self, bbox=None):
+ @staticmethod
+ def get_bbox_dimension(bbox=None):
"""
Calculate the area of a bounding box.