PyCharm Docs Inspect fixes (#18432)

Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Authored by Muhammad Rizwan Munawar on 2024-12-29 23:19:05 +05:00, committed by GitHub
parent ef16c56c99
commit f7b9009c91
15 changed files with 52 additions and 50 deletions


@@ -37,6 +37,7 @@ The Fashion-MNIST dataset is split into two subsets:
Each training and test example is assigned to one of the following labels:
+```
0. T-shirt/top
1. Trouser
2. Pullover
@@ -47,6 +48,7 @@ Each training and test example is assigned to one of the following labels:
7. Sneaker
8. Bag
9. Ankle boot
+```
## Applications


@@ -209,7 +209,7 @@ Yes, you can export your Ultralytics YOLO11 model to be compatible with the Cora
For more information, refer to the [Export Mode](../modes/export.md) documentation.
-### What should I do if TensorFlow is already installed on my Raspberry Pi but I want to use tflite-runtime instead?
+### What should I do if TensorFlow is already installed on my Raspberry Pi, but I want to use tflite-runtime instead?
If you have TensorFlow installed on your Raspberry Pi and need to switch to `tflite-runtime`, you'll need to uninstall TensorFlow first using:
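For reference, a hedged sketch of the switch described above (the exact package names, such as `tensorflow-aarch64`, depend on how TensorFlow was installed on the device):

```bash
# Remove the existing TensorFlow packages first (names may vary by platform)
pip uninstall -y tensorflow tensorflow-aarch64

# Then install the lightweight TFLite runtime in its place
pip install tflite-runtime
```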


@@ -31,7 +31,7 @@ Evaluating how well a model performs helps us understand how effectively it work
The confidence score represents the model's certainty that a detected object belongs to a particular class. It ranges from 0 to 1, with higher scores indicating greater confidence. The confidence score helps filter predictions; only detections with confidence scores above a specified threshold are considered valid.
-_Quick Tip:_ When running inferences, if you aren't seeing any predictions and you've checked everything else, try lowering the confidence score. Sometimes, the threshold is too high, causing the model to ignore valid predictions. Lowering the score allows the model to consider more possibilities. This might not meet your project goals, but it's a good way to see what the model can do and decide how to fine-tune it.
+_Quick Tip:_ When running inferences, if you aren't seeing any predictions, and you've checked everything else, try lowering the confidence score. Sometimes, the threshold is too high, causing the model to ignore valid predictions. Lowering the score allows the model to consider more possibilities. This might not meet your project goals, but it's a good way to see what the model can do and decide how to fine-tune it.
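In Ultralytics YOLO this tip maps to the `conf` argument at predict time; a minimal sketch (model and image paths are illustrative):

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")

# Lower the confidence threshold from the default 0.25 to surface more candidate detections
results = model("path/to/image.jpg", conf=0.1)
```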
### Intersection over Union


@@ -23,7 +23,7 @@ Regular model monitoring helps developers track the [model's performance](./mode
Here are some best practices to keep in mind while monitoring your computer vision model in production:
- **Track Performance Regularly**: Continuously monitor the model's performance to detect changes over time.
-- **Double Check the Data Quality**: Check for missing values or anomalies in the data.
+- **Double-Check the Data Quality**: Check for missing values or anomalies in the data.
- **Use Diverse Data Sources**: Monitor data from various sources to get a comprehensive view of the model's performance.
- **Combine Monitoring Techniques**: Use a mix of drift detection algorithms and rule-based approaches to identify a wide range of issues.
- **Monitor Inputs and Outputs**: Keep an eye on both the data the model processes and the results it produces to make sure everything is functioning correctly.
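The "Combine Monitoring Techniques" bullet above pairs statistical drift detection with simple rules; a minimal, hypothetical sketch of the rule-based half (the threshold value and function name are illustrative, not from any library):

```python
from statistics import mean

CONF_ALERT_THRESHOLD = 0.4  # illustrative value; tune per project


def confidence_dropped(recent_confidences: list[float]) -> bool:
    """Flag a potential issue when average detection confidence falls below the threshold."""
    return mean(recent_confidences) < CONF_ALERT_THRESHOLD


if confidence_dropped([0.35, 0.42, 0.38]):
    print("Alert: average confidence dropped, inspect recent input data")
```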


@@ -195,22 +195,22 @@ Performing inference with a trained Ultralytics YOLO model is straightforward:
1. Load the Model:

    ```python
    from ultralytics import YOLO

    model = YOLO("path/to/your/model.pt")
    ```

2. Run Inference:

    ```python
    results = model("path/to/image.jpg")

    for r in results:
        print(r.boxes)  # print bounding box predictions
        print(r.masks)  # print mask predictions
        print(r.probs)  # print class probabilities
    ```
For advanced inference techniques, including batch processing, video inference, and custom preprocessing, refer to the detailed [prediction guide](https://docs.ultralytics.com/modes/predict/).


@@ -12,7 +12,7 @@ You can train [Ultralytics YOLO11 models](https://github.com/ultralytics/ultraly
## What is IBM Watsonx?
-[Watsonx](https://www.ibm.com/watsonx) is IBM's cloud-based platform designed for commercial [generative AI](https://www.ultralytics.com/glossary/generative-ai) and scientific data. IBM Watsonx's three components - watsonx.ai, watsonx.data, and watsonx.governance - come together to create an end-to-end, trustworthy AI platform that can accelerate AI projects aimed at solving business problems. It provides powerful tools for building, training, and [deploying machine learning models](../guides/model-deployment-options.md) and makes it easy to connect with various data sources.
+[Watsonx](https://www.ibm.com/watsonx) is IBM's cloud-based platform designed for commercial [generative AI](https://www.ultralytics.com/glossary/generative-ai) and scientific data. IBM Watsonx's three components - `watsonx.ai`, `watsonx.data`, and `watsonx.governance` - come together to create an end-to-end, trustworthy AI platform that can accelerate AI projects aimed at solving business problems. It provides powerful tools for building, training, and [deploying machine learning models](../guides/model-deployment-options.md) and makes it easy to connect with various data sources.
<p align="center">
<img width="800" src="https://github.com/ultralytics/docs/releases/download/0/overview-of-ibm-watsonx.avif" alt="Overview of IBM Watsonx">
@@ -22,7 +22,7 @@ Its user-friendly interface and collaborative capabilities streamline the develo
## Key Features of IBM Watsonx
-IBM Watsonx is made of three main components: watsonx.ai, watsonx.data, and watsonx.governance. Each component offers features that cater to different aspects of AI and data management. Let's take a closer look at them.
+IBM Watsonx is made of three main components: `watsonx.ai`, `watsonx.data`, and `watsonx.governance`. Each component offers features that cater to different aspects of AI and data management. Let's take a closer look at them.
### [Watsonx.ai](https://www.ibm.com/products/watsonx-ai)


@@ -62,7 +62,7 @@ Next, let's understand the features Kaggle offers that make it an excellent plat
- **Datasets**: Kaggle hosts a massive collection of datasets on various topics. You can easily search and use these datasets in your projects, which is particularly handy for training and testing your YOLO11 models.
- **Competitions**: Known for its exciting competitions, Kaggle allows data scientists and machine learning enthusiasts to solve real-world problems. Competing helps you improve your skills, learn new techniques, and gain recognition in the community.
- **Free Access to TPUs**: Kaggle provides free access to powerful TPUs, which are essential for training complex machine learning models. This means you can speed up processing and boost the performance of your YOLO11 projects without incurring extra costs.
-- **Integration with Github**: Kaggle allows you to easily connect your GitHub repository to upload notebooks and save your work. This integration makes it convenient to manage and access your files.
+- **Integration with GitHub**: Kaggle allows you to easily connect your GitHub repository to upload notebooks and save your work. This integration makes it convenient to manage and access your files.
- **Community and Discussions**: Kaggle boasts a strong community of data scientists and machine learning practitioners. The discussion forums and shared notebooks are fantastic resources for learning and troubleshooting. You can easily find help, share your knowledge, and collaborate with others.
## Why Should You Use Kaggle for Your YOLO11 Projects?
@@ -81,7 +81,7 @@ If you want to learn more about Kaggle, here are some helpful resources to guide
- [**Kaggle Learn**](https://www.kaggle.com/learn): Discover a variety of free, interactive tutorials on Kaggle Learn. These courses cover essential data science topics and provide hands-on experience to help you master new skills.
- [**Getting Started with Kaggle**](https://www.kaggle.com/code/alexisbcook/getting-started-with-kaggle): This comprehensive guide walks you through the basics of using Kaggle, from joining competitions to creating your first notebook. It's a great starting point for newcomers.
-- [**Kaggle Medium Page**](https://medium.com/@kaggleteam): Explore tutorials, updates, and community contributions on Kaggle's Medium page. It's an excellent source for staying up-to-date with the latest trends and gaining deeper insights into data science.
+- [**Kaggle Medium Page**](https://medium.com/@kaggleteam): Explore tutorials, updates, and community contributions to Kaggle's Medium page. It's an excellent source for staying up-to-date with the latest trends and gaining deeper insights into data science.
## Summary


@@ -101,7 +101,7 @@ For more details about supported export options, visit the [Ultralytics document
## Deploying Exported YOLO11 NCNN Models
-After successfully exporting your Ultralytics YOLO11 models to NCNN format, you can now deploy them. The primary and recommended first step for running a NCNN model is to utilize the YOLO("./model_ncnn_model") method, as outlined in the previous usage code snippet. However, for in-depth instructions on deploying your NCNN models in various other settings, take a look at the following resources:
+After successfully exporting your Ultralytics YOLO11 models to NCNN format, you can now deploy them. The primary and recommended first step for running a NCNN model is to utilize the YOLO("yolo11n_ncnn_model/") method, as outlined in the previous usage code snippet. However, for in-depth instructions on deploying your NCNN models in various other settings, take a look at the following resources:
- **[Android](https://github.com/Tencent/ncnn/wiki/how-to-build#build-for-android)**: This blog explains how to use NCNN models for performing tasks like [object detection](https://www.ultralytics.com/glossary/object-detection) through Android applications.
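For orientation on the path changed above, loading an exported NCNN model in Python follows the usual Ultralytics pattern; a short sketch (the directory and image names are illustrative):

```python
from ultralytics import YOLO

# Load the model directory produced by `yolo export format=ncnn`
ncnn_model = YOLO("yolo11n_ncnn_model")

# Run inference on a sample image
results = ncnn_model("https://ultralytics.com/images/bus.jpg")
```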


@@ -168,26 +168,26 @@ To integrate Weights & Biases with Ultralytics YOLO11:
1. Install the required packages:

    ```bash
    pip install -U ultralytics wandb
    ```

2. Log in to your Weights & Biases account:

    ```python
    import wandb

    wandb.login(key="<API_KEY>")
    ```

3. Train your YOLO11 model with W&B logging enabled:

    ```python
    from ultralytics import YOLO

    model = YOLO("yolo11n.pt")
    model.train(data="coco8.yaml", epochs=5, project="ultralytics", name="yolo11n")
    ```
This will automatically log metrics, hyperparameters, and model artifacts to your W&B project.


@@ -32,7 +32,7 @@ Dataset annotation is a very resource intensive and time-consuming process. If y
```{ .py .annotate }
from ultralytics.data.annotator import auto_annotate
-auto_annotate( # (1)!
+auto_annotate(
    data="path/to/new/data",
    det_model="yolo11n.pt",
    sam_model="mobile_sam.pt",
@@ -41,17 +41,16 @@ auto_annotate( # (1)!
)
```
-1. Nothing returns from this function
+This function does not return any value. For further details on how the function operates:
- [See the reference section for `annotator.auto_annotate`](../reference/data/annotator.md#ultralytics.data.annotator.auto_annotate) for more insight on how the function operates.
- Use in combination with the [function `segments2boxes`](#convert-segments-to-bounding-boxes) to generate object detection bounding boxes as well
### Convert Segmentation Masks into YOLO Format
![Segmentation Masks to YOLO Format](https://github.com/ultralytics/docs/releases/download/0/segmentation-masks-to-yolo-format.avif)
-Use to convert a dataset of segmentation mask images to the `YOLO` segmentation format.
+Use to convert a dataset of segmentation mask images to the [`YOLO`](../models/yolo11.md) segmentation format.
This function takes the directory containing the binary format mask images and converts them into YOLO segmentation format.
The converted masks will be saved in the specified output directory.
@@ -59,7 +58,8 @@ The converted masks will be saved in the specified output directory.
```python
from ultralytics.data.converter import convert_segment_masks_to_yolo_seg
-# The classes here is the total classes in the dataset, for COCO dataset we have 80 classes
+# The classes here is the total classes in the dataset.
+# for COCO dataset we have 80 classes.
convert_segment_masks_to_yolo_seg(masks_dir="path/to/masks_dir", output_dir="path/to/output_dir", classes=80)
```


@@ -1,6 +1,6 @@
# YOLOv8/YOLOv5 Inference C++
-This example demonstrates how to perform inference using YOLOv8 and YOLOv5 models in C++ with OpenCV's DNN API.
+This example demonstrates how to perform inference using YOLOv8 and YOLOv5 models in C++ with OpenCV DNN API.
## Usage
@@ -27,13 +27,13 @@ make
To export YOLOv8 models:
-```commandline
+```bash
yolo export model=yolov8s.pt imgsz=480,640 format=onnx opset=12
```
To export YOLOv5 models:
-```commandline
+```bash
python3 export.py --weights yolov5s.pt --img 480 640 --include onnx --opset 12
```
@@ -45,6 +45,6 @@ yolov5s.onnx:
![image](https://user-images.githubusercontent.com/40023722/217357005-07464492-d1da-42e3-98a7-fc753f87d5e6.png)
-This repository utilizes OpenCV's DNN API to run ONNX exported models of YOLOv5 and YOLOv8. In theory, it should work for YOLOv6 and YOLOv7 as well, but they have not been tested. Note that the example networks are exported with rectangular (640x480) resolutions, but any exported resolution will work. You may want to use the letterbox approach for square images, depending on your use case.
+This repository utilizes OpenCV DNN API to run ONNX exported models of YOLOv5 and YOLOv8. In theory, it should work for YOLOv6 and YOLOv7 as well, but they have not been tested. Note that the example networks are exported with rectangular (640x480) resolutions, but any exported resolution will work. You may want to use the letterbox approach for square images, depending on your use case.
The **main** branch version uses Qt as a GUI wrapper. The primary focus here is the **Inference** class file, which demonstrates how to transpose YOLOv8 models to work as YOLOv5 models.
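The letterbox approach mentioned above resizes an image to fit the target shape while preserving aspect ratio, padding the remainder; a minimal Python sketch (the function name, default shape, and the common 114-gray fill color are illustrative):

```python
import cv2
import numpy as np


def letterbox(img: np.ndarray, new_shape=(640, 640), color=(114, 114, 114)) -> np.ndarray:
    """Resize img to fit new_shape (h, w), padding the borders to keep aspect ratio."""
    h, w = img.shape[:2]
    scale = min(new_shape[0] / h, new_shape[1] / w)
    resized = cv2.resize(img, (int(round(w * scale)), int(round(h * scale))))
    pad_h = new_shape[0] - resized.shape[0]
    pad_w = new_shape[1] - resized.shape[1]
    top, left = pad_h // 2, pad_w // 2
    return cv2.copyMakeBorder(resized, top, pad_h - top, left, pad_w - left, cv2.BORDER_CONSTANT, value=color)
```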


@@ -30,6 +30,6 @@ make
To export YOLOv8 models:
-```commandline
+```bash
yolo export model=yolov8s.pt imgsz=640 format=torchscript
```


@@ -50,7 +50,7 @@ Once built, you can run inference on an image using the following command:
To use your YOLOv8 model with OpenVINO, you need to export it first. Use the command below to export the model:
-```commandline
+```bash
yolo export model=yolov8s.pt imgsz=640 format=openvino
```


@@ -91,9 +91,9 @@ class Predictor(BasePredictor):
_callbacks (Dict | None): Dictionary of callback functions to customize behavior.
Examples:
->>> predictor = Predictor(cfg=DEFAULT_CFG)
->>> predictor = Predictor(overrides={"imgsz": 640})
->>> predictor = Predictor(_callbacks={"on_predict_start": custom_callback})
+>>> predictor_example = Predictor(cfg=DEFAULT_CFG)
+>>> predictor_example_with_imgsz = Predictor(overrides={"imgsz": 640})
+>>> predictor_example_with_callback = Predictor(_callbacks={"on_predict_start": custom_callback})
"""
if overrides is None:
    overrides = {}
@@ -215,7 +215,7 @@ class Predictor(BasePredictor):
im (torch.Tensor): Preprocessed input image tensor with shape (N, C, H, W).
bboxes (np.ndarray | List | None): Bounding boxes in XYXY format with shape (N, 4).
points (np.ndarray | List | None): Points indicating object locations with shape (N, 2) or (N, num_points, 2), in pixels.
-labels (np.ndarray | List | None): Point prompt labels with shape (N,) or (N, num_points). 1 for foreground, 0 for background.
+labels (np.ndarray | List | None): Point prompt labels with shape (N) or (N, num_points). 1 for foreground, 0 for background.
masks (np.ndarray | None): Low-res masks from previous predictions with shape (N, H, W). For SAM, H=W=256.
multimask_output (bool): Flag to return multiple masks for ambiguous prompts.
@@ -260,7 +260,7 @@ class Predictor(BasePredictor):
dst_shape (tuple): The target shape (height, width) for the prompts.
bboxes (np.ndarray | List | None): Bounding boxes in XYXY format with shape (N, 4).
points (np.ndarray | List | None): Points indicating object locations with shape (N, 2) or (N, num_points, 2), in pixels.
-labels (np.ndarray | List | None): Point prompt labels with shape (N,) or (N, num_points). 1 for foreground, 0 for background.
+labels (np.ndarray | List | None): Point prompt labels with shape (N) or (N, num_points). 1 for foreground, 0 for background.
masks (List | np.ndarray, Optional): Masks for the objects, where each mask is a 2D array.
Raises:
@@ -853,8 +853,8 @@ class SAM2VideoPredictor(SAM2Predictor):
Examples:
>>> predictor = SAM2VideoPredictor(cfg=DEFAULT_CFG)
->>> predictor = SAM2VideoPredictor(overrides={"imgsz": 640})
->>> predictor = SAM2VideoPredictor(_callbacks={"on_predict_start": custom_callback})
+>>> predictor_example_with_imgsz = SAM2VideoPredictor(overrides={"imgsz": 640})
+>>> predictor_example_with_callback = SAM2VideoPredictor(_callbacks={"on_predict_start": custom_callback})
"""
super().__init__(cfg, overrides, _callbacks)
self.inference_state = {}
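As background for the prompt shapes documented in the docstrings above, a hedged usage sketch of point prompts through the Ultralytics SAM interface (the checkpoint and image names are illustrative):

```python
from ultralytics import SAM

# Load a SAM 2 checkpoint; the filename is illustrative
model = SAM("sam2_b.pt")

# One foreground point prompt: points has shape (N, 2), labels has shape (N)
results = model("path/to/image.jpg", points=[[450, 370]], labels=[1])
```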


@@ -1269,7 +1269,7 @@ def plt_color_scatter(v, f, bins=20, cmap="viridis", alpha=0.8, edgecolors="none
def plot_tune_results(csv_file="tune_results.csv"):
"""
-Plot the evolution results stored in an 'tune_results.csv' file. The function generates a scatter plot for each key
+Plot the evolution results stored in a 'tune_results.csv' file. The function generates a scatter plot for each key
in the CSV, color-coded based on fitness scores. The best-performing configurations are highlighted on the plots.
Args: