diff --git a/docs/en/datasets/classify/fashion-mnist.md b/docs/en/datasets/classify/fashion-mnist.md
index f9d61fd0..6c49ceeb 100644
--- a/docs/en/datasets/classify/fashion-mnist.md
+++ b/docs/en/datasets/classify/fashion-mnist.md
@@ -37,6 +37,7 @@ The Fashion-MNIST dataset is split into two subsets:
 
 Each training and test example is assigned to one of the following labels:
 
+```
 0. T-shirt/top
 1. Trouser
 2. Pullover
@@ -47,6 +48,7 @@ Each training and test example is assigned to one of the following labels:
 7. Sneaker
 8. Bag
 9. Ankle boot
+```
 
 ## Applications
 
diff --git a/docs/en/guides/coral-edge-tpu-on-raspberry-pi.md b/docs/en/guides/coral-edge-tpu-on-raspberry-pi.md
index a6f91db7..ff850c08 100644
--- a/docs/en/guides/coral-edge-tpu-on-raspberry-pi.md
+++ b/docs/en/guides/coral-edge-tpu-on-raspberry-pi.md
@@ -209,7 +209,7 @@ Yes, you can export your Ultralytics YOLO11 model to be compatible with the Cora
 
 For more information, refer to the [Export Mode](../modes/export.md) documentation.
 
-### What should I do if TensorFlow is already installed on my Raspberry Pi but I want to use tflite-runtime instead?
+### What should I do if TensorFlow is already installed on my Raspberry Pi, but I want to use tflite-runtime instead?
 
 If you have TensorFlow installed on your Raspberry Pi and need to switch to `tflite-runtime`, you'll need to uninstall TensorFlow first using:
 
diff --git a/docs/en/guides/model-evaluation-insights.md b/docs/en/guides/model-evaluation-insights.md
index 5bd8bede..5b16a99b 100644
--- a/docs/en/guides/model-evaluation-insights.md
+++ b/docs/en/guides/model-evaluation-insights.md
@@ -31,7 +31,7 @@ Evaluating how well a model performs helps us understand how effectively it work
 
 The confidence score represents the model's certainty that a detected object belongs to a particular class. It ranges from 0 to 1, with higher scores indicating greater confidence. The confidence score helps filter predictions; only detections with confidence scores above a specified threshold are considered valid.
 
-_Quick Tip:_ When running inferences, if you aren't seeing any predictions and you've checked everything else, try lowering the confidence score. Sometimes, the threshold is too high, causing the model to ignore valid predictions. Lowering the score allows the model to consider more possibilities. This might not meet your project goals, but it's a good way to see what the model can do and decide how to fine-tune it.
+_Quick Tip:_ When running inferences, if you aren't seeing any predictions, and you've checked everything else, try lowering the confidence score. Sometimes, the threshold is too high, causing the model to ignore valid predictions. Lowering the score allows the model to consider more possibilities. This might not meet your project goals, but it's a good way to see what the model can do and decide how to fine-tune it.
 
 ### Intersection over Union
 
diff --git a/docs/en/guides/model-monitoring-and-maintenance.md b/docs/en/guides/model-monitoring-and-maintenance.md
index 1f4f8ed8..2933de45 100644
--- a/docs/en/guides/model-monitoring-and-maintenance.md
+++ b/docs/en/guides/model-monitoring-and-maintenance.md
@@ -23,7 +23,7 @@ Regular model monitoring helps developers track the [model's performance](./mode
 Here are some best practices to keep in mind while monitoring your computer vision model in production:
 
 - **Track Performance Regularly**: Continuously monitor the model's performance to detect changes over time.
-- **Double Check the Data Quality**: Check for missing values or anomalies in the data.
+- **Double-Check the Data Quality**: Check for missing values or anomalies in the data.
 - **Use Diverse Data Sources**: Monitor data from various sources to get a comprehensive view of the model's performance.
 - **Combine Monitoring Techniques**: Use a mix of drift detection algorithms and rule-based approaches to identify a wide range of issues.
 - **Monitor Inputs and Outputs**: Keep an eye on both the data the model processes and the results it produces to make sure everything is functioning correctly.
 
diff --git a/docs/en/help/FAQ.md b/docs/en/help/FAQ.md
index bde16d98..0272d8b7 100644
--- a/docs/en/help/FAQ.md
+++ b/docs/en/help/FAQ.md
@@ -195,22 +195,22 @@ Performing inference with a trained Ultralytics YOLO model is straightforward:
 
 1. Load the Model:
 
-```python
-from ultralytics import YOLO
+    ```python
+    from ultralytics import YOLO
 
-model = YOLO("path/to/your/model.pt")
-```
+    model = YOLO("path/to/your/model.pt")
+    ```
 
 2. Run Inference:
 
-```python
-results = model("path/to/image.jpg")
+    ```python
+    results = model("path/to/image.jpg")
 
-for r in results:
-    print(r.boxes)  # print bounding box predictions
-    print(r.masks)  # print mask predictions
-    print(r.probs)  # print class probabilities
-```
+    for r in results:
+        print(r.boxes)  # print bounding box predictions
+        print(r.masks)  # print mask predictions
+        print(r.probs)  # print class probabilities
+    ```
 
 For advanced inference techniques, including batch processing, video inference, and custom preprocessing, refer to the detailed [prediction guide](https://docs.ultralytics.com/modes/predict/).
 
diff --git a/docs/en/integrations/ibm-watsonx.md b/docs/en/integrations/ibm-watsonx.md
index 9b820e7f..16ebaa2a 100644
--- a/docs/en/integrations/ibm-watsonx.md
+++ b/docs/en/integrations/ibm-watsonx.md
@@ -12,7 +12,7 @@ You can train [Ultralytics YOLO11 models](https://github.com/ultralytics/ultraly
 
 ## What is IBM Watsonx?
 
-[Watsonx](https://www.ibm.com/watsonx) is IBM's cloud-based platform designed for commercial [generative AI](https://www.ultralytics.com/glossary/generative-ai) and scientific data. IBM Watsonx's three components - watsonx.ai, watsonx.data, and watsonx.governance - come together to create an end-to-end, trustworthy AI platform that can accelerate AI projects aimed at solving business problems. It provides powerful tools for building, training, and [deploying machine learning models](../guides/model-deployment-options.md) and makes it easy to connect with various data sources.
+[Watsonx](https://www.ibm.com/watsonx) is IBM's cloud-based platform designed for commercial [generative AI](https://www.ultralytics.com/glossary/generative-ai) and scientific data. IBM Watsonx's three components - `watsonx.ai`, `watsonx.data`, and `watsonx.governance` - come together to create an end-to-end, trustworthy AI platform that can accelerate AI projects aimed at solving business problems. It provides powerful tools for building, training, and [deploying machine learning models](../guides/model-deployment-options.md) and makes it easy to connect with various data sources.
@@ -22,7 +22,7 @@ Its user-friendly interface and collaborative capabilities streamline the develo
 
 ## Key Features of IBM Watsonx
 
-IBM Watsonx is made of three main components: watsonx.ai, watsonx.data, and watsonx.governance. Each component offers features that cater to different aspects of AI and data management. Let's take a closer look at them.
+IBM Watsonx is made of three main components: `watsonx.ai`, `watsonx.data`, and `watsonx.governance`. Each component offers features that cater to different aspects of AI and data management. Let's take a closer look at them.
 
 ### [Watsonx.ai](https://www.ibm.com/products/watsonx-ai)
 
diff --git a/docs/en/integrations/kaggle.md b/docs/en/integrations/kaggle.md
index cee6b847..40c928fa 100644
--- a/docs/en/integrations/kaggle.md
+++ b/docs/en/integrations/kaggle.md
@@ -62,7 +62,7 @@ Next, let's understand the features Kaggle offers that make it an excellent plat
 - **Datasets**: Kaggle hosts a massive collection of datasets on various topics. You can easily search and use these datasets in your projects, which is particularly handy for training and testing your YOLO11 models.
 - **Competitions**: Known for its exciting competitions, Kaggle allows data scientists and machine learning enthusiasts to solve real-world problems. Competing helps you improve your skills, learn new techniques, and gain recognition in the community.
 - **Free Access to TPUs**: Kaggle provides free access to powerful TPUs, which are essential for training complex machine learning models. This means you can speed up processing and boost the performance of your YOLO11 projects without incurring extra costs.
-- **Integration with Github**: Kaggle allows you to easily connect your GitHub repository to upload notebooks and save your work. This integration makes it convenient to manage and access your files.
+- **Integration with GitHub**: Kaggle allows you to easily connect your GitHub repository to upload notebooks and save your work. This integration makes it convenient to manage and access your files.
 - **Community and Discussions**: Kaggle boasts a strong community of data scientists and machine learning practitioners. The discussion forums and shared notebooks are fantastic resources for learning and troubleshooting. You can easily find help, share your knowledge, and collaborate with others.
 
 ## Why Should You Use Kaggle for Your YOLO11 Projects?
 
@@ -81,7 +81,7 @@ If you want to learn more about Kaggle, here are some helpful resources to guide
 
 - [**Kaggle Learn**](https://www.kaggle.com/learn): Discover a variety of free, interactive tutorials on Kaggle Learn. These courses cover essential data science topics and provide hands-on experience to help you master new skills.
 - [**Getting Started with Kaggle**](https://www.kaggle.com/code/alexisbcook/getting-started-with-kaggle): This comprehensive guide walks you through the basics of using Kaggle, from joining competitions to creating your first notebook. It's a great starting point for newcomers.
-- [**Kaggle Medium Page**](https://medium.com/@kaggleteam): Explore tutorials, updates, and community contributions on Kaggle's Medium page. It's an excellent source for staying up-to-date with the latest trends and gaining deeper insights into data science.
+- [**Kaggle Medium Page**](https://medium.com/@kaggleteam): Explore tutorials, updates, and community contributions to Kaggle's Medium page. It's an excellent source for staying up-to-date with the latest trends and gaining deeper insights into data science.
 
 ## Summary
 
diff --git a/docs/en/integrations/ncnn.md b/docs/en/integrations/ncnn.md
index c3f7b992..9dc13f96 100644
--- a/docs/en/integrations/ncnn.md
+++ b/docs/en/integrations/ncnn.md
@@ -101,7 +101,7 @@ For more details about supported export options, visit the [Ultralytics document
 
 ## Deploying Exported YOLO11 NCNN Models
 
-After successfully exporting your Ultralytics YOLO11 models to NCNN format, you can now deploy them. The primary and recommended first step for running a NCNN model is to utilize the YOLO("./model_ncnn_model") method, as outlined in the previous usage code snippet. However, for in-depth instructions on deploying your NCNN models in various other settings, take a look at the following resources:
+After successfully exporting your Ultralytics YOLO11 models to NCNN format, you can now deploy them. The primary and recommended first step for running a NCNN model is to utilize the YOLO("yolo11n_ncnn_model/") method, as outlined in the previous usage code snippet. However, for in-depth instructions on deploying your NCNN models in various other settings, take a look at the following resources:
 
 - **[Android](https://github.com/Tencent/ncnn/wiki/how-to-build#build-for-android)**: This blog explains how to use NCNN models for performing tasks like [object detection](https://www.ultralytics.com/glossary/object-detection) through Android applications.
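
For context on the renamed path in this hunk: `yolo11n_ncnn_model` is the directory that `model.export(format="ncnn")` produces for the `yolo11n.pt` checkpoint, and that directory can be passed straight back to the `YOLO` constructor. A minimal sketch of that flow, assuming the `ultralytics` package is installed and the `yolo11n.pt` weights are available locally:

```python
from ultralytics import YOLO

# Export the PyTorch checkpoint to NCNN format;
# this creates a "yolo11n_ncnn_model" directory next to the checkpoint
model = YOLO("yolo11n.pt")
model.export(format="ncnn")

# Reload the exported NCNN model and run inference
# (the source can be a local image path or a URL)
ncnn_model = YOLO("yolo11n_ncnn_model")
results = ncnn_model("https://ultralytics.com/images/bus.jpg")
```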
diff --git a/docs/en/integrations/weights-biases.md b/docs/en/integrations/weights-biases.md
index 55eee2ee..e1f5eff1 100644
--- a/docs/en/integrations/weights-biases.md
+++ b/docs/en/integrations/weights-biases.md
@@ -168,26 +168,26 @@ To integrate Weights & Biases with Ultralytics YOLO11:
 
 1. Install the required packages:
 
-```bash
-pip install -U ultralytics wandb
-```
+    ```bash
+    pip install -U ultralytics wandb
+    ```
 
 2. Log in to your Weights & Biases account:
 
-```python
-import wandb
+    ```python
+    import wandb
 
-wandb.login(key="