From 064e2fd282ed37acb6339afa631e16270e4823e6 Mon Sep 17 00:00:00 2001
From: Glenn Jocher
diff --git a/docs/en/guides/data-collection-and-annotation.md b/docs/en/guides/data-collection-and-annotation.md
index cf0b18bb..c12fb065 100644
--- a/docs/en/guides/data-collection-and-annotation.md
+++ b/docs/en/guides/data-collection-and-annotation.md
@@ -8,7 +8,7 @@ keywords: What is Data Annotation, Data Annotation Tools, Annotating Data, Avoid
## Introduction
-The key to success in any [computer vision project](./steps-of-a-cv-project.md) starts with effective data collection and annotation strategies. The quality of the data directly impacts model performance, so it’s important to understand the best practices related to data collection and data annotation.
+The key to success in any [computer vision project](./steps-of-a-cv-project.md) starts with effective data collection and annotation strategies. The quality of the data directly impacts model performance, so it's important to understand the best practices related to data collection and data annotation.
Every consideration regarding the data should closely align with [your project's goals](./defining-project-goals.md). Changes in your annotation strategies could shift the project's focus or effectiveness and vice versa. With this in mind, let's take a closer look at the best ways to approach data collection and annotation.
@@ -22,7 +22,7 @@ One of the first questions when starting a computer vision project is how many c
For example, if you want to monitor traffic, your classes might include "car," "truck," "bus," "motorcycle," and "bicycle." On the other hand, for tracking items in a store, your classes could be "fruits," "vegetables," "beverages," and "snacks." Defining classes based on your project goals helps keep your dataset relevant and focused.
-When you define your classes, another important distinction to make is whether to choose coarse or fine class counts. ‘Count' refers to the number of distinct classes you are interested in. This decision influences the granularity of your data and the complexity of your model. Here are the considerations for each approach:
+When you define your classes, another important distinction to make is whether to choose coarse or fine class counts. 'Count' refers to the number of distinct classes you are interested in. This decision influences the granularity of your data and the complexity of your model. Here are the considerations for each approach:
- **Coarse Class-Count**: These are broader, more inclusive categories, such as "vehicle" and "non-vehicle." They simplify annotation and require fewer computational resources but provide less detailed information, potentially limiting the model's effectiveness in complex scenarios.
- **Fine Class-Count**: More categories with finer distinctions, such as "sedan," "SUV," "pickup truck," and "motorcycle." They capture more detailed information, improving model accuracy and performance. However, they are more time-consuming and labor-intensive to annotate and require more computational resources. A short example contrasting the two approaches follows this list.
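As a small illustration, here are the two approaches side by side (the class names are invented for the traffic example above, not taken from any dataset):

```python
# Coarse classes: fewer, broader categories -> simpler annotation, less detail
coarse_classes = {0: "vehicle", 1: "non-vehicle"}

# Fine classes: more, narrower categories -> richer labels, more annotation effort
fine_classes = {0: "sedan", 1: "SUV", 2: "pickup truck", 3: "motorcycle", 4: "bus"}
```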
@@ -67,9 +67,9 @@ Depending on the specific requirements of a [computer vision task](../tasks/inde
### Common Annotation Formats
-After selecting a type of annotation, it’s important to choose the appropriate format for storing and sharing annotations.
+After selecting a type of annotation, it's important to choose the appropriate format for storing and sharing annotations.
-Commonly used formats include [COCO](../datasets/detect/coco.md), which supports various annotation types like object detection, keypoint detection, stuff segmentation, panoptic segmentation, and image captioning, stored in JSON. [Pascal VOC](../datasets/detect/voc.md)) uses XML files and is popular for object detection tasks. YOLO, on the other hand, creates a .txt file for each image, containing annotations like object class, coordinates, height, and width, making it suitable for object detection.
+Commonly used formats include [COCO](../datasets/detect/coco.md), which supports various annotation types (object detection, keypoint detection, stuff segmentation, panoptic segmentation, and image captioning) stored in JSON. [Pascal VOC](../datasets/detect/voc.md) uses XML files and is popular for object detection tasks. YOLO, on the other hand, creates a `.txt` file for each image, containing annotations like object class, coordinates, height, and width, making it suitable for object detection. An example label file is shown below.
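For instance, a YOLO-format label file pairs each image with a `.txt` file containing one object per line: a class index followed by the normalized center x, center y, width, and height of the bounding box. The values below are invented for illustration:

```
0 0.492 0.513 0.205 0.340
2 0.751 0.422 0.110 0.198
```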
### Techniques of Annotation
@@ -78,7 +78,7 @@ Now, assuming you've chosen a type of annotation and format, it's time to establ
- **Clarity and Detail**: Make sure your instructions are clear. Use examples and illustrations to show what's expected.
- **Consistency**: Keep your annotations uniform. Set standard criteria for annotating different types of data, so all annotations follow the same rules.
- **Reducing Bias**: Stay neutral. Train yourself to be objective and minimize personal biases to ensure fair annotations.
-- **Efficiency**: Work smarter, not harder. Use tools and workflows that automate repetitive tasks, making the annotation process faster and more efficient..
+- **Efficiency**: Work smarter, not harder. Use tools and workflows that automate repetitive tasks, making the annotation process faster and more efficient.
Regularly reviewing and updating your labeling rules will help keep your annotations accurate, consistent, and aligned with your project goals.
@@ -86,7 +86,7 @@ Regularly reviewing and updating your labeling rules will help keep your annotat
Let's say you are ready to annotate now. There are several open-source tools available to help streamline the data annotation process. Here are some useful open annotation tools:
-- **[LabeI Studio](https://github.com/HumanSignal/label-studio)**: A flexible tool that supports a wide range of annotation tasks and includes features for managing projects and quality control.
+- **[Label Studio](https://github.com/HumanSignal/label-studio)**: A flexible tool that supports a wide range of annotation tasks and includes features for managing projects and quality control.
- **[CVAT](https://github.com/cvat-ai/cvat)**: A powerful tool that supports various annotation formats and customizable workflows, making it suitable for complex projects.
- **[Labelme](https://github.com/labelmeai/labelme)**: A simple and easy-to-use tool that allows for quick annotation of images with polygons, making it ideal for straightforward tasks.
diff --git a/docs/en/guides/defining-project-goals.md b/docs/en/guides/defining-project-goals.md
index 16ef6dc6..6827dee7 100644
--- a/docs/en/guides/defining-project-goals.md
+++ b/docs/en/guides/defining-project-goals.md
@@ -10,13 +10,13 @@ keywords: Computer Vision Project, Defining Problems, Setting Objectives, SMART
The first step in any computer vision project is defining what you want to achieve. It's crucial to have a clear roadmap from the start, which includes everything from data collection to deploying your model.
-If you need a quick refresher on the basics of a computer vision project, take a moment to read our guide on [the key steps in a computer vision project](./steps-of-a-cv-project.md). It’ll give you a solid overview of the whole process. Once you’re caught up, come back here so we can dive into how exactly you can define and refine the goals for your project.
+If you need a quick refresher on the basics of a computer vision project, take a moment to read our guide on [the key steps in a computer vision project](./steps-of-a-cv-project.md). It'll give you a solid overview of the whole process. Once you're caught up, come back here to dive into how exactly you can define and refine the goals for your project.
-Now, let’s get to the heart of defining a clear problem statement for your project and exploring the key decisions you’ll need to make along the way.
+Now, let's get to the heart of defining a clear problem statement for your project and exploring the key decisions you'll need to make along the way.
## Defining A Clear Problem Statement
-Setting clear goals and objectives for your project is the first big step toward finding the most effective solutions. Let’s understand how you can clearly define your project’s problem statement:
+Setting clear goals and objectives for your project is the first big step toward finding the most effective solutions. Let's understand how you can clearly define your project's problem statement:
- **Identify the Core Issue:** Pinpoint the specific challenge your computer vision project aims to solve.
- **Determine the Scope:** Define the boundaries of your problem.
@@ -25,7 +25,7 @@ Setting clear goals and objectives for your project is the first big step toward
### Example of a Business Problem Statement
-Let’s walk through an example.
+Let's walk through an example.
Consider a computer vision project where you want to [estimate the speed of vehicles](./speed-estimation.md) on a highway. The core issue is that current speed monitoring methods are inefficient and error-prone due to outdated radar systems and manual processes. The project aims to develop a real-time computer vision system that can replace legacy [speed estimation](https://www.ultralytics.com/blog/ultralytics-yolov8-for-speed-estimation-in-computer-vision-projects) systems.
@@ -56,7 +56,7 @@ For example, if your problem is monitoring vehicle speeds on a highway, the rele
-
+
@@ -34,9 +34,9 @@ Before discussing the details of each step involved in a computer vision project
Now that we know what to expect, let's dive right into the steps and get your project moving forward.
-## Step 1: Defining Your Project’s Goals
+## Step 1: Defining Your Project's Goals
-The first step in any computer vision project is clearly defining the problem you’re trying to solve. Knowing the end goal helps you start to build a solution. This is especially true when it comes to computer vision because your project’s objective will directly affect which computer vision task you need to focus on.
+The first step in any computer vision project is clearly defining the problem you're trying to solve. Knowing the end goal helps you start to build a solution. This is especially true when it comes to computer vision because your project's objective will directly affect which computer vision task you need to focus on.
Here are some examples of project objectives and the computer vision tasks that can be used to reach these objectives:
@@ -55,17 +55,17 @@ After understanding the project objective and suitable computer vision tasks, an
Depending on the objective, you might choose to select the model first or after seeing what data you are able to collect in Step 2. For example, suppose your project is highly dependent on the availability of specific types of data. In that case, it may be more practical to gather and analyze the data first before selecting a model. On the other hand, if you have a clear understanding of the model requirements, you can choose the model first and then collect data that fits those specifications.
-Choosing between training from scratch or using transfer learning affects how you prepare your data. Training from scratch requires a diverse dataset to build the model’s understanding from the ground up. Transfer learning, on the other hand, allows you to use a pre-trained model and adapt it with a smaller, more specific dataset. Also, choosing a specific model to train will determine how you need to prepare your data, such as resizing images or adding annotations, according to the model’s specific requirements.
+Choosing between training from scratch or using transfer learning affects how you prepare your data. Training from scratch requires a diverse dataset to build the model's understanding from the ground up. Transfer learning, on the other hand, allows you to use a pre-trained model and adapt it with a smaller, more specific dataset. Also, choosing a specific model to train will determine how you need to prepare your data, such as resizing images or adding annotations, according to the model's specific requirements.
@@ -119,7 +119,7 @@ By properly [understanding, splitting, and augmenting your data](./preprocessing
Once your dataset is ready for training, you can focus on setting up the necessary environment, managing your datasets, and training your model.
-First, you’ll need to make sure your environment is configured correctly. Typically, this includes the following:
+First, you'll need to make sure your environment is configured correctly. Typically, this includes the following:
- Installing essential libraries and frameworks like TensorFlow, PyTorch, or [Ultralytics](../quickstart.md).
- If you are using a GPU, installing libraries like CUDA and cuDNN will help enable GPU acceleration and speed up the training process. A quick way to verify this setup is sketched below.
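For example, a minimal sanity check after installing your libraries might look like this (a sketch assuming PyTorch; the exact checks depend on your stack):

```python
import torch

print(torch.__version__)  # confirm the framework imports correctly
print(torch.cuda.is_available())  # True only if CUDA/cuDNN are set up for your GPU
```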
@@ -132,9 +132,9 @@ It's important to keep in mind that proper dataset management is vital for effic
## Step 5: Model Evaluation and Model Finetuning
-It’s important to assess your model's performance using various metrics and refine it to improve accuracy. [Evaluating](../modes/val.md) helps identify areas where the model excels and where it may need improvement. Fine-tuning ensures the model is optimized for the best possible performance.
+It's important to assess your model's performance using various metrics and refine it to improve accuracy. [Evaluating](../modes/val.md) helps identify areas where the model excels and where it may need improvement. Fine-tuning ensures the model is optimized for the best possible performance.
-- **[Performance Metrics](./yolo-performance-metrics.md):** Use metrics like accuracy, precision, recall, and F1-score to evaluate your model’s performance. These metrics provide insights into how well your model is making predictions.
+- **[Performance Metrics](./yolo-performance-metrics.md):** Use metrics like accuracy, precision, recall, and F1-score to evaluate your model's performance. These metrics provide insights into how well your model is making predictions. The sketch after this list shows how they follow from raw prediction counts.
- **[Hyperparameter Tuning](./hyperparameter-tuning.md):** Adjust hyperparameters to optimize model performance. Techniques like grid search or random search can help find the best hyperparameter values.
- **Fine-Tuning:** Make small adjustments to the model architecture or training process to enhance performance. This might involve tweaking learning rates, batch sizes, or other model parameters.
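As a quick reference for the metrics listed above, here is how precision, recall, and F1-score follow from raw prediction counts (the counts below are made up for illustration):

```python
tp, fp, fn = 80, 10, 20  # true positives, false positives, false negatives

precision = tp / (tp + fp)  # fraction of predictions that are correct
recall = tp / (tp + fn)  # fraction of ground-truth objects that were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
print(precision, recall, f1)  # roughly 0.889, 0.800, 0.842
```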
@@ -159,7 +159,7 @@ Once your model has been thoroughly tested, it's time to deploy it. Deployment i
## Step 8: Monitoring, Maintenance, and Documentation
-Once your model is deployed, it’s important to continuously monitor its performance, maintain it to handle any issues, and document the entire process for future reference and improvements.
+Once your model is deployed, it's important to continuously monitor its performance, maintain it to handle any issues, and document the entire process for future reference and improvements.
Monitoring tools can help you track key performance indicators (KPIs) and detect anomalies or drops in accuracy. By monitoring the model, you can spot model drift, where the model's performance declines over time due to changes in the input data. Periodically retrain the model with updated data to maintain accuracy and relevance.
@@ -174,12 +174,12 @@ In addition to monitoring and maintenance, documentation is also key. Thoroughly
Here are some common questions that might arise during a computer vision project:
- **Q1:** How do the steps change if I already have a dataset or data when starting a computer vision project?
- - **A1:** Starting with a pre-existing dataset or data affects the initial steps of your project. In Step 1, along with deciding the computer vision task and model, you’ll also need to explore your dataset thoroughly. Understanding its quality, variety, and limitations will guide your choice of model and training approach. Your approach should align closely with the data's characteristics for more effective outcomes. Depending on your data or dataset, you may be able to skip Step 2 as well.
+ - **A1:** Starting with a pre-existing dataset or data affects the initial steps of your project. In Step 1, along with deciding the computer vision task and model, you'll also need to explore your dataset thoroughly. Understanding its quality, variety, and limitations will guide your choice of model and training approach. Your approach should align closely with the data's characteristics for more effective outcomes. Depending on your data or dataset, you may be able to skip Step 2 as well.
-- **Q2:** I’m not sure what computer vision project to start my AI learning journey with.
+- **Q2:** I'm not sure what computer vision project to start my AI learning journey with.
- **A2:** Check out our [guides on Real-World Projects](./index.md) for inspiration and guidance.
-- **Q3:** I don’t want to train a model. I just want to try running a model on an image. How can I do that?
+- **Q3:** I don't want to train a model. I just want to try running a model on an image. How can I do that?
- **A3:** You can use a pre-trained model to run predictions on an image without training a new model. Check out the [YOLOv8 predict docs page](../modes/predict.md) for instructions on how to use a pre-trained YOLOv8 model to make predictions on your images.
- **Q4:** Where can I find more detailed articles and updates about computer vision applications and YOLOv8?
diff --git a/docs/en/guides/yolo-common-issues.md b/docs/en/guides/yolo-common-issues.md
index bca0e3cd..427b6d37 100644
--- a/docs/en/guides/yolo-common-issues.md
+++ b/docs/en/guides/yolo-common-issues.md
@@ -183,7 +183,7 @@ This section will address common issues faced during model prediction.
**Solution**:
-- Coordinate Format: YOLOv8 provides bounding box coordinates in absolute pixel values. To convert these to relative coordinates (ranging from 0 to 1), you need to divide by the image dimensions. For example, let’s say your image size is 640x640. Then you would do the following:
+- Coordinate Format: YOLOv8 provides bounding box coordinates in absolute pixel values. To convert these to relative coordinates (ranging from 0 to 1), you need to divide by the image dimensions. For example, let's say your image size is 640x640. Then you would do the following:
```python
# Convert absolute coordinates to relative coordinates
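# The lines below are an illustrative continuation; names such as `x1` and
# `img_width` are assumptions, not taken from the original example
img_width, img_height = 640, 640  # the 640x640 image size mentioned above
x1, y1, x2, y2 = 64.0, 128.0, 320.0, 480.0  # absolute box corners in pixels
x1_rel, x2_rel = x1 / img_width, x2 / img_width  # now in the range 0 to 1
y1_rel, y2_rel = y1 / img_height, y2 / img_height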
@@ -268,7 +268,7 @@ Engaging with a community of like-minded individuals can significantly enhance y
### Forums and Channels for Getting Help
-**GitHub Issues:** The YOLOv8 repository on GitHub has an [Issues tab](https://github.com/ultralytics/ultralytics/issues) where you can ask questions, report bugs, and suggest new features. The community and maintainers are active here, and it’s a great place to get help with specific problems.
+**GitHub Issues:** The YOLOv8 repository on GitHub has an [Issues tab](https://github.com/ultralytics/ultralytics/issues) where you can ask questions, report bugs, and suggest new features. The community and maintainers are active here, and it's a great place to get help with specific problems.
**Ultralytics Discord Server:** Ultralytics has a [Discord server](https://ultralytics.com/discord/) where you can interact with other users and the developers.
diff --git a/docs/en/guides/yolo-performance-metrics.md b/docs/en/guides/yolo-performance-metrics.md
index a34f161e..5024fa4c 100644
--- a/docs/en/guides/yolo-performance-metrics.md
+++ b/docs/en/guides/yolo-performance-metrics.md
@@ -23,7 +23,7 @@ Performance metrics are key tools to evaluate the accuracy and efficiency of obj
## Object Detection Metrics
-Let’s start by discussing some metrics that are not only important to YOLOv8 but are broadly applicable across different object detection models.
+Let's start by discussing some metrics that are not only important to YOLOv8 but are broadly applicable across different object detection models.
- **Intersection over Union (IoU):** IoU is a measure that quantifies the overlap between a predicted bounding box and a ground truth bounding box. It plays a fundamental role in evaluating the accuracy of object localization.
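A minimal sketch of the computation for axis-aligned boxes in `(x1, y1, x2, y2)` format (a generic implementation, not YOLOv8's internal one):

```python
def iou(box_a, box_b):
    # Corners of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)  # overlap area / union area


print(iou((0, 0, 100, 100), (50, 50, 150, 150)))  # ~0.143
```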
@@ -115,7 +115,7 @@ For real-time applications, speed metrics like FPS (Frames Per Second) and laten
## Interpretation of Results
-It’s important to understand the metrics. Here's what some of the commonly observed lower scores might suggest:
+It's important to understand the metrics. Here's what some of the commonly observed lower scores might suggest:
- **Low mAP:** Indicates the model may need general refinements.
@@ -157,7 +157,7 @@ Tapping into a community of enthusiasts and experts can amplify your journey wit
### Engage with the Broader Community
-- **GitHub Issues:** The YOLOv8 repository on GitHub has an [Issues tab](https://github.com/ultralytics/ultralytics/issues) where you can ask questions, report bugs, and suggest new features. The community and maintainers are active here, and it’s a great place to get help with specific problems.
+- **GitHub Issues:** The YOLOv8 repository on GitHub has an [Issues tab](https://github.com/ultralytics/ultralytics/issues) where you can ask questions, report bugs, and suggest new features. The community and maintainers are active here, and it's a great place to get help with specific problems.
- **Ultralytics Discord Server:** Ultralytics has a [Discord server](https://ultralytics.com/discord/) where you can interact with other users and the developers.
diff --git a/docs/en/help/code_of_conduct.md b/docs/en/help/code_of_conduct.md
index 46313c4e..ca895fe8 100644
--- a/docs/en/help/code_of_conduct.md
+++ b/docs/en/help/code_of_conduct.md
@@ -1,6 +1,6 @@
---
comments: true
-description: Explore Ultralytics community’s Code of Conduct, ensuring a supportive, inclusive environment for contributors & members at all levels. Find our guidelines on acceptable behavior & enforcement.
+description: Explore Ultralytics community's Code of Conduct, ensuring a supportive, inclusive environment for contributors & members at all levels. Find our guidelines on acceptable behavior & enforcement.
keywords: Ultralytics, code of conduct, community, contribution, behavior guidelines, enforcement, open source contributions
---
diff --git a/docs/en/help/environmental-health-safety.md b/docs/en/help/environmental-health-safety.md
index 9fee240b..2cf5d8bb 100644
--- a/docs/en/help/environmental-health-safety.md
+++ b/docs/en/help/environmental-health-safety.md
@@ -1,6 +1,6 @@
---
comments: false
-description: Discover Ultralytics’ EHS policy principles and implementation measures. Committed to safety, environment, and continuous improvement for a sustainable future.
+description: Discover Ultralytics' EHS policy principles and implementation measures. Committed to safety, environment, and continuous improvement for a sustainable future.
keywords: Ultralytics policy, EHS, environment, health and safety, compliance, prevention, continuous improvement, risk management, emergency preparedness, resource allocation, communication
---
diff --git a/docs/en/hub/pro.md b/docs/en/hub/pro.md
index c301e7bd..30887aa5 100644
--- a/docs/en/hub/pro.md
+++ b/docs/en/hub/pro.md
@@ -47,7 +47,7 @@ That's it!
The account balance is used to pay for [Ultralytics Cloud Training](./cloud-training.md) resources.
-In order to top-up your account balance, simply click on the **Top-Up** button.
+To top up your account balance, click the **Top-Up** button.

@@ -57,4 +57,4 @@ Next, set the amount you want to top-up.
That's it!
-
\ No newline at end of file
+
diff --git a/docs/en/integrations/amazon-sagemaker.md b/docs/en/integrations/amazon-sagemaker.md
index c57d95c4..ef071d75 100644
--- a/docs/en/integrations/amazon-sagemaker.md
+++ b/docs/en/integrations/amazon-sagemaker.md
@@ -6,7 +6,7 @@ keywords: YOLOv8, Amazon SageMaker, deploy YOLOv8, AWS deployment, machine learn
# A Guide to Deploying YOLOv8 on Amazon SageMaker Endpoints
-Deploying advanced computer vision models like [Ultralytics’ YOLOv8](https://github.com/ultralytics/ultralytics) on Amazon SageMaker Endpoints opens up a wide range of possibilities for various machine learning applications. The key to effectively using these models lies in understanding their setup, configuration, and deployment processes. YOLOv8 becomes even more powerful when integrated seamlessly with Amazon SageMaker, a robust and scalable machine learning service by AWS.
+Deploying advanced computer vision models like [Ultralytics' YOLOv8](https://github.com/ultralytics/ultralytics) on Amazon SageMaker Endpoints opens up a wide range of possibilities for various machine learning applications. The key to effectively using these models lies in understanding their setup, configuration, and deployment processes. YOLOv8 becomes even more powerful when integrated seamlessly with Amazon SageMaker, a robust and scalable machine learning service by AWS.
This guide will take you through the process of deploying YOLOv8 PyTorch models on Amazon SageMaker Endpoints step by step. You'll learn the essentials of preparing your AWS environment, configuring the model appropriately, and using tools like AWS CloudFormation and the AWS Cloud Development Kit (CDK) for deployment.
@@ -32,7 +32,7 @@ First, ensure you have the following prerequisites in place:
- An AWS Account: If you don't already have one, sign up for an AWS account.
-- Configured IAM Roles: You’ll need an IAM role with the necessary permissions for Amazon SageMaker, AWS CloudFormation, and Amazon S3. This role should have policies that allow it to access these services.
+- Configured IAM Roles: You'll need an IAM role with the necessary permissions for Amazon SageMaker, AWS CloudFormation, and Amazon S3. This role should have policies that allow it to access these services.
- AWS CLI: If not already installed, download and install the AWS Command Line Interface (CLI) and configure it with your account details. Follow [the AWS CLI instructions](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) for installation.
@@ -144,7 +144,7 @@ Now that your YOLOv8 model is deployed, it's important to test its performance a
- Open the Test Notebook: In the same Jupyter environment, locate and open the 2_TestEndpoint.ipynb notebook, also in the sm-notebook directory.
-- Run the Test Notebook: Follow the instructions within the notebook to test the deployed SageMaker endpoint. This includes sending an image to the endpoint and running inferences. Then, you’ll plot the output to visualize the model’s performance and accuracy, as shown below.
+- Run the Test Notebook: Follow the instructions within the notebook to test the deployed SageMaker endpoint. This includes sending an image to the endpoint and running inferences. Then, you'll plot the output to visualize the model's performance and accuracy, as shown below.
diff --git a/docs/en/integrations/clearml.md b/docs/en/integrations/clearml.md
index b59a7828..6b480729 100644
--- a/docs/en/integrations/clearml.md
+++ b/docs/en/integrations/clearml.md
@@ -41,7 +41,7 @@ For detailed instructions and best practices related to the installation process
Once you have installed the necessary packages, the next step is to initialize and configure your ClearML SDK. This involves setting up your ClearML account and obtaining the necessary credentials for a seamless connection between your development environment and the ClearML server.
-Begin by initializing the ClearML SDK in your environment. The ‘clearml-init’ command starts the setup process and prompts you for the necessary credentials.
+Begin by initializing the ClearML SDK in your environment. The `clearml-init` command starts the setup process and prompts you for the necessary credentials.
!!! Tip "Initial SDK Setup"
@@ -86,7 +86,7 @@ Before diving into the usage instructions, be sure to check out the range of [YO
### Understanding the Code
-Let’s understand the steps showcased in the usage code snippet above.
+Let's understand the steps showcased in the usage code snippet above.
**Step 1: Creating a ClearML Task**: A new task is initialized in ClearML, specifying your project and task names. This task will track and manage your model's training.
diff --git a/docs/en/integrations/comet.md b/docs/en/integrations/comet.md
index e395d202..5ce8a875 100644
--- a/docs/en/integrations/comet.md
+++ b/docs/en/integrations/comet.md
@@ -37,7 +37,7 @@ To install the required packages, run:
## Configuring Comet ML
-After installing the required packages, you’ll need to sign up, get a [Comet API Key](https://www.comet.com/signup), and configure it.
+After installing the required packages, you'll need to sign up, get a [Comet API Key](https://www.comet.com/signup), and configure it.
!!! Tip "Configuring Comet ML"
@@ -89,7 +89,7 @@ Comet automatically logs the following data with no additional configuration: me
## Understanding Your Model's Performance with Comet ML Visualizations
-Let's dive into what you'll see on the Comet ML dashboard once your YOLOv8 model begins training. The dashboard is where all the action happens, presenting a range of automatically logged information through visuals and statistics. Here’s a quick tour:
+Let's dive into what you'll see on the Comet ML dashboard once your YOLOv8 model begins training. The dashboard is where all the action happens, presenting a range of automatically logged information through visuals and statistics. Here's a quick tour:
**Experiment Panels**
diff --git a/docs/en/integrations/coreml.md b/docs/en/integrations/coreml.md
index 8c76cd61..71cf08d8 100644
--- a/docs/en/integrations/coreml.md
+++ b/docs/en/integrations/coreml.md
@@ -40,7 +40,7 @@ Apple's CoreML framework offers robust features for on-device machine learning.
## CoreML Deployment Options
-Before we look at the code for exporting YOLOv8 models to the CoreML format, let’s understand where CoreML models are usually used.
+Before we look at the code for exporting YOLOv8 models to the CoreML format, let's understand where CoreML models are usually used.
CoreML offers various deployment options for machine learning models, including:
@@ -50,7 +50,7 @@ CoreML offers various deployment options for machine learning models, including:
- **Downloaded Models**: These models are fetched from a server as needed. This approach is suitable for larger models or those needing regular updates. It helps keep the app bundle size smaller.
-- **Cloud-Based Deployment**: CoreML models are hosted on servers and accessed by the iOS app through API requests. This scalable and flexible option enables easy model updates without app revisions. It’s ideal for complex models or large-scale apps requiring regular updates. However, it does require an internet connection and may pose latency and security issues.
+- **Cloud-Based Deployment**: CoreML models are hosted on servers and accessed by the iOS app through API requests. This scalable and flexible option enables easy model updates without app revisions. It's ideal for complex models or large-scale apps requiring regular updates. However, it does require an internet connection and may pose latency and security issues.
## Exporting YOLOv8 Models to CoreML
@@ -123,4 +123,4 @@ In this guide, we went over how to export Ultralytics YOLOv8 models to CoreML fo
For further details on usage, visit the [CoreML official documentation](https://developer.apple.com/documentation/coreml).
-Also, if you’d like to know more about other Ultralytics YOLOv8 integrations, visit our [integration guide page](../integrations/index.md). You'll find plenty of valuable resources and insights there.
+Also, if you'd like to know more about other Ultralytics YOLOv8 integrations, visit our [integration guide page](../integrations/index.md). You'll find plenty of valuable resources and insights there.
diff --git a/docs/en/integrations/dvc.md b/docs/en/integrations/dvc.md
index b723a1a2..087079a0 100644
--- a/docs/en/integrations/dvc.md
+++ b/docs/en/integrations/dvc.md
@@ -166,6 +166,6 @@ Based on your analysis, iterate on your experiments. Adjust model configurations
This guide has led you through the process of integrating DVCLive with Ultralytics' YOLOv8. You have learned how to harness the power of DVCLive for detailed experiment monitoring, effective visualization, and insightful analysis in your machine learning endeavors.
-For further details on usage, visit [DVCLive’s official documentation](https://dvc.org/doc/dvclive/ml-frameworks/yolo).
+For further details on usage, visit [DVCLive's official documentation](https://dvc.org/doc/dvclive/ml-frameworks/yolo).
Additionally, explore more integrations and capabilities of Ultralytics by visiting the [Ultralytics integration guide page](../integrations/index.md), which is a collection of great resources and insights.
diff --git a/docs/en/integrations/edge-tpu.md b/docs/en/integrations/edge-tpu.md
index 98a22309..715e0475 100644
--- a/docs/en/integrations/edge-tpu.md
+++ b/docs/en/integrations/edge-tpu.md
@@ -32,7 +32,7 @@ Here are the key features that make TFLite Edge TPU a great model format choice
## Deployment Options with TFLite Edge TPU
-Before we jump into how to export YOLOv8 models to the TFLite Edge TPU format, let’s understand where TFLite Edge TPU models are usually used.
+Before we jump into how to export YOLOv8 models to the TFLite Edge TPU format, let's understand where TFLite Edge TPU models are usually used.
TFLite Edge TPU offers various deployment options for machine learning models, including:
@@ -76,7 +76,7 @@ Before diving into the usage instructions, it's important to note that while all
model = YOLO("yolov8n.pt")
# Export the model to TFLite Edge TPU format
- model.export(format="edgetpu") # creates 'yolov8n_full_integer_quant_edgetpu.tflite’
+ model.export(format="edgetpu") # creates 'yolov8n_full_integer_quant_edgetpu.tflite'
# Load the exported TFLite Edge TPU model
edgetpu_model = YOLO("yolov8n_full_integer_quant_edgetpu.tflite")
@@ -111,7 +111,7 @@ However, for in-depth instructions on deploying your TFLite Edge TPU models, tak
## Summary
-In this guide, we’ve learned how to export Ultralytics YOLOv8 models to TFLite Edge TPU format. By following the steps mentioned above, you can increase the speed and power of your computer vision applications.
+In this guide, we've learned how to export Ultralytics YOLOv8 models to TFLite Edge TPU format. By following the steps mentioned above, you can increase the speed and power of your computer vision applications.
For further details on usage, visit the [Edge TPU official website](https://cloud.google.com/edge-tpu).
diff --git a/docs/en/integrations/google-colab.md b/docs/en/integrations/google-colab.md
index 610eb9fc..73af1032 100644
--- a/docs/en/integrations/google-colab.md
+++ b/docs/en/integrations/google-colab.md
@@ -6,15 +6,15 @@ keywords: Ultralytics YOLOv8, Google Colab, CPU, GPU, TPU, Browser-based, Hardwa
# Accelerating YOLOv8 Projects with Google Colab
-Many developers lack the powerful computing resources needed to build deep learning models. Acquiring high-end hardware or renting a decent GPU can be expensive. Google Colab is a great solution to this. It’s a browser-based platform that allows you to work with large datasets, develop complex models, and share your work with others without a huge cost.
+Many developers lack the powerful computing resources needed to build deep learning models. Acquiring high-end hardware or renting a decent GPU can be expensive. Google Colab is a great solution to this. It's a browser-based platform that allows you to work with large datasets, develop complex models, and share your work with others without a huge cost.
-You can use Google Colab to work on projects related to [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) models. Google Colab’s user-friendly environment is well suited for efficient model development and experimentation. Let’s learn more about Google Colab, its key features, and how you can use it to train YOLOv8 models.
+You can use Google Colab to work on projects related to [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) models. Google Colab's user-friendly environment is well suited for efficient model development and experimentation. Let's learn more about Google Colab, its key features, and how you can use it to train YOLOv8 models.
## Google Colaboratory
Google Colaboratory, commonly known as Google Colab, was developed by Google Research in 2017. It is a free online cloud-based Jupyter Notebook environment that allows you to train your machine learning and deep learning models on CPUs, GPUs, and TPUs. The motivation behind developing Google Colab was Google's broader goal of advancing AI technology and educational tools and encouraging the use of cloud services.
-You can use Google Colab regardless of the specifications and configurations of your local computer. All you need is a Google account and a web browser, and you’re good to go.
+You can use Google Colab regardless of the specifications and configurations of your local computer. All you need is a Google account and a web browser, and you're good to go.
## Training YOLOv8 Using Google Colaboratory
@@ -39,10 +39,10 @@ Learn how to train a YOLOv8 model with custom data on YouTube with Nicolai. Chec
### Common Questions While Working with Google Colab
-When working with Google Colab, you might have a few common questions. Let’s answer them.
+When working with Google Colab, you might have a few common questions. Let's answer them.
**Q: Why does my Google Colab session timeout?**
-A: Google Colab sessions can timeout due to inactivity, especially for free users who have a limited session duration.
+A: Google Colab sessions can time out due to inactivity, especially for free users who have a limited session duration.
**Q: Can I increase the session duration in Google Colab?**
A: Free users face limits, but Google Colab Pro offers extended session durations.
@@ -85,7 +85,7 @@ There are many options for training and evaluating YOLOv8 models, so what makes
- **Integration with Google Drive:** Colab seamlessly integrates with Google Drive to make data storage, access, and management simple. Datasets and models can be stored and retrieved directly from Google Drive.
-- **Markdown Support:** You can use markdown format for enhanced documentation within notebooks.
+- **Markdown Support:** You can use Markdown format for enhanced documentation within notebooks.
- **Scheduled Execution:** Developers can set notebooks to run automatically at specified times.
@@ -93,18 +93,18 @@ There are many options for training and evaluating YOLOv8 models, so what makes
## Keep Learning about Google Colab
-If you’d like to dive deeper into Google Colab, here are a few resources to guide you.
+If you'd like to dive deeper into Google Colab, here are a few resources to guide you.
- **[Training Custom Datasets with Ultralytics YOLOv8 in Google Colab](https://www.ultralytics.com/blog/training-custom-datasets-with-ultralytics-yolov8-in-google-colab)**: Learn how to train custom datasets with Ultralytics YOLOv8 on Google Colab. This comprehensive blog post will take you through the entire process, from initial setup to the training and evaluation stages.
- **[Curated Notebooks](https://colab.google/notebooks/)**: Here you can explore a series of organized and educational notebooks, each grouped by specific topic areas.
-- **[Google Colab’s Medium Page](https://medium.com/google-colab)**: You can find tutorials, updates, and community contributions here that can help you better understand and utilize this tool.
+- **[Google Colab's Medium Page](https://medium.com/google-colab)**: You can find tutorials, updates, and community contributions here that can help you better understand and utilize this tool.
## Summary
-We’ve discussed how you can easily experiment with Ultralytics YOLOv8 models on Google Colab. You can use Google Colab to train and evaluate your models on GPUs and TPUs with a few clicks.
+We've discussed how you can easily experiment with Ultralytics YOLOv8 models on Google Colab. You can use Google Colab to train and evaluate your models on GPUs and TPUs with a few clicks.
-For more details, visit [Google Colab’s FAQ page](https://research.google.com/colaboratory/intl/en-GB/faq.html).
+For more details, visit [Google Colab's FAQ page](https://research.google.com/colaboratory/intl/en-GB/faq.html).
Interested in more YOLOv8 integrations? Visit the [Ultralytics integration guide page](index.md) to explore additional tools and capabilities that can improve your machine-learning projects.
diff --git a/docs/en/integrations/ncnn.md b/docs/en/integrations/ncnn.md
index a2841bc7..e6ae1e6e 100644
--- a/docs/en/integrations/ncnn.md
+++ b/docs/en/integrations/ncnn.md
@@ -34,7 +34,7 @@ NCNN models offer a wide range of key features that enable on-device machine lea
## Deployment Options with NCNN
-Before we look at the code for exporting YOLOv8 models to the NCNN format, let’s understand how NCNN models are normally used.
+Before we look at the code for exporting YOLOv8 models to the NCNN format, let's understand how NCNN models are normally used.
NCNN models, designed for efficiency and performance, are compatible with a variety of deployment platforms:
diff --git a/docs/en/integrations/neural-magic.md b/docs/en/integrations/neural-magic.md
index 3e9e0e38..16fbf892 100644
--- a/docs/en/integrations/neural-magic.md
+++ b/docs/en/integrations/neural-magic.md
@@ -1,26 +1,26 @@
---
comments: true
-description: Learn how to deploy your YOLOv8 models rapidly using Neural Magic’s DeepSparse. This guide focuses on integrating Ultralytics YOLOv8 with the DeepSparse Engine for high-speed, CPU-based inference, leveraging advanced neural network sparsity techniques.
+description: Learn how to deploy your YOLOv8 models rapidly using Neural Magic's DeepSparse. This guide focuses on integrating Ultralytics YOLOv8 with the DeepSparse Engine for high-speed, CPU-based inference, leveraging advanced neural network sparsity techniques.
keywords: YOLOv8, DeepSparse Engine, Ultralytics, CPU Inference, Neural Network Sparsity, Object Detection, Model Optimization
---
-# Optimizing YOLOv8 Inferences with Neural Magic’s DeepSparse Engine
+# Optimizing YOLOv8 Inferences with Neural Magic's DeepSparse Engine
-When deploying object detection models like [Ultralytics YOLOv8](https://ultralytics.com) on various hardware, you can bump into unique issues like optimization. This is where YOLOv8’s integration with Neural Magic’s DeepSparse Engine steps in. It transforms the way YOLOv8 models are executed and enables GPU-level performance directly on CPUs.
+When deploying object detection models like [Ultralytics YOLOv8](https://ultralytics.com) on various hardware, you can run into unique challenges such as optimization. This is where YOLOv8's integration with Neural Magic's DeepSparse Engine steps in. It transforms the way YOLOv8 models are executed and enables GPU-level performance directly on CPUs.
This guide shows you how to deploy YOLOv8 using Neural Magic's DeepSparse, how to run inferences, and also how to benchmark performance to ensure it is optimized.
-## Neural Magic’s DeepSparse
+## Neural Magic's DeepSparse
-
+
-Developed by Baidu, [PaddlePaddle](https://www.paddlepaddle.org.cn/en) (**PA**rallel **D**istributed **D**eep **LE**arning) is China's first open-source deep learning platform. Unlike some frameworks built mainly for research, PaddlePaddle prioritizes ease of use and smooth integration across industries.
+Developed by Baidu, [PaddlePaddle](https://www.paddlepaddle.org.cn/en) (**PA**rallel **D**istributed **D**eep **LE**arning) is China's first open-source deep learning platform. Unlike some frameworks built mainly for research, PaddlePaddle prioritizes ease of use and smooth integration across industries.
It offers tools and resources similar to popular frameworks like TensorFlow and PyTorch, making it accessible for developers of all experience levels. From farming and factories to service businesses, PaddlePaddle's large developer community of over 4.77 million is helping create and deploy AI applications.
-By exporting your Ultralytics YOLOv8 models to PaddlePaddle format, you can tap into PaddlePaddle’s strengths in performance optimization. PaddlePaddle prioritizes efficient model execution and reduced memory usage. As a result, your YOLOv8 models can potentially achieve even better performance, delivering top-notch results in practical scenarios.
+By exporting your Ultralytics YOLOv8 models to PaddlePaddle format, you can tap into PaddlePaddle's strengths in performance optimization. PaddlePaddle prioritizes efficient model execution and reduced memory usage. As a result, your YOLOv8 models can potentially achieve even better performance, delivering top-notch results in practical scenarios.
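A minimal export sketch (the `paddle` format string follows the Ultralytics export table; the output directory name is indicative):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.export(format="paddle")  # creates a 'yolov8n_paddle_model' directory

# Load and run the exported model like any other Ultralytics model
paddle_model = YOLO("./yolov8n_paddle_model")
results = paddle_model("https://ultralytics.com/images/bus.jpg")
```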
## Key Features of PaddlePaddle Models
diff --git a/docs/en/integrations/paperspace.md b/docs/en/integrations/paperspace.md
index 7563125f..3ce8a1b3 100644
--- a/docs/en/integrations/paperspace.md
+++ b/docs/en/integrations/paperspace.md
@@ -46,12 +46,12 @@ Explore more capabilities of YOLOv8 and Paperspace Gradient in a discussion with
allowfullscreen>
-[TensorRT](https://developer.nvidia.com/tensorrt), developed by NVIDIA, is an advanced software development kit (SDK) designed for high-speed deep learning inference. It’s well-suited for real-time applications like object detection.
+[TensorRT](https://developer.nvidia.com/tensorrt), developed by NVIDIA, is an advanced software development kit (SDK) designed for high-speed deep learning inference. It's well-suited for real-time applications like object detection.
This toolkit optimizes deep learning models for NVIDIA GPUs, resulting in faster and more efficient operations. TensorRT models undergo optimizations such as layer fusion, precision calibration (INT8 and FP16), dynamic tensor memory management, and kernel auto-tuning. Converting deep learning models into the TensorRT format allows developers to fully realize the potential of NVIDIA GPUs.
@@ -40,7 +40,7 @@ TensorRT models offer a range of key features that contribute to their efficienc
## Deployment Options in TensorRT
-Before we look at the code for exporting YOLOv8 models to the TensorRT format, let’s understand where TensorRT models are normally used.
+Before we look at the code for exporting YOLOv8 models to the TensorRT format, let's understand where TensorRT models are normally used.
TensorRT offers several deployment options, and each option balances ease of integration, performance optimization, and flexibility differently:
@@ -205,7 +205,7 @@ Experimentation by NVIDIA led them to recommend using at least 500 calibration i
- **Increased development times:** Finding the "optimal" settings for INT8 calibration for dataset and device can take a significant amount of testing.
-- **Hardware dependency:** Calibration and performance gains could be highly hardware dependent and model weights are less transferrable.
+- **Hardware dependency:** Calibration and performance gains can be highly hardware-dependent, and model weights are less transferable. An illustrative INT8 export call is sketched after this list.
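As a rough sketch of what an INT8 TensorRT export could look like (argument names assume the Ultralytics export API; in particular, `data` supplying the calibration images is an assumption to verify against the current docs):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# INT8 needs calibration data; a small dataset YAML is assumed here
model.export(format="engine", int8=True, data="coco8.yaml")

trt_model = YOLO("yolov8n.engine")  # load the exported TensorRT engine
```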
## Ultralytics YOLO TensorRT Export Performance
diff --git a/docs/en/integrations/tf-graphdef.md b/docs/en/integrations/tf-graphdef.md
index 95d6f392..eb396ac3 100644
--- a/docs/en/integrations/tf-graphdef.md
+++ b/docs/en/integrations/tf-graphdef.md
@@ -107,7 +107,7 @@ For more details about supported export options, visit the [Ultralytics document
## Deploying Exported YOLOv8 TF GraphDef Models
-Once you’ve exported your YOLOv8 model to the TF GraphDef format, the next step is deployment. The primary and recommended first step for running a TF GraphDef model is to use the YOLO("model.pb") method, as previously shown in the usage code snippet.
+Once you've exported your YOLOv8 model to the TF GraphDef format, the next step is deployment. The primary and recommended first step for running a TF GraphDef model is to use the `YOLO("model.pb")` method, as previously shown in the usage code snippet. A minimal sketch of that workflow follows.
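Here is such a sketch (the `pb` format string follows the Ultralytics export table; the output filename is indicative):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.export(format="pb")  # creates 'yolov8n.pb' (TF GraphDef)

pb_model = YOLO("yolov8n.pb")  # load the exported model as described above
results = pb_model("https://ultralytics.com/images/bus.jpg")
```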
However, for more information on deploying your TF GraphDef models, take a look at the following resources:
diff --git a/docs/en/integrations/tf-savedmodel.md b/docs/en/integrations/tf-savedmodel.md
index 4fdbd001..e77a95ce 100644
--- a/docs/en/integrations/tf-savedmodel.md
+++ b/docs/en/integrations/tf-savedmodel.md
@@ -42,7 +42,7 @@ TF SavedModel provides a range of options to deploy your machine learning models
- **Mobile and Embedded Devices:** TensorFlow Lite, a lightweight solution for running machine learning models on mobile, embedded, and IoT devices, supports converting TF SavedModels to the TensorFlow Lite format. This allows you to deploy your models on a wide range of devices, from smartphones and tablets to microcontrollers and edge devices.
-- **TensorFlow Runtime:** TensorFlow Runtime (tfrt) is a high-performance runtime for executing TensorFlow graphs. It provides lower-level APIs for loading and running TF SavedModels in C++ environments. TensorFlow Runtime offers better performance compared to the standard TensorFlow runtime. It is suitable for deployment scenarios that require low-latency inference and tight integration with existing C++ codebases.
+- **TensorFlow Runtime:** TensorFlow Runtime (`tfrt`) is a high-performance runtime for executing TensorFlow graphs. It provides lower-level APIs for loading and running TF SavedModels in C++ environments. TensorFlow Runtime offers better performance compared to the standard TensorFlow runtime. It is suitable for deployment scenarios that require low-latency inference and tight integration with existing C++ codebases.
## Exporting YOLOv8 Models to TF SavedModel
@@ -105,7 +105,7 @@ Now that you have exported your YOLOv8 model to the TF SavedModel format, the ne
However, for in-depth instructions on deploying your TF SavedModel models, take a look at the following resources:
-- **[TensorFlow Serving](https://www.tensorflow.org/tfx/guide/serving)**: Here’s the developer documentation for how to deploy your TF SavedModel models using TensorFlow Serving.
+- **[TensorFlow Serving](https://www.tensorflow.org/tfx/guide/serving)**: Here's the developer documentation for how to deploy your TF SavedModel models using TensorFlow Serving.
- **[Run a TensorFlow SavedModel in Node.js](https://blog.tensorflow.org/2020/01/run-tensorflow-savedmodel-in-nodejs-directly-without-conversion.html)**: A TensorFlow blog post on running a TensorFlow SavedModel in Node.js directly without conversion.
diff --git a/docs/en/integrations/tfjs.md b/docs/en/integrations/tfjs.md
index 513adefb..474cd617 100644
--- a/docs/en/integrations/tfjs.md
+++ b/docs/en/integrations/tfjs.md
@@ -6,9 +6,9 @@ keywords: Ultralytics YOLOv8, TensorFlow.js, TF.js, Model Deployment, Node.js, M
# Export to TF.js Model Format From a YOLOv8 Model Format
-Deploying machine learning models directly in the browser or on Node.js can be tricky. You’ll need to make sure your model format is optimized for faster performance so that the model can be used to run interactive applications locally on the user’s device. The TensorFlow.js, or TF.js, model format is designed to use minimal power while delivering fast performance.
+Deploying machine learning models directly in the browser or on Node.js can be tricky. You'll need to make sure your model format is optimized for faster performance so that the model can be used to run interactive applications locally on the user's device. The TensorFlow.js, or TF.js, model format is designed to use minimal power while delivering fast performance.
-The ‘export to TF.js model format’ feature allows you to optimize your [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) models for high-speed and locally-run object detection inference. In this guide, we'll walk you through converting your models to the TF.js format, making it easier for your models to perform well on various local browsers and Node.js applications.
+The 'export to TF.js model format' feature allows you to optimize your [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) models for high-speed and locally-run object detection inference. In this guide, we'll walk you through converting your models to the TF.js format, making it easier for your models to perform well on various local browsers and Node.js applications.
## Why Should You Export to TF.js?
@@ -103,7 +103,7 @@ Now that you have exported your YOLOv8 model to the TF.js format, the next step
However, for in-depth instructions on deploying your TF.js models, take a look at the following resources:
-- **[Chrome Extension](https://www.tensorflow.org/js/tutorials/deployment/web_ml_in_chrome)**: Here’s the developer documentation for how to deploy your TF.js models to a Chrome extension.
+- **[Chrome Extension](https://www.tensorflow.org/js/tutorials/deployment/web_ml_in_chrome)**: Here's the developer documentation for how to deploy your TF.js models to a Chrome extension.
- **[Run TensorFlow.js in Node.js](https://www.tensorflow.org/js/guide/nodejs)**: A TensorFlow blog post on running TensorFlow.js in Node.js directly.
diff --git a/docs/en/integrations/tflite.md b/docs/en/integrations/tflite.md
index 5a39185b..d88223c4 100644
--- a/docs/en/integrations/tflite.md
+++ b/docs/en/integrations/tflite.md
@@ -34,7 +34,7 @@ TFLite models offer a wide range of key features that enable on-device machine l
## Deployment Options in TFLite
-Before we look at the code for exporting YOLOv8 models to the TFLite format, let’s understand how TFLite models are normally used.
+Before we look at the code for exporting YOLOv8 models to the TFLite format, let's understand how TFLite models are normally used.
TFLite offers various on-device deployment options for machine learning models, including:
@@ -117,6 +117,6 @@ After successfully exporting your Ultralytics YOLOv8 models to TFLite format, yo
In this guide, we focused on how to export to TFLite format. By converting your Ultralytics YOLOv8 models to TFLite model format, you can improve the efficiency and speed of YOLOv8 models, making them more effective and suitable for edge computing environments.
-For further details on usage, visit [TFLite’s official documentation](https://www.tensorflow.org/lite/guide).
+For further details on usage, visit the [TFLite official documentation](https://www.tensorflow.org/lite/guide).
Also, if you're curious about other Ultralytics YOLOv8 integrations, make sure to check out our [integration guide page](../integrations/index.md). You'll find tons of helpful info and insights waiting for you there.
diff --git a/docs/en/integrations/torchscript.md b/docs/en/integrations/torchscript.md
index 61ba35af..2f536a85 100644
--- a/docs/en/integrations/torchscript.md
+++ b/docs/en/integrations/torchscript.md
@@ -30,11 +30,11 @@ TorchScript, a key part of the PyTorch ecosystem, provides powerful features for
Here are the key features that make TorchScript a valuable tool for developers:
-- **Static Graph Execution**: TorchScript uses a static graph representation of the model’s computation, which is different from PyTorch’s dynamic graph execution. In static graph execution, the computational graph is defined and compiled once before the actual execution, resulting in improved performance during inference.
+- **Static Graph Execution**: TorchScript uses a static graph representation of the model's computation, which is different from PyTorch's dynamic graph execution. In static graph execution, the computational graph is defined and compiled once before the actual execution, resulting in improved performance during inference.
- **Model Serialization**: TorchScript allows you to serialize PyTorch models into a platform-independent format. Serialized models can be loaded without requiring the original Python code, enabling deployment in different runtime environments.
-- **JIT Compilation**: TorchScript uses Just-In-Time (JIT) compilation to convert PyTorch models into an optimized intermediate representation. JIT compiles the model’s computational graph, enabling efficient execution on target devices.
+- **JIT Compilation**: TorchScript uses Just-In-Time (JIT) compilation to convert PyTorch models into an optimized intermediate representation. JIT compiles the model's computational graph, enabling efficient execution on target devices.
- **Cross-Language Integration**: With TorchScript, you can export PyTorch models to other languages such as C++, Java, and JavaScript. This makes it easier to integrate PyTorch models into existing software systems written in different languages. A tiny scripting example follows this list.
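As a tiny, generic illustration of scripting and serialization (plain PyTorch, not the YOLOv8 export path):

```python
import torch


class Scale(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * 2.0


scripted = torch.jit.script(Scale())  # JIT-compile to a static TorchScript graph
scripted.save("scale.torchscript")  # serialized; loadable without Python source
loaded = torch.jit.load("scale.torchscript")
print(loaded(torch.ones(3)))  # tensor([2., 2., 2.])
```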
@@ -42,7 +42,7 @@ Here are the key features that make TorchScript a valuable tool for developers:
## Deployment Options in TorchScript
-Before we look at the code for exporting YOLOv8 models to the TorchScript format, let’s understand where TorchScript models are normally used.
+Before we look at the code for exporting YOLOv8 models to the TorchScript format, let's understand where TorchScript models are normally used.
TorchScript offers various deployment options for machine learning models, such as:
@@ -121,6 +121,6 @@ After successfully exporting your Ultralytics YOLOv8 models to TorchScript forma
In this guide, we explored the process of exporting Ultralytics YOLOv8 models to the TorchScript format. By following the provided instructions, you can optimize YOLOv8 models for performance and gain the flexibility to deploy them across various platforms and environments.
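For reference, a minimal export sketch using the Ultralytics Python API (assuming a pretrained `yolov8n.pt` checkpoint) looks like this:

```python
from ultralytics import YOLO

# Export a pretrained YOLOv8n model to TorchScript
model = YOLO("yolov8n.pt")
model.export(format="torchscript")  # typically writes 'yolov8n.torchscript'

# Reload the serialized model and run inference
ts_model = YOLO("yolov8n.torchscript")
results = ts_model("https://ultralytics.com/images/bus.jpg")
```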
-For further details on usage, visit [TorchScript’s official documentation](https://pytorch.org/docs/stable/jit.html).
+For further details on usage, visit [TorchScript's official documentation](https://pytorch.org/docs/stable/jit.html).
-Also, if you’d like to know more about other Ultralytics YOLOv8 integrations, visit our [integration guide page](../integrations/index.md). You'll find plenty of useful resources and insights there.
+Also, if you'd like to know more about other Ultralytics YOLOv8 integrations, visit our [integration guide page](../integrations/index.md). You'll find plenty of useful resources and insights there.
diff --git a/docs/en/integrations/weights-biases.md b/docs/en/integrations/weights-biases.md
index 847172ca..edb9b15a 100644
--- a/docs/en/integrations/weights-biases.md
+++ b/docs/en/integrations/weights-biases.md
@@ -8,7 +8,7 @@ keywords: Ultralytics, YOLOv8, Object Detection, Weights & Biases, Model Trainin
Object detection models like [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) have become integral to many computer vision applications. However, training, evaluating, and deploying these complex models introduce several challenges. Tracking key training metrics, comparing model variants, analyzing model behavior, and detecting issues require substantial instrumentation and experiment management.
-This guide showcases Ultralytics YOLOv8 integration with Weights & Biases’ for enhanced experiment tracking, model-checkpointing, and visualization of model performance. It also includes instructions for setting up the integration, training, fine-tuning, and visualizing results using Weights & Biases’ interactive features.
+This guide showcases the Ultralytics YOLOv8 integration with Weights & Biases for enhanced experiment tracking, model checkpointing, and visualization of model performance. It also includes instructions for setting up the integration, training, fine-tuning, and visualizing results using Weights & Biases' interactive features.
## Weights & Biases
@@ -93,7 +93,7 @@ Before diving into the usage instructions for YOLOv8 model training with Weights
### Understanding the Code
-Let’s understand the steps showcased in the usage code snippet above.
+Let's understand the steps showcased in the usage code snippet above.
- **Step 1: Initialize a Weights & Biases Run**: Start by initializing a Weights & Biases run, specifying the project name and the job type. This run will track and manage the training and validation processes of your model.
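A minimal sketch of this initialization step (the project name, dataset, and epoch count below are illustrative, and the callback import assumes the `wandb` Ultralytics integration is installed):

```python
import wandb
from ultralytics import YOLO
from wandb.integration.ultralytics import add_wandb_callback

# Step 1: initialize a Weights & Biases run with a project name and job type
wandb.init(project="ultralytics", job_type="training")

# Attach the W&B callback so training and validation are tracked automatically
model = YOLO("yolov8n.pt")
add_wandb_callback(model, enable_model_checkpointing=True)

model.train(data="coco8.yaml", epochs=5, imgsz=640)
wandb.finish()
```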
@@ -114,7 +114,7 @@ Let’s understand the steps showcased in the usage code snippet above.
Upon running the usage code snippet above, you can expect the following key outputs:
- The setup of a new run with its unique ID, indicating the start of the training process.
-- A concise summary of the model’s structure, including the number of layers and parameters.
+- A concise summary of the model's structure, including the number of layers and parameters.
- Regular updates on important metrics such as box loss, cls loss, dfl loss, precision, recall, and mAP scores during each training epoch.
- At the end of training, detailed metrics, including the model's inference speed and overall accuracy, are displayed.
- Links to the Weights & Biases dashboard for in-depth analysis and visualization of the training process, along with information on local log file locations.
@@ -141,15 +141,15 @@ After running the usage code snippet, you can access the Weights & Biases (W&B)
- **Model Artifacts Management**: Access and share model checkpoints, facilitating easy deployment and collaboration.
-- **Viewing Inference Results with Image Overlay**: Visualize the prediction results on images using interactive overlays in Weights & Biases, providing a clear and detailed view of model performance on real-world data. For more detailed information on Weights & Biases’ image overlay capabilities, check out this [link](https://docs.wandb.ai/guides/track/log/media#image-overlays).
+- **Viewing Inference Results with Image Overlay**: Visualize the prediction results on images using interactive overlays in Weights & Biases, providing a clear and detailed view of model performance on real-world data. For more detailed information on Weights & Biases' image overlay capabilities, see the [image overlays documentation](https://docs.wandb.ai/guides/track/log/media#image-overlays).
By using these features, you can effectively track, analyze, and optimize your YOLOv8 model's training, ensuring the best possible performance and efficiency.
## Summary
-This guide helped you explore Ultralytics’ YOLOv8 integration with Weights & Biases. It illustrates the ability of this integration to efficiently track and visualize model training and prediction results.
+This guide helped you explore the Ultralytics YOLOv8 integration with Weights & Biases and showed how the integration efficiently tracks and visualizes model training and prediction results.
For further details on usage, visit [Weights & Biases' official documentation](https://docs.wandb.ai/guides/integrations/ultralytics).
diff --git a/docs/en/models/rtdetr.md b/docs/en/models/rtdetr.md
index e93d036f..5dd55567 100644
--- a/docs/en/models/rtdetr.md
+++ b/docs/en/models/rtdetr.md
@@ -1,6 +1,6 @@
---
comments: true
-description: Discover the features and benefits of RT-DETR, Baidu’s efficient and adaptable real-time object detector powered by Vision Transformers, including pre-trained models.
+description: Discover the features and benefits of RT-DETR, Baidu's efficient and adaptable real-time object detector powered by Vision Transformers, including pre-trained models.
keywords: RT-DETR, Baidu, Vision Transformers, object detection, real-time performance, CUDA, TensorRT, IoU-aware query selection, Ultralytics, Python API, PaddlePaddle
---
diff --git a/docs/en/modes/val.md b/docs/en/modes/val.md
index 98f6fd00..006937ba 100644
--- a/docs/en/modes/val.md
+++ b/docs/en/modes/val.md
@@ -47,7 +47,7 @@ These are the notable functionalities offered by YOLOv8's Val mode:
## Usage Examples
-Validate trained YOLOv8n model accuracy on the COCO8 dataset. No argument need to passed as the `model` retains it's training `data` and arguments as model attributes. See Arguments section below for a full list of export arguments.
+Validate trained YOLOv8n model accuracy on the COCO8 dataset. No arguments need to be passed, as the `model` retains its training `data` and arguments as model attributes. See the Arguments section below for a full list of validation arguments.
!!! Example
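    A minimal no-argument validation sketch (assuming a trained `yolov8n.pt` checkpoint):

    ```python
    from ultralytics import YOLO

    # Load the trained model; it recalls its training `data` and settings as attributes
    model = YOLO("yolov8n.pt")

    # Validate with no arguments
    metrics = model.val()
    print(metrics.box.map)  # mAP50-95
    ```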
diff --git a/docs/en/reference/data/annotator.md b/docs/en/reference/data/annotator.md
index 8e9309ec..568fbe9d 100644
--- a/docs/en/reference/data/annotator.md
+++ b/docs/en/reference/data/annotator.md
@@ -1,5 +1,5 @@
---
-description: Enhance your machine learning model with Ultralytics’ auto_annotate function. Simplify data annotation for improved model training.
+description: Enhance your machine learning model with Ultralytics' auto_annotate function. Simplify data annotation for improved model training.
keywords: Ultralytics, Auto-Annotate, Machine Learning, AI, Annotation, Data Processing, Model Training
---
diff --git a/docs/en/reference/data/utils.md b/docs/en/reference/data/utils.md
index 7ac3add6..a157ce8c 100644
--- a/docs/en/reference/data/utils.md
+++ b/docs/en/reference/data/utils.md
@@ -1,5 +1,5 @@
---
-description: Uncover a detailed guide to Ultralytics data utilities. Learn functions from img2label_paths to autosplit, all boosting your YOLO model’s efficiency.
+description: Uncover a detailed guide to Ultralytics data utilities. Learn functions from img2label_paths to autosplit, all boosting your YOLO model's efficiency.
keywords: Ultralytics, data utils, YOLO, img2label_paths, exif_size, polygon2mask, polygons2masks_overlap, check_cls_dataset, delete_dsstore, autosplit
---
diff --git a/docs/en/tasks/detect.md b/docs/en/tasks/detect.md
index a9cfbe2a..ae13a00d 100644
--- a/docs/en/tasks/detect.md
+++ b/docs/en/tasks/detect.md
@@ -82,7 +82,7 @@ YOLO detection dataset format can be found in detail in the [Dataset Guide](../d
## Val
-Validate trained YOLOv8n model accuracy on the COCO8 dataset. No argument need to passed as the `model` retains it's training `data` and arguments as model attributes.
+Validate trained YOLOv8n model accuracy on the COCO8 dataset. No arguments need to be passed, as the `model` retains its training `data` and arguments as model attributes.
!!! Example
diff --git a/docs/en/usage/cli.md b/docs/en/usage/cli.md
index 35dd45da..596c624a 100644
--- a/docs/en/usage/cli.md
+++ b/docs/en/usage/cli.md
@@ -110,7 +110,7 @@ Train YOLOv8n on the COCO8 dataset for 100 epochs at image size 640. For a full
## Val
-Validate trained YOLOv8n model accuracy on the COCO8 dataset. No argument need to passed as the `model` retains it's training `data` and arguments as model attributes.
+Validate trained YOLOv8n model accuracy on the COCO8 dataset. No arguments need to be passed, as the `model` retains its training `data` and arguments as model attributes.
!!! Example "Example"
diff --git a/docs/en/yolov5/environments/google_cloud_quickstart_tutorial.md b/docs/en/yolov5/environments/google_cloud_quickstart_tutorial.md
index 45754445..1574dedd 100644
--- a/docs/en/yolov5/environments/google_cloud_quickstart_tutorial.md
+++ b/docs/en/yolov5/environments/google_cloud_quickstart_tutorial.md
@@ -6,15 +6,15 @@ keywords: YOLOv5, Google Cloud Platform, GCP, Deep Learning VM, ML model trainin
# Mastering YOLOv5 🚀 Deployment on Google Cloud Platform (GCP) Deep Learning Virtual Machine (VM) ⭐
-Embarking on the journey of artificial intelligence and machine learning can be exhilarating, especially when you leverage the power and flexibility of a cloud platform. Google Cloud Platform (GCP) offers robust tools tailored for machine learning enthusiasts and professionals alike. One such tool is the Deep Learning VM that is preconfigured for data science and ML tasks. In this tutorial, we will navigate through the process of setting up YOLOv5 on a GCP Deep Learning VM. Whether you’re taking your first steps in ML or you’re a seasoned practitioner, this guide is designed to provide you with a clear pathway to implementing object detection models powered by YOLOv5.
+Embarking on the journey of artificial intelligence and machine learning can be exhilarating, especially when you leverage the power and flexibility of a cloud platform. Google Cloud Platform (GCP) offers robust tools tailored for machine learning enthusiasts and professionals alike. One such tool is the Deep Learning VM that is preconfigured for data science and ML tasks. In this tutorial, we will navigate through the process of setting up YOLOv5 on a GCP Deep Learning VM. Whether you're taking your first steps in ML or you're a seasoned practitioner, this guide is designed to provide you with a clear pathway to implementing object detection models powered by YOLOv5.
-🆓 Plus, if you're a fresh GCP user, you’re in luck with a [$300 free credit offer](https://cloud.google.com/free/docs/gcp-free-tier#free-trial) to kickstart your projects.
+🆓 Plus, if you're a fresh GCP user, you're in luck with a [$300 free credit offer](https://cloud.google.com/free/docs/gcp-free-tier#free-trial) to kickstart your projects.
In addition to GCP, explore other accessible quickstart options for YOLOv5, like our [Colab Notebook](https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb)