diff --git a/docs/README.md b/docs/README.md
index 565a0010..b3766abe 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -107,7 +107,7 @@ Choose a hosting provider and deployment method for your MkDocs documentation:
- Update the "Custom domain" in your repository's settings for a personalized URL.
-
+
- For detailed deployment guidance, consult the [MkDocs documentation](https://www.mkdocs.org/user-guide/deploying-your-docs/).
@@ -115,7 +115,7 @@ Choose a hosting provider and deployment method for your MkDocs documentation:
We cherish the community's input as it drives Ultralytics open-source initiatives. Dive into the [Contributing Guide](https://docs.ultralytics.com/help/contributing) and share your thoughts via our [Survey](https://ultralytics.com/survey?utm_source=github&utm_medium=social&utm_campaign=Survey). A heartfelt thank you 🙏 to each contributor!
-
+
## 📜 License
diff --git a/docs/en/datasets/classify/caltech101.md b/docs/en/datasets/classify/caltech101.md
index 6a75f66a..7029c5e6 100644
--- a/docs/en/datasets/classify/caltech101.md
+++ b/docs/en/datasets/classify/caltech101.md
@@ -53,7 +53,7 @@ To train a YOLO model on the Caltech-101 dataset for 100 epochs, you can use the
The Caltech-101 dataset contains high-quality color images of various objects, providing a well-structured dataset for object recognition tasks. Here are some examples of images from the dataset:
-
+
The example showcases the variety and complexity of the objects in the Caltech-101 dataset, emphasizing the significance of a diverse dataset for training robust object recognition models.
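For context on the training run referenced above, here is a minimal sketch using the standard Ultralytics Python API; the `caltech101` dataset key and the 416-pixel image size reflect the usual classification workflow and are assumptions, not part of this diff:

```python
from ultralytics import YOLO

# Load a pretrained YOLOv8 classification model
model = YOLO("yolov8n-cls.pt")

# Train on Caltech-101 for 100 epochs (dataset key and image size assumed)
results = model.train(data="caltech101", epochs=100, imgsz=416)
```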
diff --git a/docs/en/datasets/classify/caltech256.md b/docs/en/datasets/classify/caltech256.md
index c7b367cc..a2551b9a 100644
--- a/docs/en/datasets/classify/caltech256.md
+++ b/docs/en/datasets/classify/caltech256.md
@@ -64,7 +64,7 @@ To train a YOLO model on the Caltech-256 dataset for 100 epochs, you can use the
The Caltech-256 dataset contains high-quality color images of various objects, providing a comprehensive dataset for object recognition tasks. Here are some examples of images from the dataset ([credit](https://ml4a.github.io/demos/tsne_viewer.html)):
-
+
The example showcases the diversity and complexity of the objects in the Caltech-256 dataset, emphasizing the importance of a varied dataset for training robust object recognition models.
diff --git a/docs/en/datasets/classify/cifar10.md b/docs/en/datasets/classify/cifar10.md
index 54f9e9c2..39762681 100644
--- a/docs/en/datasets/classify/cifar10.md
+++ b/docs/en/datasets/classify/cifar10.md
@@ -67,7 +67,7 @@ To train a YOLO model on the CIFAR-10 dataset for 100 epochs with an image size
The CIFAR-10 dataset contains color images of various objects, providing a well-structured dataset for image classification tasks. Here are some examples of images from the dataset:
-
+
The example showcases the variety and complexity of the objects in the CIFAR-10 dataset, highlighting the importance of a diverse dataset for training robust image classification models.
diff --git a/docs/en/datasets/classify/cifar100.md b/docs/en/datasets/classify/cifar100.md
index 4a8ba4bd..722eccf9 100644
--- a/docs/en/datasets/classify/cifar100.md
+++ b/docs/en/datasets/classify/cifar100.md
@@ -56,7 +56,7 @@ To train a YOLO model on the CIFAR-100 dataset for 100 epochs with an image size
The CIFAR-100 dataset contains color images of various objects, providing a well-structured dataset for image classification tasks. Here are some examples of images from the dataset:
-
+
The example showcases the variety and complexity of the objects in the CIFAR-100 dataset, highlighting the importance of a diverse dataset for training robust image classification models.
diff --git a/docs/en/datasets/classify/fashion-mnist.md b/docs/en/datasets/classify/fashion-mnist.md
index 656473ed..674e0858 100644
--- a/docs/en/datasets/classify/fashion-mnist.md
+++ b/docs/en/datasets/classify/fashion-mnist.md
@@ -81,7 +81,7 @@ To train a CNN model on the Fashion-MNIST dataset for 100 epochs with an image s
The Fashion-MNIST dataset contains grayscale images of Zalando's article images, providing a well-structured dataset for image classification tasks. Here are some examples of images from the dataset:
-
+
The example showcases the variety and complexity of the images in the Fashion-MNIST dataset, highlighting the importance of a diverse dataset for training robust image classification models.
diff --git a/docs/en/datasets/classify/imagenet.md b/docs/en/datasets/classify/imagenet.md
index 53aabcce..6ec3f920 100644
--- a/docs/en/datasets/classify/imagenet.md
+++ b/docs/en/datasets/classify/imagenet.md
@@ -66,7 +66,7 @@ To train a deep learning model on the ImageNet dataset for 100 epochs with an im
The ImageNet dataset contains high-resolution images spanning thousands of object categories, providing a diverse and extensive dataset for training and evaluating computer vision models. Here are some examples of images from the dataset:
-
+
The example showcases the variety and complexity of the images in the ImageNet dataset, highlighting the importance of a diverse dataset for training robust computer vision models.
diff --git a/docs/en/datasets/classify/imagenet10.md b/docs/en/datasets/classify/imagenet10.md
index a079986c..cc9c9ec7 100644
--- a/docs/en/datasets/classify/imagenet10.md
+++ b/docs/en/datasets/classify/imagenet10.md
@@ -52,7 +52,7 @@ To test a deep learning model on the ImageNet10 dataset with an image size of 22
The ImageNet10 dataset contains a subset of images from the original ImageNet dataset. These images are chosen to represent the first 10 classes in the dataset, providing a diverse yet compact dataset for quick testing and evaluation.
- The example showcases the variety and complexity of the images in the ImageNet10 dataset, highlighting its usefulness for sanity checks and quick testing of computer vision models.
+ The example showcases the variety and complexity of the images in the ImageNet10 dataset, highlighting its usefulness for sanity checks and quick testing of computer vision models.
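A minimal sketch of such a quick sanity-check run with the Ultralytics Python API; the `imagenet10` dataset key, single epoch, and 224-pixel image size are assumptions:

```python
from ultralytics import YOLO

# Quick sanity check: run one short training epoch on the small ImageNet10 split
model = YOLO("yolov8n-cls.pt")
results = model.train(data="imagenet10", epochs=1, imgsz=224)
```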
## Citations and Acknowledgments
diff --git a/docs/en/datasets/classify/imagenette.md b/docs/en/datasets/classify/imagenette.md
index 9a2a128f..aea183f3 100644
--- a/docs/en/datasets/classify/imagenette.md
+++ b/docs/en/datasets/classify/imagenette.md
@@ -54,7 +54,7 @@ To train a model on the ImageNette dataset for 100 epochs with a standard image
The ImageNette dataset contains colored images of various objects and scenes, providing a diverse dataset for image classification tasks. Here are some examples of images from the dataset:
-
+
The example showcases the variety and complexity of the images in the ImageNette dataset, highlighting the importance of a diverse dataset for training robust image classification models.
diff --git a/docs/en/datasets/classify/imagewoof.md b/docs/en/datasets/classify/imagewoof.md
index e6668dfc..0d768b07 100644
--- a/docs/en/datasets/classify/imagewoof.md
+++ b/docs/en/datasets/classify/imagewoof.md
@@ -89,7 +89,7 @@ It's important to note that using smaller images will likely yield lower perform
The ImageWoof dataset contains colorful images of various dog breeds, providing a challenging dataset for image classification tasks. Here are some examples of images from the dataset:
-
+
The example showcases the subtle differences and similarities among the different dog breeds in the ImageWoof dataset, highlighting the complexity and difficulty of the classification task.
diff --git a/docs/en/datasets/detect/african-wildlife.md b/docs/en/datasets/detect/african-wildlife.md
index 2c5b346a..bdd392cd 100644
--- a/docs/en/datasets/detect/african-wildlife.md
+++ b/docs/en/datasets/detect/african-wildlife.md
@@ -91,7 +91,7 @@ To train a YOLOv8n model on the African wildlife dataset for 100 epochs with an
The African wildlife dataset comprises a wide variety of images showcasing diverse animal species and their natural habitats. Below are examples of images from the dataset, each accompanied by its corresponding annotations.
-
+
- **Mosaiced Image**: Here, we present a training batch consisting of mosaiced dataset images. Mosaicing, a training technique, combines multiple images into one, enriching batch diversity. This method helps enhance the model's ability to generalize across different object sizes, aspect ratios, and contexts.
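To make the mosaicing idea concrete, here is a simplified, framework-agnostic sketch of a 2×2 mosaic; a real training implementation also resizes the images and remaps their box coordinates into the combined canvas:

```python
import numpy as np


def mosaic_2x2(images: list, size: int = 640) -> np.ndarray:
    """Paste four HxWx3 uint8 images into one (2*size, 2*size, 3) canvas."""
    assert len(images) == 4, "a 2x2 mosaic needs exactly four images"
    canvas = np.zeros((2 * size, 2 * size, 3), dtype=np.uint8)
    for i, img in enumerate(images):
        h, w = min(img.shape[0], size), min(img.shape[1], size)  # naive crop, no resize
        r, c = divmod(i, 2)  # quadrant row/column
        canvas[r * size : r * size + h, c * size : c * size + w] = img[:h, :w]
    return canvas
```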
diff --git a/docs/en/datasets/detect/argoverse.md b/docs/en/datasets/detect/argoverse.md
index 985ceca1..56023b6b 100644
--- a/docs/en/datasets/detect/argoverse.md
+++ b/docs/en/datasets/detect/argoverse.md
@@ -70,7 +70,7 @@ To train a YOLOv8n model on the Argoverse dataset for 100 epochs with an image s
The Argoverse dataset contains a diverse set of sensor data, including camera images, LiDAR point clouds, and HD map information, providing rich context for autonomous driving tasks. Here are some examples of data from the dataset, along with their corresponding annotations:
-
+
- **Argoverse 3D Tracking**: This image demonstrates an example of 3D object tracking, where objects are annotated with 3D bounding boxes. The dataset provides LiDAR point clouds and camera images to facilitate the development of models for this task.
diff --git a/docs/en/datasets/detect/brain-tumor.md b/docs/en/datasets/detect/brain-tumor.md
index a36fab40..4ec217d5 100644
--- a/docs/en/datasets/detect/brain-tumor.md
+++ b/docs/en/datasets/detect/brain-tumor.md
@@ -90,7 +90,7 @@ To train a YOLOv8n model on the brain tumor dataset for 100 epochs with an image
The brain tumor dataset encompasses a wide array of images featuring diverse object categories and intricate scenes. Presented below are examples of images from the dataset, accompanied by their respective annotations.
-
+
- **Mosaiced Image**: Displayed here is a training batch comprising mosaiced dataset images. Mosaicing, a training technique, consolidates multiple images into one, enhancing batch diversity. This approach aids in improving the model's capacity to generalize across various object sizes, aspect ratios, and contexts.
diff --git a/docs/en/datasets/detect/coco.md b/docs/en/datasets/detect/coco.md
index 733c3a1d..d3b0589e 100644
--- a/docs/en/datasets/detect/coco.md
+++ b/docs/en/datasets/detect/coco.md
@@ -87,7 +87,7 @@ To train a YOLOv8n model on the COCO dataset for 100 epochs with an image size o
The COCO dataset contains a diverse set of images with various object categories and complex scenes. Here are some examples of images from the dataset, along with their corresponding annotations:
-
+
- **Mosaiced Image**: This image demonstrates a training batch composed of mosaiced dataset images. Mosaicing is a technique used during training that combines multiple images into a single image to increase the variety of objects and scenes within each training batch. This helps improve the model's ability to generalize to different object sizes, aspect ratios, and contexts.
diff --git a/docs/en/datasets/detect/coco8.md b/docs/en/datasets/detect/coco8.md
index 6577ab0f..cae9e673 100644
--- a/docs/en/datasets/detect/coco8.md
+++ b/docs/en/datasets/detect/coco8.md
@@ -62,7 +62,7 @@ To train a YOLOv8n model on the COCO8 dataset for 100 epochs with an image size
Here are some examples of images from the COCO8 dataset, along with their corresponding annotations:
-
+
- **Mosaiced Image**: This image demonstrates a training batch composed of mosaiced dataset images. Mosaicing is a technique used during training that combines multiple images into a single image to increase the variety of objects and scenes within each training batch. This helps improve the model's ability to generalize to different object sizes, aspect ratios, and contexts.
diff --git a/docs/en/datasets/detect/globalwheat2020.md b/docs/en/datasets/detect/globalwheat2020.md
index 28c95c10..a8e255b5 100644
--- a/docs/en/datasets/detect/globalwheat2020.md
+++ b/docs/en/datasets/detect/globalwheat2020.md
@@ -65,7 +65,7 @@ To train a YOLOv8n model on the Global Wheat Head Dataset for 100 epochs with an
The Global Wheat Head Dataset contains a diverse set of outdoor field images, capturing the natural variability in wheat head appearances, environments, and conditions. Here are some examples of data from the dataset, along with their corresponding annotations:
-
+
- **Wheat Head Detection**: This image demonstrates an example of wheat head detection, where wheat heads are annotated with bounding boxes. The dataset provides a variety of images to facilitate the development of models for this task.
diff --git a/docs/en/datasets/detect/index.md b/docs/en/datasets/detect/index.md
index 97806cb0..934fe385 100644
--- a/docs/en/datasets/detect/index.md
+++ b/docs/en/datasets/detect/index.md
@@ -34,15 +34,15 @@ names:
Labels for this format should be exported to YOLO format with one `*.txt` file per image. If there are no objects in an image, no `*.txt` file is required. The `*.txt` file should be formatted with one row per object in `class x_center y_center width height` format. Box coordinates must be in **normalized xywh** format (from 0 to 1). If your boxes are in pixels, you should divide `x_center` and `width` by image width, and `y_center` and `height` by image height. Class numbers should be zero-indexed (start with 0).
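As an illustrative sketch of the normalization described above (the class ID, box, and image dimensions are made-up values):

```python
def to_yolo_format(cls_id: int, box_xyxy: tuple, img_w: int, img_h: int) -> str:
    """Convert a pixel-space (x1, y1, x2, y2) box to a normalized YOLO label row."""
    x1, y1, x2, y2 = box_xyxy
    x_center = (x1 + x2) / 2 / img_w
    y_center = (y1 + y2) / 2 / img_h
    width = (x2 - x1) / img_w
    height = (y2 - y1) / img_h
    return f"{cls_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"


# e.g. a 200x100 px box centered at (320, 240) in a 640x480 image, class 0
print(to_yolo_format(0, (220, 190, 420, 290), 640, 480))  # -> "0 0.500000 0.500000 0.312500 0.208333"
```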
-
-
+
-
+
-
+
-
+
@@ -41,19 +41,19 @@ Semantic search is a technique for finding similar images to a given image. It i
For example: In this VOC Exploration dashboard, the user selects a couple of airplane images like this:
-
+
-
+
-
+
-
+
-
+
-
+
-
+
- **Mosaiced Image**: This image demonstrates a training batch composed of mosaiced dataset images. Mosaicing is a technique used during training that combines multiple images into a single image to increase the variety of objects and scenes within each training batch. This helps improve the model's ability to generalize to different object sizes, aspect ratios, and contexts.
diff --git a/docs/en/datasets/obb/index.md b/docs/en/datasets/obb/index.md
index f7708a10..10631703 100644
--- a/docs/en/datasets/obb/index.md
+++ b/docs/en/datasets/obb/index.md
@@ -20,7 +20,7 @@ class_index x1 y1 x2 y2 x3 y3 x4 y4
Internally, YOLO processes losses and outputs in the `xywhr` format, which represents the bounding box's center point (xy), width, height, and rotation.
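For illustration, one common way to derive `xywhr` from the four-corner label format uses OpenCV's minimum-area rectangle; this is a sketch of the general idea, and the exact angle convention used internally by YOLO may differ:

```python
import cv2
import numpy as np


def poly_to_xywhr(points: np.ndarray) -> tuple:
    """Convert an OBB given as four (x, y) corners to (x_center, y_center, width, height, rotation)."""
    (cx, cy), (w, h), angle_deg = cv2.minAreaRect(points.astype(np.float32))
    return cx, cy, w, h, np.deg2rad(angle_deg)  # rotation returned in radians


# Four corners, e.g. from a "class x1 y1 x2 y2 x3 y3 x4 y4" row denormalized to pixels
corners = np.array([[100, 50], [200, 50], [200, 120], [100, 120]])
print(poly_to_xywhr(corners))
```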
-
+
- **Mosaiced Image**: This image demonstrates a training batch composed of mosaiced dataset images. Mosaicing is a technique used during training that combines multiple images into a single image to increase the variety of objects and scenes within each training batch. This helps improve the model's ability to generalize to different object sizes, aspect ratios, and contexts.
diff --git a/docs/en/datasets/pose/tiger-pose.md b/docs/en/datasets/pose/tiger-pose.md
index d1e338cc..457e8fef 100644
--- a/docs/en/datasets/pose/tiger-pose.md
+++ b/docs/en/datasets/pose/tiger-pose.md
@@ -64,7 +64,7 @@ To train a YOLOv8n-pose model on the Tiger-Pose dataset for 100 epochs with an i
Here are some examples of images from the Tiger-Pose dataset, along with their corresponding annotations:
-
+
- **Mosaiced Image**: This image demonstrates a training batch composed of mosaiced dataset images. Mosaicing is a technique used during training that combines multiple images into a single image to increase the variety of objects and scenes within each training batch. This helps improve the model's ability to generalize to different object sizes, aspect ratios, and contexts.
diff --git a/docs/en/datasets/segment/carparts-seg.md b/docs/en/datasets/segment/carparts-seg.md
index 60890e06..d5799954 100644
--- a/docs/en/datasets/segment/carparts-seg.md
+++ b/docs/en/datasets/segment/carparts-seg.md
@@ -72,7 +72,7 @@ To train Ultralytics YOLOv8n model on the Carparts Segmentation dataset for 100
The Carparts Segmentation dataset includes a diverse array of images and videos taken from various perspectives. Below, you'll find examples of data from the dataset along with their corresponding annotations:
-
+
- This image illustrates object segmentation within a sample, featuring annotated bounding boxes with masks surrounding identified objects. The dataset consists of a varied set of images captured in various locations, environments, and densities, serving as a comprehensive resource for crafting models specific to this task.
- This instance highlights the diversity and complexity inherent in the dataset, emphasizing the crucial role of high-quality data in computer vision tasks, particularly in the realm of car parts segmentation.
diff --git a/docs/en/datasets/segment/coco.md b/docs/en/datasets/segment/coco.md
index e02b6771..bb88a232 100644
--- a/docs/en/datasets/segment/coco.md
+++ b/docs/en/datasets/segment/coco.md
@@ -76,7 +76,7 @@ To train a YOLOv8n-seg model on the COCO-Seg dataset for 100 epochs with an imag
COCO-Seg, like its predecessor COCO, contains a diverse set of images with various object categories and complex scenes. However, COCO-Seg introduces more detailed instance segmentation masks for each object in the images. Here are some examples of images from the dataset, along with their corresponding instance segmentation masks:
-
+
- **Mosaiced Image**: This image demonstrates a training batch composed of mosaiced dataset images. Mosaicing is a technique used during training that combines multiple images into a single image to increase the variety of objects and scenes within each training batch. This aids the model's ability to generalize to different object sizes, aspect ratios, and contexts.
diff --git a/docs/en/datasets/segment/coco8-seg.md b/docs/en/datasets/segment/coco8-seg.md
index bcca4a26..f22d6a68 100644
--- a/docs/en/datasets/segment/coco8-seg.md
+++ b/docs/en/datasets/segment/coco8-seg.md
@@ -51,7 +51,7 @@ To train a YOLOv8n-seg model on the COCO8-Seg dataset for 100 epochs with an ima
Here are some examples of images from the COCO8-Seg dataset, along with their corresponding annotations:
-
+
- **Mosaiced Image**: This image demonstrates a training batch composed of mosaiced dataset images. Mosaicing is a technique used during training that combines multiple images into a single image to increase the variety of objects and scenes within each training batch. This helps improve the model's ability to generalize to different object sizes, aspect ratios, and contexts.
diff --git a/docs/en/datasets/segment/crack-seg.md b/docs/en/datasets/segment/crack-seg.md
index 83f01987..5fa99dfb 100644
--- a/docs/en/datasets/segment/crack-seg.md
+++ b/docs/en/datasets/segment/crack-seg.md
@@ -61,7 +61,7 @@ To train Ultralytics YOLOv8n model on the Crack Segmentation dataset for 100 epo
The Crack Segmentation dataset comprises a varied collection of images and videos captured from multiple perspectives. Below are instances of data from the dataset, accompanied by their respective annotations:
-
+
- This image presents an example of image object segmentation, featuring annotated bounding boxes with masks outlining identified objects. The dataset includes a diverse array of images taken in different locations, environments, and densities, making it a comprehensive resource for developing models designed for this particular task.
diff --git a/docs/en/datasets/segment/package-seg.md b/docs/en/datasets/segment/package-seg.md
index 86fad9e9..bf88410f 100644
--- a/docs/en/datasets/segment/package-seg.md
+++ b/docs/en/datasets/segment/package-seg.md
@@ -61,7 +61,7 @@ To train Ultralytics YOLOv8n model on the Package Segmentation dataset for 100 e
The Package Segmentation dataset comprises a varied collection of images and videos captured from multiple perspectives. Below are instances of data from the dataset, accompanied by their respective annotations:
-
+
- This image displays an instance of image object detection, featuring annotated bounding boxes with masks outlining recognized objects. The dataset incorporates a diverse collection of images taken in different locations, environments, and densities. It serves as a comprehensive resource for developing models specific to this task.
- The example emphasizes the diversity and complexity present in the Package Segmentation dataset, underscoring the significance of high-quality data for computer vision tasks.
diff --git a/docs/en/guides/analytics.md b/docs/en/guides/analytics.md
index 96cadb86..a29e6abe 100644
--- a/docs/en/guides/analytics.md
+++ b/docs/en/guides/analytics.md
@@ -12,9 +12,9 @@ This guide provides a comprehensive overview of three fundamental types of data
### Visual Samples
-| Line Graph | Bar Plot | Pie Chart |
-| :----------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------------------------------------------------------------------: | :---------------------------------------------------------------------------------------------------------------: |
-|  |  |  |
+| Line Graph | Bar Plot | Pie Chart |
+| :------------------------------------------------------------------------------------: | :--------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------: |
+|  |  |  |
### Why Graphs are Important
diff --git a/docs/en/guides/azureml-quickstart.md b/docs/en/guides/azureml-quickstart.md
index 0ffaa45d..92e3d837 100644
--- a/docs/en/guides/azureml-quickstart.md
+++ b/docs/en/guides/azureml-quickstart.md
@@ -33,7 +33,7 @@ Before you can get started, make sure you have access to an AzureML workspace. I
From your AzureML workspace, select Compute > Compute instances > New, then select the instance with the resources you need.
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
!!! Note
@@ -168,7 +168,7 @@ deepstream-app -c deepstream_app_config.txt
Generating the TensorRT engine file before inference starts can take a long time, so please be patient.
-
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
!!! Note
@@ -287,7 +287,7 @@ YOLOv8 benchmarks were run by the Ultralytics team on 10 different model formats
Even though all model exports work with NVIDIA Jetson, we have only included **PyTorch, TorchScript, and TensorRT** in the comparison chart below because they make use of the GPU on the Jetson and are guaranteed to produce the best results. All the other exports only utilize the CPU, and their performance is not as good as the three above. You can find benchmarks for all exports in the section after this chart.
## Next Steps
diff --git a/docs/en/guides/object-counting.md b/docs/en/guides/object-counting.md
index 033b845f..00aa9174 100644
--- a/docs/en/guides/object-counting.md
+++ b/docs/en/guides/object-counting.md
@@ -41,10 +41,10 @@ Object counting with [Ultralytics YOLOv8](https://github.com/ultralytics/ultraly
## Real World Applications
-| Logistics | Aquaculture |
-| :-----------------------------------------------------------------------------------------------------------------------------------------------------------: | :-------------------------------------------------------------------------------------------------------------------------------------------------: |
-|  |  |
-| Conveyor Belt Packets Counting Using Ultralytics YOLOv8 | Fish Counting in Sea using Ultralytics YOLOv8 |
+| Logistics | Aquaculture |
+| :-----------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------------------------------------: |
+|  |  |
+| Conveyor Belt Packets Counting Using Ultralytics YOLOv8 | Fish Counting in Sea using Ultralytics YOLOv8 |
!!! Example "Object Counting using YOLOv8 Example"
diff --git a/docs/en/guides/object-cropping.md b/docs/en/guides/object-cropping.md
index d314cee0..3efaba93 100644
--- a/docs/en/guides/object-cropping.md
+++ b/docs/en/guides/object-cropping.md
@@ -29,10 +29,10 @@ Object cropping with [Ultralytics YOLOv8](https://github.com/ultralytics/ultraly
## Visuals
-| Airport Luggage |
-| :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
-|  |
-| Suitcases Cropping at airport conveyor belt using Ultralytics YOLOv8 |
+| Airport Luggage |
+| :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
+|  |
+| Suitcases Cropping at airport conveyor belt using Ultralytics YOLOv8 |
!!! Example "Object Cropping using YOLOv8 Example"
diff --git a/docs/en/guides/optimizing-openvino-latency-vs-throughput-modes.md b/docs/en/guides/optimizing-openvino-latency-vs-throughput-modes.md
index b6886d5d..a9acfb12 100644
--- a/docs/en/guides/optimizing-openvino-latency-vs-throughput-modes.md
+++ b/docs/en/guides/optimizing-openvino-latency-vs-throughput-modes.md
@@ -6,7 +6,7 @@ keywords: Ultralytics YOLO, OpenVINO optimization, deep learning, model inferenc
# Optimizing OpenVINO Inference for Ultralytics YOLO Models: A Comprehensive Guide
-
## Introduction
diff --git a/docs/en/guides/parking-management.md b/docs/en/guides/parking-management.md
index cc42fb9b..e25936fb 100644
--- a/docs/en/guides/parking-management.md
+++ b/docs/en/guides/parking-management.md
@@ -29,10 +29,10 @@ Parking management with [Ultralytics YOLOv8](https://github.com/ultralytics/ultr
## Real World Applications
-| Parking Management System | Parking Management System |
-| :-----------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------------------------------------: |
-|  |  |
-| Parking management Aerial View using Ultralytics YOLOv8 | Parking management Top View using Ultralytics YOLOv8 |
+| Parking Management System | Parking Management System |
+| :----------------------------------------------------------------------------------------------------------------------------------------------------------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
+|  |  |
+| Parking management Aerial View using Ultralytics YOLOv8 | Parking management Top View using Ultralytics YOLOv8 |
## Parking Management System Code Workflow
@@ -61,7 +61,7 @@ Parking management with [Ultralytics YOLOv8](https://github.com/ultralytics/ultr
- After defining the parking areas with polygons, click `save` to store a JSON file with the data in your working directory.
-
+
### Python Code for Parking Management
diff --git a/docs/en/guides/preprocessing_annotated_data.md b/docs/en/guides/preprocessing_annotated_data.md
index 8935d7d8..ef771a28 100644
--- a/docs/en/guides/preprocessing_annotated_data.md
+++ b/docs/en/guides/preprocessing_annotated_data.md
@@ -73,7 +73,7 @@ Here are some other benefits of data augmentation:
Common augmentation techniques include flipping, rotation, scaling, and color adjustments. Several libraries, such as Albumentations, Imgaug, and TensorFlow's ImageDataGenerator, can generate these augmentations.
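For instance, a small Albumentations pipeline covering the techniques just mentioned might look like this (the image path is a placeholder; detection use cases would also pass `bbox_params` so labels are transformed alongside pixels):

```python
import albumentations as A
import cv2

# A minimal augmentation pipeline mirroring the techniques mentioned above
transform = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.Rotate(limit=15, p=0.5),
        A.RandomScale(scale_limit=0.2, p=0.5),
        A.HueSaturationValue(p=0.5),
    ]
)

image = cv2.imread("path/to/image.jpg")  # placeholder path
augmented = transform(image=image)["image"]
```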
-
+
-
+
-
+
-
+
-
+
-
+
-
+
The Security Alarm System Project utilizing Ultralytics YOLOv8 integrates advanced computer vision capabilities to enhance security measures. YOLOv8, developed by Ultralytics, provides real-time object detection, allowing the system to identify and respond to potential security threats promptly. This project offers several advantages:
@@ -175,7 +175,7 @@ That's it! When you execute the code, you'll receive a single notification on yo
#### Email Received Sample
-
## FAQ
diff --git a/docs/en/guides/speed-estimation.md b/docs/en/guides/speed-estimation.md
index ee42bfe6..9e9ddce5 100644
--- a/docs/en/guides/speed-estimation.md
+++ b/docs/en/guides/speed-estimation.md
@@ -33,10 +33,10 @@ keywords: Ultralytics YOLOv8, speed estimation, object tracking, computer vision
## Real World Applications
-| Transportation | Transportation |
-| :-----------------------------------------------------------------------------------------------------------------------------------------------------: | :-------------------------------------------------------------------------------------------------------------------------------------------------------: |
-|  |  |
-| Speed Estimation on Road using Ultralytics YOLOv8 | Speed Estimation on Bridge using Ultralytics YOLOv8 |
+| Transportation | Transportation |
+| :------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
+|  |  |
+| Speed Estimation on Road using Ultralytics YOLOv8 | Speed Estimation on Bridge using Ultralytics YOLOv8 |
!!! Example "Speed Estimation using YOLOv8 Example"
diff --git a/docs/en/guides/steps-of-a-cv-project.md b/docs/en/guides/steps-of-a-cv-project.md
index a1fbdb5e..3b98171d 100644
--- a/docs/en/guides/steps-of-a-cv-project.md
+++ b/docs/en/guides/steps-of-a-cv-project.md
@@ -40,7 +40,7 @@ Before discussing the details of each step involved in a computer vision project
- Finally, you'd deploy your model into the real world and update it based on new insights and feedback.
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
+
## Table of Contents
diff --git a/docs/en/hub/app/android.md b/docs/en/hub/app/android.md
index 847ae916..c3c19b0c 100644
--- a/docs/en/hub/app/android.md
+++ b/docs/en/hub/app/android.md
@@ -7,7 +7,7 @@ keywords: Ultralytics, Android app, real-time object detection, YOLO models, Ten
# Ultralytics Android App: Real-time Object Detection with YOLO Models
-
+
diff --git a/docs/en/hub/app/index.md b/docs/en/hub/app/index.md
index 7d1731d4..9266a0bd 100644
--- a/docs/en/hub/app/index.md
+++ b/docs/en/hub/app/index.md
@@ -7,7 +7,7 @@ keywords: Ultralytics HUB, YOLO models, mobile app, iOS, Android, hardware accel
# Ultralytics HUB App
-
+
diff --git a/docs/en/hub/app/ios.md b/docs/en/hub/app/ios.md
index 2bbeb465..5468633f 100644
--- a/docs/en/hub/app/ios.md
+++ b/docs/en/hub/app/ios.md
@@ -7,7 +7,7 @@ keywords: Ultralytics, iOS App, YOLO models, real-time object detection, Apple N
# Ultralytics iOS App: Real-time Object Detection with YOLO Models
-
+
diff --git a/docs/en/hub/cloud-training.md b/docs/en/hub/cloud-training.md
index ab22a767..9d09a18f 100644
--- a/docs/en/hub/cloud-training.md
+++ b/docs/en/hub/cloud-training.md
@@ -26,13 +26,13 @@ In order to train models using Ultralytics Cloud Training, you need to [upgrade]
Follow the [Train Model](./models.md#train-model) instructions from the [Models](./models.md) page until you reach the third step ([Train](./models.md#3-train)) of the **Train Model** dialog. Once you are on this step, simply select the training duration (Epochs or Timed), the training instance, the payment method, and click the **Start Training** button. That's it!
-
+
??? note "Note"
When you are on this step, you have the option to close the **Train Model** dialog and start training your model from the Model page later.
- 
+ 
Most of the time, you will use Epochs training. The number of epochs can be adjusted at this step (if training hasn't started yet) and represents the number of times your dataset needs to go through the train, label, and test cycle. The exact pricing based on the number of epochs is hard to determine, which is why we only allow the [Account Balance](./pro.md#account-balance) payment method.
@@ -40,7 +40,7 @@ Most of the times, you will use the Epochs training. The number of epochs can be
When using the Epochs training, your [account balance](./pro.md#account-balance) needs to be at least US$5.00 to start training. In case you have a low balance, you can top-up directly from this step.
- 
+ 
!!! note "Note"
@@ -48,21 +48,21 @@ Most of the times, you will use the Epochs training. The number of epochs can be
Also, after every epoch, we check if you have enough [account balance](./pro.md#account-balance) for the next epoch. In case you don't have enough [account balance](./pro.md#account-balance) for the next epoch, we will stop the training session, allowing you to resume training your model from the last checkpoint saved.
- 
+ 
Alternatively, you can use the Timed training. This option allows you to set the training duration. In this case, we can determine the exact pricing. You can pay upfront or use your [account balance](./pro.md#account-balance).
If you have enough [account balance](./pro.md#account-balance), you can use the [Account Balance](./pro.md#account-balance) payment method.
-
+
If you don't have enough [account balance](./pro.md#account-balance), you won't be able to use the [Account Balance](./pro.md#account-balance) payment method. You can pay upfront or top-up directly from this step.
-
+
Before the training session starts, the initialization process spins up a dedicated instance equipped with GPU resources, which can sometimes take a while depending on current demand and GPU availability.
-
+
!!! note "Note"
@@ -72,13 +72,13 @@ After the training session starts, you can monitor each step of the progress.
If needed, you can stop the training by clicking on the **Stop Training** button.
-
+
!!! note "Note"
You can resume training your model from the last checkpoint saved.
- 
+ 
The dataset YAML is the same standard YOLOv5 and YOLOv8 YAML format.
@@ -56,13 +56,13 @@ check_dataset("path/to/dataset.zip", task="detect")
Once your dataset ZIP is ready, navigate to the [Datasets](https://hub.ultralytics.com/datasets) page by clicking on the **Datasets** button in the sidebar and click on the **Upload Dataset** button on the top right of the page.
-
+
??? tip "Tip"
You can upload a dataset directly from the [Home](https://hub.ultralytics.com/home) page.
-
+
This action will trigger the **Upload Dataset** dialog.
@@ -72,43 +72,43 @@ You have the additional option to set a custom name and description for your [Ul
When you're happy with your dataset configuration, click **Upload**.
-
+
After your dataset is uploaded and processed, you will be able to access it from the [Datasets](https://hub.ultralytics.com/datasets) page.
-
+
You can view the images in your dataset grouped by splits (Train, Validation, Test).
-
+
??? tip "Tip"
Each image can be enlarged for better visualization.
-
+
-
+
Also, you can analyze your dataset by clicking on the **Overview** tab.
-
+
Next, [train a model](./models.md#train-model) on your dataset.
-
+
## Download Dataset
Navigate to the Dataset page of the dataset you want to download, open the dataset actions dropdown and click on the **Download** option. This action will start downloading your dataset.
-
+
??? tip "Tip"
You can download a dataset directly from the [Datasets](https://hub.ultralytics.com/datasets) page.
-
+
## Share Dataset
@@ -124,17 +124,17 @@ Navigate to the Dataset page of the dataset you want to download, open the datas
Navigate to the Dataset page of the dataset you want to share, open the dataset actions dropdown and click on the **Share** option. This action will trigger the **Share Dataset** dialog.
-
+
??? tip "Tip"
You can share a dataset directly from the [Datasets](https://hub.ultralytics.com/datasets) page.
-
+
Set the general access to "Unlisted" and click **Save**.
-
+
Now, anyone who has the direct link to your dataset can view it.
@@ -142,38 +142,38 @@ Now, anyone who has the direct link to your dataset can view it.
You can easily click on the dataset's link shown in the **Share Dataset** dialog to copy it.
-
+
## Edit Dataset
Navigate to the Dataset page of the dataset you want to edit, open the dataset actions dropdown and click on the **Edit** option. This action will trigger the **Update Dataset** dialog.
-
+
??? tip "Tip"
You can edit a dataset directly from the [Datasets](https://hub.ultralytics.com/datasets) page.
-
+
Apply the desired modifications to your dataset and then confirm the changes by clicking **Save**.
-
+
## Delete Dataset
Navigate to the Dataset page of the dataset you want to delete, open the dataset actions dropdown and click on the **Delete** option. This action will delete the dataset.
-
+
??? tip "Tip"
You can delete a dataset directly from the [Datasets](https://hub.ultralytics.com/datasets) page.
-
+
!!! note "Note"
If you change your mind, you can restore the dataset from the [Trash](https://hub.ultralytics.com/trash) page.
-
+
diff --git a/docs/en/hub/index.md b/docs/en/hub/index.md
index 17a1333f..6ae7c0a4 100644
--- a/docs/en/hub/index.md
+++ b/docs/en/hub/index.md
@@ -7,7 +7,7 @@ keywords: Ultralytics HUB, YOLO models, train YOLO, YOLOv5, YOLOv8, object detec
# Ultralytics HUB
+
中文 |
한국어 |
日本語 |
diff --git a/docs/en/hub/inference-api.md b/docs/en/hub/inference-api.md
index 44069e28..38223521 100644
--- a/docs/en/hub/inference-api.md
+++ b/docs/en/hub/inference-api.md
@@ -8,7 +8,7 @@ keywords: Ultralytics, HUB, Inference API, Python, cURL, REST API, YOLO, image p
After you [train a model](./models.md#train-model), you can use the [Shared Inference API](#shared-inference-api) for free. If you are a [Pro](./pro.md) user, you can access the [Dedicated Inference API](#dedicated-inference-api). The [Ultralytics HUB](https://ultralytics.com/hub) Inference API allows you to run inference through our REST API without the need to install and set up the Ultralytics YOLO environment locally.
-
+
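A minimal sketch of calling such a REST endpoint from Python; the URL, `MODEL_ID`, `API_KEY`, image path, and request fields below are illustrative placeholders patterned on the HUB docs rather than guaranteed values:

```python
import requests

# Placeholders: MODEL_ID and API_KEY come from your Ultralytics HUB account
url = "https://api.ultralytics.com/v1/predict/MODEL_ID"
headers = {"x-api-key": "API_KEY"}
data = {"size": 640, "confidence": 0.25, "iou": 0.45}

with open("path/to/image.jpg", "rb") as f:
    response = requests.post(url, headers=headers, data=data, files={"image": f})

response.raise_for_status()
print(response.json())  # predictions as JSON
```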
- **Overview of Baidu's RT-DETR.** The RT-DETR model architecture diagram shows the last three stages of the backbone {S3, S4, S5} as the input to the encoder. The efficient hybrid encoder transforms multiscale features into a sequence of image features through intrascale feature interaction (AIFI) and cross-scale feature-fusion module (CCFM). The IoU-aware query selection is employed to select a fixed number of image features to serve as initial object queries for the decoder. Finally, the decoder with auxiliary prediction heads iteratively optimizes object queries to generate boxes and confidence scores ([source](https://arxiv.org/pdf/2304.08069.pdf)).
+ **Overview of Baidu's RT-DETR.** The RT-DETR model architecture diagram shows the last three stages of the backbone {S3, S4, S5} as the input to the encoder. The efficient hybrid encoder transforms multiscale features into a sequence of image features through intrascale feature interaction (AIFI) and cross-scale feature-fusion module (CCFM). The IoU-aware query selection is employed to select a fixed number of image features to serve as initial object queries for the decoder. Finally, the decoder with auxiliary prediction heads iteratively optimizes object queries to generate boxes and confidence scores ([source](https://arxiv.org/pdf/2304.08069.pdf)).
### Key Features
diff --git a/docs/en/models/sam-2.md b/docs/en/models/sam-2.md
index 4285d73c..ac60ec14 100644
--- a/docs/en/models/sam-2.md
+++ b/docs/en/models/sam-2.md
@@ -8,7 +8,7 @@ keywords: SAM 2, Segment Anything, video segmentation, image segmentation, promp
SAM 2, the successor to Meta's [Segment Anything Model (SAM)](sam.md), is a cutting-edge tool designed for comprehensive object segmentation in both images and videos. It excels in handling complex visual data through a unified, promptable model architecture that supports real-time processing and zero-shot generalization.
-
+
## Key Features
diff --git a/docs/en/models/sam.md b/docs/en/models/sam.md
index 60603616..b19b9680 100644
--- a/docs/en/models/sam.md
+++ b/docs/en/models/sam.md
@@ -14,7 +14,7 @@ The Segment Anything Model, or SAM, is a cutting-edge image segmentation model t
SAM's advanced design allows it to adapt to new image distributions and tasks without prior knowledge, a feature known as zero-shot transfer. Trained on the expansive [SA-1B dataset](https://ai.facebook.com/datasets/segment-anything/), which contains more than 1 billion masks spread over 11 million carefully curated images, SAM has displayed impressive zero-shot performance, surpassing previous fully supervised results in many cases.
- **SA-1B Example images.** Dataset images with overlaid masks from the newly introduced SA-1B dataset. SA-1B contains 11M diverse, high-resolution, licensed, and privacy protecting images and 1.1B high-quality segmentation masks. These masks were annotated fully automatically by SAM, and as verified by human ratings and numerous experiments, are of high quality and diversity. Images are grouped by number of masks per image for visualization (there are ∼100 masks per image on average).
+ **SA-1B Example images.** Dataset images with overlaid masks from the newly introduced SA-1B dataset. SA-1B contains 11M diverse, high-resolution, licensed, and privacy protecting images and 1.1B high-quality segmentation masks. These masks were annotated fully automatically by SAM, and as verified by human ratings and numerous experiments, are of high quality and diversity. Images are grouped by number of masks per image for visualization (there are ∼100 masks per image on average).
## Key Features of the Segment Anything Model (SAM)
diff --git a/docs/en/models/yolo-nas.md b/docs/en/models/yolo-nas.md
index 8cee8dc8..5e0b1e73 100644
--- a/docs/en/models/yolo-nas.md
+++ b/docs/en/models/yolo-nas.md
@@ -10,7 +10,7 @@ keywords: YOLO-NAS, Deci AI, object detection, deep learning, Neural Architectur
Developed by Deci AI, YOLO-NAS is a groundbreaking object detection foundational model. It is the product of advanced Neural Architecture Search technology, meticulously designed to address the limitations of previous YOLO models. With significant improvements in quantization support and accuracy-latency trade-offs, YOLO-NAS represents a major leap in object detection.
- **Overview of YOLO-NAS.** YOLO-NAS employs quantization-aware blocks and selective quantization for optimal performance. The model, when converted to its INT8 quantized version, experiences a minimal precision drop, a significant improvement over other models. These advancements culminate in a superior architecture with unprecedented object detection capabilities and outstanding performance.
+ **Overview of YOLO-NAS.** YOLO-NAS employs quantization-aware blocks and selective quantization for optimal performance. The model, when converted to its INT8 quantized version, experiences a minimal precision drop, a significant improvement over other models. These advancements culminate in a superior architecture with unprecedented object detection capabilities and outstanding performance.
### Key Features
diff --git a/docs/en/models/yolo-world.md b/docs/en/models/yolo-world.md
index f1f576bc..e45f3915 100644
--- a/docs/en/models/yolo-world.md
+++ b/docs/en/models/yolo-world.md
@@ -19,7 +19,7 @@ The YOLO-World Model introduces an advanced, real-time [Ultralytics](https://ult
Watch: YOLO World training workflow on custom dataset
-
+
## Overview
@@ -195,7 +195,7 @@ Object tracking with YOLO-World model on a video/images is streamlined as follow
### Set prompts
-
+
The YOLO-World framework allows for the dynamic specification of classes through custom prompts, empowering users to tailor the model to their specific needs **without retraining**. This feature is particularly useful for adapting the model to new domains or specific tasks that were not originally part of the training data. By setting custom prompts, users can essentially guide the model's focus towards objects of interest, enhancing the relevance and accuracy of the detection results.
diff --git a/docs/en/models/yolov10.md b/docs/en/models/yolov10.md
index e8e4a286..482c5cda 100644
--- a/docs/en/models/yolov10.md
+++ b/docs/en/models/yolov10.md
@@ -8,7 +8,7 @@ keywords: YOLOv10, real-time object detection, NMS-free, deep learning, Tsinghua
YOLOv10, built on the [Ultralytics](https://ultralytics.com) [Python package](https://pypi.org/project/ultralytics/) by researchers at [Tsinghua University](https://www.tsinghua.edu.cn/en/), introduces a new approach to real-time object detection, addressing both the post-processing and model architecture deficiencies found in previous YOLO versions. By eliminating non-maximum suppression (NMS) and optimizing various model components, YOLOv10 achieves state-of-the-art performance with significantly reduced computational overhead. Extensive experiments demonstrate its superior accuracy-latency trade-offs across multiple model scales.
-
+
@@ -91,7 +91,7 @@ YOLOv10 has been extensively tested on standard benchmarks like COCO, demonstrat
## Comparisons
-
+
Compared to other state-of-the-art detectors:
diff --git a/docs/en/models/yolov3.md b/docs/en/models/yolov3.md
index a8151fca..168e590d 100644
--- a/docs/en/models/yolov3.md
+++ b/docs/en/models/yolov3.md
@@ -16,7 +16,7 @@ This document presents an overview of three closely related object detection mod
3. **YOLOv3u:** This is an updated version of YOLOv3-Ultralytics that incorporates the anchor-free, objectness-free split head used in YOLOv8 models. YOLOv3u maintains the same backbone and neck architecture as YOLOv3 but with the updated detection head from YOLOv8.
-
+
## Key Features
diff --git a/docs/en/models/yolov4.md b/docs/en/models/yolov4.md
index c9702086..2137adab 100644
--- a/docs/en/models/yolov4.md
+++ b/docs/en/models/yolov4.md
@@ -8,7 +8,7 @@ keywords: YOLOv4, object detection, real-time detection, Alexey Bochkovskiy, neu
Welcome to the Ultralytics documentation page for YOLOv4, a state-of-the-art, real-time object detector launched in 2020 by Alexey Bochkovskiy at [https://github.com/AlexeyAB/darknet](https://github.com/AlexeyAB/darknet). YOLOv4 is designed to provide the optimal balance between speed and accuracy, making it an excellent choice for many applications.
- **YOLOv4 architecture diagram**. Showcasing the intricate network design of YOLOv4, including the backbone, neck, and head components, and their interconnected layers for optimal real-time object detection.
+ **YOLOv4 architecture diagram**. Showcasing the intricate network design of YOLOv4, including the backbone, neck, and head components, and their interconnected layers for optimal real-time object detection.
## Introduction
diff --git a/docs/en/models/yolov5.md b/docs/en/models/yolov5.md
index 8a961354..9927d06c 100644
--- a/docs/en/models/yolov5.md
+++ b/docs/en/models/yolov5.md
@@ -10,7 +10,7 @@ keywords: YOLOv5, YOLOv5u, object detection, Ultralytics, anchor-free, pre-train
YOLOv5u represents an advancement in object detection methodologies. Originating from the foundational architecture of the [YOLOv5](https://github.com/ultralytics/yolov5) model developed by Ultralytics, YOLOv5u integrates the anchor-free, objectness-free split head, a feature previously introduced in the [YOLOv8](yolov8.md) models. This adaptation refines the model's architecture, leading to an improved accuracy-speed tradeoff in object detection tasks. Given the empirical results and its derived features, YOLOv5u provides an efficient alternative for those seeking robust solutions in both research and practical applications.
-
+
## Key Features
diff --git a/docs/en/models/yolov6.md b/docs/en/models/yolov6.md
index 35ddb124..11d016e6 100644
--- a/docs/en/models/yolov6.md
+++ b/docs/en/models/yolov6.md
@@ -10,8 +10,8 @@ keywords: Meituan YOLOv6, object detection, real-time applications, BiC module,
[Meituan](https://about.meituan.com/) YOLOv6 is a cutting-edge object detector that offers remarkable balance between speed and accuracy, making it a popular choice for real-time applications. This model introduces several notable enhancements on its architecture and training scheme, including the implementation of a Bi-directional Concatenation (BiC) module, an anchor-aided training (AAT) strategy, and an improved backbone and neck design for state-of-the-art accuracy on the COCO dataset.
-
- **Overview of YOLOv6.** Model architecture diagram showing the redesigned network components and training strategies that have led to significant performance improvements. (a) The neck of YOLOv6 (N and S are shown). Note for M/L, RepBlocks is replaced with CSPStackRep. (b) The structure of a BiC module. (c) A SimCSPSPPF block. ([source](https://arxiv.org/pdf/2301.05586.pdf)).
+
+ **Overview of YOLOv6.** Model architecture diagram showing the redesigned network components and training strategies that have led to significant performance improvements. (a) The neck of YOLOv6 (N and S are shown). Note for M/L, RepBlocks is replaced with CSPStackRep. (b) The structure of a BiC module. (c) A SimCSPSPPF block. ([source](https://arxiv.org/pdf/2301.05586.pdf)).
### Key Features
diff --git a/docs/en/models/yolov7.md b/docs/en/models/yolov7.md
index 05445c1f..54e9ea19 100644
--- a/docs/en/models/yolov7.md
+++ b/docs/en/models/yolov7.md
@@ -8,7 +8,7 @@ keywords: YOLOv7, real-time object detection, Ultralytics, AI, computer vision,
YOLOv7 is a state-of-the-art real-time object detector that surpasses all known object detectors in both speed and accuracy in the range from 5 FPS to 160 FPS. It has the highest accuracy (56.8% AP) among all known real-time object detectors with 30 FPS or higher on GPU V100. Moreover, YOLOv7 outperforms other object detectors such as YOLOR, YOLOX, Scaled-YOLOv4, YOLOv5, and many others in speed and accuracy. The model is trained on the MS COCO dataset from scratch without using any other datasets or pre-trained weights. Source code for YOLOv7 is available on GitHub.
-
+
## Comparison of SOTA object detectors
diff --git a/docs/en/models/yolov8.md b/docs/en/models/yolov8.md
index 0ead14f2..72ee2750 100644
--- a/docs/en/models/yolov8.md
+++ b/docs/en/models/yolov8.md
@@ -10,7 +10,7 @@ keywords: YOLOv8, real-time object detection, YOLO series, Ultralytics, computer
YOLOv8 is the latest iteration in the YOLO series of real-time object detectors, offering cutting-edge performance in terms of accuracy and speed. Building upon the advancements of previous YOLO versions, YOLOv8 introduces new features and optimizations that make it an ideal choice for various object detection tasks in a wide range of applications.
-
+
diff --git a/docs/en/models/yolov9.md b/docs/en/models/yolov9.md
index 57201ebb..3cefff6f 100644
--- a/docs/en/models/yolov9.md
+++ b/docs/en/models/yolov9.md
@@ -19,7 +19,7 @@ YOLOv9 marks a significant advancement in real-time object detection, introducin
Watch: YOLOv9 Training on Custom Data using Ultralytics | Industrial Package Dataset
+
## Introduction
diff --git a/docs/en/modes/export.md b/docs/en/modes/export.md
index c7789d6f..70305351 100644
--- a/docs/en/modes/export.md
+++ b/docs/en/modes/export.md
@@ -6,7 +6,7 @@ keywords: YOLOv8, Model Export, ONNX, TensorRT, CoreML, Ultralytics, AI, Machine
# Model Export with Ultralytics YOLO
-
+
## Introduction
diff --git a/docs/en/modes/index.md b/docs/en/modes/index.md
index 83ff68f5..63871ee6 100644
--- a/docs/en/modes/index.md
+++ b/docs/en/modes/index.md
@@ -6,7 +6,7 @@ keywords: Ultralytics, YOLOv8, machine learning, model training, validation, pre
# Ultralytics YOLOv8 Modes
-
+
## Introduction
diff --git a/docs/en/modes/predict.md b/docs/en/modes/predict.md
index d13bd5ec..c5e8bdb1 100644
--- a/docs/en/modes/predict.md
+++ b/docs/en/modes/predict.md
@@ -6,7 +6,7 @@ keywords: Ultralytics, YOLOv8, model prediction, inference, predict mode, real-t
# Model Prediction with Ultralytics YOLO
-
+
## Introduction
diff --git a/docs/en/modes/track.md b/docs/en/modes/track.md
index a2e4c87a..d79e7c7a 100644
--- a/docs/en/modes/track.md
+++ b/docs/en/modes/track.md
@@ -6,7 +6,7 @@ keywords: multi-object tracking, Ultralytics YOLO, video analytics, real-time tr
# Multi-Object Tracking with Ultralytics YOLO
-
+
+
## Introduction
diff --git a/docs/en/modes/val.md b/docs/en/modes/val.md
index 57babf9b..b24f82de 100644
--- a/docs/en/modes/val.md
+++ b/docs/en/modes/val.md
@@ -6,7 +6,7 @@ keywords: Ultralytics, YOLOv8, model validation, machine learning, object detect
# Model Validation with Ultralytics YOLO
-
+
## Introduction
diff --git a/docs/en/quickstart.md b/docs/en/quickstart.md
index 42473281..957d6c9b 100644
--- a/docs/en/quickstart.md
+++ b/docs/en/quickstart.md
@@ -146,7 +146,7 @@ See the `ultralytics` [pyproject.toml](https://github.com/ultralytics/ultralytic
PyTorch requirements vary by operating system and CUDA version, so it's recommended to install PyTorch first following the instructions at [https://pytorch.org/get-started/locally](https://pytorch.org/get-started/locally).
-
+
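After installing PyTorch, a quick check confirms the build and whether CUDA is visible:

```python
import torch

print(torch.__version__)  # installed PyTorch version
print(torch.cuda.is_available())  # True if a CUDA-capable GPU is usable
```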
## Use Ultralytics with CLI
diff --git a/docs/en/solutions/index.md b/docs/en/solutions/index.md
index 496578cf..b627fed0 100644
--- a/docs/en/solutions/index.md
+++ b/docs/en/solutions/index.md
@@ -8,7 +8,7 @@ keywords: Ultralytics, YOLOv8, object counting, object blurring, security system
Ultralytics Solutions provide cutting-edge applications of YOLO models, offering real-world solutions like object counting, blurring, and security systems, enhancing efficiency and accuracy in diverse industries. Discover the power of YOLOv8 for practical, impactful implementations.
-
+
## Solutions
diff --git a/docs/en/tasks/classify.md b/docs/en/tasks/classify.md
index 2aa25c14..6fa0fabb 100644
--- a/docs/en/tasks/classify.md
+++ b/docs/en/tasks/classify.md
@@ -7,7 +7,7 @@ model_name: yolov8n-cls
# Image Classification
-
+
Image classification is the simplest of the three tasks and involves classifying an entire image into one of a set of predefined classes.
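A minimal sketch of running a pretrained classifier with the Ultralytics API (the image path is a placeholder):

```python
from ultralytics import YOLO

model = YOLO("yolov8n-cls.pt")  # pretrained YOLOv8 classification model
results = model("path/to/image.jpg")  # placeholder image path
print(results[0].probs.top1)  # index of the top-scoring class
```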
diff --git a/docs/en/tasks/detect.md b/docs/en/tasks/detect.md
index b3411f96..42f88843 100644
--- a/docs/en/tasks/detect.md
+++ b/docs/en/tasks/detect.md
@@ -6,7 +6,7 @@ keywords: object detection, YOLOv8, pretrained models, training, validation, pre
# Object Detection
-
+
Object detection is a task that involves identifying the location and class of objects in an image or video stream.
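A minimal sketch of detection inference with the Ultralytics API (the image path is a placeholder):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # pretrained YOLOv8 detection model
results = model("path/to/image.jpg")  # placeholder image path
for box in results[0].boxes:
    print(int(box.cls), float(box.conf), box.xyxy.tolist())  # class, confidence, pixel box
```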
diff --git a/docs/en/tasks/index.md b/docs/en/tasks/index.md
index 1376959e..e52fd537 100644
--- a/docs/en/tasks/index.md
+++ b/docs/en/tasks/index.md
@@ -7,7 +7,7 @@ keywords: Ultralytics YOLOv8, detection, segmentation, classification, oriented
# Ultralytics YOLOv8 Tasks
+
YOLOv8 is an AI framework that supports multiple computer vision **tasks**. The framework can be used to perform [detection](detect.md), [segmentation](segment.md), [obb](obb.md), [classification](classify.md), and [pose](pose.md) estimation. Each of these tasks has a different objective and use case.
diff --git a/docs/en/tasks/obb.md b/docs/en/tasks/obb.md
index bc8c5725..d49a289b 100644
--- a/docs/en/tasks/obb.md
+++ b/docs/en/tasks/obb.md
@@ -44,9 +44,9 @@ The output of an oriented object detector is a set of rotated bounding boxes tha
## Visual Samples
-| Ships Detection using OBB | Vehicle Detection using OBB |
-| :-----------------------------------------------------------------------------------------------------------------------------: | :-------------------------------------------------------------------------------------------------------------------------------: |
-|  |  |
+| Ships Detection using OBB | Vehicle Detection using OBB |
+| :------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------: |
+|  |  |
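A minimal sketch of OBB inference with the Ultralytics API (the image path is a placeholder):

```python
from ultralytics import YOLO

model = YOLO("yolov8n-obb.pt")  # pretrained YOLOv8 OBB model
results = model("path/to/image.jpg")  # placeholder image path
print(results[0].obb.xywhr)  # rotated boxes: center x/y, width, height, rotation
```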
## [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models/v8)
diff --git a/docs/en/tasks/pose.md b/docs/en/tasks/pose.md
index b2c4bd22..ac6fc7a1 100644
--- a/docs/en/tasks/pose.md
+++ b/docs/en/tasks/pose.md
@@ -7,7 +7,7 @@ model_name: yolov8n-pose
# Pose Estimation
-
+
Pose estimation is a task that involves identifying the location of specific points in an image, usually referred to as keypoints. The keypoints can represent various parts of the object such as joints, landmarks, or other distinctive features. The locations of the keypoints are usually represented as a set of 2D `[x, y]` or 3D `[x, y, visible]` coordinates.
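A minimal sketch of pose inference with the Ultralytics API (the image path is a placeholder):

```python
from ultralytics import YOLO

model = YOLO("yolov8n-pose.pt")  # pretrained YOLOv8 pose model
results = model("path/to/image.jpg")  # placeholder image path
print(results[0].keypoints.xy.shape)  # (num_detections, num_keypoints, 2) pixel coords
```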
diff --git a/docs/en/tasks/segment.md b/docs/en/tasks/segment.md
index b64993d6..d6eaf7a0 100644
--- a/docs/en/tasks/segment.md
+++ b/docs/en/tasks/segment.md
@@ -7,7 +7,7 @@ model_name: yolov8n-seg
# Instance Segmentation
-
+
Instance segmentation goes a step further than object detection and involves identifying individual objects in an image and segmenting them from the rest of the image.
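A minimal sketch of instance-segmentation inference with the Ultralytics API (the image path is a placeholder; `masks` is `None` when nothing is detected):

```python
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")  # pretrained YOLOv8 segmentation model
results = model("path/to/image.jpg")  # placeholder image path
masks = results[0].masks  # one binary mask per detected instance
print(masks.data.shape)  # (num_instances, mask_height, mask_width)
```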
diff --git a/docs/en/usage/simple-utilities.md b/docs/en/usage/simple-utilities.md
index 2f10dc44..ed59f8db 100644
--- a/docs/en/usage/simple-utilities.md
+++ b/docs/en/usage/simple-utilities.md
@@ -7,7 +7,7 @@ keywords: Ultralytics, utilities, data processing, auto annotation, YOLO, datase
# Simple Utilities
-
+
## Open a Terminal
Now, from the Notebooks view, open a Terminal and select your compute.
-
+
## Setup and run YOLOv5
diff --git a/docs/en/yolov5/environments/docker_image_quickstart_tutorial.md b/docs/en/yolov5/environments/docker_image_quickstart_tutorial.md
index fbfc946b..5618fed5 100644
--- a/docs/en/yolov5/environments/docker_image_quickstart_tutorial.md
+++ b/docs/en/yolov5/environments/docker_image_quickstart_tutorial.md
@@ -68,4 +68,4 @@ python detect.py --weights yolov5s.pt --source path/to/images
python export.py --weights yolov5s.pt --include onnx coreml tflite
```
-

-
+
+
To conclude, YOLOv5 is not only a state-of-the-art tool for object detection but also a testament to the power of machine learning in transforming the way we interact with the world through visual understanding. As you progress through this guide and begin applying YOLOv5 to your projects, remember that you are at the forefront of a technological revolution, capable of achieving remarkable feats. Should you need further insights or support from fellow visionaries, you're invited to our [GitHub repository](https://github.com/ultralytics/yolov5), home to a thriving community of developers and researchers. Keep exploring, keep innovating, and enjoy the marvels of YOLOv5. Happy detecting! 🌠🔍
diff --git a/docs/en/yolov5/tutorials/architecture_description.md b/docs/en/yolov5/tutorials/architecture_description.md
index 08d36ccc..bb9ade69 100644
--- a/docs/en/yolov5/tutorials/architecture_description.md
+++ b/docs/en/yolov5/tutorials/architecture_description.md
@@ -18,7 +18,7 @@ YOLOv5's architecture consists of three main parts:
The structure of the model is depicted in the image below; full details can be found in `yolov5l.yaml`.
-
+
YOLOv5 introduces some minor changes compared to its predecessors:
@@ -108,29 +108,29 @@ YOLOv5 employs various data augmentation techniques to improve the model's abili
- **Mosaic Augmentation**: An image processing technique that combines four training images into one in ways that encourage object detection models to better handle various object scales and translations.
- 
+ 
- **Copy-Paste Augmentation**: An innovative data augmentation method that copies random patches from an image and pastes them onto another randomly chosen image, effectively generating a new training sample.
- 
+ 
- **Random Affine Transformations**: This includes random rotation, scaling, translation, and shearing of the images.
- 
+ 
- **MixUp Augmentation**: A method that creates composite images by taking a linear combination of two images and their associated labels (see the sketch after this list).
- 
+ 
- **Albumentations**: A powerful library for image augmenting that supports a wide variety of augmentation techniques.
- **HSV Augmentation**: Random changes to the Hue, Saturation, and Value of the images.
- 
+ 
- **Random Horizontal Flip**: An augmentation method that randomly flips images horizontally.
- 
+ 
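To make the MixUp item above concrete, here is an illustrative NumPy sketch of the linear combination it describes; the Beta-distribution parameter is illustrative rather than YOLOv5's exact setting:

```python
import numpy as np


def mixup(img1, labels1, img2, labels2, alpha=32.0):
    """Blend two images and concatenate their labels (illustrative sketch)."""
    lam = np.random.beta(alpha, alpha)  # mixing ratio, near 0.5 for large alpha
    img = (lam * img1 + (1 - lam) * img2).astype(img1.dtype)
    labels = np.concatenate((labels1, labels2), axis=0)
    return img, labels
```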
## 3. Training Strategies
diff --git a/docs/en/yolov5/tutorials/clearml_logging_integration.md b/docs/en/yolov5/tutorials/clearml_logging_integration.md
index 42f14a59..90012d19 100644
--- a/docs/en/yolov5/tutorials/clearml_logging_integration.md
+++ b/docs/en/yolov5/tutorials/clearml_logging_integration.md
@@ -27,7 +27,7 @@ And so much more. It's up to you how many of these tools you want to use, you ca
+
## Try out an Example!
@@ -179,11 +179,11 @@ python train.py \
--upload_dataset
```
-You can find the uploaded dataset in the Artifacts tab in your Comet Workspace
+You can find the uploaded dataset in the Artifacts tab in your Comet Workspace
-You can preview the data directly in the Comet UI.
+You can preview the data directly in the Comet UI.
-Artifacts are versioned and also support adding metadata about the dataset. Comet will automatically log the metadata from your dataset `yaml` file
+Artifacts are versioned and also support adding metadata about the dataset. Comet will automatically log the metadata from your dataset `yaml` file.
### Using a saved Artifact
@@ -205,7 +205,7 @@ python train.py \
--weights yolov5s.pt
```
-Artifacts also allow you to track the lineage of data as it flows through your Experimentation workflow. Here you can see a graph that shows you all the experiments that have used your uploaded dataset.
+Artifacts also allow you to track the lineage of data as it flows through your experimentation workflow. Here you can see a graph that shows you all the experiments that have used your uploaded dataset.
## Resuming a Training Run
@@ -253,4 +253,4 @@ comet optimizer -j
+
diff --git a/docs/en/yolov5/tutorials/hyperparameter_evolution.md b/docs/en/yolov5/tutorials/hyperparameter_evolution.md
index 78ddeb20..174f818c 100644
--- a/docs/en/yolov5/tutorials/hyperparameter_evolution.md
+++ b/docs/en/yolov5/tutorials/hyperparameter_evolution.md
@@ -147,7 +147,7 @@ We recommend a minimum of 300 generations of evolution for best results. Note th
`evolve.csv` is plotted as `evolve.png` by `utils.plots.plot_evolve()` after evolution finishes, with one subplot per hyperparameter showing fitness (y-axis) vs hyperparameter values (x-axis). Yellow indicates higher concentrations. Vertical distributions indicate that a parameter has been disabled and does not mutate. This is user-selectable in the `meta` dictionary in `train.py`, and is useful for fixing parameters and preventing them from evolving.
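For example, you can regenerate the plot manually from any evolution log; this sketch assumes it is run from the YOLOv5 repository root:

```python
from utils.plots import plot_evolve

plot_evolve("path/to/evolve.csv")  # saves 'evolve.png' next to the CSV
```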
-
+
## Supported Environments
diff --git a/docs/en/yolov5/tutorials/model_ensembling.md b/docs/en/yolov5/tutorials/model_ensembling.md
index 0ea00268..625fe0a4 100644
--- a/docs/en/yolov5/tutorials/model_ensembling.md
+++ b/docs/en/yolov5/tutorials/model_ensembling.md
@@ -128,7 +128,7 @@ Results saved to runs/detect/exp2
Done. (0.223s)
```
-
+
## Supported Environments
diff --git a/docs/en/yolov5/tutorials/model_export.md b/docs/en/yolov5/tutorials/model_export.md
index 81e0a3f3..05b5a53a 100644
--- a/docs/en/yolov5/tutorials/model_export.md
+++ b/docs/en/yolov5/tutorials/model_export.md
@@ -135,11 +135,11 @@ Visualize: https://netron.app/
The 3 exported models will be saved alongside the original PyTorch model:
-



+
30% pruned output:
diff --git a/docs/en/yolov5/tutorials/multi_gpu_training.md b/docs/en/yolov5/tutorials/multi_gpu_training.md
index 4a51007e..df269b0c 100644
--- a/docs/en/yolov5/tutorials/multi_gpu_training.md
+++ b/docs/en/yolov5/tutorials/multi_gpu_training.md
@@ -24,7 +24,7 @@ pip install -r requirements.txt # install
Select a pretrained model to start training from. Here we select [YOLOv5s](https://github.com/ultralytics/yolov5/blob/master/models/yolov5s.yaml), the smallest and fastest model available. See our README [table](https://github.com/ultralytics/yolov5#pretrained-checkpoints) for a full comparison of all models. We will train this model with Multi-GPU on the [COCO](https://github.com/ultralytics/yolov5/blob/master/data/scripts/get_coco.sh) dataset.
-

-
+
-
+
-
+
-
+
+
For all inference options, see the YOLOv5 `AutoShape()` forward [method](https://github.com/ultralytics/yolov5/blob/30e4c4f09297b67afedf8b2bcd851833ddc9dead/models/common.py#L243-L252).
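As a minimal sketch of the `AutoShape()` workflow via PyTorch Hub (the image URL is the standard docs example):

```python
import torch

# Load YOLOv5s from PyTorch Hub
model = torch.hub.load("ultralytics/yolov5", "yolov5s")

# AutoShape accepts paths, URLs, PIL images, OpenCV arrays, and tensors
results = model("https://ultralytics.com/images/zidane.jpg")
results.print()
```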
diff --git a/docs/en/yolov5/tutorials/roboflow_datasets_integration.md b/docs/en/yolov5/tutorials/roboflow_datasets_integration.md
index 4ea5297a..d154b5c5 100644
--- a/docs/en/yolov5/tutorials/roboflow_datasets_integration.md
+++ b/docs/en/yolov5/tutorials/roboflow_datasets_integration.md
@@ -25,13 +25,13 @@ You can upload your data to Roboflow via [web UI](https://docs.roboflow.com/addi
After uploading data to Roboflow, you can label your data and review previous labels.
-[](https://roboflow.com/annotate)
+[](https://roboflow.com/annotate)
## Versioning
You can make versions of your dataset with different preprocessing and offline augmentation options. YOLOv5 applies online augmentations natively, so be intentional when layering Roboflow's offline augmentations on top.
-
+
## Exporting Data
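A minimal sketch of downloading a dataset version in YOLOv5 format with the `roboflow` package; the API key, workspace, and project names are placeholders:

```python
from roboflow import Roboflow  # pip install roboflow

rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("your-workspace").project("your-project")
dataset = project.version(1).download("yolov5")  # writes a YOLOv5-format dataset locally
```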
@@ -54,7 +54,7 @@ We have released a custom training tutorial demonstrating all of the above capab
The real world is messy, and your model will invariably encounter situations your dataset didn't anticipate. Using [active learning](https://blog.roboflow.com/what-is-active-learning/) is an important strategy for iteratively improving your dataset and model. With the Roboflow and YOLOv5 integration, you can quickly improve your model deployments by using a battle-tested machine learning pipeline.
-
+
## Supported Environments
diff --git a/docs/en/yolov5/tutorials/test_time_augmentation.md b/docs/en/yolov5/tutorials/test_time_augmentation.md
index c0c69823..17c9dd58 100644
--- a/docs/en/yolov5/tutorials/test_time_augmentation.md
+++ b/docs/en/yolov5/tutorials/test_time_augmentation.md
@@ -121,7 +121,7 @@ Results saved to runs/detect/exp
Done. (0.156s)
```
-
+
### PyTorch Hub TTA
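As a minimal sketch, TTA can be enabled on a single PyTorch Hub inference call via the `augment` argument of the `AutoShape()` forward method:

```python
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s")

# Enable Test Time Augmentation for this inference call
results = model("https://ultralytics.com/images/zidane.jpg", augment=True)
results.print()
```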
diff --git a/docs/en/yolov5/tutorials/tips_for_best_training_results.md b/docs/en/yolov5/tutorials/tips_for_best_training_results.md
index 8f1b27a2..c24435d9 100644
--- a/docs/en/yolov5/tutorials/tips_for_best_training_results.md
+++ b/docs/en/yolov5/tutorials/tips_for_best_training_results.md
@@ -22,13 +22,13 @@ We've put together a full guide for users looking to get the best results on the
- **Label verification.** View `train_batch*.jpg` on train start to verify your labels appear correct, e.g. see the [example](./train_custom_data.md#local-logging) mosaic.
- **Background images.** Background images are images with no objects that are added to a dataset to reduce False Positives (FP). We recommend about 0-10% background images to help reduce FPs (for reference, COCO has 1000 background images, about 1% of the total). No labels are required for background images.
-
+
## Model Selection
Larger models like YOLOv5x and [YOLOv5x6](https://github.com/ultralytics/yolov5/releases/tag/v5.0) will produce better results in nearly all cases, but have more parameters, require more CUDA memory to train, and are slower to run. For **mobile** deployments we recommend YOLOv5s/m, for **cloud** deployments we recommend YOLOv5l/x. See our README [table](https://github.com/ultralytics/yolov5#pretrained-checkpoints) for a full comparison of all models.
-

+














+
### ClearML Logging and Automation 🌟 NEW
@@ -177,7 +177,7 @@ You'll get all the great expected features from an experiment manager: live upda
You can use ClearML Data to version your dataset and then pass it to YOLOv5 simply by using its unique ID. This helps you keep track of your data without adding extra hassle. Explore the [ClearML Tutorial](./clearml_logging_integration.md) for details!
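As a sketch of what this looks like, assuming the `clearml://` dataset scheme from the ClearML integration and YOLOv5's `train.run()` entry point (the dataset ID is a placeholder):

```python
import train  # from the YOLOv5 repository root

# Replace the placeholder with the unique ID of your ClearML dataset version
train.run(data="clearml://YOUR_DATASET_ID", weights="yolov5s.pt", imgsz=640)
```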
-
+
### Local Logging
@@ -185,7 +185,7 @@ Training results are automatically logged with [Tensorboard](https://www.tensorf
This directory contains train and val statistics, mosaics, labels, predictions, and augmented mosaics, as well as metrics and charts including precision-recall (PR) curves and confusion matrices.
-
+
Results file `results.csv` is updated after each epoch, and then plotted as `results.png` (below) after training completes. You can also plot any `results.csv` file manually:
@@ -195,7 +195,7 @@ from utils.plots import plot_results
plot_results("path/to/results.csv") # plot 'results.csv' as 'results.png'
```
-

+
### GPU Utilization Comparison
Interestingly, the more modules that are frozen, the less GPU memory is required to train and the lower the GPU utilization. This indicates that larger models, or models trained at a larger `--image-size`, may benefit from freezing in order to train faster.
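As a sketch, freezing can be requested through the `freeze` argument of `train.run()`, mirroring the `--freeze` CLI flag; this assumes you run from the YOLOv5 repository root:

```python
import train  # from the YOLOv5 repository root

# Freeze the first 10 layer blocks (the YOLOv5 backbone) to save memory and speed up training
train.run(weights="yolov5s.pt", data="coco128.yaml", freeze=[10])
```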
-
+
-
+
## Supported Environments