diff --git a/docs/en/datasets/detect/african-wildlife.md b/docs/en/datasets/detect/african-wildlife.md
index 3970ffa5..586df884 100644
--- a/docs/en/datasets/detect/african-wildlife.md
+++ b/docs/en/datasets/detect/african-wildlife.md
@@ -75,7 +75,6 @@ To train a YOLOv8n model on the African wildlife dataset for 100 epochs with an
# Start prediction with a finetuned *.pt model
yolo detect predict model='path/to/best.pt' imgsz=640 source="https://ultralytics.com/assets/african-wildlife-sample.jpg"
```
-
## Sample Images and Annotations
@@ -89,4 +88,4 @@ This example illustrates the variety and complexity of images in the African wil
## Citations and Acknowledgments
-The dataset has been released available under the [AGPL-3.0 License](https://github.com/ultralytics/ultralytics/blob/main/LICENSE).
\ No newline at end of file
+This dataset has been released under the [AGPL-3.0 License](https://github.com/ultralytics/ultralytics/blob/main/LICENSE).
diff --git a/docs/en/datasets/detect/brain-tumor.md b/docs/en/datasets/detect/brain-tumor.md
index 1507e747..81723239 100644
--- a/docs/en/datasets/detect/brain-tumor.md
+++ b/docs/en/datasets/detect/brain-tumor.md
@@ -74,7 +74,6 @@ To train a YOLOv8n model on the brain tumor dataset for 100 epochs with an image
# Start prediction with a finetuned *.pt model
yolo detect predict model='path/to/best.pt' imgsz=640 source="https://ultralytics.com/assets/brain-tumor-sample.jpg"
```
-
## Sample Images and Annotations
@@ -88,4 +87,4 @@ This example highlights the diversity and intricacy of images within the brain t
## Citations and Acknowledgments
-The dataset has been released available under the [AGPL-3.0 License](https://github.com/ultralytics/ultralytics/blob/main/LICENSE).
\ No newline at end of file
+This dataset has been released under the [AGPL-3.0 License](https://github.com/ultralytics/ultralytics/blob/main/LICENSE).
diff --git a/docs/en/datasets/detect/lvis.md b/docs/en/datasets/detect/lvis.md
index ccb29794..2ddf49d9 100644
--- a/docs/en/datasets/detect/lvis.md
+++ b/docs/en/datasets/detect/lvis.md
@@ -29,7 +29,6 @@ The LVIS dataset is split into three subsets:
-3. **Minival**: This subset is exactly the same as COCO val2017 set which has 5k images used for validation purposes during model training.
+3. **Minival**: This subset is exactly the same as the COCO val2017 set, which has 5k images used for validation purposes during model training.
4. **Test**: This subset consists of 20k images used for testing and benchmarking the trained models. Ground truth annotations for this subset are not publicly available, and the results are submitted to the [LVIS evaluation server](https://eval.ai/web/challenges/challenge-page/675/overview) for performance evaluation.
-
## Applications
-The LVIS dataset is widely used for training and evaluating deep learning models in object detection (such as YOLO, Faster R-CNN, and SSD), instance segmentation (such as Mask R-CNN). The dataset's diverse set of object categories, large number of annotated images, and standardized evaluation metrics make it an essential resource for computer vision researchers and practitioners.
+The LVIS dataset is widely used for training and evaluating deep learning models in object detection (such as YOLO, Faster R-CNN, and SSD) and instance segmentation (such as Mask R-CNN). The dataset's diverse set of object categories, large number of annotated images, and standardized evaluation metrics make it an essential resource for computer vision researchers and practitioners.
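+
+As a quick illustration, a YOLOv8 model can be trained on LVIS with the `ultralytics` Python API (a minimal sketch; it assumes the bundled `lvis.yaml` dataset config, which downloads the dataset on first use):
+
+```python
+from ultralytics import YOLO
+
+# Load a COCO-pretrained YOLOv8n model and train it on LVIS
+model = YOLO("yolov8n.pt")
+results = model.train(data="lvis.yaml", epochs=100, imgsz=640)
+```
+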
diff --git a/docs/en/datasets/index.md b/docs/en/datasets/index.md
index 1ac05fc8..db27ba82 100644
--- a/docs/en/datasets/index.md
+++ b/docs/en/datasets/index.md
@@ -36,7 +36,7 @@ Bounding box object detection is a computer vision technique that involves detec
- [Argoverse](detect/argoverse.md): A dataset containing 3D tracking and motion forecasting data from urban environments with rich annotations.
- [COCO](detect/coco.md): A large-scale dataset designed for object detection, segmentation, and captioning with over 200K labeled images.
-- [LVIS](lvis.md): A large-scale object detection, segmentation, and captioning dataset with 1203 object categories.
+- [LVIS](detect/lvis.md): A large-scale object detection and instance segmentation dataset with 1203 object categories.
- [COCO8](detect/coco8.md): Contains the first 4 images from COCO train and COCO val, suitable for quick tests.
- [Global Wheat 2020](detect/globalwheat2020.md): A dataset of wheat head images collected from around the world for object detection and localization tasks.
- [Objects365](detect/objects365.md): A high-quality, large-scale dataset for object detection with 365 object categories and over 600K annotated images.
diff --git a/docs/en/guides/nvidia-jetson.md b/docs/en/guides/nvidia-jetson.md
index 5e2597cd..b8d90dff 100644
--- a/docs/en/guides/nvidia-jetson.md
+++ b/docs/en/guides/nvidia-jetson.md
@@ -16,7 +16,7 @@ This comprehensive guide provides a detailed walkthrough for deploying Ultralyti
## What is NVIDIA Jetson?
-NVIDIA Jetson is a series of embedded computing boards designed to bring accelerated AI (artificial intelligence) computing to edge devices. These compact and powerful devices are built around NVIDIA's GPU architecture and are capable of running complex AI algorithms and deep learning models directly on the device, without needing to rely on cloud computing resources. Jetson boards are often used in robotics, autonomous vehicles, industrial automation, and other applications where AI inference needs to be performed locally with low latency and high efficiency. Additionally these boards are based on the ARM64 architecture and runs on lower power compared to traditional GPU computing devices.
+NVIDIA Jetson is a series of embedded computing boards designed to bring accelerated AI (artificial intelligence) computing to edge devices. These compact and powerful devices are built around NVIDIA's GPU architecture and are capable of running complex AI algorithms and deep learning models directly on the device, without needing to rely on cloud computing resources. Jetson boards are often used in robotics, autonomous vehicles, industrial automation, and other applications where AI inference needs to be performed locally with low latency and high efficiency. Additionally, these boards are based on the ARM64 architecture and run on lower power compared to traditional GPU computing devices.
## NVIDIA Jetson Series Comparison
@@ -24,7 +24,7 @@ NVIDIA Jetson is a series of embedded computing boards designed to bring acceler
| | Jetson AGX Orin 64GB | Jetson Orin NX 16GB | Jetson Orin Nano 8GB | Jetson AGX Xavier | Jetson Xavier NX | Jetson Nano |
|-------------------|------------------------------------------------------------------|-----------------------------------------------------------------|---------------------------------------------------------------|-------------------------------------------------------------|--------------------------------------------------------------|---------------------------------------------|
-| AI Performance | 275 TOPS | 100 TOPS | 40 TOPs | 32 TOPS | 21 TOPS | 472 GFLOPS |
+| AI Performance    | 275 TOPS                                                           | 100 TOPS                                                          | 40 TOPS                                                        | 32 TOPS                                                      | 21 TOPS                                                       | 472 GFLOPS                                   |
| GPU | 2048-core NVIDIA Ampere architecture GPU with 64 Tensor Cores | 1024-core NVIDIA Ampere architecture GPU with 32 Tensor Cores | 1024-core NVIDIA Ampere architecture GPU with 32 Tensor Cores | 512-core NVIDIA Volta architecture GPU with 64 Tensor Cores | 384-core NVIDIA Volta™ architecture GPU with 48 Tensor Cores | 128-core NVIDIA Maxwell™ architecture GPU |
| GPU Max Frequency | 1.3 GHz | 918 MHz | 625 MHz | 1377 MHz | 1100 MHz | 921MHz |
| CPU | 12-core NVIDIA Arm® Cortex A78AE v8.2 64-bit CPU 3MB L2 + 6MB L3 | 8-core NVIDIA Arm® Cortex A78AE v8.2 64-bit CPU 2MB L2 + 4MB L3 | 6-core Arm® Cortex®-A78AE v8.2 64-bit CPU 1.5MB L2 + 4MB L3 | 8-core NVIDIA Carmel Arm®v8.2 64-bit CPU 8MB L2 + 4MB L3 | 6-core NVIDIA Carmel Arm®v8.2 64-bit CPU 6MB L2 + 4MB L3 | Quad-Core Arm® Cortex®-A57 MPCore processor |
@@ -67,6 +67,7 @@ t=ultralytics/ultralytics:latest-jetson && sudo docker pull $t && sudo docker ru
-Here we will install ultralyics package on the Jetson with optional dependencies so that we can export the PyTorch models to other different formats. We will mainly focus on [NVIDIA TensorRT exports](https://docs.ultralytics.com/integrations/tensorrt) because TensoRT will make sure we can get the maximum performance out of the Jetson devices.
+Here we will install the ultralytics package on the Jetson with optional dependencies so that we can export the PyTorch models to different formats. We will mainly focus on [NVIDIA TensorRT exports](https://docs.ultralytics.com/integrations/tensorrt) because TensorRT will make sure we can get the maximum performance out of the Jetson devices.
1. Update packages list, install pip and upgrade to latest
+
```sh
sudo apt update
sudo apt install python3-pip -y
@@ -74,25 +75,29 @@ pip install -U pip
```
2. Install `ultralytics` pip package with optional dependencies
+
```sh
pip install ultralytics[export]
```
3. Reboot the device
+
```sh
sudo reboot
```
### Install PyTorch and Torchvision
-The above ultralytics installation will install Torch and Torchvision. However, these 2 packages installed via pip are not compatible to run on Jetson platform which is based on ARM64 architecture. Therefore we need to manually install pre-built PyTorch pip wheel and compile/ install Torchvision from source.
+The above ultralytics installation will install Torch and Torchvision. However, these two packages installed via pip are not compatible to run on the Jetson platform, which is based on ARM64 architecture. Therefore, we need to manually install a pre-built PyTorch pip wheel and compile/install Torchvision from source.
1. Uninstall currently installed PyTorch and Torchvision
+
```sh
pip uninstall torch torchvision
```
2. Install PyTorch 2.1.0 according to JP5.1.3
+
```sh
sudo apt-get install -y libopenblas-base libopenmpi-dev
wget https://developer.download.nvidia.com/compute/redist/jp/v512/pytorch/torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl -O torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl
@@ -100,6 +105,7 @@ pip install torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl
```
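+
+Optionally, verify the installation before continuing (a quick sanity-check sketch; the version string corresponds to the wheel installed above):
+
+```python
+import torch
+
+# Confirm the NVIDIA-built PyTorch wheel is active and CUDA is visible
+print(torch.__version__)          # expected: 2.1.0a0+41361538.nv23.06
+print(torch.cuda.is_available())  # expected: True on a correctly configured JetPack
+```
+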
3. Install Torchvision v0.16.2 according to PyTorch v2.1.0
+
```sh
sudo apt install -y libjpeg-dev zlib1g-dev
git clone https://github.com/pytorch/vision torchvision
@@ -149,13 +155,13 @@ The YOLOv8n model in PyTorch format is converted to TensorRT to run inference wi
## Arguments
-| Key | Value | Description |
-|----------|--------------|------------------------------------------------------|
+| Key | Value | Description |
+|----------|------------|------------------------------------------------------|
| `format` | `'engine'` | format to export to |
-| `imgsz` | `640` | image size as scalar or (h, w) list, i.e. (640, 480) |
-| `half` | `False` | FP16 quantization |
+| `imgsz` | `640` | image size as scalar or (h, w) list, i.e. (640, 480) |
+| `half` | `False` | FP16 quantization |
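+
+For example, FP16 quantization from the table above can be enabled at export time (a minimal sketch; FP16 typically improves Jetson throughput at a small accuracy cost):
+
+```python
+from ultralytics import YOLO
+
+# Export YOLOv8n to a TensorRT engine with FP16 quantization
+model = YOLO("yolov8n.pt")
+model.export(format="engine", imgsz=640, half=True)  # creates 'yolov8n.engine'
+```
+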
-## NVIDIA Jetson Orin YOLOv8 Benchmarks
+## NVIDIA Jetson Orin YOLOv8 Benchmarks
-YOLOv8 benchmarks below were run by the Ultralytics team on 3 different model formats measuring speed and accuracy: PyTorch, TorchScript and TensorRT. Benchmarks were run on Seeed Studio reComputer J4012 powered by Jetson Orin NX 16GB device at FP32 precision with default input image size of 640.
+The YOLOv8 benchmarks below were run by the Ultralytics team on 3 different model formats measuring speed and accuracy: PyTorch, TorchScript and TensorRT. Benchmarks were run on a Seeed Studio reComputer J4012 powered by the Jetson Orin NX 16GB device at FP32 precision with a default input image size of 640.
@@ -185,7 +191,6 @@ This table represents the benchmark results for five different models (YOLOv8n,
Visit [this link](https://www.seeedstudio.com/blog/2023/03/30/yolov8-performance-benchmarks-on-nvidia-jetson-devices) to explore more benchmarking efforts by Seeed Studio running on different versions of NVIDIA Jetson hardware.
-
## Reproduce Our Results
To reproduce the above Ultralytics benchmarks on all export [formats](../modes/export.md) run this code:
@@ -212,7 +217,6 @@ To reproduce the above Ultralytics benchmarks on all export [formats](../modes/e
-Note that benchmarking results might vary based on the exact hardware and software configuration of a system, as well as the current workload of the system at the time the benchmarks are run. For the most reliable results use a dataset with a large number of images, i.e. `data='coco128.yaml' (128 val images), or `data='coco.yaml'` (5000 val images).
+Note that benchmarking results might vary based on the exact hardware and software configuration of a system, as well as the current workload of the system at the time the benchmarks are run. For the most reliable results, use a dataset with a large number of images, i.e. `data='coco128.yaml'` (128 val images) or `data='coco.yaml'` (5000 val images).
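+
+For instance, the full COCO validation set can be selected like this (a sketch using the `benchmark` utility from `ultralytics.utils.benchmarks`; runtime is considerably longer than with `coco128.yaml`):
+
+```python
+from ultralytics.utils.benchmarks import benchmark
+
+# Benchmark export formats on the full COCO val set (5000 images)
+benchmark(model="yolov8n.pt", data="coco.yaml", imgsz=640, verbose=True)
+```
+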
-
!!! Note
-Currently only PyTorch, Torchscript and TensorRT are working with the benchmarking tools. We will update it to support other exports in the future.
+Currently only PyTorch, TorchScript and TensorRT work with the benchmarking tools. We will update them to support other exports in the future.
@@ -237,7 +241,7 @@ When using NVIDIA Jetson, there are a couple of best practices to follow in orde
3. Install Jetson Stats Application
- We can use jetson stats application to monitor the temperatures of the system components and check other system details such as view CPU, GPU, RAM utilizations, change power modes, set to max clocks, check JetPack information
+ We can use the jetson-stats application to monitor the temperatures of the system components and check other system details such as CPU, GPU and RAM utilization, change power modes, set max clocks, and check JetPack information
```sh
sudo apt update
sudo pip install jetson-stats
@@ -249,4 +253,4 @@ When using NVIDIA Jetson, there are a couple of best practices to follow in orde
## Next Steps
-Congratulations on successfully setting up YOLOv8 on your NVIDIA Jetson! For further learning and support, visit more guide at [Ultralytics YOLOv8 Docs](../)!
\ No newline at end of file
+Congratulations on successfully setting up YOLOv8 on your NVIDIA Jetson! For further learning and support, visit the [Ultralytics YOLOv8 Docs](../index.md)!
diff --git a/docs/en/guides/queue-management.md b/docs/en/guides/queue-management.md
index 9e72fd25..e02da630 100644
--- a/docs/en/guides/queue-management.md
+++ b/docs/en/guides/queue-management.md
@@ -10,7 +10,6 @@ keywords: Ultralytics, YOLOv8, Queue Management, Object Counting, Object Trackin
Queue management using [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics/) involves organizing and controlling lines of people or vehicles to reduce wait times and enhance efficiency. It's about optimizing queues to improve customer satisfaction and system performance in various settings like retail, banks, airports, and healthcare facilities.
-
-## Advantages of Queue Management?
+## Advantages of Queue Management
- **Reduced Waiting Times:** Queue management systems efficiently organize queues, minimizing wait times for customers. This leads to improved satisfaction levels as customers spend less time waiting and more time engaging with products or services.
@@ -23,7 +22,6 @@ Queue management using [Ultralytics YOLOv8](https://github.com/ultralytics/ultra
|  |  |
| Queue management at airport ticket counter Using Ultralytics YOLOv8 | Queue monitoring in crowd Ultralytics YOLOv8 |
-
!!! Example "Queue Management using YOLOv8 Example"
=== "Queue Manager"
@@ -126,20 +124,20 @@ Queue management using [Ultralytics YOLOv8](https://github.com/ultralytics/ultra
### Optional Arguments `set_args`
-| Name | Type | Default | Description |
-|-----------------------|-------------|----------------------------|---------------------------------------------|
-| `view_img` | `bool` | `False` | Display frames with counts |
-| `view_queue_counts` | `bool` | `True` | Display Queue counts only on video frame |
-| `line_thickness` | `int` | `2` | Increase bounding boxes thickness |
-| `reg_pts` | `list` | `[(20, 400), (1260, 400)]` | Points defining the Region Area |
-| `classes_names` | `dict` | `model.model.names` | Dictionary of Class Names |
-| `region_color` | `RGB Color` | `(255, 0, 255)` | Color of the Object counting Region or Line |
-| `track_thickness` | `int` | `2` | Thickness of Tracking Lines |
-| `draw_tracks` | `bool` | `False` | Enable drawing Track lines |
-| `track_color` | `RGB Color` | `(0, 255, 0)` | Color for each track line |
-| `count_txt_color` | `RGB Color` | `(255, 255, 255)` | Foreground color for Object counts text |
-| `region_thickness` | `int` | `5` | Thickness for object counter region or line |
-| `fontsize` | `float` | `0.6` | Font size of counting text |
+| Name | Type | Default | Description |
+|---------------------|-------------|----------------------------|---------------------------------------------|
+| `view_img` | `bool` | `False` | Display frames with counts |
+| `view_queue_counts` | `bool` | `True` | Display Queue counts only on video frame |
+| `line_thickness` | `int` | `2` | Increase bounding boxes thickness |
+| `reg_pts` | `list` | `[(20, 400), (1260, 400)]` | Points defining the Region Area |
+| `classes_names` | `dict` | `model.model.names` | Dictionary of Class Names |
+| `region_color` | `RGB Color` | `(255, 0, 255)` | Color of the Object counting Region or Line |
+| `track_thickness` | `int` | `2` | Thickness of Tracking Lines |
+| `draw_tracks` | `bool` | `False` | Enable drawing Track lines |
+| `track_color` | `RGB Color` | `(0, 255, 0)` | Color for each track line |
+| `count_txt_color` | `RGB Color` | `(255, 255, 255)` | Foreground color for Object counts text |
+| `region_thickness` | `int` | `5` | Thickness for object counter region or line |
+| `fontsize` | `float` | `0.6` | Font size of counting text |
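+
+A short sketch of how these options are passed (region points and display settings here are illustrative; unset options keep the defaults from the table above):
+
+```python
+from ultralytics import YOLO
+from ultralytics.solutions import queue_management
+
+model = YOLO("yolov8n.pt")
+
+# Configure the queue manager with a custom region and display options
+queue = queue_management.QueueManager()
+queue.set_args(
+    classes_names=model.model.names,  # dictionary of class names
+    reg_pts=[(20, 400), (1260, 400)],  # points defining the queue region
+    line_thickness=2,  # bounding box thickness
+    view_img=True,  # display frames with counts
+)
+```
+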
### Arguments `model.track`
diff --git a/docs/en/help/CI.md b/docs/en/help/CI.md
index 033cf717..62c8d3a8 100644
--- a/docs/en/help/CI.md
+++ b/docs/en/help/CI.md
@@ -22,13 +22,13 @@ Here's a brief description of our CI actions:
Below is the table showing the status of these CI tests for our main repositories:
-| Repository | CI | Docker Deployment | Broken Links | CodeQL | PyPi and Docs Publishing |
-|-----------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| [yolov3](https://github.com/ultralytics/yolov3) | [](https://github.com/ultralytics/yolov3/actions/workflows/ci-testing.yml) | [](https://github.com/ultralytics/yolov3/actions/workflows/docker.yml) | [](https://github.com/ultralytics/yolov3/actions/workflows/links.yml) | [](https://github.com/ultralytics/yolov3/actions/workflows/codeql-analysis.yml) | |
-| [yolov5](https://github.com/ultralytics/yolov5) | [](https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml) | [](https://github.com/ultralytics/yolov5/actions/workflows/docker.yml) | [](https://github.com/ultralytics/yolov5/actions/workflows/links.yml) | [](https://github.com/ultralytics/yolov5/actions/workflows/codeql-analysis.yml) | |
-| [ultralytics](https://github.com/ultralytics/ultralytics) | [](https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml) | [](https://github.com/ultralytics/ultralytics/actions/workflows/docker.yaml) | [](https://github.com/ultralytics/ultralytics/actions/workflows/links.yml) | [](https://github.com/ultralytics/ultralytics/actions/workflows/codeql.yaml) | [](https://github.com/ultralytics/ultralytics/actions/workflows/publish.yml) |
-| [hub](https://github.com/ultralytics/hub) | [](https://github.com/ultralytics/hub/actions/workflows/ci.yaml) | | [](https://github.com/ultralytics/hub/actions/workflows/links.yml) | | |
-| [docs](https://github.com/ultralytics/docs) | | | [](https://github.com/ultralytics/docs/actions/workflows/links.yml)[](https://github.com/ultralytics/docs/actions/workflows/check_domains.yml) | | [](https://github.com/ultralytics/docs/actions/workflows/pages/pages-build-deployment) |
+| Repository | CI | Docker Deployment | Broken Links | CodeQL | PyPi and Docs Publishing |
+|-----------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| [yolov3](https://github.com/ultralytics/yolov3) | [](https://github.com/ultralytics/yolov3/actions/workflows/ci-testing.yml) | [](https://github.com/ultralytics/yolov3/actions/workflows/docker.yml) | [](https://github.com/ultralytics/yolov3/actions/workflows/links.yml) | [](https://github.com/ultralytics/yolov3/actions/workflows/codeql-analysis.yml) | |
+| [yolov5](https://github.com/ultralytics/yolov5) | [](https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml) | [](https://github.com/ultralytics/yolov5/actions/workflows/docker.yml) | [](https://github.com/ultralytics/yolov5/actions/workflows/links.yml) | [](https://github.com/ultralytics/yolov5/actions/workflows/codeql-analysis.yml) | |
+| [ultralytics](https://github.com/ultralytics/ultralytics) | [](https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml) | [](https://github.com/ultralytics/ultralytics/actions/workflows/docker.yaml) | [](https://github.com/ultralytics/ultralytics/actions/workflows/links.yml) | [](https://github.com/ultralytics/ultralytics/actions/workflows/codeql.yaml) | [](https://github.com/ultralytics/ultralytics/actions/workflows/publish.yml) |
+| [hub](https://github.com/ultralytics/hub) | [](https://github.com/ultralytics/hub/actions/workflows/ci.yaml) | | [](https://github.com/ultralytics/hub/actions/workflows/links.yml) | | |
+| [docs](https://github.com/ultralytics/docs) | | | [](https://github.com/ultralytics/docs/actions/workflows/links.yml)[](https://github.com/ultralytics/docs/actions/workflows/check_domains.yml) | | [](https://github.com/ultralytics/docs/actions/workflows/pages/pages-build-deployment) |
Each badge shows the status of the last run of the corresponding CI test on the `main` branch of the respective repository. If a test fails, the badge will display a "failing" status, and if it passes, it will display a "passing" status.
diff --git a/docs/en/integrations/edge-tpu.md b/docs/en/integrations/edge-tpu.md
index aa83c449..ec2ec4a8 100644
--- a/docs/en/integrations/edge-tpu.md
+++ b/docs/en/integrations/edge-tpu.md
@@ -24,7 +24,7 @@ The Edge TPU works with quantized models. Quantization makes models smaller and
Here are the key features that make TFLite Edge TPU a great model format choice for developers:
-- **Optimized Performance on Edge Devices**: The TFLite Edge TPU achieves high-speed neural networking performance through quantization, model optimization, hardware acceleration, and compiler optimization. Its minimalistic architecture contributes to its smaller size and cost-efficiency.
+- **Optimized Performance on Edge Devices**: The TFLite Edge TPU achieves high-speed neural networking performance through quantization, model optimization, hardware acceleration, and compiler optimization. Its minimalistic architecture contributes to its smaller size and cost-efficiency.
- **High Computational Throughput**: TFLite Edge TPU combines specialized hardware acceleration and efficient runtime execution to achieve high computational throughput. It is well-suited for deploying machine learning models with stringent performance requirements on edge devices.
@@ -38,9 +38,9 @@ TFLite Edge TPU offers various deployment options for machine learning models, i
- **On-Device Deployment**: TensorFlow Edge TPU models can be directly deployed on mobile and embedded devices. On-device deployment allows the models to execute directly on the hardware, eliminating the need for cloud connectivity.
-- **Edge Computing with Cloud TensorFlow TPUs**: In scenarios where edge devices have limited processing capabilities, TensorFlow Edge TPUs can offload inference tasks to cloud servers equipped with TPUs.
+- **Edge Computing with Cloud TensorFlow TPUs**: In scenarios where edge devices have limited processing capabilities, TensorFlow Edge TPUs can offload inference tasks to cloud servers equipped with TPUs.
-- **Hybrid Deployment**: A hybrid approach combines on-device and cloud deployment and offers a versatile and scalable solution for deploying machine learning models. Advantages include on-device processing for quick responses and cloud computing for more complex computations.
+- **Hybrid Deployment**: A hybrid approach combines on-device and cloud deployment and offers a versatile and scalable solution for deploying machine learning models. Advantages include on-device processing for quick responses and cloud computing for more complex computations.
## Exporting YOLOv8 Models to TFLite Edge TPU
@@ -99,7 +99,7 @@ For more details about supported export options, visit the [Ultralytics document
## Deploying Exported YOLOv8 TFLite Edge TPU Models
-After successfully exporting your Ultralytics YOLOv8 models to TFLite Edge TPU format, you can now deploy them. The primary and recommended first step for running a TFLite Edge TPU model is to use the YOLO("model_edgetpu.tflite") method, as outlined in the previous usage code snippet.
+After successfully exporting your Ultralytics YOLOv8 models to TFLite Edge TPU format, you can now deploy them. The primary and recommended first step for running a TFLite Edge TPU model is to use the `YOLO("model_edgetpu.tflite")` method, as outlined in the previous usage code snippet.
However, for in-depth instructions on deploying your TFLite Edge TPU models, take a look at the following resources:
@@ -111,7 +111,7 @@ However, for in-depth instructions on deploying your TFLite Edge TPU models, tak
## Summary
-In this guide, we’ve learned how to export Ultralytics YOLOv8 models to TFLite Edge TPU format. By following the steps mentioned above, you can increase the speed and power of your computer vision applications.
+In this guide, we’ve learned how to export Ultralytics YOLOv8 models to TFLite Edge TPU format. By following the steps mentioned above, you can increase the speed and power of your computer vision applications.
For further details on usage, visit the [Edge TPU official website](https://cloud.google.com/edge-tpu).
diff --git a/docs/en/integrations/index.md b/docs/en/integrations/index.md
index c16a10ec..cc8fb7a0 100644
--- a/docs/en/integrations/index.md
+++ b/docs/en/integrations/index.md
@@ -64,7 +64,7 @@ Welcome to the Ultralytics Integrations page! This page provides an overview of
- [CoreML](coreml.md): CoreML, developed by [Apple](https://www.apple.com/), is a framework designed for efficiently integrating machine learning models into applications across iOS, macOS, watchOS, and tvOS, using Apple's hardware for effective and secure model deployment.
- [TF SavedModel](tf-savedmodel.md): Developed by [Google](https://www.google.com), TF SavedModel is a universal serialization format for TensorFlow models, enabling easy sharing and deployment across a wide range of platforms, from servers to edge devices.
-
+
- [TF GraphDef](tf-graphdef.md): Developed by [Google](https://www.google.com), GraphDef is TensorFlow's format for representing computation graphs, enabling optimized execution of machine learning models across diverse hardware.
- [TFLite](tflite.md): Developed by [Google](https://www.google.com), TFLite is a lightweight framework for deploying machine learning models on mobile and edge devices, ensuring fast, efficient inference with minimal memory footprint.
@@ -72,7 +72,7 @@ Welcome to the Ultralytics Integrations page! This page provides an overview of
- [TFLite Edge TPU](edge-tpu.md): Developed by [Google](https://www.google.com) for optimizing TensorFlow Lite models on Edge TPUs, this model format ensures high-speed, efficient edge computing.
- [PaddlePaddle](paddlepaddle.md): An open-source deep learning platform by [Baidu](https://www.baidu.com/), PaddlePaddle enables the efficient deployment of AI models and focuses on the scalability of industrial applications.
-
+
- [NCNN](ncnn.md): Developed by [Tencent](http://www.tencent.com/), NCNN is an efficient neural network inference framework tailored for mobile devices. It enables direct deployment of AI models into apps, optimizing performance across various mobile platforms.
### Export Formats
diff --git a/docs/en/integrations/paddlepaddle.md b/docs/en/integrations/paddlepaddle.md
index f41116fb..bc8ccead 100644
--- a/docs/en/integrations/paddlepaddle.md
+++ b/docs/en/integrations/paddlepaddle.md
@@ -16,7 +16,7 @@ The ability to export to PaddlePaddle model format allows you to optimize your [