Optimize Docs images (#15900)

Signed-off-by: UltralyticsAssistant <web@ultralytics.com>
Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Muhammad Rizwan Munawar 2024-08-30 05:52:10 +05:00 committed by GitHub
parent 0f9f7b806c
commit cfebb5f26b
174 changed files with 537 additions and 537 deletions


@@ -6,7 +6,7 @@ keywords: YOLOv8, OpenVINO, model export, Intel, AI inference, CPU speedup, GPU
# Intel OpenVINO Export
<img width="1024" src="https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/2b181f68-aa91-4514-ba09-497cc3c83b00" alt="OpenVINO Ecosystem">
<img width="1024" src="https://github.com/ultralytics/docs/releases/download/0/openvino-ecosystem.avif" alt="OpenVINO Ecosystem">
In this guide, we cover exporting YOLOv8 models to the [OpenVINO](https://docs.openvino.ai/) format, which can provide up to 3x [CPU](https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/cpu-device.html) speedup, as well as accelerating YOLO inference on Intel [GPU](https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/gpu-device.html) and [NPU](https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/npu-device.html) hardware.
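
For context on the page being edited: the export workflow it documents uses the standard Ultralytics Python API. A minimal sketch, assuming the `ultralytics` package is installed; the model name and image URL are illustrative:

```python
from ultralytics import YOLO

# Load a pretrained YOLOv8n detection model
model = YOLO("yolov8n.pt")

# Export to OpenVINO format; writes a 'yolov8n_openvino_model/' directory
model.export(format="openvino")

# Reload the exported model and run inference on Intel hardware
ov_model = YOLO("yolov8n_openvino_model/")
results = ov_model("https://ultralytics.com/images/bus.jpg")
```
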
@@ -118,7 +118,7 @@ The Intel® Data Center GPU Flex Series is a versatile and robust solution desig
Benchmarks below run on Intel® Data Center GPU Flex 170 at FP32 precision.
<div align="center">
<img width="800" src="https://user-images.githubusercontent.com/26833433/253741543-62659bf8-1765-4d0b-b71c-8a4f9885506a.jpg" alt="Flex GPU benchmarks">
<img width="800" src="https://github.com/ultralytics/docs/releases/download/0/flex-gpu-benchmarks.avif" alt="Flex GPU benchmarks">
</div>
| Model | Format | Status | Size (MB) | mAP50-95(B) | Inference time (ms/im) |
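
The FP32 benchmark tables shown in this and the remaining hunks can be reproduced with the Ultralytics benchmark utility. A minimal sketch, assuming a local `ultralytics` install; the dataset and device string are illustrative:

```python
from ultralytics.utils.benchmarks import benchmark

# Benchmark YOLOv8n across export formats (PyTorch, OpenVINO, ...) at FP32 (half=False)
# Swap the device string for the target Intel CPU/GPU as appropriate
benchmark(model="yolov8n.pt", data="coco8.yaml", imgsz=640, half=False, device="cpu")
```
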
@@ -157,7 +157,7 @@ Early reviews have praised the Arc™ series, particularly the integrated A770M
Benchmarks below run on Intel® Arc 770 GPU at FP32 precision.
<div align="center">
<img width="800" src="https://user-images.githubusercontent.com/26833433/253741545-8530388f-8fd1-44f7-a4ae-f875d59dc282.jpg" alt="Arc GPU benchmarks">
<img width="800" src="https://github.com/ultralytics/docs/releases/download/0/arc-gpu-benchmarks.avif" alt="Arc GPU benchmarks">
</div>
| Model | Format | Status | Size (MB) | metrics/mAP50-95(B) | Inference time (ms/im) |
@@ -192,7 +192,7 @@ Notably, Xeon® CPUs deliver high compute density and scalability, making them i
Benchmarks below run on 4th Gen Intel® Xeon® Scalable CPU at FP32 precision.
<div align="center">
<img width="800" src="https://user-images.githubusercontent.com/26833433/253741546-dcd8e52a-fc38-424f-b87e-c8365b6f28dc.jpg" alt="Xeon CPU benchmarks">
<img width="800" src="https://github.com/ultralytics/docs/releases/download/0/xeon-cpu-benchmarks.avif" alt="Xeon CPU benchmarks">
</div>
| Model | Format | Status | Size (MB) | metrics/mAP50-95(B) | Inference time (ms/im) |
@@ -225,7 +225,7 @@ The Intel® Core® series is a range of high-performance processors by Intel. Th
Benchmarks below run on 13th Gen Intel® Core® i7-13700H CPU at FP32 precision.
<div align="center">
<img width="800" src="https://user-images.githubusercontent.com/26833433/254559985-727bfa43-93fa-4fec-a417-800f869f3f9e.jpg" alt="Core CPU benchmarks">
<img width="800" src="https://github.com/ultralytics/docs/releases/download/0/core-cpu-benchmarks.avif" alt="Core CPU benchmarks">
</div>
| Model | Format | Status | Size (MB) | metrics/mAP50-95(B) | Inference time (ms/im) |