diff --git a/docs/en/integrations/albumentations.md b/docs/en/integrations/albumentations.md
index e7b0d02c..fe7b081c 100644
--- a/docs/en/integrations/albumentations.md
+++ b/docs/en/integrations/albumentations.md
@@ -158,3 +158,42 @@ If you are interested in learning more about Albumentations, check out the follo
In this guide, we explored the key aspects of Albumentations, a great Python library for image augmentation. We discussed its wide range of transformations, optimized performance, and how you can use it in your next YOLO11 project.
Also, if you'd like to know more about other Ultralytics YOLO11 integrations, visit our [integration guide page](../integrations/index.md). You'll find valuable resources and insights there.
+
+## FAQ
+
+### How can I integrate Albumentations with YOLO11 for improved data augmentation?
+
+Albumentations integrates seamlessly with YOLO11 and applies automatically during training if you have the package installed. Here's how to get started:
+
+```python
+# Install required packages
+# !pip install albumentations ultralytics
+from ultralytics import YOLO
+
+# Load and train model with automatic augmentations
+model = YOLO("yolo11n.pt")
+model.train(data="coco8.yaml", epochs=100)
+```
+
+The integration includes optimized augmentations like blur, median blur, grayscale conversion, and CLAHE with carefully tuned probabilities to enhance model performance.
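+
+When the package is installed, Ultralytics builds a small Albumentations pipeline from these transforms internally. As a rough sketch of what such a pipeline looks like (the probabilities shown are illustrative, not necessarily the exact values Ultralytics uses):
+
+```python
+import albumentations as A
+
+# Illustrative pipeline of the transforms named above
+transform = A.Compose(
+    [
+        A.Blur(p=0.01),  # random-kernel blur
+        A.MedianBlur(p=0.01),  # median-filter blur
+        A.ToGray(p=0.01),  # grayscale conversion
+        A.CLAHE(p=0.01),  # contrast-limited adaptive histogram equalization
+    ]
+)
+```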
+
+### What are the key benefits of using Albumentations over other augmentation libraries?
+
+Albumentations stands out for several reasons:
+
+1. Performance: Built on OpenCV and NumPy with SIMD optimization for superior speed
+2. Flexibility: Supports 70+ transformations across pixel-level, spatial-level, and mixing-level augmentations
+3. Compatibility: Works seamlessly with popular frameworks like [PyTorch](../integrations/torchscript.md) and [TensorFlow](../integrations/tensorboard.md)
+4. Reliability: Extensive test suite prevents silent data corruption
+5. Ease of use: Single unified API for all augmentation types (see the sketch below)
+
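+That unified API is easy to see in a short, hypothetical detection example, where one `Compose` call mixes spatial and pixel-level transforms and keeps YOLO-format bounding boxes in sync:
+
+```python
+import albumentations as A
+
+# One Compose call handles both transform types; bbox_params
+# tells Albumentations to update bounding boxes alongside the image
+transform = A.Compose(
+    [
+        A.HorizontalFlip(p=0.5),  # spatial-level
+        A.RandomBrightnessContrast(p=0.2),  # pixel-level
+    ],
+    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
+)
+```
+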
+### What types of computer vision tasks can benefit from Albumentations augmentation?
+
+Albumentations enhances various [computer vision tasks](../tasks/index.md) including:
+
+- [Object Detection](../tasks/detect.md): Improves model robustness to lighting, scale, and orientation variations
+- [Instance Segmentation](../tasks/segment.md): Enhances mask prediction accuracy through diverse transformations
+- [Classification](../tasks/classify.md): Increases model generalization with color and geometric augmentations
+- [Pose Estimation](../tasks/pose.md): Helps models adapt to different viewpoints and lighting conditions
+
+The library's diverse augmentation options make it valuable for any vision task requiring robust model performance.
diff --git a/docs/en/integrations/sony-imx500.md b/docs/en/integrations/sony-imx500.md
new file mode 100644
index 00000000..43dbc133
--- /dev/null
+++ b/docs/en/integrations/sony-imx500.md
@@ -0,0 +1,325 @@
+---
+comments: true
+description: Learn to export Ultralytics YOLOv8 models to Sony's IMX500 format to optimize them for efficient deployment on edge devices.
+keywords: Sony, IMX500, IMX 500, Aitrios, MCT, model export, quantization, pruning, deep learning optimization, Raspberry Pi AI Camera, edge AI, PyTorch, IMX
+---
+
+# IMX500 Export for Ultralytics YOLOv8
+
+This guide covers exporting and deploying Ultralytics YOLOv8 models to Raspberry Pi AI Cameras that feature the Sony IMX500 sensor.
+
+Deploying computer vision models on devices with limited computational power, such as [Raspberry Pi AI Camera](https://www.raspberrypi.com/products/ai-camera/), can be tricky. Using a model format optimized for faster performance makes a huge difference.
+
+The IMX500 model format is designed to use minimal power while delivering fast performance for neural networks. It allows you to optimize your [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) models for high-speed and low-power inferencing. In this guide, we'll walk you through exporting and deploying your models to the IMX500 format while making it easier for your models to perform well on the [Raspberry Pi AI Camera](https://www.raspberrypi.com/products/ai-camera/).
+
+## Why Should You Export to IMX500?
+
+Sony's [IMX500 Intelligent Vision Sensor](https://developer.aitrios.sony-semicon.com/en/raspberrypi-ai-camera) is a game-changing piece of hardware in edge AI processing. It's the world's first intelligent vision sensor with on-chip AI capabilities. This sensor helps overcome many challenges in edge AI, including data processing bottlenecks, privacy concerns, and performance limitations.
+
+While other sensors merely pass along images and frames, the IMX500 tells a whole story. It processes data directly on the sensor, allowing devices to generate insights in real time.
+
+## Sony's IMX500 Export for YOLOv8 Models
+
+The IMX500 is designed to transform how devices handle data directly on the sensor, without needing to send it off to the cloud for processing.
+
+The IMX500 works with quantized models. Quantization makes models smaller and faster without losing much [accuracy](https://www.ultralytics.com/glossary/accuracy). It is ideal for the limited resources of edge computing, allowing applications to respond quickly by reducing latency and allowing for quick data processing locally, without cloud dependency. Local processing also keeps user data private and secure since it's not sent to a remote server.
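+
+As a rough, self-contained illustration of the idea (a toy affine scheme, not the exact quantization the IMX500 toolchain applies), INT8 quantization maps float values onto 256 integer levels and back:
+
+```python
+import numpy as np
+
+# Toy affine quantization: float32 -> int8 -> float32
+w = np.array([-1.2, 0.0, 0.7, 2.5], dtype=np.float32)
+scale = (w.max() - w.min()) / 255.0  # spread the value range over 256 levels
+zero_point = np.round(-128 - w.min() / scale)  # map w.min() to -128
+q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)  # 1 byte per value
+w_restored = (q.astype(np.float32) - zero_point) * scale  # close to the original w
+```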
+
+**IMX500 Key Features:**
+
+- **Metadata Output:** Instead of transmitting full images, the IMX500 outputs only metadata, minimizing data size, reducing bandwidth, and lowering costs.
+- **Addresses Privacy Concerns:** By processing data on the device, the IMX500 addresses privacy concerns, ideal for human-centric applications like person counting and occupancy tracking.
+- **Real-time Processing:** Fast, on-sensor processing supports real-time decisions, perfect for edge AI applications such as autonomous systems.
+
+**Before You Begin:** For best results, ensure your YOLOv8 model is well-prepared for export by following our [Model Training Guide](https://docs.ultralytics.com/modes/train/), [Data Preparation Guide](https://docs.ultralytics.com/datasets/), and [Hyperparameter Tuning Guide](https://docs.ultralytics.com/guides/hyperparameter-tuning/).
+
+## Usage Examples
+
+Export an Ultralytics YOLOv8 model to IMX500 format and run inference with the exported model.
+
+!!! note
+
+    Here we perform inference just to make sure the model works as expected. However, for deployment and inference on the Raspberry Pi AI Camera, please jump to the [Using IMX500 Export in Deployment](#using-imx500-export-in-deployment) section.
+
+!!! example
+
+ === "Python"
+
+ ```python
+ from ultralytics import YOLO
+
+ # Load a YOLOv8n PyTorch model
+ model = YOLO("yolov8n.pt")
+
+ # Export the model
+ model.export(format="imx") # exports with PTQ quantization by default
+
+ # Load the exported model
+ imx_model = YOLO("yolov8n_imx_model")
+
+ # Run inference
+ results = imx_model("https://ultralytics.com/images/bus.jpg")
+ ```
+
+ === "CLI"
+
+ ```bash
+ # Export a YOLOv8n PyTorch model to imx format with Post-Training Quantization (PTQ)
+ yolo export model=yolov8n.pt format=imx
+
+ # Run inference with the exported model
+ yolo predict model=yolov8n_imx_model source='https://ultralytics.com/images/bus.jpg'
+ ```
+
+The export process will create an ONNX model for quantization validation, along with a directory suffixed with `_imx_model` (for example, `yolov8n_imx_model`). This directory will include the `packerOut.zip` file, which is essential for packaging the model for deployment on the IMX500 hardware. Additionally, the directory will contain a `labels.txt` file listing all the labels associated with the model.
+
+```bash
+yolov8n_imx_model
+├── dnnParams.xml
+├── labels.txt
+├── packerOut.zip
+├── yolov8n_imx.onnx
+├── yolov8n_imx500_model_MemoryReport.json
+└── yolov8n_imx500_model.pbtxt
+```
+
+## Arguments
+
+When exporting a model to IMX500 format, you can specify various arguments:
+
+| Key | Value | Description |
+| -------- | ------ | -------------------------------------------------------- |
+| `format` | `imx` | Format to export to (imx) |
+| `int8` | `True` | Enable INT8 quantization for the model (default: `True`) |
+| `imgsz` | `640` | Image size for the model input (default: `640`) |
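+
+For example, the same export spelled out with every argument from the table (the values shown are the defaults):
+
+```python
+from ultralytics import YOLO
+
+model = YOLO("yolov8n.pt")
+
+# Equivalent to model.export(format="imx"); int8 and imgsz are explicit here
+model.export(format="imx", int8=True, imgsz=640)
+```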
+
+## Using IMX500 Export in Deployment
+
+After exporting an Ultralytics YOLOv8n model to the IMX500 format, it can be deployed to the Raspberry Pi AI Camera for inference.
+
+### Hardware Prerequisites
+
+Make sure you have the following hardware:
+
+1. Raspberry Pi 5 or Raspberry Pi 4 Model B
+2. Raspberry Pi AI Camera
+
+Connect the Raspberry Pi AI Camera to the 15-pin MIPI CSI connector on the Raspberry Pi, then power on the Raspberry Pi.
+
+### Software Prerequisites
+
+!!! note
+
+    This guide has been tested with Raspberry Pi OS Bookworm running on a Raspberry Pi 5.
+
+Step 1: Open a terminal window and execute the following commands to update the Raspberry Pi software to the latest version.
+
+```bash
+sudo apt update && sudo apt full-upgrade
+```
+
+Step 2: Install the IMX500 firmware, which is required to operate the IMX500 sensor, along with a packager tool.
+
+```bash
+sudo apt install imx500-all imx500-tools
+```
+
+Step 3: Install the prerequisites to run the `picamera2` application. We will use this application later in the deployment process.
+
+```bash
+sudo apt install python3-opencv python3-munkres
+```
+
+Step 4: Reboot the Raspberry Pi for the changes to take effect.
+
+```bash
+sudo reboot
+```
+
+### Package Model and Deploy to AI Camera
+
+After obtaining `packerOut.zip` from the IMX500 conversion process, you can pass this file into the packager tool to obtain an RPK file. This file can then be deployed directly to the AI Camera using `picamera2`.
+
+Step 1: Package the model into an RPK file.
+
+```bash
+imx500-package -i <path to packerOut.zip> -o <output folder>