diff --git a/docs/en/models/yolo12.md b/docs/en/models/yolo12.md
index c0ec8abe..9614e5ac 100644
--- a/docs/en/models/yolo12.md
+++ b/docs/en/models/yolo12.md
@@ -10,7 +10,16 @@ keywords: YOLO12, attention-centric object detection, YOLO series, Ultralytics,
YOLO12 introduces an attention-centric architecture that departs from the traditional CNN-based approaches used in previous YOLO models while retaining the real-time inference speed essential for many applications. It achieves state-of-the-art object detection accuracy through novel innovations in attention mechanisms and overall network architecture.
-
+
+**Watch:** How to Use YOLO12 for Object Detection with the Ultralytics Package | Is YOLO12 Fast or Slow? 🚀
+
## Key Features
@@ -29,6 +38,8 @@ YOLO12 introduces an attention-centric architecture that departs from the tradit
- **Enhanced Efficiency**: Achieves higher accuracy with fewer parameters compared to many prior models, demonstrating an improved balance between speed and accuracy.
- **Flexible Deployment**: Designed for deployment across diverse platforms, from edge devices to cloud infrastructure.
+
+
## Supported Tasks and Modes
YOLO12 supports a variety of computer vision tasks. The table below shows task support and the operational modes (Inference, Validation, Training, and Export) enabled for each:
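
The Ultralytics Python API drives all four modes through a single `YOLO` object. The sketch below is illustrative only; it assumes the `ultralytics` package is installed and that the YOLO12 detection checkpoint follows the `yolo12n.pt` naming used elsewhere in these docs.

```python
from ultralytics import YOLO

# Load a pretrained YOLO12 nano detection model (checkpoint name assumed from the docs' naming pattern)
model = YOLO("yolo12n.pt")

# Train: fine-tune on the small COCO8 sample dataset bundled with the package
model.train(data="coco8.yaml", epochs=10, imgsz=640)

# Validate: compute mAP and other metrics on the dataset's validation split
metrics = model.val()

# Inference: run prediction on an image and display the result
results = model("https://ultralytics.com/images/bus.jpg")
results[0].show()

# Export: convert the model to ONNX for deployment
model.export(format="onnx")
```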
diff --git a/docs/model_data.py b/docs/model_data.py
index 4d2e40ca..825515da 100644
--- a/docs/model_data.py
+++ b/docs/model_data.py
@@ -4,7 +4,7 @@ data = {
"YOLO12": {
"author": "Yunjie Tian, Qixiang Ye, David Doermann",
"org": "University at Buffalo and University of Chinese Academy of Sciences",
- "date": "2024-02-18",
+ "date": "2025-02-18",
"arxiv": "https://arxiv.org/abs/2502.12524",
"github": "https://github.com/sunsmarterjie/yolov12",
"docs": "https://docs.ultralytics.com/models/yolo12/",