ultralytics 8.3.65 Rockchip RKNN Integration for Ultralytics YOLO models (#16308)

Signed-off-by: Francesco Mattioli <Francesco.mttl@gmail.com>
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: Burhan <62214284+Burhan-Q@users.noreply.github.com>
Co-authored-by: Lakshantha Dissanayake <lakshantha@ultralytics.com>
Co-authored-by: Burhan <Burhan-Q@users.noreply.github.com>
Co-authored-by: Laughing-q <1185102784@qq.com>
Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
Co-authored-by: Laughing <61612323+Laughing-q@users.noreply.github.com>
Co-authored-by: Ultralytics Assistant <135830346+UltralyticsAssistant@users.noreply.github.com>
Co-authored-by: Lakshantha Dissanayake <lakshanthad@yahoo.com>
Co-authored-by: Francesco Mattioli <Francesco.mttl@gmail.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Ivor Zhu 2025-01-20 20:25:54 -05:00 committed by GitHub
parent 617dea8e25
commit b5e0cee943
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
41 changed files with 390 additions and 118 deletions

.gitignore

@@ -124,6 +124,7 @@ venv.bak/
 # VSCode project settings
 .vscode/
+.devcontainer/

 # Rope project settings
 .ropeproject

@@ -165,6 +166,7 @@ weights/
 *_ncnn_model/
 *_imx_model/
 pnnx*
+*.rknn

 # Autogenerated files for tests
 /ultralytics/assets/


@@ -95,6 +95,8 @@ Welcome to the Ultralytics Integrations page! This page provides an overview of
 - [SONY IMX500](sony-imx500.md): Optimize and deploy [Ultralytics YOLOv8](https://docs.ultralytics.com/models/yolov8/) models on Raspberry Pi AI Cameras with the IMX500 sensor for fast, low-power performance.
+- [Rockchip RKNN](rockchip-rknn.md): Developed by [Rockchip](https://www.rock-chips.com/), RKNN is a specialized neural network inference framework optimized for Rockchip's hardware platforms, particularly their NPUs. It facilitates efficient deployment of AI models on edge devices, enabling high-performance inference in real-time applications.

 ### Export Formats

 We also support a variety of model export formats for deployment in different environments. Here are the available formats:


@@ -0,0 +1,158 @@
---
comments: true
description: Learn how to export YOLO11 models to RKNN format for efficient deployment on Rockchip platforms with enhanced performance.
keywords: YOLO11, RKNN, model export, Ultralytics, Rockchip, machine learning, model deployment, computer vision, deep learning
---
# RKNN Export for Ultralytics YOLO11 Models
When deploying computer vision models on embedded devices, especially those powered by Rockchip processors, having a compatible model format is essential. Exporting [Ultralytics YOLO11](https://github.com/ultralytics/ultralytics) models to RKNN format ensures optimized performance and compatibility with Rockchip's hardware. This guide will walk you through converting your YOLO11 models to RKNN format, enabling efficient deployment on Rockchip platforms.
!!! note

    This guide has been tested with the [Radxa Rock 5B](https://radxa.com/products/rock5/5b), which is based on the Rockchip RK3588, and the [Radxa Zero 3W](https://radxa.com/products/zeros/zero3w), which is based on the Rockchip RK3566. It is also expected to work on other Rockchip-based devices supported by [rknn-toolkit2](https://github.com/airockchip/rknn-toolkit2), such as the RK3576, RK3568, RK3562, RV1103, RV1106, RV1103B, RV1106B, and RK2118.
<p align="center">
<img width="100%" src="https://www.rock-chips.com/Images/web/solution/AI/chip_s.png" alt="RKNN">
</p>
## What is Rockchip?
Renowned for delivering versatile and power-efficient solutions, Rockchip designs advanced System-on-Chips (SoCs) that power a wide range of consumer electronics, industrial applications, and AI technologies. With ARM-based architecture, built-in Neural Processing Units (NPUs), and high-resolution multimedia support, Rockchip SoCs enable cutting-edge performance for devices like tablets, smart TVs, IoT systems, and edge AI applications. Companies like Radxa, ASUS, Pine64, Orange Pi, Odroid, Khadas, and Banana Pi offer a variety of products based on Rockchip SoCs, further extending their reach and impact across diverse markets.
## RKNN Toolkit
The [RKNN Toolkit](https://github.com/airockchip/rknn-toolkit2) is a set of tools and libraries provided by Rockchip to facilitate the deployment of deep learning models on their hardware platforms. RKNN, or Rockchip Neural Network, is the proprietary format used by these tools. RKNN models are designed to take full advantage of the hardware acceleration provided by Rockchip's NPU (Neural Processing Unit), ensuring high performance in AI tasks on devices like RK3588, RK3566, RV1103, RV1106, and other Rockchip-powered systems.
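For orientation, the toolkit's Python API follows a config, load, build, export flow. The sketch below mirrors what the `export_rknn` method added later in this commit does internally; the ONNX path and target platform are illustrative, and the conversion must run on an x86 Linux host.

```python
from rknn.api import RKNN  # pip install rknn-toolkit2 (x86 Linux only)

onnx_path = "yolo11n.onnx"  # illustrative: a YOLO11 detection model exported to ONNX

rknn = RKNN(verbose=False)
# Fold normalization into the model: mean 0, std 255 maps uint8 [0, 255] inputs to float [0, 1]
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform="rk3588")
rknn.load_onnx(model=onnx_path)
rknn.build(do_quantization=False)  # FP16 build; quantization is not wired up in this release
rknn.export_rknn("yolo11n-rk3588.rknn")  # consumed on-device by rknn-toolkit-lite2
rknn.release()
```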
## Key Features of RKNN Models
RKNN models offer several advantages for deployment on Rockchip platforms:
- **Optimized for NPU**: RKNN models are specifically optimized to run on Rockchip's NPUs, ensuring maximum performance and efficiency.
- **Low Latency**: The RKNN format minimizes inference latency, which is critical for real-time applications on edge devices.
- **Platform-Specific Customization**: RKNN models can be tailored to specific Rockchip platforms, enabling better utilization of hardware resources.
## Flash OS to Rockchip hardware

The first step after getting your hands on a Rockchip-based device is to flash an OS so that the hardware can boot into a working environment. In this guide, we link to the getting-started guides for the two devices we tested: the Radxa Rock 5B and the Radxa Zero 3W.
- [Radxa Rock 5B Getting Started Guide](https://docs.radxa.com/en/rock5/rock5b)
- [Radxa Zero 3W Getting Started Guide](https://docs.radxa.com/en/zero/zero3)
## Export to RKNN: Converting Your YOLO11 Model
Export an Ultralytics YOLO11 model to RKNN format and run inference with the exported model.
!!! note

    Make sure to use an x86-based Linux PC to export the model to RKNN, because exporting on Rockchip-based devices (ARM64) is not supported.
### Installation
To install the required packages, run:
!!! Tip "Installation"

    === "CLI"

        ```bash
        # Install the required package for YOLO11
        pip install ultralytics
        ```
For detailed instructions and best practices related to the installation process, check our [Ultralytics Installation guide](../quickstart.md). While installing the required packages for YOLO11, if you encounter any difficulties, consult our [Common Issues guide](../guides/yolo-common-issues.md) for solutions and tips.
### Usage
!!! note

    Export is currently supported only for detection models. Support for more model types will come in the future.
!!! Example "Usage"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load the YOLO11 model
        model = YOLO("yolo11n.pt")

        # Export the model to RKNN format
        # 'name' can be one of rk3588, rk3576, rk3566, rk3568, rk3562, rv1103, rv1106, rv1103b, rv1106b, rk2118
        model.export(format="rknn", name="rk3588")  # creates 'yolo11n_rknn_model'
        ```

    === "CLI"

        ```bash
        # Export a YOLO11n PyTorch model to RKNN format
        # 'name' can be one of rk3588, rk3576, rk3566, rk3568, rk3562, rv1103, rv1106, rv1103b, rv1106b, rk2118
        yolo export model=yolo11n.pt format=rknn name=rk3588  # creates 'yolo11n_rknn_model'
        ```
For more details about the export process, visit the [Ultralytics documentation page on exporting](../modes/export.md).
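The export step in this release writes a directory containing the platform-specific `.rknn` weights plus a `metadata.yaml` describing the model. A quick sanity check of the output (paths assume the export example above):

```python
from pathlib import Path

export_dir = Path("yolo11n_rknn_model")  # created by the export step above
print(sorted(p.name for p in export_dir.iterdir()))
# Expected, based on this release's exporter: ['metadata.yaml', 'yolo11n-rk3588.rknn']
```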
## Deploying Exported YOLO11 RKNN Models
Once you've successfully exported your Ultralytics YOLO11 models to RKNN format, the next step is deploying these models on Rockchip-based devices.
### Installation
To install the required packages, run:
!!! Tip "Installation"

    === "CLI"

        ```bash
        # Install the required package for YOLO11
        pip install ultralytics
        ```
### Usage
!!! Example "Usage"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load the exported RKNN model
        rknn_model = YOLO("./yolo11n_rknn_model")

        # Run inference
        results = rknn_model("https://ultralytics.com/images/bus.jpg")
        ```

    === "CLI"

        ```bash
        # Run inference with the exported model
        yolo predict model='./yolo11n_rknn_model' source='https://ultralytics.com/images/bus.jpg'
        ```
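Predictions come back as standard Ultralytics `Results` objects, so post-processing is the same as with any other backend. A minimal sketch (output filename illustrative):

```python
from ultralytics import YOLO

# Load the exported RKNN model and run inference on a sample image
rknn_model = YOLO("./yolo11n_rknn_model")
results = rknn_model("https://ultralytics.com/images/bus.jpg")

# Print class index, confidence, and box corners for each detection
for box in results[0].boxes:
    print(int(box.cls), float(box.conf), box.xyxy.tolist())

results[0].save("bus_annotated.jpg")  # write the annotated image to disk
```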
## Benchmarks
The YOLO11 benchmarks below were run by the Ultralytics team on a Radxa Rock 5B (Rockchip RK3588), measuring speed and accuracy with the `rknn` model format.
| Model | Format | Status | Size (MB) | mAP50-95(B) | Inference time (ms/im) |
| ------- | ------ | ------ | --------- | ----------- | ---------------------- |
| YOLO11n | rknn | ✅ | 7.4 | 0.61 | 99.5 |
| YOLO11s | rknn | ✅ | 20.7 | 0.741 | 122.3 |
| YOLO11m | rknn | ✅ | 41.9 | 0.764 | 298.0 |
| YOLO11l | rknn | ✅ | 53.3 | 0.72 | 319.6 |
| YOLO11x | rknn | ✅ | 114.6 | 0.828 | 632.1 |
!!! note

    Validation for the above benchmarks was done using the coco8 dataset.
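To reproduce numbers like these, the same `benchmark` helper the Ultralytics team uses can be called directly; a sketch, assuming your host supports the formats being tested (RKNN inference, for example, requires a Rockchip device, and unsupported formats are simply reported as failed rows):

```python
from ultralytics.utils.benchmarks import benchmark

# Benchmark export formats for speed and mAP; arguments here are illustrative
benchmark(model="yolo11n.pt", data="coco8.yaml", imgsz=640, verbose=True)
```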
## Summary
In this guide, you've learned how to export Ultralytics YOLO11 models to RKNN format to enhance their deployment on Rockchip platforms. You were also introduced to the RKNN Toolkit and the specific advantages of using RKNN models for edge AI applications.
For further details on usage, visit the [RKNN official documentation](https://github.com/airockchip/rknn-toolkit2).
Also, if you'd like to know more about other Ultralytics YOLO11 integrations, visit our [integration guide page](../integrations/index.md). You'll find plenty of useful resources and insights there.


@@ -15,3 +15,4 @@
 | [MNN](../integrations/mnn.md) | `mnn` | `{{ model_name or "yolo11n" }}.mnn` | ✅ | `imgsz`, `batch`, `int8`, `half` |
 | [NCNN](../integrations/ncnn.md) | `ncnn` | `{{ model_name or "yolo11n" }}_ncnn_model/` | ✅ | `imgsz`, `half`, `batch` |
 | [IMX500](../integrations/sony-imx500.md) | `imx` | `{{ model_name or "yolov8n" }}_imx_model/` | ✅ | `imgsz`, `int8` |
+| [RKNN](../integrations/rockchip-rknn.md) | `rknn` | `{{ model_name or "yolo11n" }}_rknn_model/` | ✅ | `imgsz`, `batch`, `name` |


@@ -111,6 +111,10 @@ keywords: Ultralytics, YOLO, utility functions, version checks, requirements, im
 <br><br><hr><br>

+## ::: ultralytics.utils.checks.is_rockchip
+
+<br><br><hr><br>
+
 ## ::: ultralytics.utils.checks.is_sudo_available

 <br><br>


@@ -181,3 +181,6 @@ xinwang614@gmail.com:
 zhaode.wzd@alibaba-inc.com:
   avatar: https://avatars.githubusercontent.com/u/8401806?v=4
   username: wangzhaode
+zhushuoyu0501@gmail.com:
+  avatar: null
+  username: null


@@ -91,7 +91,7 @@ def mouse_callback(event, x, y, flags, param):
 def run(
-    weights="yolov8n.pt",
+    weights="yolo11n.pt",
     source=None,
     device="cpu",
     view_img=False,

@@ -229,7 +229,7 @@ def run(
 def parse_opt():
     """Parse command line arguments."""
     parser = argparse.ArgumentParser()
-    parser.add_argument("--weights", type=str, default="yolov8n.pt", help="initial weights path")
+    parser.add_argument("--weights", type=str, default="yolo11n.pt", help="initial weights path")
     parser.add_argument("--device", default="", help="cuda device, i.e. 0 or 0,1,2,3 or cpu")
     parser.add_argument("--source", type=str, required=True, help="video file path")
     parser.add_argument("--view-img", action="store_true", help="show results")


@@ -424,6 +424,7 @@ nav:
         - Weights & Biases: integrations/weights-biases.md
         - Albumentations: integrations/albumentations.md
         - SONY IMX500: integrations/sony-imx500.md
+        - Rockchip RKNN: integrations/rockchip-rknn.md
     - HUB:
         - hub/index.md
         - Web:


@@ -210,7 +210,7 @@ def test_export_ncnn():
 @pytest.mark.skipif(True, reason="Test disabled as keras and tensorflow version conflicts with tflite export.")
 @pytest.mark.skipif(not LINUX or MACOS, reason="Skipping test on Windows and Macos")
 def test_export_imx():
-    """Test YOLOv8n exports to IMX format."""
+    """Test YOLO exports to IMX format."""
     model = YOLO("yolov8n.pt")
     file = model.export(format="imx", imgsz=32)
     YOLO(file)(SOURCE, imgsz=32)


@@ -1,6 +1,6 @@
 # Ultralytics 🚀 AGPL-3.0 License - https://ultralytics.com/license

-__version__ = "8.3.64"
+__version__ = "8.3.65"

 import os


@@ -19,6 +19,7 @@ PaddlePaddle | `paddle` | yolo11n_paddle_model/
 MNN | `mnn` | yolo11n.mnn
 NCNN | `ncnn` | yolo11n_ncnn_model/
 IMX | `imx` | yolo11n_imx_model/
+RKNN | `rknn` | yolo11n_rknn_model/

 Requirements:
     $ pip install "ultralytics[export]"

@@ -78,11 +79,13 @@ from ultralytics.nn.tasks import DetectionModel, SegmentationModel, WorldModel
 from ultralytics.utils import (
     ARM64,
     DEFAULT_CFG,
+    IS_COLAB,
     IS_JETSON,
     LINUX,
     LOGGER,
     MACOS,
     PYTHON_VERSION,
+    RKNN_CHIPS,
     ROOT,
     WINDOWS,
     __version__,

@@ -122,6 +125,7 @@ def export_formats():
         ["MNN", "mnn", ".mnn", True, True, ["batch", "half", "int8"]],
         ["NCNN", "ncnn", "_ncnn_model", True, True, ["batch", "half"]],
         ["IMX", "imx", "_imx_model", True, True, ["int8"]],
+        ["RKNN", "rknn", "_rknn_model", False, False, ["batch", "name"]],
     ]
     return dict(zip(["Format", "Argument", "Suffix", "CPU", "GPU", "Arguments"], zip(*x)))

@@ -226,22 +230,10 @@ class Exporter:
         flags = [x == fmt for x in fmts]
         if sum(flags) != 1:
             raise ValueError(f"Invalid export format='{fmt}'. Valid formats are {fmts}")
-        (
-            jit,
-            onnx,
-            xml,
-            engine,
-            coreml,
-            saved_model,
-            pb,
-            tflite,
-            edgetpu,
-            tfjs,
-            paddle,
-            mnn,
-            ncnn,
-            imx,
-        ) = flags  # export booleans
+        (jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, paddle, mnn, ncnn, imx, rknn) = (
+            flags  # export booleans
+        )
         is_tf_format = any((saved_model, pb, tflite, edgetpu, tfjs))

         # Device

@@ -277,6 +269,16 @@ class Exporter:
         if self.args.optimize:
             assert not ncnn, "optimize=True not compatible with format='ncnn', i.e. use optimize=False"
             assert self.device.type == "cpu", "optimize=True not compatible with cuda devices, i.e. use device='cpu'"
+        if rknn:
+            if not self.args.name:
+                LOGGER.warning(
+                    "WARNING ⚠️ Rockchip RKNN export requires a missing 'name' arg for processor type. Using default name='rk3588'."
+                )
+                self.args.name = "rk3588"
+            self.args.name = self.args.name.lower()
+            assert self.args.name in RKNN_CHIPS, (
+                f"Invalid processor name '{self.args.name}' for Rockchip RKNN export. Valid names are {RKNN_CHIPS}."
+            )
         if self.args.int8 and tflite:
             assert not getattr(model, "end2end", False), "TFLite INT8 export not supported for end2end models."
         if edgetpu:

@@ -417,6 +419,8 @@ class Exporter:
             f[12], _ = self.export_ncnn()
         if imx:
             f[13], _ = self.export_imx()
+        if rknn:
+            f[14], _ = self.export_rknn()

         # Finish
         f = [str(x) for x in f if x]  # filter out '' and None

@@ -746,7 +750,7 @@ class Exporter:
             model = IOSDetectModel(self.model, self.im) if self.args.nms else self.model
         else:
             if self.args.nms:
-                LOGGER.warning(f"{prefix} WARNING ⚠️ 'nms=True' is only available for Detect models like 'yolov8n.pt'.")
+                LOGGER.warning(f"{prefix} WARNING ⚠️ 'nms=True' is only available for Detect models like 'yolo11n.pt'.")
                 # TODO CoreML Segment and Pose model pipelining
             model = self.model

@@ -1141,6 +1145,35 @@ class Exporter:
         return f, None

     @try_export
+    def export_rknn(self, prefix=colorstr("RKNN:")):
+        """YOLO RKNN model export."""
+        LOGGER.info(f"\n{prefix} starting export with rknn-toolkit2...")
+
+        check_requirements("rknn-toolkit2")
+        if IS_COLAB:
+            # Prevent 'exit' from closing the notebook https://github.com/airockchip/rknn-toolkit2/issues/259
+            import builtins
+
+            builtins.exit = lambda: None
+
+        from rknn.api import RKNN
+
+        f, _ = self.export_onnx()
+        platform = self.args.name
+        export_path = Path(f"{Path(f).stem}_rknn_model")
+        export_path.mkdir(exist_ok=True)
+
+        rknn = RKNN(verbose=False)
+        rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform=platform)
+        _ = rknn.load_onnx(model=f)
+        _ = rknn.build(do_quantization=False)  # TODO: Add quantization support
+        f = f.replace(".onnx", f"-{platform}.rknn")
+        _ = rknn.export_rknn(f"{export_path / f}")
+        yaml_save(export_path / "metadata.yaml", self.metadata)
+        return export_path, None
+
+    @try_export
     def export_imx(self, prefix=colorstr("IMX:")):
         """YOLO IMX export."""
         gptq = False


@@ -194,7 +194,7 @@ class Model(nn.Module):
             (bool): True if the model string is a valid Triton Server URL, False otherwise.

         Examples:
-            >>> Model.is_triton_model("http://localhost:8000/v2/models/yolov8n")
+            >>> Model.is_triton_model("http://localhost:8000/v2/models/yolo11n")
             True
             >>> Model.is_triton_model("yolo11n.pt")
             False

@@ -247,7 +247,7 @@ class Model(nn.Module):
         Examples:
             >>> model = Model()
-            >>> model._new("yolov8n.yaml", task="detect", verbose=True)
+            >>> model._new("yolo11n.yaml", task="detect", verbose=True)
         """
         cfg_dict = yaml_model_load(cfg)
         self.cfg = cfg

@@ -283,7 +283,7 @@ class Model(nn.Module):
         """
         if weights.lower().startswith(("https://", "http://", "rtsp://", "rtmp://", "tcp://")):
             weights = checks.check_file(weights, download_dir=SETTINGS["weights_dir"])  # download and return local file
-        weights = checks.check_model_file_from_stem(weights)  # add suffix, i.e. yolov8n -> yolov8n.pt
+        weights = checks.check_model_file_from_stem(weights)  # add suffix, i.e. yolo11n -> yolo11n.pt

         if Path(weights).suffix == ".pt":
             self.model, self.ckpt = attempt_load_one_weight(weights)

@@ -313,7 +313,7 @@ class Model(nn.Module):
         Examples:
             >>> model = Model("yolo11n.pt")
             >>> model._check_is_pytorch_model()  # No error raised
-            >>> model = Model("yolov8n.onnx")
+            >>> model = Model("yolo11n.onnx")
             >>> model._check_is_pytorch_model()  # Raises TypeError
         """
         pt_str = isinstance(self.model, (str, Path)) and Path(self.model).suffix == ".pt"

@@ -323,7 +323,7 @@ class Model(nn.Module):
                 f"model='{self.model}' should be a *.pt PyTorch model to run this method, but is a different format. "
                 f"PyTorch models can train, val, predict and export, i.e. 'model.train(data=...)', but exported "
                 f"formats like ONNX, TensorRT etc. only support 'predict' and 'val' modes, "
-                f"i.e. 'yolo predict model=yolov8n.onnx'.\nTo run CUDA or MPS inference please pass the device "
+                f"i.e. 'yolo predict model=yolo11n.onnx'.\nTo run CUDA or MPS inference please pass the device "
                 f"argument directly in your inference command, i.e. 'model.predict(source=..., device=0)'"
             )


@@ -3,7 +3,7 @@
 Run prediction on images, videos, directories, globs, YouTube, webcam, streams, etc.

 Usage - sources:
-    $ yolo mode=predict model=yolov8n.pt source=0  # webcam
+    $ yolo mode=predict model=yolo11n.pt source=0  # webcam
         img.jpg  # image
         vid.mp4  # video
         screen  # screenshot

@@ -15,19 +15,21 @@ Usage - sources:
         'rtsp://example.com/media.mp4'  # RTSP, RTMP, HTTP, TCP stream

 Usage - formats:
-    $ yolo mode=predict model=yolov8n.pt  # PyTorch
-        yolov8n.torchscript  # TorchScript
-        yolov8n.onnx  # ONNX Runtime or OpenCV DNN with dnn=True
-        yolov8n_openvino_model  # OpenVINO
-        yolov8n.engine  # TensorRT
-        yolov8n.mlpackage  # CoreML (macOS-only)
-        yolov8n_saved_model  # TensorFlow SavedModel
-        yolov8n.pb  # TensorFlow GraphDef
-        yolov8n.tflite  # TensorFlow Lite
-        yolov8n_edgetpu.tflite  # TensorFlow Edge TPU
-        yolov8n_paddle_model  # PaddlePaddle
-        yolov8n.mnn  # MNN
-        yolov8n_ncnn_model  # NCNN
+    $ yolo mode=predict model=yolo11n.pt  # PyTorch
+        yolo11n.torchscript  # TorchScript
+        yolo11n.onnx  # ONNX Runtime or OpenCV DNN with dnn=True
+        yolo11n_openvino_model  # OpenVINO
+        yolo11n.engine  # TensorRT
+        yolo11n.mlpackage  # CoreML (macOS-only)
+        yolo11n_saved_model  # TensorFlow SavedModel
+        yolo11n.pb  # TensorFlow GraphDef
+        yolo11n.tflite  # TensorFlow Lite
+        yolo11n_edgetpu.tflite  # TensorFlow Edge TPU
+        yolo11n_paddle_model  # PaddlePaddle
+        yolo11n.mnn  # MNN
+        yolo11n_ncnn_model  # NCNN
+        yolo11n_imx_model  # Sony IMX
+        yolo11n_rknn_model  # Rockchip RKNN
 """

 import platform


@@ -1718,7 +1718,7 @@ class OBB(BaseTensor):
     Examples:
         >>> import torch
         >>> from ultralytics import YOLO
-        >>> model = YOLO("yolov8n-obb.pt")
+        >>> model = YOLO("yolo11n-obb.pt")
         >>> results = model("path/to/image.jpg")
         >>> for result in results:
         ...     obb = result.obb


@@ -3,7 +3,7 @@
 Train a model on a dataset.

 Usage:
-    $ yolo mode=train model=yolov8n.pt data=coco8.yaml imgsz=640 epochs=100 batch=16
+    $ yolo mode=train model=yolo11n.pt data=coco8.yaml imgsz=640 epochs=100 batch=16
 """

 import gc

@@ -128,7 +128,7 @@ class BaseTrainer:
             self.args.workers = 0  # faster CPU training as time dominated by inference, not dataloading

         # Model and Dataset
-        self.model = check_model_file_from_stem(self.args.model)  # add suffix, i.e. yolov8n -> yolov8n.pt
+        self.model = check_model_file_from_stem(self.args.model)  # add suffix, i.e. yolo11n -> yolo11n.pt
         with torch_distributed_zero_first(LOCAL_RANK):  # avoid auto-downloading dataset multiple times
             self.trainset, self.testset = self.get_dataset()
         self.ema = None


@@ -8,7 +8,7 @@ that yield the best model performance. This is particularly crucial in deep lear
 where small changes in hyperparameters can lead to significant differences in model accuracy and efficiency.

 Example:
-    Tune hyperparameters for YOLOv8n on COCO8 at imgsz=640 and epochs=30 for 300 tuning iterations.
+    Tune hyperparameters for YOLO11n on COCO8 at imgsz=640 and epochs=30 for 300 tuning iterations.
     ```python
     from ultralytics import YOLO

@@ -50,7 +50,7 @@ class Tuner:
         Executes the hyperparameter evolution across multiple iterations.

         Example:
-            Tune hyperparameters for YOLOv8n on COCO8 at imgsz=640 and epochs=30 for 300 tuning iterations.
+            Tune hyperparameters for YOLO11n on COCO8 at imgsz=640 and epochs=30 for 300 tuning iterations.
             ```python
             from ultralytics import YOLO


@@ -3,22 +3,24 @@
 Check a model's accuracy on a test or val split of a dataset.

 Usage:
-    $ yolo mode=val model=yolov8n.pt data=coco8.yaml imgsz=640
+    $ yolo mode=val model=yolo11n.pt data=coco8.yaml imgsz=640

 Usage - formats:
-    $ yolo mode=val model=yolov8n.pt  # PyTorch
-        yolov8n.torchscript  # TorchScript
-        yolov8n.onnx  # ONNX Runtime or OpenCV DNN with dnn=True
-        yolov8n_openvino_model  # OpenVINO
-        yolov8n.engine  # TensorRT
-        yolov8n.mlpackage  # CoreML (macOS-only)
-        yolov8n_saved_model  # TensorFlow SavedModel
-        yolov8n.pb  # TensorFlow GraphDef
-        yolov8n.tflite  # TensorFlow Lite
-        yolov8n_edgetpu.tflite  # TensorFlow Edge TPU
-        yolov8n_paddle_model  # PaddlePaddle
-        yolov8n.mnn  # MNN
-        yolov8n_ncnn_model  # NCNN
+    $ yolo mode=val model=yolo11n.pt  # PyTorch
+        yolo11n.torchscript  # TorchScript
+        yolo11n.onnx  # ONNX Runtime or OpenCV DNN with dnn=True
+        yolo11n_openvino_model  # OpenVINO
+        yolo11n.engine  # TensorRT
+        yolo11n.mlpackage  # CoreML (macOS-only)
+        yolo11n_saved_model  # TensorFlow SavedModel
+        yolo11n.pb  # TensorFlow GraphDef
+        yolo11n.tflite  # TensorFlow Lite
+        yolo11n_edgetpu.tflite  # TensorFlow Edge TPU
+        yolo11n_paddle_model  # PaddlePaddle
+        yolo11n.mnn  # MNN
+        yolo11n_ncnn_model  # NCNN
+        yolo11n_imx_model  # Sony IMX
+        yolo11n_rknn_model  # Rockchip RKNN
 """

 import json


@@ -21,7 +21,7 @@ class ClassificationPredictor(BasePredictor):
         from ultralytics.utils import ASSETS
         from ultralytics.models.yolo.classify import ClassificationPredictor

-        args = dict(model="yolov8n-cls.pt", source=ASSETS)
+        args = dict(model="yolo11n-cls.pt", source=ASSETS)
         predictor = ClassificationPredictor(overrides=args)
         predictor.predict_cli()
         ```


@@ -24,7 +24,7 @@ class ClassificationTrainer(BaseTrainer):
         ```python
         from ultralytics.models.yolo.classify import ClassificationTrainer

-        args = dict(model="yolov8n-cls.pt", data="imagenet10", epochs=3)
+        args = dict(model="yolo11n-cls.pt", data="imagenet10", epochs=3)
         trainer = ClassificationTrainer(overrides=args)
         trainer.train()
         ```


@@ -20,7 +20,7 @@ class ClassificationValidator(BaseValidator):
         ```python
         from ultralytics.models.yolo.classify import ClassificationValidator

-        args = dict(model="yolov8n-cls.pt", data="imagenet10")
+        args = dict(model="yolo11n-cls.pt", data="imagenet10")
         validator = ClassificationValidator(args=args)
         validator()
         ```


@@ -16,7 +16,7 @@ class OBBPredictor(DetectionPredictor):
         from ultralytics.utils import ASSETS
         from ultralytics.models.yolo.obb import OBBPredictor

-        args = dict(model="yolov8n-obb.pt", source=ASSETS)
+        args = dict(model="yolo11n-obb.pt", source=ASSETS)
         predictor = OBBPredictor(overrides=args)
         predictor.predict_cli()
         ```


@@ -15,7 +15,7 @@ class OBBTrainer(yolo.detect.DetectionTrainer):
         ```python
         from ultralytics.models.yolo.obb import OBBTrainer

-        args = dict(model="yolov8n-obb.pt", data="dota8.yaml", epochs=3)
+        args = dict(model="yolo11n-obb.pt", data="dota8.yaml", epochs=3)
         trainer = OBBTrainer(overrides=args)
         trainer.train()
         ```


@@ -18,7 +18,7 @@ class OBBValidator(DetectionValidator):
         ```python
         from ultralytics.models.yolo.obb import OBBValidator

-        args = dict(model="yolov8n-obb.pt", data="dota8.yaml")
+        args = dict(model="yolo11n-obb.pt", data="dota8.yaml")
         validator = OBBValidator(args=args)
         validator(model=args["model"])
         ```


@@ -14,7 +14,7 @@ class PosePredictor(DetectionPredictor):
         from ultralytics.utils import ASSETS
         from ultralytics.models.yolo.pose import PosePredictor

-        args = dict(model="yolov8n-pose.pt", source=ASSETS)
+        args = dict(model="yolo11n-pose.pt", source=ASSETS)
         predictor = PosePredictor(overrides=args)
         predictor.predict_cli()
         ```


@@ -16,7 +16,7 @@ class PoseTrainer(yolo.detect.DetectionTrainer):
         ```python
         from ultralytics.models.yolo.pose import PoseTrainer

-        args = dict(model="yolov8n-pose.pt", data="coco8-pose.yaml", epochs=3)
+        args = dict(model="yolo11n-pose.pt", data="coco8-pose.yaml", epochs=3)
         trainer = PoseTrainer(overrides=args)
         trainer.train()
         ```


@@ -20,7 +20,7 @@ class PoseValidator(DetectionValidator):
         ```python
         from ultralytics.models.yolo.pose import PoseValidator

-        args = dict(model="yolov8n-pose.pt", data="coco8-pose.yaml")
+        args = dict(model="yolo11n-pose.pt", data="coco8-pose.yaml")
         validator = PoseValidator(args=args)
         validator()
         ```


@@ -14,7 +14,7 @@ class SegmentationPredictor(DetectionPredictor):
         from ultralytics.utils import ASSETS
         from ultralytics.models.yolo.segment import SegmentationPredictor

-        args = dict(model="yolov8n-seg.pt", source=ASSETS)
+        args = dict(model="yolo11n-seg.pt", source=ASSETS)
         predictor = SegmentationPredictor(overrides=args)
         predictor.predict_cli()
         ```


@@ -16,7 +16,7 @@ class SegmentationTrainer(yolo.detect.DetectionTrainer):
         ```python
         from ultralytics.models.yolo.segment import SegmentationTrainer

-        args = dict(model="yolov8n-seg.pt", data="coco8-seg.yaml", epochs=3)
+        args = dict(model="yolo11n-seg.pt", data="coco8-seg.yaml", epochs=3)
         trainer = SegmentationTrainer(overrides=args)
         trainer.train()
         ```


@@ -22,7 +22,7 @@ class SegmentationValidator(DetectionValidator):
         ```python
         from ultralytics.models.yolo.segment import SegmentationValidator

-        args = dict(model="yolov8n-seg.pt", data="coco8-seg.yaml")
+        args = dict(model="yolo11n-seg.pt", data="coco8-seg.yaml")
         validator = SegmentationValidator(args=args)
         validator()
         ```


@@ -14,7 +14,7 @@ import torch.nn as nn
 from PIL import Image

 from ultralytics.utils import ARM64, IS_JETSON, IS_RASPBERRYPI, LINUX, LOGGER, PYTHON_VERSION, ROOT, yaml_load
-from ultralytics.utils.checks import check_requirements, check_suffix, check_version, check_yaml
+from ultralytics.utils.checks import check_requirements, check_suffix, check_version, check_yaml, is_rockchip
 from ultralytics.utils.downloads import attempt_download_asset, is_url

@@ -60,7 +60,7 @@ class AutoBackend(nn.Module):
     Supported Formats and Naming Conventions:
         | Format | File Suffix |
-        |-----------------------|-------------------|
+        | --------------------- | ----------------- |
         | PyTorch | *.pt |
         | TorchScript | *.torchscript |
         | ONNX Runtime | *.onnx |

@@ -75,6 +75,8 @@ class AutoBackend(nn.Module):
         | PaddlePaddle | *_paddle_model/ |
         | MNN | *.mnn |
         | NCNN | *_ncnn_model/ |
+        | IMX | *_imx_model/ |
+        | RKNN | *_rknn_model/ |

     This class offers dynamic backend switching capabilities based on the input model format, making it easier to deploy
     models across various platforms.

@@ -124,10 +126,11 @@ class AutoBackend(nn.Module):
             mnn,
             ncnn,
             imx,
+            rknn,
             triton,
         ) = self._model_type(w)
         fp16 &= pt or jit or onnx or xml or engine or nn_module or triton  # FP16
-        nhwc = coreml or saved_model or pb or tflite or edgetpu  # BHWC formats (vs torch BCWH)
+        nhwc = coreml or saved_model or pb or tflite or edgetpu or rknn  # BHWC formats (vs torch BCWH)
         stride = 32  # default stride
         model, metadata, task = None, None, None

@@ -466,6 +469,22 @@ class AutoBackend(nn.Module):
             model = TritonRemoteModel(w)
             metadata = model.metadata

+        # RKNN
+        elif rknn:
+            if not is_rockchip():
+                raise OSError("RKNN inference is only supported on Rockchip devices.")
+            LOGGER.info(f"Loading {w} for RKNN inference...")
+            check_requirements("rknn-toolkit-lite2")
+            from rknnlite.api import RKNNLite
+
+            w = Path(w)
+            if not w.is_file():  # if not *.rknn
+                w = next(w.rglob("*.rknn"))  # get *.rknn file from *_rknn_model dir
+            rknn_model = RKNNLite()
+            rknn_model.load_rknn(w)
+            ret = rknn_model.init_runtime()
+            metadata = Path(w).parent / "metadata.yaml"
+
         # Any other format (unsupported)
         else:
             from ultralytics.engine.exporter import export_formats

@@ -652,6 +671,12 @@ class AutoBackend(nn.Module):
             im = im.cpu().numpy()  # torch to numpy
             y = self.model(im)

+        # RKNN
+        elif self.rknn:
+            im = (im.cpu().numpy() * 255).astype("uint8")
+            im = im if isinstance(im, (list, tuple)) else [im]
+            y = self.rknn_model.inference(inputs=im)
+
         # TensorFlow (SavedModel, GraphDef, Lite, Edge TPU)
         else:
             im = im.cpu().numpy()
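For context on the runtime half of this change: on-device inference goes through rknn-toolkit-lite2, and stripped of the AutoBackend plumbing, the flow added above looks roughly like this (model path and input shape illustrative):

```python
import numpy as np
from rknnlite.api import RKNNLite  # pip install rknn-toolkit-lite2 (on the Rockchip device)

rknn = RKNNLite()
rknn.load_rknn("yolo11n_rknn_model/yolo11n-rk3588.rknn")  # illustrative path from the exporter
rknn.init_runtime()

# The exporter configured mean 0 / std 255, so inputs are uint8 NHWC in [0, 255]
im = np.zeros((1, 640, 640, 3), dtype=np.uint8)  # replace with a real letterboxed frame
outputs = rknn.inference(inputs=[im])  # raw output tensors, decoded by Ultralytics postprocessing
rknn.release()
```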


@@ -296,10 +296,10 @@ class BaseModel(nn.Module):
 class DetectionModel(BaseModel):
-    """YOLOv8 detection model."""
+    """YOLO detection model."""

-    def __init__(self, cfg="yolov8n.yaml", ch=3, nc=None, verbose=True):  # model, input channels, number of classes
-        """Initialize the YOLOv8 detection model with the given config and parameters."""
+    def __init__(self, cfg="yolo11n.yaml", ch=3, nc=None, verbose=True):  # model, input channels, number of classes
+        """Initialize the YOLO detection model with the given config and parameters."""
         super().__init__()
         self.yaml = cfg if isinstance(cfg, dict) else yaml_model_load(cfg)  # cfg dict
         if self.yaml["backbone"][0][2] == "Silence":

@@ -388,10 +388,10 @@ class DetectionModel(BaseModel):
 class OBBModel(DetectionModel):
-    """YOLOv8 Oriented Bounding Box (OBB) model."""
+    """YOLO Oriented Bounding Box (OBB) model."""

-    def __init__(self, cfg="yolov8n-obb.yaml", ch=3, nc=None, verbose=True):
-        """Initialize YOLOv8 OBB model with given config and parameters."""
+    def __init__(self, cfg="yolo11n-obb.yaml", ch=3, nc=None, verbose=True):
+        """Initialize YOLO OBB model with given config and parameters."""
         super().__init__(cfg=cfg, ch=ch, nc=nc, verbose=verbose)

     def init_criterion(self):

@@ -400,9 +400,9 @@ class OBBModel(DetectionModel):
 class SegmentationModel(DetectionModel):
-    """YOLOv8 segmentation model."""
+    """YOLO segmentation model."""

-    def __init__(self, cfg="yolov8n-seg.yaml", ch=3, nc=None, verbose=True):
+    def __init__(self, cfg="yolo11n-seg.yaml", ch=3, nc=None, verbose=True):
         """Initialize YOLOv8 segmentation model with given config and parameters."""
         super().__init__(cfg=cfg, ch=ch, nc=nc, verbose=verbose)

@@ -412,9 +412,9 @@ class SegmentationModel(DetectionModel):
 class PoseModel(DetectionModel):
-    """YOLOv8 pose model."""
+    """YOLO pose model."""

-    def __init__(self, cfg="yolov8n-pose.yaml", ch=3, nc=None, data_kpt_shape=(None, None), verbose=True):
+    def __init__(self, cfg="yolo11n-pose.yaml", ch=3, nc=None, data_kpt_shape=(None, None), verbose=True):
         """Initialize YOLOv8 Pose model."""
         if not isinstance(cfg, dict):
             cfg = yaml_model_load(cfg)  # load model YAML

@@ -429,9 +429,9 @@ class PoseModel(DetectionModel):
 class ClassificationModel(BaseModel):
-    """YOLOv8 classification model."""
+    """YOLO classification model."""

-    def __init__(self, cfg="yolov8n-cls.yaml", ch=3, nc=None, verbose=True):
+    def __init__(self, cfg="yolo11n-cls.yaml", ch=3, nc=None, verbose=True):
         """Init ClassificationModel with YAML, channels, number of classes, verbose flag."""
         super().__init__()
         self._from_yaml(cfg, ch, nc, verbose)

@@ -842,14 +842,14 @@ def torch_safe_load(weight, safe_only=False):
                     f"with https://github.com/ultralytics/yolov5.\nThis model is NOT forwards compatible with "
                     f"YOLOv8 at https://github.com/ultralytics/ultralytics."
                     f"\nRecommend fixes are to train a new model using the latest 'ultralytics' package or to "
-                    f"run a command with an official Ultralytics model, i.e. 'yolo predict model=yolov8n.pt'"
+                    f"run a command with an official Ultralytics model, i.e. 'yolo predict model=yolo11n.pt'"
                 )
             ) from e
         LOGGER.warning(
             f"WARNING ⚠️ {weight} appears to require '{e.name}', which is not in Ultralytics requirements."
             f"\nAutoInstall will run now for '{e.name}' but this feature will be removed in the future."
             f"\nRecommend fixes are to train a new model using the latest 'ultralytics' package or to "
-            f"run a command with an official Ultralytics model, i.e. 'yolo predict model=yolov8n.pt'"
+            f"run a command with an official Ultralytics model, i.e. 'yolo predict model=yolo11n.pt'"
         )
         check_requirements(e.name)  # install missing module
         ckpt = torch.load(file, map_location="cpu")


@@ -25,7 +25,7 @@ class AIGym(BaseSolution):
         monitor: Processes a frame to detect poses, calculate angles, and count repetitions.

     Examples:
-        >>> gym = AIGym(model="yolov8n-pose.pt")
+        >>> gym = AIGym(model="yolo11n-pose.pt")
         >>> image = cv2.imread("gym_scene.jpg")
         >>> processed_image = gym.monitor(image)
         >>> cv2.imshow("Processed Image", processed_image)


@@ -26,7 +26,7 @@ class Heatmap(ObjectCounter):
     Examples:
         >>> from ultralytics.solutions import Heatmap
-        >>> heatmap = Heatmap(model="yolov8n.pt", colormap=cv2.COLORMAP_JET)
+        >>> heatmap = Heatmap(model="yolo11n.pt", colormap=cv2.COLORMAP_JET)
         >>> frame = cv2.imread("frame.jpg")
         >>> processed_frame = heatmap.generate_heatmap(frame)
     """


@@ -178,7 +178,7 @@ class ParkingManagement(BaseSolution):
     Examples:
         >>> from ultralytics.solutions import ParkingManagement
-        >>> parking_manager = ParkingManagement(model="yolov8n.pt", json_file="parking_regions.json")
+        >>> parking_manager = ParkingManagement(model="yolo11n.pt", json_file="parking_regions.json")
        >>> print(f"Occupied spaces: {parking_manager.pr_info['Occupancy']}")
        >>> print(f"Available spaces: {parking_manager.pr_info['Available']}")
    """


@@ -35,7 +35,7 @@ class BaseSolution:
         display_output: Display the results of processing, including showing frames or saving results.

     Examples:
-        >>> solution = BaseSolution(model="yolov8n.pt", region=[(0, 0), (100, 0), (100, 100), (0, 100)])
+        >>> solution = BaseSolution(model="yolo11n.pt", region=[(0, 0), (100, 0), (100, 100), (0, 100)])
         >>> solution.initialize_region()
         >>> image = cv2.imread("image.jpg")
         >>> solution.extract_tracks(image)


@@ -51,6 +51,20 @@ PYTHON_VERSION = platform.python_version()
 TORCH_VERSION = torch.__version__
 TORCHVISION_VERSION = importlib.metadata.version("torchvision")  # faster than importing torchvision
 IS_VSCODE = os.environ.get("TERM_PROGRAM", False) == "vscode"
+RKNN_CHIPS = frozenset(
+    {
+        "rk3588",
+        "rk3576",
+        "rk3566",
+        "rk3568",
+        "rk3562",
+        "rv1103",
+        "rv1106",
+        "rv1103b",
+        "rv1106b",
+        "rk2118",
+    }
+)  # Rockchip processors available for export

 HELP_MSG = """
     Examples for running Ultralytics:


@@ -4,25 +4,26 @@ Benchmark a YOLO model formats for speed and accuracy.
 Usage:
     from ultralytics.utils.benchmarks import ProfileModels, benchmark

-    ProfileModels(['yolov8n.yaml', 'yolov8s.yaml']).profile()
-    benchmark(model='yolov8n.pt', imgsz=160)
+    ProfileModels(['yolo11n.yaml', 'yolov8s.yaml']).profile()
+    benchmark(model='yolo11n.pt', imgsz=160)

 Format | `format=argument` | Model
 --- | --- | ---
-PyTorch | - | yolov8n.pt
-TorchScript | `torchscript` | yolov8n.torchscript
-ONNX | `onnx` | yolov8n.onnx
-OpenVINO | `openvino` | yolov8n_openvino_model/
-TensorRT | `engine` | yolov8n.engine
-CoreML | `coreml` | yolov8n.mlpackage
-TensorFlow SavedModel | `saved_model` | yolov8n_saved_model/
-TensorFlow GraphDef | `pb` | yolov8n.pb
-TensorFlow Lite | `tflite` | yolov8n.tflite
-TensorFlow Edge TPU | `edgetpu` | yolov8n_edgetpu.tflite
-TensorFlow.js | `tfjs` | yolov8n_web_model/
-PaddlePaddle | `paddle` | yolov8n_paddle_model/
-MNN | `mnn` | yolov8n.mnn
-NCNN | `ncnn` | yolov8n_ncnn_model/
+PyTorch | - | yolo11n.pt
+TorchScript | `torchscript` | yolo11n.torchscript
+ONNX | `onnx` | yolo11n.onnx
+OpenVINO | `openvino` | yolo11n_openvino_model/
+TensorRT | `engine` | yolo11n.engine
+CoreML | `coreml` | yolo11n.mlpackage
+TensorFlow SavedModel | `saved_model` | yolo11n_saved_model/
+TensorFlow GraphDef | `pb` | yolo11n.pb
+TensorFlow Lite | `tflite` | yolo11n.tflite
+TensorFlow Edge TPU | `edgetpu` | yolo11n_edgetpu.tflite
+TensorFlow.js | `tfjs` | yolo11n_web_model/
+PaddlePaddle | `paddle` | yolo11n_paddle_model/
+MNN | `mnn` | yolo11n.mnn
+NCNN | `ncnn` | yolo11n_ncnn_model/
+RKNN | `rknn` | yolo11n_rknn_model/
 """

 import glob

@@ -41,7 +42,7 @@ from ultralytics import YOLO, YOLOWorld
 from ultralytics.cfg import TASK2DATA, TASK2METRIC
 from ultralytics.engine.exporter import export_formats
 from ultralytics.utils import ARM64, ASSETS, IS_JETSON, IS_RASPBERRYPI, LINUX, LOGGER, MACOS, TQDM, WEIGHTS_DIR
-from ultralytics.utils.checks import IS_PYTHON_3_12, check_requirements, check_yolo
+from ultralytics.utils.checks import IS_PYTHON_3_12, check_requirements, check_yolo, is_rockchip
 from ultralytics.utils.downloads import safe_download
 from ultralytics.utils.files import file_size
 from ultralytics.utils.torch_utils import get_cpu_info, select_device

@@ -121,6 +122,11 @@ def benchmark(
                 assert not isinstance(model, YOLOWorld), "YOLOWorldv2 IMX exports not supported"
                 assert model.task == "detect", "IMX only supported for detection task"
                 assert "C2f" in model.__str__(), "IMX only supported for YOLOv8"
+            if i == 15:  # RKNN
+                assert not isinstance(model, YOLOWorld), "YOLOWorldv2 RKNN exports not supported yet"
+                assert not is_end2end, "End-to-end models not supported by RKNN yet"
+                assert LINUX, "RKNN only supported on Linux"
+                assert not is_rockchip(), "RKNN Inference only supported on Rockchip devices"
             if "cpu" in device.type:
                 assert cpu, "inference not supported on CPU"
             if "cuda" in device.type:

@@ -334,7 +340,7 @@ class ProfileModels:
     Examples:
         Profile models and print results
         >>> from ultralytics.utils.benchmarks import ProfileModels
-        >>> profiler = ProfileModels(["yolov8n.yaml", "yolov8s.yaml"], imgsz=640)
+        >>> profiler = ProfileModels(["yolo11n.yaml", "yolov8s.yaml"], imgsz=640)
         >>> profiler.profile()
     """

@@ -368,7 +374,7 @@ class ProfileModels:
         Examples:
             Initialize and profile models
             >>> from ultralytics.utils.benchmarks import ProfileModels
-            >>> profiler = ProfileModels(["yolov8n.yaml", "yolov8s.yaml"], imgsz=640)
+            >>> profiler = ProfileModels(["yolo11n.yaml", "yolov8s.yaml"], imgsz=640)
             >>> profiler.profile()
         """
         self.paths = paths


@@ -19,6 +19,7 @@ import requests
 import torch

 from ultralytics.utils import (
+    ARM64,
     ASSETS,
     AUTOINSTALL,
     IS_COLAB,

@@ -30,6 +31,7 @@ from ultralytics.utils import (
     MACOS,
     ONLINE,
     PYTHON_VERSION,
+    RKNN_CHIPS,
     ROOT,
     TORCHVISION_VERSION,
     USER_CONFIG_DIR,

@@ -487,10 +489,10 @@ def check_yolov5u_filename(file: str, verbose: bool = True):
     return file


-def check_model_file_from_stem(model="yolov8n"):
+def check_model_file_from_stem(model="yolo11n"):
     """Return a model filename from a valid model stem."""
     if model and not Path(model).suffix and Path(model).stem in downloads.GITHUB_ASSETS_STEMS:
-        return Path(model).with_suffix(".pt")  # add suffix, i.e. yolov8n -> yolov8n.pt
+        return Path(model).with_suffix(".pt")  # add suffix, i.e. yolo11n -> yolo11n.pt
     else:
         return model

@@ -782,6 +784,21 @@ def cuda_is_available() -> bool:
     return cuda_device_count() > 0


+def is_rockchip():
+    """Check if the current environment is running on a Rockchip SoC."""
+    if LINUX and ARM64:
+        try:
+            with open("/proc/device-tree/compatible") as f:
+                dev_str = f.read()
+                *_, soc = dev_str.split(",")
+                if soc.replace("\x00", "") in RKNN_CHIPS:
+                    return True
+        except OSError:
+            return False
+    else:
+        return False
+
+
 def is_sudo_available() -> bool:
     """
     Check if the sudo command is available in the environment.

@@ -798,5 +815,7 @@ def is_sudo_available() -> bool:
 # Run checks and define constants
 check_python("3.8", hard=False, verbose=True)  # check python version
 check_torchvision()  # check torch-torchvision compatibility
+
+# Define constants
 IS_PYTHON_MINIMUM_3_10 = check_python("3.10", hard=False)
 IS_PYTHON_3_12 = PYTHON_VERSION.startswith("3.12")


@@ -405,7 +405,7 @@ def get_github_assets(repo="ultralytics/assets", version="latest", retry=False):
         LOGGER.warning(f"⚠️ GitHub assets check failure for {url}: {r.status_code} {r.reason}")
         return "", []
     data = r.json()
-    return data["tag_name"], [x["name"] for x in data["assets"]]  # tag, assets i.e. ['yolov8n.pt', 'yolov8s.pt', ...]
+    return data["tag_name"], [x["name"] for x in data["assets"]]  # tag, assets i.e. ['yolo11n.pt', 'yolov8s.pt', ...]


 def attempt_download_asset(file, repo="ultralytics/assets", release="v8.3.0", **kwargs):


@@ -297,7 +297,7 @@ class v8SegmentationLoss(v8DetectionLoss):
             raise TypeError(
                 "ERROR ❌ segment dataset incorrectly formatted or not a segment dataset.\n"
                 "This error can occur when incorrectly training a 'segment' model on a 'detect' dataset, "
-                "i.e. 'yolo train model=yolov8n-seg.pt data=coco8.yaml'.\nVerify your dataset is a "
+                "i.e. 'yolo train model=yolo11n-seg.pt data=coco8.yaml'.\nVerify your dataset is a "
                 "correctly formatted 'segment' dataset using 'data=coco8-seg.yaml' "
                 "as an example.\nSee https://docs.ultralytics.com/datasets/segment/ for help."
             ) from e

@@ -666,7 +666,7 @@ class v8OBBLoss(v8DetectionLoss):
             raise TypeError(
                 "ERROR ❌ OBB dataset incorrectly formatted or not a OBB dataset.\n"
                 "This error can occur when incorrectly training a 'OBB' model on a 'detect' dataset, "
-                "i.e. 'yolo train model=yolov8n-obb.pt data=dota8.yaml'.\nVerify your dataset is a "
+                "i.e. 'yolo train model=yolo11n-obb.pt data=dota8.yaml'.\nVerify your dataset is a "
                 "correctly formatted 'OBB' dataset using 'data=dota8.yaml' "
                 "as an example.\nSee https://docs.ultralytics.com/datasets/obb/ for help."
             ) from e


@@ -30,10 +30,10 @@ def run_ray_tune(
        ```python
        from ultralytics import YOLO

-       # Load a YOLOv8n model
+       # Load a YOLO11n model
        model = YOLO("yolo11n.pt")

-       # Start tuning hyperparameters for YOLOv8n training on the COCO8 dataset
+       # Start tuning hyperparameters for YOLO11n training on the COCO8 dataset
        result_grid = model.tune(data="coco8.yaml", use_ray=True)
        ```
    """
""" """