ultralytics 8.0.235 YOLOv8 OBB train, val, predict and export (#4499)

Co-authored-by: Yash Khurana <ykhurana6@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Swamita Gupta <swamita2001@gmail.com>
Co-authored-by: Ayush Chaurasia <ayush.chaurarsia@gmail.com>
Co-authored-by: Laughing-q <1185102784@qq.com>
Co-authored-by: Laughing <61612323+Laughing-q@users.noreply.github.com>
Co-authored-by: Laughing-q <1182102784@qq.com>
Glenn Jocher 2024-01-05 03:00:26 +01:00 committed by GitHub
parent f702b34a50
commit 072291bc78
52 changed files with 2090 additions and 524 deletions


@@ -32,16 +32,17 @@ YOLOv8 is the latest iteration in the YOLO series of real-time object detectors,
## Supported Tasks and Modes
The YOLOv8 series offers a diverse range of models, each specialized for specific tasks in computer vision. These models are designed to cater to various requirements, from object detection to more complex tasks like instance segmentation, pose/keypoints detection, oriented object detection, and classification.
Each variant of the YOLOv8 series is optimized for its respective task, ensuring high performance and accuracy. Additionally, these models are compatible with various operational modes including [Inference](../modes/predict.md), [Validation](../modes/val.md), [Training](../modes/train.md), and [Export](../modes/export.md), facilitating their use in different stages of deployment and development.
| Model | Filenames | Task | Inference | Validation | Training | Export |
|-------------|----------------------------------------------------------------------------------------------------------------|----------------------------------------------|-----------|------------|----------|--------|
| YOLOv8 | `yolov8n.pt` `yolov8s.pt` `yolov8m.pt` `yolov8l.pt` `yolov8x.pt` | [Detection](../tasks/detect.md) | ✅ | ✅ | ✅ | ✅ |
| YOLOv8-seg | `yolov8n-seg.pt` `yolov8s-seg.pt` `yolov8m-seg.pt` `yolov8l-seg.pt` `yolov8x-seg.pt` | [Instance Segmentation](../tasks/segment.md) | ✅ | ✅ | ✅ | ✅ |
| YOLOv8-pose | `yolov8n-pose.pt` `yolov8s-pose.pt` `yolov8m-pose.pt` `yolov8l-pose.pt` `yolov8x-pose.pt` `yolov8x-pose-p6.pt` | [Pose/Keypoints](../tasks/pose.md) | ✅ | ✅ | ✅ | ✅ |
| YOLOv8-obb | `yolov8n-obb.pt` `yolov8s-obb.pt` `yolov8m-obb.pt` `yolov8l-obb.pt` `yolov8x-obb.pt` | [Oriented Detection](../tasks/obb.md) | ✅ | ✅ | ✅ | ✅ |
| YOLOv8-cls | `yolov8n-cls.pt` `yolov8s-cls.pt` `yolov8m-cls.pt` `yolov8l-cls.pt` `yolov8x-cls.pt` | [Classification](../tasks/classify.md) | ✅ | ✅ | ✅ | ✅ |
This table provides an overview of the YOLOv8 model variants, highlighting their applicability in specific tasks and their compatibility with various operational modes such as Inference, Validation, Training, and Export. It showcases the versatility and robustness of the YOLOv8 series, making them suitable for a variety of applications in computer vision.
@@ -99,7 +100,7 @@ This table provides an overview of the YOLOv8 model variants, highlighting their
=== "Pose (COCO)"
See [Pose Estimation Docs](https://docs.ultralytics.com/tasks/pose/) for usage examples with these models trained on [COCO](https://docs.ultralytics.com/datasets/pose/coco/), which include 1 pre-trained class, 'person'.
| Model | size<br><sup>(pixels) | mAP<sup>pose<br>50-95 | mAP<sup>pose<br>50 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| ---------------------------------------------------------------------------------------------------- | --------------------- | --------------------- | ------------------ | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
@@ -110,11 +111,23 @@ This table provides an overview of the YOLOv8 model variants, highlighting their
| [YOLOv8x-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-pose.pt) | 640 | 69.2 | 90.2 | 1607.1 | 3.73 | 69.4 | 263.2 |
| [YOLOv8x-pose-p6](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-pose-p6.pt) | 1280 | 71.6 | 91.2 | 4088.7 | 10.04 | 99.1 | 1066.4 |
=== "OBB (DOTAv1)"
See [Oriented Detection Docs](https://docs.ultralytics.com/tasks/obb/) for usage examples with these models trained on [DOTAv1](https://docs.ultralytics.com/datasets/obb/dota-v1/), which include 15 pre-trained classes.
| Model | size<br><sup>(pixels) | mAP<sup>box<br>50 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
|----------------------------------------------------------------------------------------------|-----------------------|-------------------|--------------------------------|-------------------------------------|--------------------|-------------------|
| [YOLOv8n-obb](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-obb.pt) | 1024 | <++> | <++> | <++> | 3.2 | 23.3 |
| [YOLOv8s-obb](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s-obb.pt) | 1024 | <++> | <++> | <++> | 11.4 | 76.3 |
| [YOLOv8m-obb](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m-obb.pt) | 1024 | <++> | <++> | <++> | 26.4 | 208.6 |
| [YOLOv8l-obb](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l-obb.pt) | 1024 | <++> | <++> | <++> | 44.5 | 433.8 |
| [YOLOv8x-obb](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-obb.pt) | 1024 | <++> | <++> | <++> | 69.5 | 676.7 |
## Usage Examples
This example provides simple YOLOv8 training and inference examples. For full documentation on these and other [modes](../modes/index.md) see the [Predict](../modes/predict.md), [Train](../modes/train.md), [Val](../modes/val.md) and [Export](../modes/export.md) docs pages.
Note the below example is for YOLOv8 [Detect](../tasks/detect.md) models for object detection. For additional supported tasks see the [Segment](../tasks/segment.md), [Classify](../tasks/classify.md), [OBB](../tasks/obb.md) and [Pose](../tasks/pose.md) docs.
!!! Example
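A minimal sketch of the detect-task pattern these examples follow, assuming the standard `ultralytics` Python API:

```python
from ultralytics import YOLO

# Load a COCO-pretrained detection model
model = YOLO("yolov8n.pt")

# Train on a small sample dataset, then run inference on an image
results = model.train(data="coco128.yaml", epochs=100, imgsz=640)
results = model("https://ultralytics.com/images/bus.jpg")
```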


@@ -0,0 +1,39 @@
# Reference for `ultralytics/data/split_dota.py`
!!! Note
This file is available at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/data/split_dota.py](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/data/split_dota.py). If you spot a problem please help fix it by [contributing](https://docs.ultralytics.com/help/contributing/) a [Pull Request](https://github.com/ultralytics/ultralytics/edit/main/ultralytics/data/split_dota.py) 🛠️. Thank you 🙏!
<br><br>
## ::: ultralytics.data.split_dota.bbox_iof
<br><br>
## ::: ultralytics.data.split_dota.load_yolo_dota
<br><br>
## ::: ultralytics.data.split_dota.get_windows
<br><br>
## ::: ultralytics.data.split_dota.get_window_obj
<br><br>
## ::: ultralytics.data.split_dota.crop_and_save
<br><br>
## ::: ultralytics.data.split_dota.split_images_and_labels
<br><br>
## ::: ultralytics.data.split_dota.split_trainval
<br><br>
## ::: ultralytics.data.split_dota.split_test
<br><br>
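As a usage sketch, the two entry points above can pre-process DOTA imagery into fixed-size training windows; the paths below are placeholders and the keyword names assume the signatures documented in this module:

```python
from ultralytics.data.split_dota import split_test, split_trainval

# Split train/val images (with labels) into overlapping windows
split_trainval(
    data_root="path/to/DOTAv1.0/",       # placeholder path
    save_dir="path/to/DOTAv1.0-split/",  # placeholder path
    crop_size=1024,                      # window size in pixels
    gap=200,                             # overlap between adjacent windows
)

# Split the test set (no labels)
split_test(
    data_root="path/to/DOTAv1.0/",
    save_dir="path/to/DOTAv1.0-split/",
    crop_size=1024,
    gap=200,
)
```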


@@ -34,3 +34,7 @@ keywords: Ultralytics, engine, results, base tensor, boxes, keypoints
## ::: ultralytics.engine.results.Probs
<br><br>
## ::: ultralytics.engine.results.OBB
<br><br>


@@ -0,0 +1,11 @@
# Reference for `ultralytics/models/yolo/obb/predict.py`
!!! Note
This file is available at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/models/yolo/obb/predict.py](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/models/yolo/obb/predict.py). If you spot a problem please help fix it by [contributing](https://docs.ultralytics.com/help/contributing/) a [Pull Request](https://github.com/ultralytics/ultralytics/edit/main/ultralytics/models/yolo/obb/predict.py) 🛠️. Thank you 🙏!
<br><br>
## ::: ultralytics.models.yolo.obb.predict.OBBPredictor
<br><br>


@@ -0,0 +1,11 @@
# Reference for `ultralytics/models/yolo/obb/train.py`
!!! Note
This file is available at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/models/yolo/obb/train.py](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/models/yolo/obb/train.py). If you spot a problem please help fix it by [contributing](https://docs.ultralytics.com/help/contributing/) a [Pull Request](https://github.com/ultralytics/ultralytics/edit/main/ultralytics/models/yolo/obb/train.py) 🛠️. Thank you 🙏!
<br><br>
## ::: ultralytics.models.yolo.obb.train.OBBTrainer
<br><br>


@@ -0,0 +1,11 @@
# Reference for `ultralytics/models/yolo/obb/val.py`
!!! Note
This file is available at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/models/yolo/obb/val.py](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/models/yolo/obb/val.py). If you spot a problem please help fix it by [contributing](https://docs.ultralytics.com/help/contributing/) a [Pull Request](https://github.com/ultralytics/ultralytics/edit/main/ultralytics/models/yolo/obb/val.py) 🛠️. Thank you 🙏!
<br><br>
## ::: ultralytics.models.yolo.obb.val.OBBValidator
<br><br>


@@ -19,6 +19,10 @@ keywords: Ultralytics, YOLO, Detection, Pose, RTDETRDecoder, nn modules, guides
<br><br>
## ::: ultralytics.nn.modules.head.OBB
<br><br>
## ::: ultralytics.nn.modules.head.Pose
<br><br>


@@ -19,6 +19,10 @@ keywords: Ultralytics, YOLO, nn tasks, DetectionModel, PoseModel, RTDETRDetectio
<br><br>
## ::: ultralytics.nn.tasks.OBBModel
<br><br>
## ::: ultralytics.nn.tasks.SegmentationModel
<br><br>


@@ -23,6 +23,10 @@ keywords: Ultralytics, Loss functions, VarifocalLoss, BboxLoss, v8DetectionLoss,
<br><br>
## ::: ultralytics.utils.loss.RotatedBboxLoss
<br><br>
## ::: ultralytics.utils.loss.KeypointLoss
<br><br>
@@ -42,3 +46,7 @@ keywords: Ultralytics, Loss functions, VarifocalLoss, BboxLoss, v8DetectionLoss,
## ::: ultralytics.utils.loss.v8ClassificationLoss
<br><br>
## ::: ultralytics.utils.loss.v8OBBLoss
<br><br>


@@ -35,6 +35,10 @@ keywords: Ultralytics, YOLO, YOLOv3, YOLOv4, metrics, confusion matrix, detectio
<br><br>
## ::: ultralytics.utils.metrics.OBBMetrics
<br><br>
## ::: ultralytics.utils.metrics.bbox_ioa
<br><br>
@@ -55,6 +59,18 @@ keywords: Ultralytics, YOLO, YOLOv3, YOLOv4, metrics, confusion matrix, detectio
<br><br>
## ::: ultralytics.utils.metrics._get_covariance_matrix
<br><br>
## ::: ultralytics.utils.metrics.probiou
<br><br>
## ::: ultralytics.utils.metrics.batch_probiou
<br><br>
## ::: ultralytics.utils.metrics.smooth_BCE
<br><br>


@@ -27,6 +27,10 @@ keywords: Ultralytics YOLO, Utility Operations, segment2box, make_divisible, cli
<br><br>
## ::: ultralytics.utils.ops.nms_rotated
<br><br>
## ::: ultralytics.utils.ops.non_max_suppression
<br><br>


@@ -47,6 +47,10 @@ keywords: Ultralytics, plotting, utils, color annotation, label plotting, image
<br><br>
## ::: ultralytics.utils.plotting.output_to_rotated_target
<br><br>
## ::: ultralytics.utils.plotting.feature_visualization
<br><br>


@@ -15,11 +15,7 @@ keywords: Ultralytics, task aligned assigner, select highest overlaps, make anch
<br><br>
## ::: ultralytics.utils.tal.select_candidates_in_gts
<br><br>
## ::: ultralytics.utils.tal.RotatedTaskAlignedAssigner
<br><br>
@@ -34,3 +30,7 @@ keywords: Ultralytics, task aligned assigner, select highest overlaps, make anch
## ::: ultralytics.utils.tal.bbox2dist
<br><br>
## ::: ultralytics.utils.tal.dist2rbox
<br><br>


@@ -1,7 +1,7 @@
---
comments: true
description: Learn about the cornerstone computer vision tasks YOLOv8 can perform including detection, segmentation, classification, and pose estimation. Understand their uses in your AI projects.
keywords: Ultralytics, YOLOv8, Detection, Segmentation, Classification, Pose Estimation, Oriented Object Detection, AI Framework, Computer Vision Tasks
---
# Ultralytics YOLOv8 Tasks
@@ -9,7 +9,7 @@ keywords: Ultralytics, YOLOv8, Detection, Segmentation, Classification, Pose Est
<br>
<img width="1024" src="https://raw.githubusercontent.com/ultralytics/assets/main/im/banner-tasks.png" alt="Ultralytics YOLO supported tasks">
YOLOv8 is an AI framework that supports multiple computer vision **tasks**. The framework can be used to perform [detection](detect.md), [segmentation](segment.md), [OBB](obb.md), [classification](classify.md), and [pose](pose.md) estimation. Each of these tasks has a different objective and use case.
<p align="center">
<br>
@@ -19,7 +19,7 @@ YOLOv8 is an AI framework that supports multiple computer vision **tasks**. The
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> Explore Ultralytics YOLO Tasks: Object Detection, Segmentation, OBB, Tracking, and Pose Estimation.
</p>
## [Detection](detect.md)
@@ -46,6 +46,12 @@ Pose/keypoint detection is a task that involves detecting specific points in an
[Pose Examples](pose.md){ .md-button }
## [OBB](obb.md)
Oriented object detection goes a step further than regular object detection by introducing an extra angle to locate objects more accurately in an image. YOLOv8 can detect rotated objects in an image or video frame with high accuracy and speed.
[Oriented Detection](obb.md){ .md-button }
## Conclusion
YOLOv8 supports multiple tasks, including detection, segmentation, classification, oriented object detection, and keypoints detection. Each of these tasks has different objectives and use cases. By understanding the differences between these tasks, you can choose the appropriate task for your computer vision application.

docs/en/tasks/obb.md (new file, 181 lines)

@@ -0,0 +1,181 @@
---
comments: true
description: Learn how to use oriented object detection models with Ultralytics YOLO. Instructions on training, validation, image prediction, and model export.
keywords: yolov8, oriented object detection, Ultralytics, DOTA dataset, rotated object detection, object detection, model training, model validation, image prediction, model export
---
# Oriented Object Detection
<!-- obb task poster -->
Oriented object detection goes a step further than standard object detection by introducing an extra angle to locate objects more accurately in an image.
The output of an oriented object detector is a set of rotated bounding boxes that exactly enclose the objects in the image, along with class labels and confidence scores for each box. Oriented object detection is a good choice when objects appear at arbitrary angles and axis-aligned boxes would enclose large amounts of background.
<!-- youtube video link for obb task -->
!!! Tip "Tip"
YOLOv8 OBB models use the `-obb` suffix, i.e. `yolov8n-obb.pt`, and are pretrained on [DOTAv1](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/DOTAv1.yaml).
## [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models/v8)
YOLOv8 pretrained OBB models are shown here, which are pretrained on the [DOTAv1](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/DOTAv1.yaml) dataset.
[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models) download automatically from the latest Ultralytics [release](https://github.com/ultralytics/assets/releases) on first use.
| Model | size<br><sup>(pixels) | mAP<sup>box<br>50 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
|----------------------------------------------------------------------------------------------|-----------------------|-------------------|--------------------------------|-------------------------------------|--------------------|-------------------|
| [YOLOv8n-obb](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-obb.pt) | 1024 | <++> | <++> | <++> | 3.2 | 23.3 |
| [YOLOv8s-obb](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s-obb.pt) | 1024 | <++> | <++> | <++> | 11.4 | 76.3 |
| [YOLOv8m-obb](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m-obb.pt) | 1024 | <++> | <++> | <++> | 26.4 | 208.6 |
| [YOLOv8l-obb](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l-obb.pt) | 1024 | <++> | <++> | <++> | 44.5 | 433.8 |
| [YOLOv8x-obb](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-obb.pt) | 1024 | <++> | <++> | <++> | 69.5 | 676.7 |
<!-- TODO: should we report multi-scale results only as they're better or both multi-scale and single-scale. -->
- **mAP<sup>test</sup>** values are for single-model single-scale on the [DOTAv1 test](https://captain-whu.github.io/DOTA/index.html) dataset.
<br>Reproduce by `yolo val obb data=DOTAv1.yaml device=0`
- **Speed** averaged over DOTAv1 val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance.
<br>Reproduce by `yolo val obb data=DOTAv1.yaml batch=1 device=0|cpu`
## Train
<!-- TODO: probably we should create a sample dataset like coco128.yaml, named dota128.yaml? -->
Train YOLOv8n-obb on the dota128-obb.yaml dataset for 100 epochs at image size 640. For a full list of available arguments see the [Configuration](../usage/cfg.md) page.
!!! Example
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n-obb.yaml') # build a new model from YAML
model = YOLO('yolov8n-obb.pt') # load a pretrained model (recommended for training)
model = YOLO('yolov8n-obb.yaml').load('yolov8n.pt') # build from YAML and transfer weights
# Train the model
results = model.train(data='dota128-obb.yaml', epochs=100, imgsz=640)
```
=== "CLI"
```bash
# Build a new model from YAML and start training from scratch
yolo obb train data=dota128-obb.yaml model=yolov8n-obb.yaml epochs=100 imgsz=640
# Start training from a pretrained *.pt model
yolo obb train data=dota128-obb.yaml model=yolov8n-obb.pt epochs=100 imgsz=640
# Build a new model from YAML, transfer pretrained weights to it and start training
yolo obb train data=dota128-obb.yaml model=yolov8n-obb.yaml pretrained=yolov8n-obb.pt epochs=100 imgsz=640
```
### Dataset format
The YOLO OBB dataset format is described in detail in the [Dataset Guide](../datasets/obb/index.md).
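In brief, each label line pairs a class index with four corner points normalized to the image size; a minimal parsing sketch (the coordinates below are made up for illustration):

```python
# One object per line: class_index x1 y1 x2 y2 x3 y3 x4 y4 (normalized 0-1)
line = "0 0.780 0.743 0.782 0.743 0.782 0.750 0.780 0.750"

parts = line.split()
cls_id = int(parts[0])
corners = [(float(x), float(y)) for x, y in zip(parts[1::2], parts[2::2])]
print(cls_id, corners)  # class index and the 4 rotated-box corners
```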
## Val
Validate trained YOLOv8n-obb model accuracy on the dota128-obb dataset. No arguments need to be passed as the `model` retains its training `data` and arguments as model attributes.
!!! Example
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n-obb.pt') # load an official model
model = YOLO('path/to/best.pt') # load a custom model
# Validate the model
metrics = model.val() # no arguments needed, dataset and settings remembered
metrics.box.map # map50-95(B)
metrics.box.map50 # map50(B)
metrics.box.map75 # map75(B)
metrics.box.maps # a list containing map50-95(B) for each category
```
=== "CLI"
```bash
yolo obb val model=yolov8n-obb.pt # val official model
yolo obb val model=path/to/best.pt # val custom model
```
## Predict
Use a trained YOLOv8n-obb model to run predictions on images.
!!! Example
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n-obb.pt') # load an official model
model = YOLO('path/to/best.pt') # load a custom model
# Predict with the model
results = model('https://ultralytics.com/images/bus.jpg') # predict on an image
```
=== "CLI"
```bash
yolo obb predict model=yolov8n-obb.pt source='https://ultralytics.com/images/bus.jpg' # predict with official model
yolo obb predict model=path/to/best.pt source='https://ultralytics.com/images/bus.jpg' # predict with custom model
```
See full `predict` mode details in the [Predict](https://docs.ultralytics.com/modes/predict/) page.
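Rotated-box predictions surface on the new `obb` attribute of each `Results` object (see the `ultralytics.engine.results.OBB` reference added in this release); a minimal access sketch, assuming the `xywhr`, `conf`, and `cls` tensor attributes:

```python
from ultralytics import YOLO

model = YOLO("yolov8n-obb.pt")
results = model("https://ultralytics.com/images/bus.jpg")

for result in results:
    obb = result.obb  # rotated-box results for this image
    print(obb.xywhr)  # center-x, center-y, width, height, rotation
    print(obb.conf)   # confidence score per box
    print(obb.cls)    # class index per box
```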
## Export
Export a YOLOv8n-obb model to a different format like ONNX, CoreML, etc.
!!! Example
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n-obb.pt') # load an official model
model = YOLO('path/to/best.pt') # load a custom trained model
# Export the model
model.export(format='onnx')
```
=== "CLI"
```bash
yolo export model=yolov8n-obb.pt format=onnx # export official model
yolo export model=path/to/best.pt format=onnx # export custom trained model
```
Available YOLOv8-obb export formats are in the table below. You can predict or validate directly on exported models, i.e. `yolo predict model=yolov8n-obb.onnx`. Usage examples are shown for your model after export completes.
| Format | `format` Argument | Model | Metadata | Arguments |
|--------------------------------------------------------------------|-------------------|-------------------------------|----------|-----------------------------------------------------|
| [PyTorch](https://pytorch.org/) | - | `yolov8n-obb.pt` | ✅ | - |
| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n-obb.torchscript` | ✅ | `imgsz`, `optimize` |
| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n-obb.onnx` | ✅ | `imgsz`, `half`, `dynamic`, `simplify`, `opset` |
| [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n-obb_openvino_model/` | ✅ | `imgsz`, `half` |
| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n-obb.engine` | ✅ | `imgsz`, `half`, `dynamic`, `simplify`, `workspace` |
| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n-obb.mlpackage` | ✅ | `imgsz`, `half`, `int8`, `nms` |
| [TF SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n-obb_saved_model/` | ✅ | `imgsz`, `keras` |
| [TF GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n-obb.pb` | ❌ | `imgsz` |
| [TF Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n-obb.tflite` | ✅ | `imgsz`, `half`, `int8` |
| [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n-obb_edgetpu.tflite` | ✅ | `imgsz` |
| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n-obb_web_model/` | ✅ | `imgsz`, `half`, `int8` |
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n-obb_paddle_model/` | ✅ | `imgsz` |
| [ncnn](https://github.com/Tencent/ncnn) | `ncnn` | `yolov8n-obb_ncnn_model/` | ✅ | `imgsz`, `half` |
See full `export` details in the [Export](https://docs.ultralytics.com/modes/export/) page.
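As noted above, exported models can be loaded back into `YOLO` directly; a short sketch, assuming `export()` returns the path of the written file:

```python
from ultralytics import YOLO

# Export, then run the exported model without the original weights
model = YOLO("yolov8n-obb.pt")
onnx_path = model.export(format="onnx")  # e.g. 'yolov8n-obb.onnx'

onnx_model = YOLO(onnx_path)
results = onnx_model("https://ultralytics.com/images/bus.jpg")
```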


@@ -170,6 +170,7 @@ nav:
- Segment: tasks/segment.md
- Classify: tasks/classify.md
- Pose: tasks/pose.md
- Obb: tasks/obb.md
- Guides:
- guides/index.md
- Models:
@@ -203,6 +204,7 @@ nav:
- Segment: tasks/segment.md
- Classify: tasks/classify.md
- Pose: tasks/pose.md
- Obb: tasks/obb.md
- Models:
- models/index.md
- YOLOv3: models/yolov3.md
@@ -335,131 +337,137 @@ nav:
- 'iOS': hub/app/ios.md
- 'Android': hub/app/android.md
- Inference API: hub/inference_api.md
- Reference:
- cfg:
- __init__: reference/cfg/__init__.md
- data:
- annotator: reference/data/annotator.md
- augment: reference/data/augment.md
- base: reference/data/base.md
- build: reference/data/build.md
- converter: reference/data/converter.md
- dataset: reference/data/dataset.md
- loaders: reference/data/loaders.md
- split_dota: reference/data/split_dota.md
- utils: reference/data/utils.md
- engine:
- exporter: reference/engine/exporter.md
- model: reference/engine/model.md
- predictor: reference/engine/predictor.md
- results: reference/engine/results.md
- trainer: reference/engine/trainer.md
- tuner: reference/engine/tuner.md
- validator: reference/engine/validator.md
- hub:
- __init__: reference/hub/__init__.md
- auth: reference/hub/auth.md
- session: reference/hub/session.md
- utils: reference/hub/utils.md
- models:
- fastsam:
- model: reference/models/fastsam/model.md
- predict: reference/models/fastsam/predict.md
- prompt: reference/models/fastsam/prompt.md
- utils: reference/models/fastsam/utils.md
- val: reference/models/fastsam/val.md
- nas:
- model: reference/models/nas/model.md
- predict: reference/models/nas/predict.md
- val: reference/models/nas/val.md
- rtdetr:
- model: reference/models/rtdetr/model.md
- predict: reference/models/rtdetr/predict.md
- train: reference/models/rtdetr/train.md
- val: reference/models/rtdetr/val.md
- sam:
- amg: reference/models/sam/amg.md
- build: reference/models/sam/build.md
- model: reference/models/sam/model.md
- modules:
- decoders: reference/models/sam/modules/decoders.md
- encoders: reference/models/sam/modules/encoders.md
- sam: reference/models/sam/modules/sam.md
- tiny_encoder: reference/models/sam/modules/tiny_encoder.md
- transformer: reference/models/sam/modules/transformer.md
- predict: reference/models/sam/predict.md
- utils:
- loss: reference/models/utils/loss.md
- ops: reference/models/utils/ops.md
- yolo:
- classify:
- predict: reference/models/yolo/classify/predict.md
- train: reference/models/yolo/classify/train.md
- val: reference/models/yolo/classify/val.md
- detect:
- predict: reference/models/yolo/detect/predict.md
- train: reference/models/yolo/detect/train.md
- val: reference/models/yolo/detect/val.md
- model: reference/models/yolo/model.md
- obb:
- predict: reference/models/yolo/obb/predict.md
- train: reference/models/yolo/obb/train.md
- val: reference/models/yolo/obb/val.md
- pose:
- predict: reference/models/yolo/pose/predict.md
- train: reference/models/yolo/pose/train.md
- val: reference/models/yolo/pose/val.md
- segment:
- predict: reference/models/yolo/segment/predict.md
- train: reference/models/yolo/segment/train.md
- val: reference/models/yolo/segment/val.md
- nn:
- autobackend: reference/nn/autobackend.md
- modules:
- block: reference/nn/modules/block.md
- conv: reference/nn/modules/conv.md
- head: reference/nn/modules/head.md
- transformer: reference/nn/modules/transformer.md
- utils: reference/nn/modules/utils.md
- tasks: reference/nn/tasks.md
- solutions:
- ai_gym: reference/solutions/ai_gym.md
- heatmap: reference/solutions/heatmap.md
- object_counter: reference/solutions/object_counter.md
- trackers:
- basetrack: reference/trackers/basetrack.md
- bot_sort: reference/trackers/bot_sort.md
- byte_tracker: reference/trackers/byte_tracker.md
- track: reference/trackers/track.md
- utils:
- gmc: reference/trackers/utils/gmc.md
- kalman_filter: reference/trackers/utils/kalman_filter.md
- matching: reference/trackers/utils/matching.md
- utils:
- __init__: reference/utils/__init__.md
- autobatch: reference/utils/autobatch.md
- benchmarks: reference/utils/benchmarks.md
- callbacks:
- base: reference/utils/callbacks/base.md
- clearml: reference/utils/callbacks/clearml.md
- comet: reference/utils/callbacks/comet.md
- dvc: reference/utils/callbacks/dvc.md
- hub: reference/utils/callbacks/hub.md
- mlflow: reference/utils/callbacks/mlflow.md
- neptune: reference/utils/callbacks/neptune.md
- raytune: reference/utils/callbacks/raytune.md
- tensorboard: reference/utils/callbacks/tensorboard.md
- wb: reference/utils/callbacks/wb.md
- checks: reference/utils/checks.md
- dist: reference/utils/dist.md
- downloads: reference/utils/downloads.md
- errors: reference/utils/errors.md
- files: reference/utils/files.md
- instance: reference/utils/instance.md
- loss: reference/utils/loss.md
- metrics: reference/utils/metrics.md
- ops: reference/utils/ops.md
- patches: reference/utils/patches.md
- plotting: reference/utils/plotting.md
- tal: reference/utils/tal.md
- torch_utils: reference/utils/torch_utils.md
- triton: reference/utils/triton.md
- tuner: reference/utils/tuner.md
- Help:
- Help: help/index.md