ultralytics 8.0.97 confusion matrix, windows, docs updates (#2511)
Co-authored-by: Yonghye Kwon <developer.0hye@gmail.com>
Co-authored-by: Dowon <ks2515@naver.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Laughing <61612323+Laughing-q@users.noreply.github.com>
parent 6ee3a9a74b · commit d1107ca4cb · 138 changed files with 744 additions and 351 deletions
@@ -1,5 +1,6 @@
---
comments: true
description: Export YOLOv5 models to TFLite, ONNX, CoreML, and TensorRT formats. Achieve up to 5x GPU speedup using TensorRT. Benchmarks included.
---
# TFLite, ONNX, CoreML, TensorRT Export
@@ -41,10 +42,10 @@ YOLOv5 inference is officially supported in 11 formats:
| [TensorFlow.js](https://www.tensorflow.org/js) | `tfjs` | `yolov5s_web_model/` |
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov5s_paddle_model/` |
## Benchmarks
The benchmarks below were run on a Colab Pro instance with the YOLOv5 tutorial notebook <a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>. To reproduce:
```bash
python benchmarks.py --weights yolov5s.pt --imgsz 640 --device 0
```
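The benchmark script accepts the standard YOLOv5 `--device` argument, so a CPU-only run (for example on a machine without CUDA) is a small, hedged variant of the command above:

```bash
python benchmarks.py --weights yolov5s.pt --imgsz 640 --device cpu
```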
@@ -98,6 +99,7 @@ Benchmarks complete (241.20s)
## Export a Trained YOLOv5 Model
This command exports a pretrained YOLOv5s model to TorchScript and ONNX formats. `yolov5s.pt` is the 'small' model, the second-smallest model available. Other options are `yolov5n.pt`, `yolov5m.pt`, `yolov5l.pt` and `yolov5x.pt`, along with their P6 counterparts, e.g. `yolov5s6.pt`, or your own custom training checkpoint, e.g. `runs/exp/weights/best.pt`. For details on all available models, please see our README [table](https://github.com/ultralytics/yolov5#pretrained-checkpoints).
```bash
python export.py --weights yolov5s.pt --include torchscript onnx
```
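The same script can also target GPU formats. As a hedged example (assuming a CUDA device and a working TensorRT installation), a TensorRT export would look like:

```bash
python export.py --weights yolov5s.pt --include engine --device 0
```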
@@ -105,6 +107,7 @@ python export.py --weights yolov5s.pt --include torchscript onnx
💡 ProTip: Add `--half` to export models at FP16 half precision for smaller file sizes
Output:
```bash
export: data=data/coco128.yaml, weights=['yolov5s.pt'], imgsz=[640, 640], batch_size=1, device=cpu, half=False, inplace=False, train=False, keras=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=12, verbose=False, workspace=4, nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45, conf_thres=0.25, include=['torchscript', 'onnx']
YOLOv5 🚀 v6.2-104-ge3e5122 Python-3.7.13 torch-1.12.1+cu113 CPU
@@ -137,10 +140,10 @@ The 3 exported models will be saved alongside the original PyTorch model:
[Netron Viewer](https://github.com/lutzroeder/netron) is recommended for visualizing exported models:
<p align="center"><img width="850" src="https://user-images.githubusercontent.com/26833433/191003260-f94011a7-5b2e-4fe3-93c1-e1a935e0a728.png"></p>
## Exported Model Usage Examples
`detect.py` runs inference on exported models:
```bash
python detect.py --weights yolov5s.pt # PyTorch
yolov5s.torchscript # TorchScript
@@ -156,6 +159,7 @@ python detect.py --weights yolov5s.pt # PyTorch
```
`val.py` runs validation on exported models:
```bash
python val.py --weights yolov5s.pt # PyTorch
yolov5s.torchscript # TorchScript
@@ -171,6 +175,7 @@ python val.py --weights yolov5s.pt # PyTorch
```
Use PyTorch Hub with exported YOLOv5 models:
```python
import torch
@@ -200,6 +205,7 @@ results.print() # or .show(), .save(), .crop(), .pandas(), etc.
## OpenCV DNN inference
OpenCV inference with ONNX models:
```bash
python export.py --weights yolov5s.pt --include onnx
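A minimal Python sketch of OpenCV DNN inference with the exported ONNX model (file names are assumptions; letterbox preprocessing and NMS post-processing are omitted for brevity):

```python
import cv2

# Load the ONNX model exported above with OpenCV's DNN module
net = cv2.dnn.readNetFromONNX('yolov5s.onnx')

# Preprocess: resize to the export size (640x640), scale pixels to 0-1, convert BGR->RGB
img = cv2.imread('zidane.jpg')
blob = cv2.dnn.blobFromImage(img, scalefactor=1 / 255.0, size=(640, 640), swapRB=True, crop=False)
net.setInput(blob)

# Forward pass: a 640x640, 80-class YOLOv5 export yields a (1, 25200, 85) prediction tensor
pred = net.forward()
print(pred.shape)

# Downstream, rows are filtered by objectness/class confidence and boxes passed to cv2.dnn.NMSBoxes
```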
@@ -232,7 +238,6 @@ YOLOv5 may be run in any of the following up-to-date verified environments (with
- **Amazon** Deep Learning AMI. See [AWS Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/aws_quickstart_tutorial/)
- **Docker Image**. See [Docker Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/docker_image_quickstart_tutorial/) <a href="https://hub.docker.com/r/ultralytics/yolov5"><img src="https://img.shields.io/docker/pulls/ultralytics/yolov5?logo=docker" alt="Docker Pulls"></a>
## Status
<a href="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml"><img src="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml/badge.svg" alt="YOLOv5 CI"></a>