ultralytics 8.0.97 confusion matrix, windows, docs updates (#2511)
Co-authored-by: Yonghye Kwon <developer.0hye@gmail.com>
Co-authored-by: Dowon <ks2515@naver.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Laughing <61612323+Laughing-q@users.noreply.github.com>
Parent: 6ee3a9a74b · Commit: d1107ca4cb · 138 changed files with 744 additions and 351 deletions
---
comments: true
description: Learn how to load YOLOv5🚀 from PyTorch Hub at https://pytorch.org/hub/ultralytics_yolov5 and perform image inference. UPDATED 26 March 2023.
---
📚 This guide explains how to load YOLOv5 🚀 from PyTorch Hub at [https://pytorch.org/hub/ultralytics_yolov5](https://pytorch.org/hub/ultralytics_yolov5).
UPDATED 26 March 2023.
## Before You Start
### Simple Example
This example loads a pretrained YOLOv5s model from PyTorch Hub as `model` and passes an image for inference. `'yolov5s'` is the lightest and fastest YOLOv5 model. For details on all available models please see the [README](https://github.com/ultralytics/yolov5#pretrained-checkpoints).
```python
import torch

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Image
im = 'https://ultralytics.com/images/zidane.jpg'

# Inference
results = model(im)

results.pandas().xyxy[0]
#      xmin    ymin    xmax   ymax  confidence  class    name
# 0  749.50   43.50  1148.0  704.5    0.874023      0  person
# 1  433.50  433.50   517.5  714.5    0.687988     27     tie
# 2  114.75  195.75  1095.0  708.0    0.624512      0  person
# 3  986.00  304.00  1028.0  420.0    0.286865     27     tie
```
### Detailed Example
This example shows **batched inference** with **PIL** and **OpenCV** image sources. `results` can be **printed** to console, **saved** to `runs/hub`, **shown** on screen in supported environments, and returned as **tensors** or **pandas** dataframes.
```python
import cv2
import torch
from PIL import Image

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Images
for f in 'zidane.jpg', 'bus.jpg':
    torch.hub.download_url_to_file('https://ultralytics.com/images/' + f, f)  # download 2 images
im1 = Image.open('zidane.jpg')  # PIL image
im2 = cv2.imread('bus.jpg')[..., ::-1]  # OpenCV image (BGR to RGB)

# Inference
results = model([im1, im2], size=640)  # batch of images

# Results
results.print()
results.save()  # or .show()

results.xyxy[0]  # im1 predictions (tensor)
results.pandas().xyxy[0]  # im1 predictions (pandas)
#      xmin    ymin    xmax   ymax  confidence  class    name
# 0  749.50   43.50  1148.0  704.5    0.874023      0  person
# 1  433.50  433.50   517.5  714.5    0.687988     27     tie
# 2  114.75  195.75  1095.0  708.0    0.624512      0  person
# 3  986.00  304.00  1028.0  420.0    0.286865     27     tie
```
<img src="https://user-images.githubusercontent.com/26833433/124915064-62a49e00-dff1-11eb-86b3-a85b97061afb.jpg" width="500"> <img src="https://user-images.githubusercontent.com/26833433/124915055-60424400-dff1-11eb-9055-24585b375a29.jpg" width="300">
For all inference options see YOLOv5 `AutoShape()` forward [method](https://github.com/ultralytics/yolov5/blob/30e4c4f09297b67afedf8b2bcd851833ddc9dead/models/common.py#L243-L252).
### Inference Settings
YOLOv5 models contain various inference attributes such as **confidence threshold**, **IoU threshold**, etc., which can be set by:
```python
model.conf = 0.25 # NMS confidence threshold
model.iou = 0.45  # NMS IoU threshold
results = model(im, size=320) # custom inference size
```
### Device
Models can be transferred to any device after creation:
```python
model.cpu() # CPU
model.cuda()  # GPU
model.to(device)  # i.e. device=torch.device(0)
```
Models can also be created directly on any `device`:
```python
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', device='cpu') # load on CPU
```
💡 ProTip: Input images are automatically transferred to the correct model device before inference.
### Silence Outputs
Models can be loaded silently with `_verbose=False`:
```python
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', _verbose=False) # load silently
```
### Input Channels
To load a pretrained YOLOv5s model with 4 input channels rather than the default 3:
```python
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', channels=4)
```
In this case the model will be composed of pretrained weights **except for** the very first input layer, which is no longer the same shape as the pretrained input layer. The input layer will instead be initialized with random weights.
### Number of Classes
To load a pretrained YOLOv5s model with 10 output classes rather than the default 80:
```python
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', classes=10)
```
In this case the model will be composed of pretrained weights **except for** the output layers, which are no longer the same shape as the pretrained output layers. The output layers will instead be initialized with random weights.
### Force Reload
If you run into problems with the above steps, setting `force_reload=True` may help by discarding the existing cache and forcing a fresh download of the latest YOLOv5 version from PyTorch Hub.
```python
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True) # force reload
```
### Screenshot Inference
To run inference on your desktop screen:
```python
import torch
from PIL import ImageGrab

model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
im = ImageGrab.grab()  # take a screenshot
results = model(im)
```
### Training
To load a YOLOv5 model for training rather than inference, set `autoshape=False`. To load a model with randomly initialized weights (to train from scratch) use `pretrained=False`. You must provide your own training script in this case. Alternatively see our YOLOv5 [Train Custom Data Tutorial](https://docs.ultralytics.com/yolov5/tutorials/train_custom_data) for model training.
```python
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False) # load pretrained
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False, pretrained=False) # load scratch
```
### Base64 Results
Results can be returned as base64-encoded images for use with API services. See https://github.com/ultralytics/yolov5/pull/2291 and the [Flask REST API](https://github.com/ultralytics/yolov5/tree/master/utils/flask_rest_api) example for details.
```python
results = model(im) # inference
```
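The body of the loop over the rendered images is elided above. A minimal, hedged sketch of the base64 conversion step — here `ims` is a hypothetical stand-in for `results.ims`, the rendered detection images as numpy arrays:

```python
import base64
from io import BytesIO

import numpy as np
from PIL import Image

# Hypothetical stand-in for results.ims: rendered detection images as numpy arrays
ims = [np.zeros((64, 64, 3), dtype=np.uint8)]

b64_strings = []
for im in ims:
    buffered = BytesIO()
    Image.fromarray(im).save(buffered, format="JPEG")  # encode image to an in-memory JPEG
    b64_strings.append(base64.b64encode(buffered.getvalue()).decode('utf-8'))
```

The resulting strings can be embedded directly in a JSON response body.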
### Cropped Results
Results can be returned and saved as detection crops:
```python
results = model(im) # inference
crops = results.crop(save=True) # cropped detections dictionary
```
### Pandas Results
Results can be returned as [Pandas DataFrames](https://pandas.pydata.org/):
```python
results = model(im) # inference
results.pandas().xyxy[0] # Pandas DataFrame
```
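Because the return value is a plain DataFrame, predictions can be post-processed with ordinary pandas. A hedged sketch using a hand-built DataFrame in place of real model output (the values are hypothetical; the column names follow the `xyxy` format):

```python
import pandas as pd

# Hand-built stand-in for results.pandas().xyxy[0] (hypothetical values)
df = pd.DataFrame({
    'xmin': [749.50, 433.50, 114.75, 986.00],
    'ymin': [43.50, 433.50, 195.75, 304.00],
    'xmax': [1148.0, 517.5, 1095.0, 1028.0],
    'ymax': [704.5, 714.5, 708.0, 420.0],
    'confidence': [0.874023, 0.687988, 0.624512, 0.286865],
    'class': [0, 27, 0, 27],
    'name': ['person', 'tie', 'person', 'tie'],
})

people = df[(df['name'] == 'person') & (df['confidence'] > 0.5)]  # keep confident person boxes
```

The same pattern works for any per-class or per-threshold filtering of real predictions.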
<details markdown>
<summary>Pandas Output (click to expand)</summary>
```python
print(results.pandas().xyxy[0])
#      xmin    ymin    xmax   ymax  confidence  class    name
# 0  749.50   43.50  1148.0  704.5    0.874023      0  person
# 1  433.50  433.50   517.5  714.5    0.687988     27     tie
# 2  114.75  195.75  1095.0  708.0    0.624512      0  person
# 3  986.00  304.00  1028.0  420.0    0.286865     27     tie
```
</details>
### Sorted Results
Results can be sorted by column, e.g. to sort license plate digit detections left-to-right (x-axis):
```python
results = model(im) # inference
results.pandas().xyxy[0].sort_values('xmin') # sorted left-right
```
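A hedged illustration of why this works, using a hand-built DataFrame of three hypothetical digit detections: sorting on `xmin` recovers left-to-right reading order regardless of detection order.

```python
import pandas as pd

# Hypothetical digit detections, deliberately out of reading order
df = pd.DataFrame({
    'xmin': [220.0, 40.0, 130.0],
    'name': ['7', '3', '5'],
})

ordered = df.sort_values('xmin')  # sorted left-right
plate = ''.join(ordered['name'])  # left-to-right digit string
```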
### JSON Results
Results can be returned in JSON format once converted to `.pandas()` dataframes using the `.to_json()` method. The JSON format can be modified using the `orient` argument. See pandas `.to_json()` [documentation](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_json.html) for details.
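Since the conversion is plain pandas, it can be sketched with a hand-built DataFrame (hypothetical values); `orient="records"` emits one JSON object per detection row:

```python
import json

import pandas as pd

# Hypothetical stand-in for results.pandas().xyxy[0]
df = pd.DataFrame({
    'xmin': [114.75],
    'ymin': [195.75],
    'xmax': [1095.0],
    'ymax': [708.0],
    'confidence': [0.624512],
    'class': [0],
    'name': ['person'],
})

payload = df.to_json(orient="records")  # one JSON object per row
records = json.loads(payload)  # round-trip back to Python objects
```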
```python
results = model(ims) # inference
results.pandas().xyxy[0].to_json(orient="records")  # JSON img1 predictions
```
## Custom Models
This example loads a custom 20-class [VOC](https://github.com/ultralytics/yolov5/blob/master/data/VOC.yaml)-trained YOLOv5s model `'best.pt'` with PyTorch Hub.
```python
model = torch.hub.load('ultralytics/yolov5', 'custom', path='path/to/best.pt') # local model
model = torch.hub.load('path/to/yolov5', 'custom', path='path/to/best.pt', source='local') # local repo
```

## Environments

YOLOv5 may be run in any of the following up-to-date verified environments:
- **Amazon** Deep Learning AMI. See [AWS Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/aws_quickstart_tutorial/)
- **Docker Image**. See [Docker Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/docker_image_quickstart_tutorial/) <a href="https://hub.docker.com/r/ultralytics/yolov5"><img src="https://img.shields.io/docker/pulls/ultralytics/yolov5?logo=docker" alt="Docker Pulls"></a>
## Status
<a href="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml"><img src="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml/badge.svg" alt="YOLOv5 CI"></a>