# OBB Docs updates (#7512)

Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: Muhammad Rizwan Munawar <chr043416@gmail.com>
Co-authored-by: Laughing <61612323+Laughing-q@users.noreply.github.com>

Creating a custom model to detect your objects is an iterative process of collecting and organizing images, labeling your objects of interest, training a model, deploying it into the wild to make predictions, and then using that deployed model to collect examples of edge cases to repeat and improve.

For more details see [Ultralytics Licensing](https://ultralytics.com/license).

## 1. Create Dataset

YOLOv5 models must be trained on labelled data in order to learn classes of objects in that data. There are two options for creating your dataset before you start training:

## Option 1: Create a <a href="https://roboflow.com/?ref=ultralytics">Roboflow</a> Dataset
### 1.1 Collect Images

Note: YOLOv5 does online augmentation during training, so we do not recommend applying any augmentation steps in Roboflow for training with YOLOv5. But we recommend applying the following preprocessing steps:

- **Auto-Orient** - to strip EXIF orientation from your images.
- **Resize (Stretch)** - to the square input size of your model (640x640 is the YOLOv5 default).

Generating a version will give you a snapshot of your dataset, so you can always go back and compare your future model training runs against it, even if you add more images or change its configuration later.

<p align="center"><img width="450" src="https://uploads-ssl.webflow.com/5f6bc60e665f54545a1e52a5/6152a2733fd1da943619934e_roboflow-export.png" alt="Export in YOLOv5 Format"></p>

Export in `YOLOv5 Pytorch` format, then copy the snippet into your training script or notebook to download your dataset.

<p align="center"><img width="450" src="https://uploads-ssl.webflow.com/5f6bc60e665f54545a1e52a5/6152a273a92e4f5cb72594df_roboflow-snippet.png" alt="Roboflow dataset download snippet"></p>

## Option 2: Create a Manual Dataset

### 2.1 Create `dataset.yaml`

[COCO128](https://www.kaggle.com/ultralytics/coco128) is a small tutorial dataset composed of the first 128 images in [COCO](http://cocodataset.org/#home) train2017. These same 128 images are used for both training and validation to verify our training pipeline is capable of overfitting. [data/coco128.yaml](https://github.com/ultralytics/yolov5/blob/master/data/coco128.yaml), shown below, is the dataset config file that defines 1) the dataset root directory `path` and relative paths to `train` / `val` / `test` image directories (or `*.txt` files with image paths) and 2) a class `names` dictionary:

```yaml
# data/coco128.yaml (abridged)
path: ../datasets/coco128  # dataset root dir
train: images/train2017  # train images (relative to 'path'), 128 images
val: images/train2017  # val images (relative to 'path'), 128 images
test:  # test images (optional)

# Classes
names:
  0: person
  1: bicycle
  # ... classes 2-78 omitted ...
  79: toothbrush
```
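
Before training, it can be worth sanity-checking that the paths in your config resolve correctly. A minimal sketch, assuming PyYAML is installed and the coco128 layout above; note that YOLOv5 itself resolves relative dataset paths against its own settings, while this sketch resolves them against the current directory:

```python
# Quick sanity check of a dataset config -- a sketch, assuming PyYAML
# (`pip install pyyaml`) and the coco128 layout described above.
from pathlib import Path

import yaml

cfg = yaml.safe_load(Path("data/coco128.yaml").read_text())

root = Path(cfg["path"])  # dataset root dir, resolved against cwd here
for split in ("train", "val"):
    p = root / cfg[split]
    print(f"{split}: {p} (exists: {p.exists()})")

print(f"{len(cfg['names'])} classes, e.g. 0 -> {cfg['names'][0]}")
```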

### 2.2 Create Labels

After using an annotation tool to label your images, export your labels to **YOLO format**, with one `*.txt` file per image (if no objects are in the image, no `*.txt` file is required). The `*.txt` file specifications are:

- One row per object.
- Each row is `class x_center y_center width height` format.
- Box coordinates must be in normalized xywh format (from 0 to 1). If your boxes are in pixels, divide `x_center` and `width` by image width, and `y_center` and `height` by image height.
- Class numbers are zero-indexed (start from 0).

The label file corresponding to the above image contains 2 persons (class `0`) and a tie (class `27`):

<p align="center"><img width="428" src="https://user-images.githubusercontent.com/26833433/112467037-d2568c00-8d66-11eb-8796-55402ac0d62f.png" alt="Example YOLO format label file"></p>

### 2.3 Organize Directories

Organize your train and val images and labels according to the example below. YOLOv5 assumes `/coco128` is inside a `/datasets` directory **next to** the `/yolov5` directory. **YOLOv5 locates labels automatically for each image** by replacing the last instance of `/images/` in each image path with `/labels/`. For example:

```
../datasets/coco128/images/im0.jpg  # image
../datasets/coco128/labels/im0.txt  # label
```

<p align="center"><img width="700" src="https://user-images.githubusercontent.com/26833433/134436012-65111ad1-9541-4853-81a6-f19a3468b75f.png" alt="YOLOv5 dataset structure"></p>

## 3. Select a Model

Select a pretrained model to start training from. Here we select [YOLOv5s](https://github.com/ultralytics/yolov5/blob/master/models/yolov5s.yaml), the second-smallest and second-fastest model available. See our README [table](https://github.com/ultralytics/yolov5#pretrained-checkpoints) for a full comparison of all models.

<p align="center"><img width="800" alt="YOLOv5 models" src="https://github.com/ultralytics/yolov5/releases/download/v1.0/model_comparison.png"></p>

## 4. Train

Train a YOLOv5s model on COCO128 by specifying dataset, batch-size, image size and either pretrained `--weights yolov5s.pt` (recommended), or randomly initialized `--weights '' --cfg yolov5s.yaml` (not recommended). Pretrained weights are auto-downloaded from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases).

```bash
python train.py --img 640 --epochs 3 --data coco128.yaml --weights yolov5s.pt
```
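
The same run can also be launched from Python via the `run()` entry point in `train.py`, which accepts the CLI options as keyword arguments. A sketch, assuming you are in the `yolov5` repository root with requirements installed:

```python
# Launch the same training run from Python -- a sketch, assuming the current
# working directory is the yolov5 repository root.
import train  # yolov5/train.py

train.run(
    data="coco128.yaml",   # dataset config
    imgsz=640,             # training image size
    epochs=3,
    weights="yolov5s.pt",  # start from pretrained weights
)
```
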
All training results are saved to `runs/train/` with incrementing run directories, e.g. `runs/train/exp2`, `runs/train/exp3`, etc. For more details see the Training section of our tutorial notebook. <a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> <a href="https://www.kaggle.com/ultralytics/yolov5"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a>

## 5. Visualize

### Comet Logging and Visualization 🌟 NEW

[Comet](https://bit.ly/yolov5-readme-comet) is now fully integrated with YOLOv5. Track and visualize model metrics in real time, save your hyperparameters, datasets, and model checkpoints, and visualize your model predictions with [Comet Custom Panels](https://bit.ly/yolov5-colab-comet-panels)! Comet makes sure you never lose track of your work and makes it easy to share results and collaborate across teams of all sizes!
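
Getting started is typically an install plus an API key in your environment before launching training. A sketch — the key is a placeholder from your own Comet account, and `COMET_PROJECT_NAME` is optional:

```python
# Enable Comet logging for a YOLOv5 run -- a sketch, assuming `pip install comet_ml`
# and an API key from your Comet account (the value below is a placeholder).
import os
import subprocess

os.environ["COMET_API_KEY"] = "YOUR_COMET_API_KEY"
os.environ["COMET_PROJECT_NAME"] = "yolov5"  # optional

subprocess.run(
    ["python", "train.py", "--img", "640", "--epochs", "3",
     "--data", "coco128.yaml", "--weights", "yolov5s.pt"],
    check=True,
)
```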

To learn more about all the supported Comet features for this integration, check out the [Comet Tutorial](https://docs.ultralytics.com/yolov5/tutorials/comet_logging_integration).

<img width="1920" alt="YOLO UI" src="https://user-images.githubusercontent.com/26833433/202851203-164e94e1-2238-46dd-91f8-de020e9d6b41.png">

### ClearML Logging and Automation 🌟 NEW

[ClearML](https://clear.ml/) is completely integrated into YOLOv5 to track your experimentation, manage dataset versions and even remotely execute training runs. To enable ClearML:

- `pip install clearml`
- run `clearml-init` to connect to a ClearML server

You'll get all the expected features of an experiment manager: live updates, model upload, experiment comparison, etc., but ClearML also tracks uncommitted changes and installed packages, for example. Thanks to that, ClearML Tasks (which is what we call experiments) are also reproducible on different machines! With only one extra line, we can schedule a YOLOv5 training task on a queue to be executed by any number of ClearML Agents (workers).

You can use ClearML Data to version your dataset and then pass it to YOLOv5 simply using its unique ID. This will help you keep track of your data without adding extra hassle. Explore the [ClearML Tutorial](https://docs.ultralytics.com/yolov5/tutorials/clearml_logging_integration) for details!
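
For example, a dataset versioned with ClearML Data can be passed to training by its ID in place of a local YAML path. A sketch — the ID is a placeholder, and the ClearML Tutorial linked above covers the full workflow:

```python
# Train against a ClearML-versioned dataset by its unique ID -- a sketch, assuming
# `pip install clearml` and a completed `clearml-init`. The ID is a placeholder.
import subprocess

dataset_id = "YOUR_CLEARML_DATASET_ID"
subprocess.run(
    ["python", "train.py", "--img", "640", "--epochs", "3",
     "--data", f"clearml://{dataset_id}", "--weights", "yolov5s.pt"],
    check=True,
)
```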

<a href="https://clear.ml/">
<img alt="ClearML Experiment Management UI" src="https://github.com/thepycoder/clearml_screenshots/raw/main/scalars.jpg" width="1280"></a>

### Local Logging

Training results are automatically logged with [TensorBoard](https://www.tensorflow.org/tensorboard) and [CSV](https://github.com/ultralytics/yolov5/pull/4148) loggers to `runs/train`, with a new experiment directory created for each new training as `runs/train/exp2`, `runs/train/exp3`, etc.
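
The CSV log makes quick post-run analysis easy; for example, you can pull the metrics of the most recent run with pandas. A sketch, assuming pandas is installed and at least one completed run exists under `runs/train/`:

```python
# Inspect the CSV log of the most recent training run -- a sketch, assuming
# pandas is installed and at least one run exists under runs/train/.
from pathlib import Path

import pandas as pd

latest = max(Path("runs/train").glob("exp*"), key=lambda p: p.stat().st_mtime)
df = pd.read_csv(latest / "results.csv")
df.columns = df.columns.str.strip()  # headers are space-padded in the CSV

print(f"run: {latest.name}, epochs logged: {len(df)}")
print(df[["epoch", "metrics/mAP_0.5"]].tail())
```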