Add HeatMap guide in real-world-projects + Code in Solutions Directory (#6796)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Muhammad Rizwan Munawar 2023-12-07 01:39:29 +05:00 committed by GitHub
parent 1e1247ddee
commit 742cbc1b4e
10 changed files with 448 additions and 52 deletions


@@ -38,13 +38,13 @@ To keep track of your experiments and/or data, ClearML needs to communicate to a
Either sign up for free to the [ClearML Hosted Service](https://cutt.ly/yolov5-tutorial-clearml) or set up your own server; see [here](https://clear.ml/docs/latest/docs/deploying_clearml/clearml_server). The server is open-source too, so even if you're dealing with sensitive data, you should be good to go!
1. Install the `clearml` python package:
- Install the `clearml` python package:
```bash
pip install clearml
```
2. Connect the ClearML SDK to the server by [creating credentials](https://app.clear.ml/settings/workspace-configuration) (at the top right, go to Settings -> Workspace -> Create new credentials), then execute the command below and follow the instructions:
- Connect the ClearML SDK to the server by [creating credentials](https://app.clear.ml/settings/workspace-configuration) (at the top right, go to Settings -> Workspace -> Create new credentials), then execute the command below and follow the instructions:
```bash
clearml-init
@@ -89,15 +89,13 @@ This will capture:
- Images with bounding boxes per epoch
- Mosaic per epoch
- Validation images per epoch
- ...
That's a lot, right? 🤯 Now we can visualize all of this information in the ClearML UI to get an overview of our training progress. Add custom columns to the table view (such as mAP_0.5) so you can easily sort on the best-performing model, or select multiple experiments and compare them directly!
There's even more we can do with all of this information, like hyperparameter optimization and remote execution, so keep reading if you want to see how that works!
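If you prefer to pull those numbers out of ClearML programmatically rather than through the table view, here is a minimal sketch using the `clearml` Python API; the project name and the metric title/series (`metrics` / `mAP_0.5`) are assumptions and need to match how your runs were actually logged.

```python
from clearml import Task

# Fetch all experiments logged under a given project (project name is a placeholder)
tasks = Task.get_tasks(project_name="YOLOv5")

# Collect the last reported mAP_0.5 per experiment, where available
results = []
for task in tasks:
    metrics = task.get_last_scalar_metrics()  # {title: {series: {"last": value, ...}}}
    map50 = metrics.get("metrics", {}).get("mAP_0.5", {}).get("last")
    if map50 is not None:
        results.append((task.name, task.id, map50))

# Print runs sorted so the best-performing model comes first
for name, task_id, map50 in sorted(results, key=lambda r: r[2], reverse=True):
    print(f"{name} ({task_id}): mAP_0.5 = {map50:.4f}")
```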
<br>
## 🔗 Dataset Version Management
### 🔗 Dataset Version Management
Versioning your data separately from your code is generally a good idea, and it also makes it easy to acquire the latest version. This repository supports supplying a dataset version ID, and it will make sure to fetch the data if it's not there yet. On top of that, this workflow also saves the dataset ID that was used as part of the task parameters, so you will always know for sure which data was used in which experiment!
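For reference, a dataset version like the one used in the training command above can be created with the `clearml` Python API (the `clearml-data` CLI covers the same workflow); this is only a minimal sketch, and the dataset/project names and local path are placeholders.

```python
from clearml import Dataset

# Create a new dataset version under a project (names are placeholders)
dataset = Dataset.create(dataset_name="coco128", dataset_project="YOLOv5 Datasets")

# Add the local files, push them to the configured storage, and lock this version
dataset.add_files(path="../datasets/coco128")
dataset.upload()
dataset.finalize()

# The printed ID is what the training command expects after clearml://
print(dataset.id)
```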
@@ -165,7 +163,7 @@ python train.py --img 640 --batch 16 --epochs 3 --data clearml://<your_dataset_i
<br>
## 👀 Hyperparameter Optimization
### 👀 Hyperparameter Optimization
Now that we have our experiments and data versioned, it's time to take a look at what we can build on top!
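As a taste of what that looks like in practice, here is a minimal sketch of hyperparameter optimization driven by ClearML's `HyperParameterOptimizer`; the base task ID, parameter names, metric title/series and queue name are all assumptions that have to be adapted to your own setup, and the Optuna backend additionally requires the `optuna` package.

```python
from clearml import Task
from clearml.automation import HyperParameterOptimizer, UniformParameterRange
from clearml.automation.optuna import OptimizerOptuna

# Register the optimizer controller itself as a ClearML task
task = Task.init(
    project_name="Hyper-Parameter Optimization",
    task_name="YOLOv5 HPO sketch",
    task_type=Task.TaskTypes.optimizer,
)

optimizer = HyperParameterOptimizer(
    base_task_id="<your_training_task_id>",  # an existing training run to clone and mutate
    hyper_parameters=[
        # Parameter names are assumptions; they must match the base task's parameter sections
        UniformParameterRange("Hyperparameters/lr0", min_value=1e-5, max_value=1e-1),
        UniformParameterRange("Hyperparameters/momentum", min_value=0.6, max_value=0.98),
    ],
    objective_metric_title="metrics",   # assumed scalar title
    objective_metric_series="mAP_0.5",  # assumed scalar series
    objective_metric_sign="max",
    optimizer_class=OptimizerOptuna,
    execution_queue="default",          # queue that ClearML agents are listening on
    max_number_of_concurrent_tasks=1,
    total_max_jobs=10,
)

optimizer.start_locally()  # or optimizer.start() to run the controller on an agent
optimizer.wait()
optimizer.stop()
```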