Add FAQs to Docs Datasets and Help sections (#14211)
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: UltralyticsAssistant <web@ultralytics.com>

commit d5db9c916f (parent 64862f1b69)
73 changed files with 3296 additions and 110 deletions

@@ -90,3 +90,74 @@ If you use the VisDrone dataset in your research or development work, please cite the following paper:

    ```

We would like to acknowledge the AISKYEYE team at the Lab of Machine Learning and Data Mining, Tianjin University, China, for creating and maintaining the VisDrone dataset as a valuable resource for the drone-based computer vision research community. For more information about the VisDrone dataset and its creators, visit the [VisDrone Dataset GitHub repository](https://github.com/VisDrone/VisDrone-Dataset).
## FAQ
### What is the VisDrone Dataset and what are its key features?
The [VisDrone Dataset](https://github.com/VisDrone/VisDrone-Dataset) is a large-scale benchmark created by the AISKYEYE team at Tianjin University, China. It is designed for various computer vision tasks related to drone-based image and video analysis. Key features include:

- **Composition**: 288 video clips with 261,908 frames and 10,209 static images.
- **Annotations**: Over 2.6 million bounding boxes for objects such as pedestrians, cars, bicycles, and tricycles.
- **Diversity**: Collected across 14 cities, in urban and rural settings, under different weather and lighting conditions.
- **Tasks**: Five benchmark tasks covering object detection in images and videos, single-object and multi-object tracking, and crowd counting.
### How can I use the VisDrone Dataset to train a YOLOv8 model with Ultralytics?
To train a YOLOv8 model on the VisDrone dataset for 100 epochs with an image size of 640, you can follow these steps:
!!! Example "Train Example"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a pretrained model
        model = YOLO("yolov8n.pt")

        # Train the model
        results = model.train(data="VisDrone.yaml", epochs=100, imgsz=640)
        ```

    === "CLI"

        ```bash
        # Start training from a pretrained *.pt model
        yolo detect train data=VisDrone.yaml model=yolov8n.pt epochs=100 imgsz=640
        ```

For additional configuration options, please refer to the model [Training](../../modes/train.md) page.
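
After training, it can help to evaluate the resulting weights on the VisDrone validation split before using them. A minimal sketch, assuming the default Ultralytics output path `runs/detect/train/weights/best.pt` (adjust if you set a custom `project` or `name`):

```python
from ultralytics import YOLO

# Load the best checkpoint saved by the training run above
# (default save path; hypothetical if you changed the run settings)
model = YOLO("runs/detect/train/weights/best.pt")

# Validate on the val split defined in VisDrone.yaml
metrics = model.val(data="VisDrone.yaml", imgsz=640)
print(metrics.box.map)  # mAP50-95 averaged over all classes
```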
### What are the main subsets of the VisDrone dataset and their applications?
The VisDrone dataset is divided into five main subsets, each tailored for a specific computer vision task:

1. **Task 1**: Object detection in images.
2. **Task 2**: Object detection in videos.
3. **Task 3**: Single-object tracking.
4. **Task 4**: Multi-object tracking.
5. **Task 5**: Crowd counting.

These subsets are widely used for training and evaluating deep learning models in drone-based applications such as surveillance, traffic monitoring, and public safety.
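
For the tracking tasks in particular, Ultralytics provides a track mode that pairs a detector with a tracker. A minimal sketch, assuming a hypothetical drone video file `drone_clip.mp4` and the ByteTrack tracker config that ships with the package; a VisDrone-trained checkpoint would suit drone footage better than the generic `yolov8n.pt` stand-in used here:

```python
from ultralytics import YOLO

# Generic pretrained detector as a stand-in; swap in VisDrone-trained weights for drone views
model = YOLO("yolov8n.pt")

# Run multi-object tracking; persistent IDs are attached to each detection
results = model.track(source="drone_clip.mp4", tracker="bytetrack.yaml")
for r in results:
    if r.boxes.id is not None:
        print(r.boxes.id.tolist())  # track IDs for this frame
```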
### Where can I find the configuration file for the VisDrone dataset in Ultralytics?
The configuration file for the VisDrone dataset, `VisDrone.yaml`, can be found in the Ultralytics repository at [VisDrone.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/VisDrone.yaml).
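
If you want to inspect the config programmatically (download URL, paths, class names), you can read the copy bundled with the installed package. A minimal sketch, assuming `ultralytics.utils.ROOT` points at the installed `ultralytics` package directory, as in current releases:

```python
import yaml

from ultralytics.utils import ROOT

# VisDrone.yaml ships inside the package under cfg/datasets/
cfg = yaml.safe_load((ROOT / "cfg" / "datasets" / "VisDrone.yaml").read_text())

print(cfg["names"])  # class names defined for VisDrone
```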
### How can I cite the VisDrone dataset if I use it in my research?
If you use the VisDrone dataset in your research or development work, please cite the following paper:
!!! Quote "BibTeX"

    ```bibtex
    @ARTICLE{9573394,
      author={Zhu, Pengfei and Wen, Longyin and Du, Dawei and Bian, Xiao and Fan, Heng and Hu, Qinghua and Ling, Haibin},
      journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
      title={Detection and Tracking Meet Drones Challenge},
      year={2021},
      volume={},
      number={},
      pages={1-1},
      doi={10.1109/TPAMI.2021.3119563}}
    ```