Improve Docs dataset layout issues (#15696)
Co-authored-by: Francesco Mattioli <Francesco.mttl@gmail.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
This commit is contained in:
parent 90be5f7266
commit 62094bd03f
6 changed files with 83 additions and 57 deletions
@@ -153,6 +153,10 @@ Each subset comprises images categorized into 10 classes, with their annotations
 If you use the CIFAR-10 dataset in your research or development projects, make sure to cite the following paper:

+!!! Quote ""
+
+    === "BibTeX"
+
         ```bibtex
         @TECHREPORT{Krizhevsky09learningmultiple,
           author={Alex Krizhevsky},

@@ -59,6 +59,10 @@ ImageWoof dataset comes in three different sizes to accommodate various research
 To use these variants in your training, simply replace 'imagewoof' in the dataset argument with 'imagewoof320' or 'imagewoof160'. For example:

+!!! Example "Example"
+
+    === "Python"
+
         ```python
         from ultralytics import YOLO

@@ -72,6 +76,13 @@ model.train(data="imagewoof320", epochs=100, imgsz=224)
         model.train(data="imagewoof160", epochs=100, imgsz=224)
         ```

+    === "CLI"
+
+        ```bash
+        # Load a pretrained model and train on the small-sized dataset
+        yolo classify train model=yolov8n-cls.pt data=imagewoof320 epochs=100 imgsz=224
+        ```
+
 It's important to note that using smaller images will likely yield lower performance in terms of classification accuracy. However, it's an excellent way to iterate quickly in the early stages of model development and prototyping.

 ## Sample Images and Annotations

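The ImageWoof hunks above describe swapping 'imagewoof' for 'imagewoof320' or 'imagewoof160' to trade accuracy for iteration speed. As a rough illustration of that workflow (not part of this diff), the sketch below prototypes on the low-resolution variant before a full-resolution run; it reuses only the `YOLO` training API already shown in the hunks, and the epoch counts and image sizes are placeholder values.

```python
from ultralytics import YOLO

# Illustrative sketch: sanity-check a setup on the small ImageWoof variant,
# then commit to a longer run on the full-resolution dataset.
model = YOLO("yolov8n-cls.pt")  # pretrained classification checkpoint

# Quick prototyping pass on 160px images (placeholder epoch count)
model.train(data="imagewoof160", epochs=10, imgsz=160)

# Full training run once the configuration looks right
model.train(data="imagewoof", epochs=100, imgsz=224)
```
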
@@ -203,7 +203,7 @@ The **Roboflow 100** dataset is accessible on [GitHub](https://github.com/robofl
 When using the Roboflow 100 dataset in your research, ensure to properly cite it. Here is the recommended citation:

-!!! Quote
+!!! Quote ""

     === "BibTeX"

@@ -159,7 +159,9 @@ The configuration file for the VisDrone dataset, `VisDrone.yaml`, can be found i
 If you use the VisDrone dataset in your research or development work, please cite the following paper:

-!!! Quote "BibTeX"
+!!! Quote ""
+
+    === "BibTeX"

         ```bibtex
         @ARTICLE{9573394,

@@ -170,5 +172,6 @@ If you use the VisDrone dataset in your research or development work, please cit
           volume={},
           number={},
           pages={1-1},
-          doi={10.1109/TPAMI.2021.3119563}}
+          doi={10.1109/TPAMI.2021.3119563}
+        }
         ```

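The two VisDrone hunks above adjust the citation block on the page whose hunk header also references the `VisDrone.yaml` configuration file. For readers following along, a minimal training sketch against that config (not part of this change; the model choice and hyperparameters are assumptions) would mirror the pattern used elsewhere in these docs:

```python
from ultralytics import YOLO

# Minimal sketch: train a detection model on VisDrone via its YAML config.
# "yolov8n.pt", epochs and imgsz are placeholder choices, not recommendations.
model = YOLO("yolov8n.pt")  # pretrained detection checkpoint
results = model.train(data="VisDrone.yaml", epochs=100, imgsz=640)
```
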
@@ -135,6 +135,10 @@ Ultralytics YOLO offers advanced real-time object detection, segmentation, and c
 If you incorporate the Crack Segmentation Dataset into your research, please use the following BibTeX reference:

+!!! Quote ""
+
+    === "BibTeX"
+
         ```bibtex
         @misc{ crack-bphdr_dataset,
             title = { crack Dataset },

@@ -99,7 +99,11 @@ The [Roboflow Package Segmentation Dataset](https://universe.roboflow.com/factor
 ### How do I train an Ultralytics YOLOv8 model on the Package Segmentation Dataset?

-You can train an Ultralytics YOLOv8n model using both Python and CLI methods. For Python, use the snippet below:
+You can train an Ultralytics YOLOv8n model using both Python and CLI methods. Use the snippets below:

+!!! Example "Train Example"
+
+    === "Python"
+
         ```python
         from ultralytics import YOLO

@@ -111,7 +115,7 @@ model = YOLO("yolov8n-seg.pt") # load a pretrained model
         results = model.train(data="package-seg.yaml", epochs=100, imgsz=640)
         ```

-For CLI:
+    === "CLI"

         ```bash
         # Start training from a pretrained *.pt model

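The package segmentation hunks above convert the standalone CLI snippet into a tabbed example alongside the Python one. As an illustrative follow-up (not part of this diff), the sketch below continues from the documented training call to validation and inference; `model.val()` and `model.predict()` are standard Ultralytics `YOLO` methods, and the image path is a hypothetical placeholder.

```python
from ultralytics import YOLO

# Sketch: train on the package segmentation dataset, then evaluate and predict.
model = YOLO("yolov8n-seg.pt")  # pretrained segmentation checkpoint
results = model.train(data="package-seg.yaml", epochs=100, imgsz=640)

# Evaluate the trained weights on the validation split
metrics = model.val()

# Run inference on a sample image (placeholder path)
predictions = model.predict("path/to/package.jpg", imgsz=640)
```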