Update https://docs.ultralytics.com/models (#6513)
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
parent 0c4e97443b
commit 16a13a1ce0
178 changed files with 14224 additions and 561 deletions
@@ -61,7 +61,7 @@ The example showcases the variety and complexity of the objects in the Caltech-101 dataset

 If you use the Caltech-101 dataset in your research or development work, please cite the following paper:

-!!! Note ""
+!!! Quote ""

     === "BibTeX"
@@ -61,7 +61,7 @@ The example showcases the diversity and complexity of the objects in the Caltech-256 dataset

 If you use the Caltech-256 dataset in your research or development work, please cite the following paper:

-!!! Note ""
+!!! Quote ""

     === "BibTeX"
@@ -64,7 +64,7 @@ The example showcases the variety and complexity of the objects in the CIFAR-10 dataset

 If you use the CIFAR-10 dataset in your research or development work, please cite the following paper:

-!!! Note ""
+!!! Quote ""

     === "BibTeX"
@@ -64,7 +64,7 @@ The example showcases the variety and complexity of the objects in the CIFAR-100 dataset

 If you use the CIFAR-100 dataset in your research or development work, please cite the following paper:

-!!! Note ""
+!!! Quote ""

     === "BibTeX"
@@ -64,7 +64,7 @@ The example showcases the variety and complexity of the images in the ImageNet dataset

 If you use the ImageNet dataset in your research or development work, please cite the following paper:

-!!! Note ""
+!!! Quote ""

     === "BibTeX"
@@ -59,7 +59,7 @@ The example showcases the variety and complexity of the images in the ImageNet10 dataset

 If you use the ImageNet10 dataset in your research or development work, please cite the original ImageNet paper:

-!!! Note ""
+!!! Quote ""

     === "BibTeX"
@@ -80,7 +80,7 @@ In this example, the `train` directory contains subdirectories for each class in the dataset

 ## Usage

-!!! Example ""
+!!! Example

     === "Python"
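The Python tab under this Usage section is collapsed out of the diff view. As a rough sketch of the kind of snippet that typically sits there, training a classifier on a folder-structured dataset might look like the following; the `yolov8n-cls.pt` weights name and the `path/to/dataset` root are assumptions, not taken from this commit:

```python
from ultralytics import YOLO

# Load a pretrained YOLOv8 classification model (assumed weights name)
model = YOLO("yolov8n-cls.pt")

# Train on a dataset root containing train/ and val/ class subdirectories
# ("path/to/dataset" is a placeholder, not a path from this commit)
results = model.train(data="path/to/dataset", epochs=100, imgsz=224)
```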
@@ -69,7 +69,7 @@

 If you use the MNIST dataset in your research or development work, please cite the following paper:

-!!! Note ""
+!!! Quote ""

     === "BibTeX"
@@ -80,7 +80,7 @@ The example showcases the variety and complexity of the data in the Argoverse dataset

 If you use the Argoverse dataset in your research or development work, please cite the following paper:

-!!! Note ""
+!!! Quote ""

     === "BibTeX"
@@ -76,7 +76,7 @@ The example showcases the variety and complexity of the images in the COCO dataset

 If you use the COCO dataset in your research or development work, please cite the following paper:

-!!! Note ""
+!!! Quote ""

     === "BibTeX"
@@ -62,7 +62,7 @@ The example showcases the variety and complexity of the images in the COCO8 dataset

 If you use the COCO dataset in your research or development work, please cite the following paper:

-!!! Note ""
+!!! Quote ""

     === "BibTeX"
@@ -75,7 +75,7 @@ The example showcases the variety and complexity of the data in the Global Wheat Head Dataset

 If you use the Global Wheat Head Dataset in your research or development work, please cite the following paper:

-!!! Note ""
+!!! Quote ""

     === "BibTeX"
@@ -48,7 +48,7 @@ When using the Ultralytics YOLO format, organize your training and validation images

 Here's how you can use these formats to train your model:

-!!! Example ""
+!!! Example

     === "Python"
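The training code itself is not visible in this hunk; a minimal sketch of the usual Python-tab content, assuming a YOLOv8 detection model and the `coco8.yaml` sample config bundled with the ultralytics package, would be:

```python
from ultralytics import YOLO

# Load a pretrained YOLOv8 detection model
model = YOLO("yolov8n.pt")

# Train using a dataset YAML written in the Ultralytics YOLO detection format
results = model.train(data="coco8.yaml", epochs=100, imgsz=640)
```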
@@ -93,7 +93,7 @@ If you have your own dataset and would like to use it for training detection models

 You can easily convert labels from the popular COCO dataset format to the YOLO format using the following code snippet:

-!!! Example ""
+!!! Example

     === "Python"
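The conversion snippet is elided from the diff; a sketch using the `convert_coco` helper from `ultralytics.data.converter` (the annotations path is a placeholder) might look like this:

```python
from ultralytics.data.converter import convert_coco

# Convert COCO-format JSON annotations into YOLO *.txt label files
# ("path/to/coco/annotations/" is a placeholder directory of COCO JSON files)
convert_coco(labels_dir="path/to/coco/annotations/")
```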
@@ -75,7 +75,7 @@ The example showcases the variety and complexity of the data in the Objects365 dataset

 If you use the Objects365 dataset in your research or development work, please cite the following paper:

-!!! Note ""
+!!! Quote ""

     === "BibTeX"
@@ -94,7 +94,7 @@ Researchers can gain invaluable insights into the array of computer vision challenges

 For those employing Open Images V7 in their work, it's prudent to cite the relevant papers and acknowledge the creators:

-!!! Note ""
+!!! Quote ""

     === "BibTeX"
@@ -77,7 +77,7 @@ The example showcases the variety and complexity of the data in the SKU-110k dataset

 If you use the SKU-110k dataset in your research or development work, please cite the following paper:

-!!! Note ""
+!!! Quote ""

     === "BibTeX"
@@ -73,7 +73,7 @@ The example showcases the variety and complexity of the data in the VisDrone dataset

 If you use the VisDrone dataset in your research or development work, please cite the following paper:

-!!! Note ""
+!!! Quote ""

     === "BibTeX"
@@ -77,7 +77,7 @@ The example showcases the variety and complexity of the images in the VOC dataset

 If you use the VOC dataset in your research or development work, please cite the following paper:

-!!! Note ""
+!!! Quote ""

     === "BibTeX"
@@ -79,7 +79,7 @@ The example showcases the variety and complexity of the data in the xView dataset

 If you use the xView dataset in your research or development work, please cite the following paper:

-!!! Note ""
+!!! Quote ""

     === "BibTeX"
@@ -109,7 +109,7 @@ The dataset's richness offers invaluable insights into object detection challenges

 For those leveraging DOTA v2 in their endeavors, it's pertinent to cite the relevant research papers:

-!!! Note ""
+!!! Quote ""

     === "BibTeX"
@@ -32,7 +32,7 @@ An example of a `*.txt` label file for the above image, which contains an object

 To train a model using these OBB formats:

-!!! Example ""
+!!! Example

     === "Python"
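The training example under this tab is not shown in the hunk. As a sketch under stated assumptions, an OBB training call would follow the standard Ultralytics pattern; both the `yolov8n-obb.pt` weights name and the `dota8.yaml` sample config are assumptions rather than lines from this commit:

```python
from ultralytics import YOLO

# Load a YOLOv8 model with an oriented-bounding-box (OBB) head (assumed weights name)
model = YOLO("yolov8n-obb.pt")

# Train on a dataset YAML that uses the YOLO OBB label format
# ("dota8.yaml" is assumed to be a small DOTA-style sample config)
results = model.train(data="dota8.yaml", epochs=100, imgsz=640)
```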
@@ -69,7 +69,7 @@ For those looking to introduce their own datasets with oriented bounding boxes,

 Transitioning labels from the DOTA dataset format to the YOLO OBB format can be achieved with this script:

-!!! Example ""
+!!! Example

     === "Python"
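The script referenced here is collapsed in the diff. Assuming the converter helper exposed in `ultralytics.data.converter` (its availability at this commit is an assumption, and the dataset path is a placeholder), a sketch would be:

```python
from ultralytics.data.converter import convert_dota_to_yolo_obb

# Rewrite DOTA-style polygon annotations into YOLO OBB *.txt labels
# ("path/to/DOTA" is a placeholder for the DOTA dataset root)
convert_dota_to_yolo_obb("path/to/DOTA")
```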
@@ -77,7 +77,7 @@ The example showcases the variety and complexity of the images in the COCO-Pose dataset

 If you use the COCO-Pose dataset in your research or development work, please cite the following paper:

-!!! Note ""
+!!! Quote ""

     === "BibTeX"
@@ -62,7 +62,7 @@ The example showcases the variety and complexity of the images in the COCO8-Pose dataset

 If you use the COCO dataset in your research or development work, please cite the following paper:

-!!! Note ""
+!!! Quote ""

     === "BibTeX"
@@ -64,7 +64,7 @@ The `train` and `val` fields specify the paths to the directories containing the

 ## Usage

-!!! Example ""
+!!! Example

     === "Python"
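For context, a minimal sketch of what this pose-estimation Usage tab typically contains, assuming the `yolov8n-pose.pt` weights and the bundled `coco8-pose.yaml` sample (both assumptions, not shown in the hunk):

```python
from ultralytics import YOLO

# Load a pretrained YOLOv8 pose-estimation model (assumed weights name)
model = YOLO("yolov8n-pose.pt")

# Train on a pose dataset YAML such as the small COCO8-Pose sample
results = model.train(data="coco8-pose.yaml", epochs=100, imgsz=640)
```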
@@ -125,7 +125,7 @@ If you have your own dataset and would like to use it for training pose estimation models

 Ultralytics provides a convenient conversion tool to convert labels from the popular COCO dataset format to YOLO format:

-!!! Example ""
+!!! Example

     === "Python"
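The conversion call itself is elided; assuming it uses the same `convert_coco` helper as the detection docs with keypoints enabled (an assumption), a sketch would be:

```python
from ultralytics.data.converter import convert_coco

# Convert COCO keypoint annotations into YOLO pose labels
# (the annotations directory is a placeholder)
convert_coco(labels_dir="path/to/coco/annotations/", use_keypoints=True)
```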
@@ -76,7 +76,7 @@ The example showcases the variety and complexity of the images in the COCO-Seg dataset

 If you use the COCO-Seg dataset in your research or development work, please cite the original COCO paper and acknowledge the extension to COCO-Seg:

-!!! Note ""
+!!! Quote ""

     === "BibTeX"
@@ -62,7 +62,7 @@ The example showcases the variety and complexity of the images in the COCO8-Seg dataset

 If you use the COCO dataset in your research or development work, please cite the following paper:

-!!! Note ""
+!!! Quote ""

     === "BibTeX"
@@ -66,7 +66,7 @@ The `train` and `val` fields specify the paths to the directories containing the

 ## Usage

-!!! Example ""
+!!! Example

     === "Python"
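A minimal sketch of the segmentation Usage snippet this tab normally carries, assuming the `yolov8n-seg.pt` weights and the bundled `coco8-seg.yaml` sample config:

```python
from ultralytics import YOLO

# Load a pretrained YOLOv8 segmentation model
model = YOLO("yolov8n-seg.pt")

# Train on a segmentation dataset YAML such as the small COCO8-Seg sample
results = model.train(data="coco8-seg.yaml", epochs=100, imgsz=640)
```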
@@ -101,7 +101,7 @@ If you have your own dataset and would like to use it for training segmentation models

 You can easily convert labels from the popular COCO dataset format to the YOLO format using the following code snippet:

-!!! Example ""
+!!! Example

     === "Python"
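As with the detection page, the snippet is not visible in the hunk; a sketch with segment masks enabled (the path is a placeholder) might be:

```python
from ultralytics.data.converter import convert_coco

# Convert COCO instance-segmentation annotations into YOLO segment labels
convert_coco(labels_dir="path/to/coco/annotations/", use_segments=True)
```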
@@ -123,7 +123,7 @@ Auto-annotation is an essential feature that allows you to generate a segmentation dataset

 To auto-annotate your dataset using the Ultralytics framework, you can use the `auto_annotate` function as shown below:

-!!! Example ""
+!!! Example

     === "Python"
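The `auto_annotate` call is collapsed out of the diff; a sketch of its typical use, where the model names mirror the documented defaults and the images path is a placeholder, would be:

```python
from ultralytics.data.annotator import auto_annotate

# Generate segmentation labels by chaining a detection model with SAM
# ("path/to/images" is a placeholder; weights names are assumed defaults)
auto_annotate(data="path/to/images", det_model="yolov8x.pt", sam_model="sam_b.pt")
```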
@@ -12,7 +12,7 @@ Multi-Object Detector doesn't need standalone training and directly supports pre-trained

 ## Usage

-!!! Example ""
+!!! Example

     === "Python"
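Since the tracker Usage code is not shown in this hunk, here is a hedged sketch of the usual pattern: tracking reuses a pretrained detector via `model.track`. The video path and the `bytetrack.yaml` tracker config name are assumptions for illustration:

```python
from ultralytics import YOLO

# Trackers reuse a pretrained detector, so no standalone tracker training is needed
model = YOLO("yolov8n.pt")

# Run multi-object tracking on a video source (placeholder path, assumed tracker config)
results = model.track(source="path/to/video.mp4", tracker="bytetrack.yaml", show=True)
```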