Update to lowercase MkDocs admonitions (#15990)
Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
parent ce24c7273e
commit c2b647a768
133 changed files with 529 additions and 521 deletions
@@ -60,7 +60,7 @@ The FastSAM models are easy to integrate into your Python applications. Ultralyt
 To perform object detection on an image, use the `predict` method as shown below:
-!!! Example
+!!! example
 === "Python"

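For context, the `predict` example elided from this hunk follows the standard `ultralytics` FastSAM API. The sketch below is a minimal, hedged reconstruction; the weight file and image path are illustrative:

```python
from ultralytics import FastSAM

# Load a FastSAM model (illustrative weight file; downloads automatically if missing)
model = FastSAM("FastSAM-s.pt")

# Run segment-everything inference on an image
results = model("path/to/bus.jpg", device="cpu", retina_masks=True, imgsz=1024, conf=0.4, iou=0.9)
```
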
@@ -98,7 +98,7 @@ To perform object detection on an image, use the `predict` method as shown below
 This snippet demonstrates the simplicity of loading a pre-trained model and running a prediction on an image.
-!!! Example "FastSAMPredictor example"
+!!! example "FastSAMPredictor example"
 This way you can run inference on image and get all the segment `results` once and run prompts inference multiple times without running inference multiple times.

@@ -120,7 +120,7 @@ This snippet demonstrates the simplicity of loading a pre-trained model and runn
 text_results = predictor.prompt(everything_results, texts="a photo of a dog")
 ```
-!!! Note
+!!! note
 All the returned `results` in above examples are [Results](../modes/predict.md#working-with-results) object which allows access predicted masks and source image easily.

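The `predictor.prompt(...)` context line above belongs to the FastSAMPredictor workflow: segment everything once, then query the cached results with box, point, or text prompts. A hedged sketch of that workflow, assuming the `ultralytics.models.fastsam.FastSAMPredictor` API with illustrative settings and paths:

```python
from ultralytics.models.fastsam import FastSAMPredictor

# Configure the predictor once (illustrative settings)
overrides = dict(conf=0.25, task="segment", mode="predict", model="FastSAM-s.pt", save=False, imgsz=1024)
predictor = FastSAMPredictor(overrides=overrides)

# Segment everything in a single inference pass
everything_results = predictor("path/to/bus.jpg")

# Re-use those results for multiple prompt queries without re-running inference
bbox_results = predictor.prompt(everything_results, bboxes=[[200, 200, 300, 300]])
point_results = predictor.prompt(everything_results, points=[200, 200])
text_results = predictor.prompt(everything_results, texts="a photo of a dog")
```
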
@@ -128,7 +128,7 @@ This snippet demonstrates the simplicity of loading a pre-trained model and runn
 Validation of the model on a dataset can be done as follows:
-!!! Example
+!!! example
 === "Python"

@@ -155,7 +155,7 @@ Please note that FastSAM only supports detection and segmentation of a single cl
 To perform object tracking on an image, use the `track` method as shown below:
-!!! Example
+!!! example
 === "Python"

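The `track` method referenced in this hunk reuses the standard Ultralytics tracking interface. A minimal sketch, with an illustrative weight file and video path:

```python
from ultralytics import FastSAM

# Load a FastSAM model (illustrative weight file)
model = FastSAM("FastSAM-s.pt")

# Track segmented objects through a video (illustrative source path)
results = model.track(source="path/to/video.mp4", imgsz=640)
```
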
@@ -241,7 +241,7 @@ Additionally, you can try FastSAM through a [Colab demo](https://colab.research.
 We would like to acknowledge the FastSAM authors for their significant contributions in the field of real-time instance segmentation:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"

@@ -45,7 +45,7 @@ This example provides simple YOLO training and inference examples. For full docu
 Note the below example is for YOLOv8 [Detect](../tasks/detect.md) models for object detection. For additional supported tasks see the [Segment](../tasks/segment.md), [Classify](../tasks/classify.md) and [Pose](../tasks/pose.md) docs.
-!!! Example
+!!! example
 === "Python"

@@ -107,7 +107,7 @@ Ultralytics YOLOv8 offers enhanced capabilities such as real-time object detecti
 Training a YOLOv8 model on custom data can be easily accomplished using Ultralytics' libraries. Here's a quick example:
-!!! Example
+!!! example
 === "Python"

@@ -69,7 +69,7 @@ You can download the model [here](https://github.com/ChaoningZhang/MobileSAM/blo
 ### Point Prompt
-!!! Example
+!!! example
 === "Python"

@@ -85,7 +85,7 @@ You can download the model [here](https://github.com/ChaoningZhang/MobileSAM/blo
 ### Box Prompt
-!!! Example
+!!! example
 === "Python"

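Both the Point Prompt and Box Prompt sections touched by these hunks drive MobileSAM through the shared `SAM` interface. A minimal sketch, assuming the `mobile_sam.pt` weights and illustrative coordinates:

```python
from ultralytics import SAM

# MobileSAM loads through the same SAM class as the full-size models
model = SAM("mobile_sam.pt")

# Point prompt: segment the object at pixel (900, 370); label 1 marks a foreground point
model.predict("ultralytics/assets/zidane.jpg", points=[900, 370], labels=[1])

# Box prompt: segment within an [x1, y1, x2, y2] bounding box
model.predict("ultralytics/assets/zidane.jpg", bboxes=[439, 437, 524, 709])
```
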
@@ -105,7 +105,7 @@ We have implemented `MobileSAM` and `SAM` using the same API. For more usage inf
 If you find MobileSAM useful in your research or development work, please consider citing our paper:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"

@@ -40,7 +40,7 @@ The Ultralytics Python API provides pre-trained PaddlePaddle RT-DETR models with
 This example provides simple RT-DETR training and inference examples. For full documentation on these and other [modes](../modes/index.md) see the [Predict](../modes/predict.md), [Train](../modes/train.md), [Val](../modes/val.md) and [Export](../modes/export.md) docs pages.
-!!! Example
+!!! example
 === "Python"

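The RT-DETR example behind this hunk follows the usual load/train/predict pattern of the `ultralytics` API. A hedged sketch with illustrative weights and the small COCO8 example dataset:

```python
from ultralytics import RTDETR

# Load a COCO-pretrained RT-DETR-l model (illustrative weight file)
model = RTDETR("rtdetr-l.pt")

# Optionally display model information
model.info()

# Fine-tune on the COCO8 example dataset
results = model.train(data="coco8.yaml", epochs=100, imgsz=640)

# Run inference on an image
results = model("path/to/bus.jpg")
```
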
@@ -83,7 +83,7 @@ This table presents the model types, the specific pre-trained weights, the tasks
 If you use Baidu's RT-DETR in your research or development work, please cite the [original paper](https://arxiv.org/abs/2304.08069):
-!!! Quote ""
+!!! quote ""
 === "BibTeX"

@@ -110,7 +110,7 @@ Baidu's RT-DETR (Real-Time Detection Transformer) is an advanced real-time objec
 You can leverage Ultralytics Python API to use pre-trained PaddlePaddle RT-DETR models. For instance, to load an RT-DETR-l model pre-trained on COCO val2017 and achieve high FPS on T4 GPU, you can utilize the following example:
-!!! Example
+!!! example
 === "Python"

@@ -116,7 +116,7 @@ SAM 2 can be utilized across a broad spectrum of tasks, including real-time vide
 #### Segment with Prompts
-!!! Example "Segment with Prompts"
+!!! example "Segment with Prompts"
 Use prompts to segment specific objects in images or videos.

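The "Segment with Prompts" example changed here drives SAM 2 through the `SAM` class with box or point prompts. A minimal sketch, assuming `sam2_b.pt` weights and illustrative coordinates:

```python
from ultralytics import SAM

# Load a SAM 2 model (illustrative weight file)
model = SAM("sam2_b.pt")

# Segment inside an [x1, y1, x2, y2] box prompt
results = model("path/to/image.jpg", bboxes=[100, 100, 200, 200])

# Segment around a point prompt; label 1 marks a foreground point
results = model("path/to/image.jpg", points=[900, 370], labels=[1])
```
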
@@ -140,7 +140,7 @@ SAM 2 can be utilized across a broad spectrum of tasks, including real-time vide
 #### Segment Everything
-!!! Example "Segment Everything"
+!!! example "Segment Everything"
 Segment the entire image or video content without specific prompts.

@@ -185,7 +185,7 @@ This comparison shows the order-of-magnitude differences in the model sizes and
 Tests run on a 2023 Apple M2 Macbook with 16GB of RAM using `torch==2.3.1` and `ultralytics==8.3.82`. To reproduce this test:
-!!! Example
+!!! example
 === "Python"

@@ -217,7 +217,7 @@ Auto-annotation is a powerful feature of SAM 2, enabling users to generate segme
 To auto-annotate your dataset using SAM 2, follow this example:
-!!! Example "Auto-Annotation Example"
+!!! example "Auto-Annotation Example"
 ```python
 from ultralytics.data.annotator import auto_annotate

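The `auto_annotate` call shown in this hunk pairs a detection model with SAM 2 for mask generation. A hedged sketch of the full call, with illustrative paths and weight files:

```python
from ultralytics.data.annotator import auto_annotate

# Detect objects with a YOLO model, then generate segmentation masks with SAM 2.
# Labels are written to an automatically created output folder next to the image directory.
auto_annotate(data="path/to/images", det_model="yolov8x.pt", sam_model="sam2_b.pt")
```
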
@@ -248,7 +248,7 @@ Despite its strengths, SAM 2 has certain limitations:
 If SAM 2 is a crucial part of your research or development work, please cite it using the following reference:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"

@@ -281,7 +281,7 @@ For more details on SAM 2's architecture and capabilities, explore the [SAM 2 re
 SAM 2 can be utilized for real-time video segmentation by leveraging its promptable interface and real-time inference capabilities. Here's a basic example:
-!!! Example "Segment with Prompts"
+!!! example "Segment with Prompts"
 Use prompts to segment specific objects in images or videos.

@@ -40,7 +40,7 @@ The Segment Anything Model can be employed for a multitude of downstream tasks t
 ### SAM prediction example
-!!! Example "Segment with prompts"
+!!! example "Segment with prompts"
 Segment image with given prompts.

@@ -62,7 +62,7 @@ The Segment Anything Model can be employed for a multitude of downstream tasks t
 results = model("ultralytics/assets/zidane.jpg", points=[900, 370], labels=[1])
 ```
-!!! Example "Segment everything"
+!!! example "Segment everything"
 Segment the whole image.

@@ -90,7 +90,7 @@ The Segment Anything Model can be employed for a multitude of downstream tasks t
 - The logic here is to segment the whole image if you don't pass any prompts(bboxes/points/masks).
-!!! Example "SAMPredictor example"
+!!! example "SAMPredictor example"
 This way you can set image once and run prompts inference multiple times without running image encoder multiple times.

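The SAMPredictor workflow described in this hunk caches the image embedding so that repeated prompt queries are cheap. A minimal sketch, assuming `ultralytics.models.sam.Predictor` with illustrative settings:

```python
from ultralytics.models.sam import Predictor as SAMPredictor

# Create the predictor with inference settings (illustrative values)
overrides = dict(conf=0.25, task="segment", mode="predict", imgsz=1024, model="mobile_sam.pt")
predictor = SAMPredictor(overrides=overrides)

# Encode the image once; the image encoder runs a single time
predictor.set_image("ultralytics/assets/zidane.jpg")

# Run several prompt inferences against the cached image features
results = predictor(bboxes=[439, 437, 524, 709])
results = predictor(points=[900, 370], labels=[1])

# Clear the cached image when finished
predictor.reset_image()
```
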
@@ -128,7 +128,7 @@ The Segment Anything Model can be employed for a multitude of downstream tasks t
 results = predictor(source="ultralytics/assets/zidane.jpg", crop_n_layers=1, points_stride=64)
 ```
-!!! Note
+!!! note
 All the returned `results` in above examples are [Results](../modes/predict.md#working-with-results) object which allows access predicted masks and source image easily.

@@ -149,7 +149,7 @@ This comparison shows the order-of-magnitude differences in the model sizes and
 Tests run on a 2023 Apple M2 Macbook with 16GB of RAM. To reproduce this test:
-!!! Example
+!!! example
 === "Python"

@@ -181,7 +181,7 @@ Auto-annotation is a key feature of SAM, allowing users to generate a [segmentat
 To auto-annotate your dataset with the Ultralytics framework, use the `auto_annotate` function as shown below:
-!!! Example
+!!! example
 === "Python"

@@ -207,7 +207,7 @@ Auto-annotation with pre-trained models can dramatically cut down the time and e
 If you find SAM useful in your research or development work, please consider citing our paper:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"

@@ -43,7 +43,7 @@ The following examples show how to use YOLO-NAS models with the `ultralytics` pa
 In this example we validate YOLO-NAS-s on the COCO8 dataset.
-!!! Example
+!!! example
 This example provides simple inference and validation code for YOLO-NAS. For handling inference results see [Predict](../modes/predict.md) mode. For using YOLO-NAS with additional modes see [Val](../modes/val.md) and [Export](../modes/export.md). YOLO-NAS on the `ultralytics` package does not support training.

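The YOLO-NAS validation example behind this hunk uses the `NAS` class; as the context line notes, training is not supported through the `ultralytics` package. A hedged sketch with illustrative weights:

```python
from ultralytics import NAS

# Load a pre-trained YOLO-NAS-s model (illustrative weight file)
model = NAS("yolo_nas_s.pt")

# Validate on the small COCO8 example dataset
results = model.val(data="coco8.yaml")

# Run inference on an image
results = model("path/to/bus.jpg")
```
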
@@ -99,7 +99,7 @@ Below is a detailed overview of each model, including links to their pre-trained
 If you employ YOLO-NAS in your research or development work, please cite SuperGradients:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"

@@ -43,7 +43,7 @@ YOLO-World tackles the challenges faced by traditional Open-Vocabulary detection
 This section details the models available with their specific pre-trained weights, the tasks they support, and their compatibility with various operating modes such as [Inference](../modes/predict.md), [Validation](../modes/val.md), [Training](../modes/train.md), and [Export](../modes/export.md), denoted by ✅ for supported modes and ❌ for unsupported modes.
-!!! Note
+!!! note
 All the YOLOv8-World weights have been directly migrated from the official [YOLO-World](https://github.com/AILab-CVC/YOLO-World) repository, highlighting their excellent contributions.

@@ -77,13 +77,13 @@ The YOLO-World models are easy to integrate into your Python applications. Ultra
 ### Train Usage
-!!! Tip "Tip"
+!!! tip "Tip"
 We strongly recommend to use `yolov8-worldv2` model for custom training, because it supports deterministic training and also easy to export other formats i.e onnx/tensorrt.
 Object detection is straightforward with the `train` method, as illustrated below:
-!!! Example
+!!! example
 === "Python"

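The Train Usage section touched here recommends the `yolov8-worldv2` weights for custom training. A minimal sketch of the `train` call described, with an illustrative dataset:

```python
from ultralytics import YOLOWorld

# Load a YOLOv8s-worldv2 model, the variant recommended above for custom training
model = YOLOWorld("yolov8s-worldv2.pt")

# Fine-tune on the small COCO8 example dataset
results = model.train(data="coco8.yaml", epochs=100, imgsz=640)

# Run inference with the fine-tuned model
results = model("path/to/bus.jpg")
```
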
@@ -113,7 +113,7 @@ Object detection is straightforward with the `train` method, as illustrated belo
 Object detection is straightforward with the `predict` method, as illustrated below:
-!!! Example
+!!! example
 === "Python"

@@ -143,7 +143,7 @@ This snippet demonstrates the simplicity of loading a pre-trained model and runn
 Model validation on a dataset is streamlined as follows:
-!!! Example
+!!! example
 === "Python"

@@ -168,7 +168,7 @@ Model validation on a dataset is streamlined as follows:
 Object tracking with YOLO-World model on a video/images is streamlined as follows:
-!!! Example
+!!! example
 === "Python"

@@ -189,7 +189,7 @@ Object tracking with YOLO-World model on a video/images is streamlined as follow
 yolo track model=yolov8s-world.pt imgsz=640 source="path/to/video/file.mp4"
 ```
-!!! Note
+!!! note
 The YOLO-World models provided by Ultralytics come pre-configured with [COCO dataset](../datasets/detect/coco.md) categories as part of their offline vocabulary, enhancing efficiency for immediate application. This integration allows the YOLOv8-World models to directly recognize and predict the 80 standard categories defined in the COCO dataset without requiring additional setup or customization.

@@ -201,7 +201,7 @@ The YOLO-World framework allows for the dynamic specification of classes through
 For instance, if your application only requires detecting 'person' and 'bus' objects, you can specify these classes directly:
-!!! Example
+!!! example
 === "Custom Inference Prompts"

@@ -223,7 +223,7 @@ For instance, if your application only requires detecting 'person' and 'bus' obj
 You can also save a model after setting custom classes. By doing this you create a version of the YOLO-World model that is specialized for your specific use case. This process embeds your custom class definitions directly into the model file, making the model ready to use with your specified classes without further adjustments. Follow these steps to save and load your custom YOLOv8 model:
-!!! Example
+!!! example
 === "Persisting Models with Custom Vocabulary"

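The two hunks above cover setting custom classes and persisting them into the model file. A hedged sketch of that workflow, with illustrative file names:

```python
from ultralytics import YOLO, YOLOWorld

# Initialize a YOLO-World model and restrict its vocabulary to custom classes
model = YOLOWorld("yolov8s-world.pt")
model.set_classes(["person", "bus"])

# Save a copy with the custom vocabulary embedded (illustrative file name)
model.save("custom_yolov8s.pt")

# Later, load the specialized model; it now predicts only the embedded classes
model = YOLO("custom_yolov8s.pt")
results = model.predict("path/to/image.jpg")
```
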
@@ -286,11 +286,11 @@ This approach provides a powerful means of customizing state-of-the-art object d
 ### Launch training from scratch
-!!! Note
+!!! note
 `WorldTrainerFromScratch` is highly customized to allow training yolo-world models on both detection datasets and grounding datasets simultaneously. More details please checkout [ultralytics.model.yolo.world.train_world.py](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/models/yolo/world/train_world.py).
-!!! Example
+!!! example
 === "Python"

@@ -322,7 +322,7 @@ This approach provides a powerful means of customizing state-of-the-art object d
 We extend our gratitude to the [Tencent AILab Computer Vision Center](https://ai.tencent.com/) for their pioneering work in real-time open-vocabulary object detection with YOLO-World:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"

@@ -140,7 +140,7 @@ Here is a detailed comparison of YOLOv10 variants with other state-of-the-art mo
 For predicting new images with YOLOv10:
-!!! Example
+!!! example
 === "Python"

@@ -166,7 +166,7 @@ For predicting new images with YOLOv10:
 For training YOLOv10 on a custom dataset:
-!!! Example
+!!! example
 === "Python"

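The prediction and training hunks above both use the generic `YOLO` class with YOLOv10 weights. A minimal sketch with illustrative weights and the COCO8 example dataset:

```python
from ultralytics import YOLO

# Load a pre-trained YOLOv10n model (illustrative weight file)
model = YOLO("yolov10n.pt")

# Predict on a new image and display the result
results = model("path/to/bus.jpg")
results[0].show()

# Fine-tune on the small COCO8 example dataset
model.train(data="coco8.yaml", epochs=100, imgsz=640)
```
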
@@ -225,7 +225,7 @@ YOLOv10 sets a new standard in real-time object detection by addressing the shor
 We would like to acknowledge the YOLOv10 authors from [Tsinghua University](https://www.tsinghua.edu.cn/en/) for their extensive research and significant contributions to the [Ultralytics](https://www.ultralytics.com/) framework:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"

@@ -252,7 +252,7 @@ YOLOv10, developed by researchers at [Tsinghua University](https://www.tsinghua.
 For easy inference, you can use the Ultralytics YOLO Python library or the command line interface (CLI). Below are examples of predicting new images using YOLOv10:
-!!! Example
+!!! example
 === "Python"

@@ -44,7 +44,7 @@ This table provides an at-a-glance view of the capabilities of each YOLOv3 varia
 This example provides simple YOLOv3 training and inference examples. For full documentation on these and other [modes](../modes/index.md) see the [Predict](../modes/predict.md), [Train](../modes/train.md), [Val](../modes/val.md) and [Export](../modes/export.md) docs pages.
-!!! Example
+!!! example
 === "Python"

@@ -82,7 +82,7 @@ This example provides simple YOLOv3 training and inference examples. For full do
 If you use YOLOv3 in your research, please cite the original YOLO papers and the Ultralytics YOLOv3 repository:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"

@@ -107,7 +107,7 @@ YOLOv3 is the third iteration of the YOLO (You Only Look Once) object detection
 Training a YOLOv3 model with Ultralytics is straightforward. You can train the model using either Python or CLI:
-!!! Example
+!!! example
 === "Python"

@@ -138,7 +138,7 @@ YOLOv3u improves upon YOLOv3 and YOLOv3-Ultralytics by incorporating the anchor-
 You can perform inference using YOLOv3 models by either Python scripts or CLI commands:
-!!! Example
+!!! example
 === "Python"

@@ -169,7 +169,7 @@ YOLOv3, YOLOv3-Ultralytics, and YOLOv3u primarily support object detection tasks
 If you use YOLOv3 in your research, please cite the original YOLO papers and the Ultralytics YOLOv3 repository. Example BibTeX citation:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"

@@ -52,7 +52,7 @@ YOLOv4 is a powerful and efficient object detection model that strikes a balance
 We would like to acknowledge the YOLOv4 authors for their significant contributions in the field of real-time object detection:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"

@@ -32,7 +32,7 @@ This table provides a detailed overview of the YOLOv5u model variants, highlight
 ## Performance Metrics
-!!! Performance
+!!! performance
 === "Detection"

@@ -56,7 +56,7 @@ This table provides a detailed overview of the YOLOv5u model variants, highlight
 This example provides simple YOLOv5 training and inference examples. For full documentation on these and other [modes](../modes/index.md) see the [Predict](../modes/predict.md), [Train](../modes/train.md), [Val](../modes/val.md) and [Export](../modes/export.md) docs pages.
-!!! Example
+!!! example
 === "Python"

@@ -94,7 +94,7 @@ This example provides simple YOLOv5 training and inference examples. For full do
 If you use YOLOv5 or YOLOv5u in your research, please cite the Ultralytics YOLOv5 repository as follows:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"

@@ -135,7 +135,7 @@ The performance metrics of YOLOv5u models vary depending on the platform and har
 You can train a YOLOv5u model by loading a pre-trained model and running the training command with your dataset. Here's a quick example:
-!!! Example
+!!! example
 === "Python"

@@ -36,7 +36,7 @@ YOLOv6 also provides quantized models for different precisions and models optimi
 This example provides simple YOLOv6 training and inference examples. For full documentation on these and other [modes](../modes/index.md) see the [Predict](../modes/predict.md), [Train](../modes/train.md), [Val](../modes/val.md) and [Export](../modes/export.md) docs pages.
-!!! Example
+!!! example
 === "Python"

@@ -88,7 +88,7 @@ This table provides a detailed overview of the YOLOv6 model variants, highlighti
 We would like to acknowledge the authors for their significant contributions in the field of real-time object detection:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"

@@ -119,7 +119,7 @@ The Bi-directional Concatenation (BiC) module in YOLOv6 enhances localization si
 You can train a YOLOv6 model using Ultralytics with simple Python or CLI commands. For instance:
-!!! Example
+!!! example
 === "Python"

@@ -98,7 +98,7 @@ We regret any inconvenience this may cause and will strive to update this docume
 We would like to acknowledge the YOLOv7 authors for their significant contributions in the field of real-time object detection:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"

@@ -48,7 +48,7 @@ This table provides an overview of the YOLOv8 model variants, highlighting their
 ## Performance Metrics
-!!! Performance
+!!! performance
 === "Detection (COCO)"

@@ -129,7 +129,7 @@ This example provides simple YOLOv8 training and inference examples. For full do
 Note the below example is for YOLOv8 [Detect](../tasks/detect.md) models for object detection. For additional supported tasks see the [Segment](../tasks/segment.md), [Classify](../tasks/classify.md), [OBB](../tasks/obb.md) docs and [Pose](../tasks/pose.md) docs.
-!!! Example
+!!! example
 === "Python"

@@ -167,7 +167,7 @@ Note the below example is for YOLOv8 [Detect](../tasks/detect.md) models for obj
 If you use the YOLOv8 model or any other software from this repository in your work, please cite it using the following format:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"

@@ -203,7 +203,7 @@ YOLOv8 models achieve state-of-the-art performance across various benchmarking d
 Training a YOLOv8 model can be done using either Python or CLI. Below are examples for training a model using a COCO-pretrained YOLOv8 model on the COCO8 dataset for 100 epochs:
-!!! Example
+!!! example
 === "Python"

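The training example referenced in this hunk (a COCO-pretrained YOLOv8 model trained on COCO8 for 100 epochs) reduces to a short sketch; the weight file is illustrative:

```python
from ultralytics import YOLO

# Load a COCO-pretrained YOLOv8n model
model = YOLO("yolov8n.pt")

# Train on the COCO8 example dataset for 100 epochs
results = model.train(data="coco8.yaml", epochs=100, imgsz=640)
```

The equivalent CLI form is `yolo train model=yolov8n.pt data=coco8.yaml epochs=100 imgsz=640`.
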
@@ -229,7 +229,7 @@ For further details, visit the [Training](../modes/train.md) documentation.
 Yes, YOLOv8 models can be benchmarked for performance in terms of speed and accuracy across various export formats. You can use PyTorch, ONNX, TensorRT, and more for benchmarking. Below are example commands for benchmarking using Python and CLI:
-!!! Example
+!!! example
 === "Python"

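The benchmarking example behind this hunk uses the Ultralytics benchmark utility. The sketch below assumes `ultralytics.utils.benchmarks.benchmark` with illustrative arguments:

```python
from ultralytics.utils.benchmarks import benchmark

# Benchmark YOLOv8n across export formats; device=0 assumes a CUDA GPU (use device="cpu" otherwise)
benchmark(model="yolov8n.pt", data="coco8.yaml", imgsz=640, half=False, device=0)
```
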
@@ -128,7 +128,7 @@ YOLOv9 represents a pivotal development in real-time object detection, offering
 This example provides simple YOLOv9 training and inference examples. For full documentation on these and other [modes](../modes/index.md) see the [Predict](../modes/predict.md), [Train](../modes/train.md), [Val](../modes/val.md) and [Export](../modes/export.md) docs pages.
-!!! Example
+!!! example
 === "Python"

@@ -184,7 +184,7 @@ This table provides a detailed overview of the YOLOv9 model variants, highlighti
 We would like to acknowledge the YOLOv9 authors for their significant contributions in the field of real-time object detection:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"