Add Hindi हिन्दी and Arabic العربية Docs translations (#6428)
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
parent b6baae584c
commit 02bf8003a8
337 changed files with 6584 additions and 777 deletions
@@ -40,7 +40,7 @@ The FastSAM models are easy to integrate into your Python applications. Ultralyt

 To perform object detection on an image, use the `predict` method as shown below:

-!!! example ""
+!!! Example ""

     === "Python"

         ```python
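The predict snippet itself is elided from this diff view; a minimal sketch of the call it documents, assuming the standard `ultralytics` FastSAM API (the weights file name, image path, and argument values below are placeholders, not taken from the diff):

```python
from ultralytics import FastSAM

# Load a pretrained FastSAM model (weights file name assumed)
model = FastSAM("FastSAM-s.pt")

# Run inference on an image; FastSAM segments everything it finds in the scene
results = model("path/to/image.jpg", device="cpu", retina_masks=True, imgsz=1024, conf=0.4, iou=0.9)
```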
@@ -87,7 +87,7 @@ This snippet demonstrates the simplicity of loading a pre-trained model and runn

 Validation of the model on a dataset can be done as follows:

-!!! example ""
+!!! Example ""

     === "Python"

         ```python
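A minimal sketch of that validation call, again assuming the `ultralytics` FastSAM API; the weights name and the `coco8-seg.yaml` example dataset are illustrative choices, not from the diff:

```python
from ultralytics import FastSAM

# Load a pretrained FastSAM model (weights file name assumed)
model = FastSAM("FastSAM-s.pt")

# Validate on the small COCO8-seg example dataset
results = model.val(data="coco8-seg.yaml")
```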
@@ -168,7 +168,7 @@ Additionally, you can try FastSAM through a [Colab demo](https://colab.research.

 We would like to acknowledge the FastSAM authors for their significant contributions in the field of real-time instance segmentation:

-!!! note ""
+!!! Note ""

     === "BibTeX"
@@ -37,7 +37,7 @@ Here are some of the key models supported:

 ## Getting Started: Usage Examples

-!!! example ""
+!!! Example ""

     === "Python"
@@ -61,7 +61,7 @@ You can download the model [here](https://github.com/ChaoningZhang/MobileSAM/blo

 ### Point Prompt

-!!! example ""
+!!! Example ""

     === "Python"

         ```python
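The point-prompt snippet elided here follows this pattern, assuming MobileSAM is loaded through the `ultralytics` `SAM` class; the image path and coordinates are placeholders:

```python
from ultralytics import SAM

# Load the MobileSAM weights (file name assumed; see the download link above)
model = SAM("mobile_sam.pt")

# Segment the object at a single (x, y) point; label 1 marks it as foreground
model.predict("path/to/image.jpg", points=[900, 370], labels=[1])
```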
@@ -76,7 +76,7 @@ You can download the model [here](https://github.com/ChaoningZhang/MobileSAM/blo

 ### Box Prompt

-!!! example ""
+!!! Example ""

     === "Python"

         ```python
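The box-prompt counterpart, under the same assumptions (image path and box coordinates are placeholders):

```python
from ultralytics import SAM

# Load the MobileSAM weights (file name assumed)
model = SAM("mobile_sam.pt")

# Segment the object inside an [x1, y1, x2, y2] bounding box
model.predict("path/to/image.jpg", bboxes=[439, 437, 524, 709])
```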
@@ -95,7 +95,7 @@ We have implemented `MobileSAM` and `SAM` using the same API. For more usage inf

 If you find MobileSAM useful in your research or development work, please consider citing our paper:

-!!! note ""
+!!! Note ""

     === "BibTeX"
@@ -30,7 +30,7 @@ The Ultralytics Python API provides pre-trained PaddlePaddle RT-DETR models with

 You can use RT-DETR for object detection tasks using the `ultralytics` pip package. The following is a sample code snippet showing how to use RT-DETR models for training and inference:

-!!! example ""
+!!! Example ""

     This example provides simple inference code for RT-DETR. For more options including handling inference results see [Predict](../modes/predict.md) mode. For using RT-DETR with additional modes see [Train](../modes/train.md), [Val](../modes/val.md) and [Export](../modes/export.md).
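A minimal sketch of that train-and-predict flow, assuming the `ultralytics` `RTDETR` class; the weights file name, dataset, training arguments, and image path are placeholders:

```python
from ultralytics import RTDETR

# Load a COCO-pretrained RT-DETR-l model (weights file name assumed)
model = RTDETR("rtdetr-l.pt")

# Display model information
model.info()

# Train on the small COCO8 example dataset
results = model.train(data="coco8.yaml", epochs=100, imgsz=640)

# Run inference on an image
results = model("path/to/bus.jpg")
```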
@@ -81,7 +81,7 @@ You can use RT-DETR for object detection tasks using the `ultralytics` pip packa

 If you use Baidu's RT-DETR in your research or development work, please cite the [original paper](https://arxiv.org/abs/2304.08069):

-!!! note ""
+!!! Note ""

     === "BibTeX"
@@ -32,7 +32,7 @@ The Segment Anything Model can be employed for a multitude of downstream tasks t

 ### SAM prediction example

-!!! example "Segment with prompts"
+!!! Example "Segment with prompts"

     Segment image with given prompts.
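A minimal sketch of prompted SAM prediction with the `ultralytics` `SAM` class; the weights name is an assumption, while the point-prompt call echoes the context line visible in the next hunk:

```python
from ultralytics import SAM

# Load a SAM base model (weights file name assumed)
model = SAM("sam_b.pt")

# Segment using an [x1, y1, x2, y2] bounding-box prompt
model("ultralytics/assets/zidane.jpg", bboxes=[439, 437, 524, 709])

# Segment using a point prompt; label 1 marks the point as foreground
model("ultralytics/assets/zidane.jpg", points=[900, 370], labels=[1])
```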
@@ -54,7 +54,7 @@ The Segment Anything Model can be employed for a multitude of downstream tasks t

         model('ultralytics/assets/zidane.jpg', points=[900, 370], labels=[1])
         ```

-!!! example "Segment everything"
+!!! Example "Segment everything"

     Segment the whole image.
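The segment-everything case needs no prompts at all; a minimal sketch under the same assumptions (weights name and image path are placeholders):

```python
from ultralytics import SAM

# Load a SAM base model (weights file name assumed)
model = SAM("sam_b.pt")

# With no prompts (bboxes/points/masks) passed, SAM segments the whole image
model("path/to/image.jpg")
```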
@@ -82,7 +82,7 @@ The Segment Anything Model can be employed for a multitude of downstream tasks t

 - The logic here is to segment the whole image if you don't pass any prompts(bboxes/points/masks).

-!!! example "SAMPredictor example"
+!!! Example "SAMPredictor example"

     This way you can set image once and run prompts inference multiple times without running image encoder multiple times.
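A sketch of that set-once, prompt-many pattern, assuming the `SAMPredictor` exposed at `ultralytics.models.sam`; the override values, image path, and prompt coordinates are placeholders. The point of the design is that `set_image` runs the expensive image encoder once, after which each prompt call reuses the cached embedding:

```python
from ultralytics.models.sam import Predictor as SAMPredictor

# Create a SAMPredictor with inference overrides (values here are assumptions)
overrides = dict(conf=0.25, task="segment", mode="predict", imgsz=1024, model="mobile_sam.pt")
predictor = SAMPredictor(overrides=overrides)

# Encode the image once...
predictor.set_image("ultralytics/assets/zidane.jpg")

# ...then run as many prompt inferences as needed without re-encoding
results = predictor(bboxes=[439, 437, 524, 709])
results = predictor(points=[900, 370], labels=[1])

# Reset the cached image before switching to a new one
predictor.reset_image()
```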
@@ -152,7 +152,7 @@ This comparison shows the order-of-magnitude differences in the model sizes and

 Tests run on a 2023 Apple M2 Macbook with 16GB of RAM. To reproduce this test:

-!!! example ""
+!!! Example ""

     === "Python"

         ```python
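The reproduction snippet itself is elided from the diff; a rough sketch of such a timing comparison, assuming the `ultralytics` `SAM` and `YOLO` classes and the package's bundled sample images (the weights names and timing method are assumptions, not the documented procedure):

```python
import time

from ultralytics import SAM, YOLO

# Compare a SAM-b checkpoint against a YOLOv8n-seg checkpoint (names assumed)
for cls, weights in ((SAM, "sam_b.pt"), (YOLO, "yolov8n-seg.pt")):
    model = cls(weights)
    model.info()  # print layer and parameter counts
    start = time.perf_counter()
    model("ultralytics/assets")  # run inference on the bundled sample images
    print(f"{weights}: {time.perf_counter() - start:.2f} s")
```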
@@ -187,7 +187,7 @@ Auto-annotation is a key feature of SAM, allowing users to generate a [segmentat

 To auto-annotate your dataset with the Ultralytics framework, use the `auto_annotate` function as shown below:

-!!! example ""
+!!! Example ""

     === "Python"

         ```python
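A minimal sketch of that call, assuming `auto_annotate` from `ultralytics.data.annotator`; the image folder path and model file names are placeholders:

```python
from ultralytics.data.annotator import auto_annotate

# Detect objects with a YOLOv8 model, then prompt SAM with the resulting boxes
# to generate segmentation labels for every image in the folder
auto_annotate(data="path/to/images", det_model="yolov8x.pt", sam_model="sam_b.pt")
```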
@@ -212,7 +212,7 @@ Auto-annotation with pre-trained models can dramatically cut down the time and e

 If you find SAM useful in your research or development work, please consider citing our paper:

-!!! note ""
+!!! Note ""

     === "BibTeX"
@@ -44,7 +44,7 @@ The following examples show how to use YOLO-NAS models with the `ultralytics` pa

 In this example we validate YOLO-NAS-s on the COCO8 dataset.

-!!! example ""
+!!! Example ""

     This example provides simple inference and validation code for YOLO-NAS. For handling inference results see [Predict](../modes/predict.md) mode. For using YOLO-NAS with additional modes see [Val](../modes/val.md) and [Export](../modes/export.md). YOLO-NAS on the `ultralytics` package does not support training.
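A minimal sketch of that validation run, assuming the `ultralytics` `NAS` class; the weights file name is an assumption, and running YOLO-NAS may additionally require the `super-gradients` package:

```python
from ultralytics import NAS

# Load a COCO-pretrained YOLO-NAS-s model (weights file name assumed)
model = NAS("yolo_nas_s.pt")

# Display model information
model.info()

# Validate on the small COCO8 example dataset
results = model.val(data="coco8.yaml")
```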
@@ -106,7 +106,7 @@ Harness the power of the YOLO-NAS models to drive your object detection tasks to

 If you employ YOLO-NAS in your research or development work, please cite SuperGradients:

-!!! note ""
+!!! Note ""

     === "BibTeX"
@@ -51,7 +51,7 @@ TODO

 You can use YOLOv3 for object detection tasks using the Ultralytics repository. The following is a sample code snippet showing how to use YOLOv3 model for inference:

-!!! example ""
+!!! Example ""

     This example provides simple inference code for YOLOv3. For more options including handling inference results see [Predict](../modes/predict.md) mode. For using YOLOv3 with additional modes see [Train](../modes/train.md), [Val](../modes/val.md) and [Export](../modes/export.md).
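A minimal sketch of YOLOv3 inference through the `ultralytics` `YOLO` class; the weights file name (the "u" suffix denoting the Ultralytics anchor-free port) and image path are assumptions:

```python
from ultralytics import YOLO

# Load a COCO-pretrained YOLOv3 model (weights file name assumed)
model = YOLO("yolov3u.pt")

# Run inference on an image
results = model("path/to/bus.jpg")
```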
@@ -91,7 +91,7 @@ You can use YOLOv3 for object detection tasks using the Ultralytics repository.

 If you use YOLOv3 in your research, please cite the original YOLO papers and the Ultralytics YOLOv3 repository:

-!!! note ""
+!!! Note ""

     === "BibTeX"
@@ -53,7 +53,7 @@ YOLOv4 is a powerful and efficient object detection model that strikes a balance

 We would like to acknowledge the YOLOv4 authors for their significant contributions in the field of real-time object detection:

-!!! note ""
+!!! Note ""

     === "BibTeX"
@@ -56,7 +56,7 @@ YOLOv5u represents an advancement in object detection methodologies. Originating

 You can use YOLOv5u for object detection tasks using the Ultralytics repository. The following is a sample code snippet showing how to use YOLOv5u model for inference:

-!!! example ""
+!!! Example ""

     This example provides simple inference code for YOLOv5. For more options including handling inference results see [Predict](../modes/predict.md) mode. For using YOLOv5 with additional modes see [Train](../modes/train.md), [Val](../modes/val.md) and [Export](../modes/export.md).
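A minimal sketch of YOLOv5u inference, under the same `ultralytics` `YOLO` API; the weights file name and image path are assumptions:

```python
from ultralytics import YOLO

# Load a COCO-pretrained YOLOv5nu model (weights file name assumed)
model = YOLO("yolov5nu.pt")

# Display model information
model.info()

# Run inference on an image
results = model("path/to/bus.jpg")
```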
@@ -96,7 +96,7 @@ You can use YOLOv5u for object detection tasks using the Ultralytics repository.

 If you use YOLOv5 or YOLOv5u in your research, please cite the Ultralytics YOLOv5 repository as follows:

-!!! note ""
+!!! Note ""

     === "BibTeX"

         ```bibtex
|
@ -37,7 +37,7 @@ YOLOv6 also provides quantized models for different precisions and models optimi
|
|||
|
||||
You can use YOLOv6 for object detection tasks using the Ultralytics pip package. The following is a sample code snippet showing how to use YOLOv6 models for training:
|
||||
|
||||
!!! example ""
|
||||
!!! Example ""
|
||||
|
||||
This example provides simple training code for YOLOv6. For more options including training settings see [Train](../modes/train.md) mode. For using YOLOv6 with additional modes see [Predict](../modes/predict.md), [Val](../modes/val.md) and [Export](../modes/export.md).
|
||||
|
||||
|
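A minimal sketch of YOLOv6 training via the `ultralytics` `YOLO` class; the YAML config name, dataset, and training arguments are assumptions (building from YAML trains the architecture from scratch rather than loading pretrained weights):

```python
from ultralytics import YOLO

# Build a YOLOv6n model from its YAML definition (config name assumed)
model = YOLO("yolov6n.yaml")

# Train on the small COCO8 example dataset
results = model.train(data="coco8.yaml", epochs=100, imgsz=640)
```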
|
@ -95,7 +95,7 @@ You can use YOLOv6 for object detection tasks using the Ultralytics pip package.
|
|||
|
||||
We would like to acknowledge the authors for their significant contributions in the field of real-time object detection:
|
||||
|
||||
!!! note ""
|
||||
!!! Note ""
|
||||
|
||||
=== "BibTeX"
|
||||
|
||||
|
|
|
|||
|
|
@ -49,7 +49,7 @@ We regret any inconvenience this may cause and will strive to update this docume
|
|||
|
||||
We would like to acknowledge the YOLOv7 authors for their significant contributions in the field of real-time object detection:
|
||||
|
||||
!!! note ""
|
||||
!!! Note ""
|
||||
|
||||
=== "BibTeX"
|
||||
|
||||
|
|
|
|||
|
|
@ -95,7 +95,7 @@ YOLOv8 is the latest iteration in the YOLO series of real-time object detectors,
|
|||
|
||||
You can use YOLOv8 for object detection tasks using the Ultralytics pip package. The following is a sample code snippet showing how to use YOLOv8 models for inference:
|
||||
|
||||
!!! example ""
|
||||
!!! Example ""
|
||||
|
||||
This example provides simple inference code for YOLOv8. For more options including handling inference results see [Predict](../modes/predict.md) mode. For using YOLOv8 with additional modes see [Train](../modes/train.md), [Val](../modes/val.md) and [Export](../modes/export.md).
|
||||
|
||||
|
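A minimal sketch of YOLOv8 inference with the `ultralytics` `YOLO` class; the image path is a placeholder:

```python
from ultralytics import YOLO

# Load a COCO-pretrained YOLOv8n model
model = YOLO("yolov8n.pt")

# Run inference on an image
results = model("path/to/bus.jpg")
```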
|
@ -135,7 +135,7 @@ You can use YOLOv8 for object detection tasks using the Ultralytics pip package.
|
|||
|
||||
If you use the YOLOv8 model or any other software from this repository in your work, please cite it using the following format:
|
||||
|
||||
!!! note ""
|
||||
!!! Note ""
|
||||
|
||||
=== "BibTeX"
|
||||
|
||||
|
|
|
|||