Add Hindi हिन्दी and Arabic العربية Docs translations (#6428)
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
This commit is contained in:
parent
b6baae584c
commit
02bf8003a8
337 changed files with 6584 additions and 777 deletions
@@ -32,7 +32,7 @@ Once your model is trained and validated, the next logical step is to evaluate i
 - **OpenVINO:** For Intel hardware optimization
 - **CoreML, TensorFlow SavedModel, and More:** For diverse deployment needs.

-!!! tip "Tip"
+!!! Tip "Tip"

     * Export to ONNX or OpenVINO for up to 3x CPU speedup.
     * Export to TensorRT for up to 5x GPU speedup.
@@ -41,7 +41,7 @@ Once your model is trained and validated, the next logical step is to evaluate i

 Run YOLOv8n benchmarks on all supported export formats, including ONNX, TensorRT, etc. See the Arguments section below for a full list of export arguments.

-!!! example ""
+!!! Example ""

     === "Python"

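The snippet this hunk truncates is the standard benchmark call; a minimal sketch follows, with the `imgsz` and `device` values chosen purely for illustration:

```python
from ultralytics.utils.benchmarks import benchmark

# Benchmark YOLOv8n across all supported export formats (illustrative settings)
benchmark(model="yolov8n.pt", imgsz=640, half=False, device="cpu")
```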
@@ -39,7 +39,7 @@ Here are some of the standout functionalities:
 - **Optimized Inference:** Exported models are optimized for quicker inference times.
 - **Tutorial Videos:** In-depth guides and tutorials for a smooth exporting experience.

-!!! tip "Tip"
+!!! Tip "Tip"

     * Export to ONNX or OpenVINO for up to 3x CPU speedup.
     * Export to TensorRT for up to 5x GPU speedup.
@@ -48,7 +48,7 @@ Here are some of the standout functionalities:

 Export a YOLOv8n model to a different format like ONNX or TensorRT. See the Arguments section below for a full list of export arguments.

-!!! example ""
+!!! Example ""

     === "Python"

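A minimal sketch of the export usage this hunk refers to, using documented `format` values:

```python
from ultralytics import YOLO

# Load a pretrained YOLOv8n model and export it to ONNX
model = YOLO("yolov8n.pt")
model.export(format="onnx")  # other formats include "engine" (TensorRT), "openvino", "coreml"
```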
@@ -50,7 +50,7 @@ YOLOv8's predict mode is designed to be robust and versatile, featuring:

 Ultralytics YOLO models return either a Python list of `Results` objects, or a memory-efficient Python generator of `Results` objects when `stream=True` is passed to the model during inference:

-!!! example "Predict"
+!!! Example "Predict"

     === "Return a list with `stream=False`"
         ```python
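A small sketch contrasting the two modes described above; the file paths are illustrative placeholders:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# stream=False (default): every Results object is held in a list
results = model(["im1.jpg", "im2.jpg"])  # placeholder image paths

# stream=True: a generator yields one Results object at a time
for result in model("video.mp4", stream=True):  # placeholder video path
    boxes = result.boxes  # only the current frame's results stay in memory
```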
@@ -92,7 +92,7 @@ Ultralytics YOLO models return either a Python list of `Results` objects, or a m

 YOLOv8 can process different types of input sources for inference, as shown in the table below. The sources include static images, video streams, and various data formats. The table also indicates whether each source can be used in streaming mode with the argument `stream=True` ✅. Streaming mode is beneficial for processing videos or live streams as it creates a generator of results instead of loading all frames into memory.

-!!! tip "Tip"
+!!! Tip "Tip"

     Use `stream=True` for processing long videos or large datasets to efficiently manage memory. When `stream=False`, the results for all frames or data points are stored in memory, which can quickly add up and cause out-of-memory errors for large inputs. In contrast, `stream=True` utilizes a generator, which only keeps the results of the current frame or data point in memory, significantly reducing memory consumption and preventing out-of-memory issues.

@@ -115,7 +115,7 @@ YOLOv8 can process different types of input sources for inference, as shown in t

 Below are code examples for using each source type:

-!!! example "Prediction sources"
+!!! Example "Prediction sources"

     === "image"

         Run inference on an image file.
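For the `image` tab, the truncated example reduces to something like this sketch (the path is a placeholder):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
results = model("path/to/image.jpg")  # placeholder path; URLs, PIL images and numpy arrays also work
```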
@@ -327,7 +327,7 @@ Below are code examples for using each source type:

 `model.predict()` accepts multiple arguments that can be passed at inference time to override defaults:

-!!! example
+!!! Example

     ```python
     from ultralytics import YOLO
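A sketch of overriding defaults at inference time; all four keywords are documented predict arguments, and the values are illustrative:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# conf, iou, imgsz and save override the defaults for this call only
results = model.predict("path/to/image.jpg", conf=0.25, iou=0.7, imgsz=640, save=True)
```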
@@ -411,7 +411,7 @@ The below table contains valid Ultralytics video formats.

 All Ultralytics `predict()` calls will return a list of `Results` objects:

-!!! example "Results"
+!!! Example "Results"

     ```python
     from ultralytics import YOLO
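A sketch of iterating the returned list; which attributes are populated depends on the model type:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
results = model("path/to/image.jpg")  # placeholder path
for r in results:
    print(r.boxes)      # Boxes object with detection outputs
    print(r.masks)      # Masks object (None for a detection-only model)
    print(r.keypoints)  # Keypoints object (None unless a pose model is used)
    print(r.probs)      # Probs object (None unless a classification model is used)
```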
@@ -463,7 +463,7 @@ For more details see the `Results` class [documentation](../reference/engine/res

 The `Boxes` object can be used to index, manipulate, and convert bounding boxes to different formats.

-!!! example "Boxes"
+!!! Example "Boxes"

     ```python
     from ultralytics import YOLO
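A minimal sketch of the `Boxes` accessors; the image path is a placeholder:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
boxes = model("path/to/image.jpg")[0].boxes
print(boxes.xyxy)   # boxes as (x1, y1, x2, y2)
print(boxes.xywhn)  # boxes as normalized (x, y, w, h)
print(boxes.conf)   # confidence scores
print(boxes.cls)    # class indices
```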
@@ -501,7 +501,7 @@ For more details see the `Boxes` class [documentation](../reference/engine/resul

 The `Masks` object can be used to index, manipulate, and convert masks to segments.

-!!! example "Masks"
+!!! Example "Masks"

     ```python
     from ultralytics import YOLO
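A minimal sketch, assuming a segmentation checkpoint such as `yolov8n-seg.pt`:

```python
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")  # a segmentation model is required for masks
masks = model("path/to/image.jpg")[0].masks
if masks is not None:
    print(masks.xy)    # segments in pixel coordinates
    print(masks.xyn)   # segments normalized to image size
    print(masks.data)  # raw masks tensor
```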
@@ -534,7 +534,7 @@ For more details see the `Masks` class [documentation](../reference/engine/resul

 The `Keypoints` object can be used to index, manipulate, and normalize coordinates.

-!!! example "Keypoints"
+!!! Example "Keypoints"

     ```python
     from ultralytics import YOLO
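A minimal sketch, assuming a pose checkpoint such as `yolov8n-pose.pt`:

```python
from ultralytics import YOLO

model = YOLO("yolov8n-pose.pt")  # a pose model is required for keypoints
keypoints = model("path/to/image.jpg")[0].keypoints
if keypoints is not None:
    print(keypoints.xy)    # keypoints in pixel coordinates
    print(keypoints.xyn)   # keypoints normalized to image size
    print(keypoints.conf)  # per-keypoint confidence
```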
@@ -568,7 +568,7 @@ For more details see the `Keypoints` class [documentation](../reference/engine/r

 The `Probs` object can be used to index, and to get the `top1` and `top5` indices and scores of a classification.

-!!! example "Probs"
+!!! Example "Probs"

     ```python
     from ultralytics import YOLO
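A minimal sketch, assuming a classification checkpoint such as `yolov8n-cls.pt`:

```python
from ultralytics import YOLO

model = YOLO("yolov8n-cls.pt")  # a classification model is required for probs
probs = model("path/to/image.jpg")[0].probs
print(probs.top1)      # index of the highest-scoring class
print(probs.top1conf)  # its confidence
print(probs.top5)      # indices of the top five classes
print(probs.top5conf)  # their confidences
```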
@@ -603,7 +603,7 @@ For more details see the `Probs` class [documentation](../reference/engine/resul

 You can use the `plot()` method of a `Results` object to visualize predictions. It plots all prediction types (boxes, masks, keypoints, probabilities, etc.) contained in the `Results` object onto a numpy array that can then be shown or saved.

-!!! example "Plotting"
+!!! Example "Plotting"

     ```python
     from PIL import Image
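The truncated plotting example follows this pattern; `plot()` returns a BGR array, hence the channel flip for PIL:

```python
from PIL import Image

from ultralytics import YOLO

model = YOLO("yolov8n.pt")
for r in model("path/to/image.jpg"):  # placeholder path
    im_array = r.plot()                        # BGR numpy array of the annotated image
    im = Image.fromarray(im_array[..., ::-1])  # convert BGR to RGB for PIL
    im.show()
    im.save("results.jpg")
```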
@@ -647,7 +647,7 @@ Ensuring thread safety during inference is crucial when you are running multiple

 When using YOLO models in a multi-threaded application, it's important to instantiate separate model objects for each thread or employ thread-local storage to prevent conflicts:

-!!! example "Thread-Safe Inference"
+!!! Example "Thread-Safe Inference"

     Instantiate a single model inside each thread for thread-safe inference:
     ```python
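A sketch of the per-thread pattern the passage describes; the image paths are placeholders:

```python
from threading import Thread

from ultralytics import YOLO


def thread_safe_predict(image_path):
    """Run prediction with a model instantiated inside this thread only."""
    local_model = YOLO("yolov8n.pt")
    results = local_model.predict(image_path)
    # process results here


Thread(target=thread_safe_predict, args=("image1.jpg",)).start()  # placeholder paths
Thread(target=thread_safe_predict, args=("image2.jpg",)).start()
```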
@@ -672,7 +672,7 @@ For an in-depth look at thread-safe inference with YOLO models and step-by-step

 Here's a Python script using OpenCV (`cv2`) and YOLOv8 to run inference on video frames. This script assumes you have already installed the necessary packages (`opencv-python` and `ultralytics`).

-!!! example "Streaming for-loop"
+!!! Example "Streaming for-loop"

     ```python
     import cv2
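The truncated script follows the familiar OpenCV read-predict-show loop; a condensed sketch with a placeholder video path:

```python
import cv2

from ultralytics import YOLO

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video.mp4")  # placeholder path

while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break  # end of video
    results = model(frame)
    annotated_frame = results[0].plot()
    cv2.imshow("YOLOv8 Inference", annotated_frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```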
@@ -58,7 +58,7 @@ The default tracker is BoT-SORT.

 To run the tracker on video streams, use a trained Detect, Segment or Pose model such as YOLOv8n, YOLOv8n-seg and YOLOv8n-pose.

-!!! example ""
+!!! Example ""

     === "Python"

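A minimal sketch of the tracking call, with a placeholder video source:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # yolov8n-seg.pt and yolov8n-pose.pt work the same way
results = model.track(source="path/to/video.mp4", show=True)  # default BoT-SORT tracker
results = model.track(source="path/to/video.mp4", show=True, tracker="bytetrack.yaml")  # ByteTrack instead
```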
@@ -97,7 +97,7 @@ As can be seen in the above usage, tracking is available for all Detect, Segment

 Tracking configuration shares properties with Predict mode, such as `conf`, `iou`, and `show`. For further configurations, refer to the [Predict](../modes/predict.md#inference-arguments) mode page.

-!!! example ""
+!!! Example ""

     === "Python"

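A sketch passing those shared properties to `track()`; the threshold values are illustrative:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# conf, iou and show behave exactly as in Predict mode
results = model.track(source="path/to/video.mp4", conf=0.3, iou=0.5, show=True)
```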
@@ -120,7 +120,7 @@ Tracking configuration shares properties with Predict mode, such as `conf`, `iou

 Ultralytics also allows you to use a modified tracker configuration file. To do this, simply make a copy of a tracker config file (for example, `custom_tracker.yaml`) from [ultralytics/cfg/trackers](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/trackers) and modify any configurations (except the `tracker_type`) as per your needs.

-!!! example ""
+!!! Example ""

     === "Python"

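A sketch using such a modified file, where `custom_tracker.yaml` is the copy described above:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# custom_tracker.yaml is your modified copy of a file from ultralytics/cfg/trackers
results = model.track(source="path/to/video.mp4", tracker="custom_tracker.yaml")
```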
@@ -147,7 +147,7 @@ For a comprehensive list of tracking arguments, refer to the [ultralytics/cfg/tr

 Here is a Python script using OpenCV (`cv2`) and YOLOv8 to run object tracking on video frames. This script still assumes you have already installed the necessary packages (`opencv-python` and `ultralytics`). The `persist=True` argument tells the tracker that the current image or frame is the next in a sequence and to expect tracks from the previous image in the current image.

-!!! example "Streaming for-loop with tracking"
+!!! Example "Streaming for-loop with tracking"

     ```python
     import cv2
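A condensed sketch of that loop, with a placeholder video path:

```python
import cv2

from ultralytics import YOLO

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video.mp4")  # placeholder path

while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    # persist=True tells the tracker this frame continues the previous sequence
    results = model.track(frame, persist=True)
    cv2.imshow("YOLOv8 Tracking", results[0].plot())
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```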
@@ -195,7 +195,7 @@ Visualizing object tracks over consecutive frames can provide valuable insights

 In the following example, we demonstrate how to utilize YOLOv8's tracking capabilities to plot the movement of detected objects across multiple video frames. This script involves opening a video file, reading it frame by frame, and utilizing the YOLO model to identify and track various objects. By retaining the center points of the detected bounding boxes and connecting them, we can draw lines that represent the paths followed by the tracked objects.

-!!! example "Plotting tracks over multiple video frames"
+!!! Example "Plotting tracks over multiple video frames"

     ```python
     from collections import defaultdict
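A condensed sketch of that approach; it assumes the tracker has populated `boxes.id` and uses a placeholder video path:

```python
from collections import defaultdict

import cv2

from ultralytics import YOLO

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video.mp4")  # placeholder path
track_history = defaultdict(list)            # track_id -> list of box-center points

while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    results = model.track(frame, persist=True)
    if results[0].boxes.id is not None:
        boxes = results[0].boxes.xywh.cpu()
        track_ids = results[0].boxes.id.int().cpu().tolist()
        frame = results[0].plot()
        for (x, y, w, h), track_id in zip(boxes, track_ids):
            track_history[track_id].append((int(x), int(y)))  # retain the box center
        for points in track_history.values():
            for p1, p2 in zip(points, points[1:]):
                cv2.line(frame, p1, p2, (230, 230, 230), 2)   # connect the centers
    cv2.imshow("YOLOv8 Tracks", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```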
@@ -272,7 +272,7 @@ The `daemon=True` parameter in `threading.Thread` means that these threads will

 Finally, after all threads have completed their task, the windows displaying the results are closed using `cv2.destroyAllWindows()`.

-!!! example "Streaming for-loop with tracking"
+!!! Example "Streaming for-loop with tracking"

     ```python
     import threading
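A condensed sketch of the multithreaded pattern; the file names are placeholders, and each thread deliberately gets its own model instance:

```python
import threading

import cv2

from ultralytics import YOLO


def run_tracker_in_thread(filename, model):
    """Track objects in one video; each thread owns its own model and capture."""
    cap = cv2.VideoCapture(filename)
    while cap.isOpened():
        success, frame = cap.read()
        if not success:
            break
        results = model.track(frame, persist=True)
        cv2.imshow(f"Tracking {filename}", results[0].plot())
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()


threads = [
    threading.Thread(target=run_tracker_in_thread, args=(f, YOLO("yolov8n.pt")), daemon=True)
    for f in ("video1.mp4", "video2.mp4")  # placeholder file names
]
for t in threads:
    t.start()
for t in threads:
    t.join()

cv2.destroyAllWindows()
```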
@@ -41,7 +41,7 @@ The following are some notable features of YOLOv8's Train mode:
 - **Hyperparameter Configuration:** The option to modify hyperparameters through YAML configuration files or CLI arguments.
 - **Visualization and Monitoring:** Real-time tracking of training metrics and visualization of the learning process for better insights.

-!!! tip "Tip"
+!!! Tip "Tip"

     * YOLOv8 datasets like COCO, VOC, ImageNet and many others automatically download on first use, i.e. `yolo train data=coco.yaml`
@@ -49,7 +49,7 @@ The following are some notable features of YOLOv8's Train mode:

 Train YOLOv8n on the COCO128 dataset for 100 epochs at an image size of 640. The training device can be specified using the `device` argument. If no argument is passed, GPU `device=0` will be used if available, otherwise `device=cpu` will be used. See the Arguments section below for a full list of training arguments.

-!!! example "Single-GPU and CPU Training Example"
+!!! Example "Single-GPU and CPU Training Example"

     The device is determined automatically. If a GPU is available then it will be used, otherwise training will start on CPU.

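A minimal sketch of that training call:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # load a pretrained model
results = model.train(data="coco128.yaml", epochs=100, imgsz=640)  # device chosen automatically
```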
@@ -84,7 +84,7 @@ Train YOLOv8n on the COCO128 dataset for 100 epochs at image size 640. The train

 Multi-GPU training allows for more efficient utilization of available hardware resources by distributing the training load across multiple GPUs. This feature is available through both the Python API and the command-line interface. To enable multi-GPU training, specify the GPU device IDs you wish to use.

-!!! example "Multi-GPU Training Example"
+!!! Example "Multi-GPU Training Example"

     To train with 2 GPUs, CUDA devices 0 and 1, use the following commands. Expand to additional GPUs as required.

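A sketch of the Python form, where `device=[0, 1]` selects the two CUDA devices:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
results = model.train(data="coco128.yaml", epochs=100, imgsz=640, device=[0, 1])  # CUDA devices 0 and 1
```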
@@ -113,7 +113,7 @@ With the support for Apple M1 and M2 chips integrated in the Ultralytics YOLO mo

 To enable training on Apple M1 and M2 chips, you should specify 'mps' as your device when initiating the training process. Below is an example of how you could do this in Python and via the command line:

-!!! example "MPS Training Example"
+!!! Example "MPS Training Example"

     === "Python"

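A sketch of the Python form, differing from single-GPU training only in the `device` string:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
results = model.train(data="coco128.yaml", epochs=100, imgsz=640, device="mps")  # Apple silicon
```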
@@ -146,7 +146,7 @@ You can easily resume training in Ultralytics YOLO by setting the `resume` argum

 Below is an example of how to resume an interrupted training run using Python and via the command line:

-!!! example "Resume Training Example"
+!!! Example "Resume Training Example"

     === "Python"

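A minimal sketch of the Python form, where `path/to/last.pt` stands in for your own checkpoint path:

```python
from ultralytics import YOLO

model = YOLO("path/to/last.pt")  # load a partially trained checkpoint
results = model.train(resume=True)  # continues from the saved epoch and optimizer state
```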
@@ -236,7 +236,7 @@ To use a logger, select it from the dropdown menu in the code snippet above and

 To use Comet:

-!!! example ""
+!!! Example ""

     === "Python"
         ```python
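The truncated Python tab boils down to initializing Comet before training; a minimal sketch:

```python
# pip install comet_ml
import comet_ml

comet_ml.init()  # reads or prompts for your Comet API key before training starts
```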
@@ -254,7 +254,7 @@ Remember to sign in to your Comet account on their website and get your API key.

 To use ClearML:

-!!! example ""
+!!! Example ""

     === "Python"
         ```python
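The truncated Python tab boils down to authenticating with ClearML first; a minimal sketch:

```python
# pip install clearml
import clearml

clearml.browser_login()  # opens a browser window to authenticate with ClearML
```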
@@ -272,7 +272,7 @@ After running this script, you will need to sign in to your ClearML account on t

 To use TensorBoard in [Google Colab](https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb):

-!!! example ""
+!!! Example ""

     === "CLI"
         ```bash
@@ -282,7 +282,7 @@ To use TensorBoard in [Google Colab](https://colab.research.google.com/github/ul

 To use TensorBoard locally, run the command below and view the results at http://localhost:6006/.

-!!! example ""
+!!! Example ""

     === "CLI"
         ```bash
@@ -30,7 +30,7 @@ These are the notable functionalities offered by YOLOv8's Val mode:
 - **CLI and Python API:** Choose from command-line interface or Python API based on your preference for validation.
 - **Data Compatibility:** Works seamlessly with datasets used during the training phase as well as custom datasets.

-!!! tip "Tip"
+!!! Tip "Tip"

     * YOLOv8 models automatically remember their training settings, so you can validate a model at the same image size and on the original dataset easily with just `yolo val model=yolov8n.pt` or `model('yolov8n.pt').val()`
@@ -38,7 +38,7 @@ These are the notable functionalities offered by YOLOv8's Val mode:

 Validate trained YOLOv8n model accuracy on the COCO128 dataset. No arguments need to be passed, as the `model` retains its training `data` and arguments as model attributes. See the Arguments section below for a full list of validation arguments.

-!!! example ""
+!!! Example ""

     === "Python"

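A minimal sketch of that validation call and the metrics it returns:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # the checkpoint remembers its training data and imgsz
metrics = model.val()  # no arguments needed
print(metrics.box.map)    # mAP50-95
print(metrics.box.map50)  # mAP50
```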