Add Docs glossary links (#16448)

Signed-off-by: UltralyticsAssistant <web@ultralytics.com>
Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
Glenn Jocher 2024-09-23 23:48:46 +02:00 committed by GitHub
parent 8b8c25f216
commit 443fbce194
193 changed files with 1124 additions and 1124 deletions

docs/en/tasks/classify.md

@@ -9,7 +9,7 @@ model_name: yolov8n-cls
 <img width="1024" src="https://github.com/ultralytics/docs/releases/download/0/image-classification-examples.avif" alt="Image classification examples">
-Image classification is the simplest of the three tasks and involves classifying an entire image into one of a set of predefined classes.
+[Image classification](https://www.ultralytics.com/glossary/image-classification) is the simplest of the three tasks and involves classifying an entire image into one of a set of predefined classes.
 The output of an image classifier is a single class label and a confidence score. Image classification is useful when you need to know only what class an image belongs to and don't need to know where objects of that class are located or what their exact shape is.
@@ -47,7 +47,7 @@ YOLOv8 pretrained Classify models are shown here. Detect, Segment and Pose model
 ## Train
-Train YOLOv8n-cls on the MNIST160 dataset for 100 epochs at image size 64. For a full list of available arguments see the [Configuration](../usage/cfg.md) page.
+Train YOLOv8n-cls on the MNIST160 dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) at image size 64. For a full list of available arguments see the [Configuration](../usage/cfg.md) page.
 !!! example
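
The training call described in this hunk corresponds roughly to the following Python sketch, assuming the `ultralytics` package is installed and that `mnist160` resolves to the bundled MNIST160 dataset:

```python
from ultralytics import YOLO

# Load a pretrained YOLOv8 nano classification model
model = YOLO("yolov8n-cls.pt")

# Train on MNIST160 for 100 epochs at image size 64
results = model.train(data="mnist160", epochs=100, imgsz=64)
```
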
@@ -84,7 +84,7 @@ YOLO classification dataset format can be found in detail in the [Dataset Guide]
 ## Val
-Validate trained YOLOv8n-cls model accuracy on the MNIST160 dataset. No arguments are needed as the `model` retains its training `data` and arguments as model attributes.
+Validate trained YOLOv8n-cls model [accuracy](https://www.ultralytics.com/glossary/accuracy) on the MNIST160 dataset. No arguments are needed as the `model` retains its training `data` and arguments as model attributes.
 !!! example
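
A matching validation sketch; no `data` argument is passed because, as the line above notes, the model retains its training dataset and settings (the `top1`/`top5` attribute names are assumptions about the classification metrics object):

```python
from ultralytics import YOLO

model = YOLO("path/to/best.pt")  # trained classification weights

metrics = model.val()  # dataset and arguments are reused from training
print(metrics.top1, metrics.top5)  # top-1 and top-5 accuracy
```
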

docs/en/tasks/detect.md

@@ -8,7 +8,7 @@ keywords: object detection, YOLOv8, pretrained models, training, validation, pre
 <img width="1024" src="https://github.com/ultralytics/docs/releases/download/0/object-detection-examples.avif" alt="Object detection examples">
-Object detection is a task that involves identifying the location and class of objects in an image or video stream.
+[Object detection](https://www.ultralytics.com/glossary/object-detection) is a task that involves identifying the location and class of objects in an image or video stream.
 The output of an object detector is a set of bounding boxes that enclose the objects in the image, along with class labels and confidence scores for each box. Object detection is a good choice when you need to identify objects of interest in a scene, but don't need to know exactly where the object is or its exact shape.
@@ -46,7 +46,7 @@ YOLOv8 pretrained Detect models are shown here. Detect, Segment and Pose models
 ## Train
-Train YOLOv8n on the COCO8 dataset for 100 epochs at image size 640. For a full list of available arguments see the [Configuration](../usage/cfg.md) page.
+Train YOLOv8n on the COCO8 dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) at image size 640. For a full list of available arguments see the [Configuration](../usage/cfg.md) page.
 !!! example
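
The equivalent detection training call, sketched with the same API (assuming `coco8.yaml` ships with the package):

```python
from ultralytics import YOLO

# Pretrained YOLOv8 nano detection model
model = YOLO("yolov8n.pt")

# Train on the small COCO8 dataset for 100 epochs at image size 640
results = model.train(data="coco8.yaml", epochs=100, imgsz=640)
```
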
@@ -83,7 +83,7 @@ YOLO detection dataset format can be found in detail in the [Dataset Guide](../d
 ## Val
-Validate trained YOLOv8n model accuracy on the COCO8 dataset. No arguments are needed as the `model` retains its training `data` and arguments as model attributes.
+Validate trained YOLOv8n model [accuracy](https://www.ultralytics.com/glossary/accuracy) on the COCO8 dataset. No arguments are needed as the `model` retains its training `data` and arguments as model attributes.
 !!! example
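
Validation follows the same pattern; the `metrics.box.*` attribute names are assumptions about the detection metrics object:

```python
from ultralytics import YOLO

model = YOLO("path/to/best.pt")  # trained detection weights

metrics = model.val()     # COCO8 data and settings are reused from training
print(metrics.box.map)    # mAP50-95
print(metrics.box.map50)  # mAP50
```
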

docs/en/tasks/index.md

@@ -9,7 +9,7 @@ keywords: Ultralytics YOLOv8, detection, segmentation, classification, oriented
 <br>
 <img width="1024" src="https://github.com/ultralytics/docs/releases/download/0/ultralytics-yolov8-tasks-banner.avif" alt="Ultralytics YOLO supported tasks">
-YOLOv8 is an AI framework that supports multiple computer vision **tasks**. The framework can be used to perform [detection](detect.md), [segmentation](segment.md), [obb](obb.md), [classification](classify.md), and [pose](pose.md) estimation. Each of these tasks has a different objective and use case.
+YOLOv8 is an AI framework that supports multiple [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv) **tasks**. The framework can be used to perform [detection](detect.md), [segmentation](segment.md), [obb](obb.md), [classification](classify.md), and [pose](pose.md) estimation. Each of these tasks has a different objective and use case.
 <p align="center">
 <br>
@@ -19,18 +19,18 @@ YOLOv8 is an AI framework that supports multiple computer vision **tasks**. The
 allowfullscreen>
 </iframe>
 <br>
-<strong>Watch:</strong> Explore Ultralytics YOLO Tasks: Object Detection, Segmentation, OBB, Tracking, and Pose Estimation.
+<strong>Watch:</strong> Explore Ultralytics YOLO Tasks: [Object Detection](https://www.ultralytics.com/glossary/object-detection), Segmentation, OBB, Tracking, and Pose Estimation.
 </p>
 ## [Detection](detect.md)
-Detection is the primary task supported by YOLOv8. It involves detecting objects in an image or video frame and drawing bounding boxes around them. The detected objects are classified into different categories based on their features. YOLOv8 can detect multiple objects in a single image or video frame with high accuracy and speed.
+Detection is the primary task supported by YOLOv8. It involves detecting objects in an image or video frame and drawing bounding boxes around them. The detected objects are classified into different categories based on their features. YOLOv8 can detect multiple objects in a single image or video frame with high [accuracy](https://www.ultralytics.com/glossary/accuracy) and speed.
 [Detection Examples](detect.md){ .md-button }
 ## [Segmentation](segment.md)
-Segmentation is a task that involves segmenting an image into different regions based on the content of the image. Each region is assigned a label based on its content. This task is useful in applications such as image segmentation and medical imaging. YOLOv8 uses a variant of the U-Net architecture to perform segmentation.
+Segmentation is a task that involves segmenting an image into different regions based on the content of the image. Each region is assigned a label based on its content. This task is useful in applications such as [image segmentation](https://www.ultralytics.com/glossary/image-segmentation) and medical imaging. YOLOv8 uses a variant of the U-Net architecture to perform segmentation.
 [Segmentation Examples](segment.md){ .md-button }
@@ -114,7 +114,7 @@ For more details and implementation tips, visit our [pose estimation examples](p
 ### Why should I choose Ultralytics YOLOv8 for oriented object detection (OBB)?
-Oriented Object Detection (OBB) with YOLOv8 provides enhanced precision by detecting objects with an additional angle parameter. This feature is beneficial for applications requiring accurate localization of rotated objects, such as aerial imagery analysis and warehouse automation.
+Oriented Object Detection (OBB) with YOLOv8 provides enhanced [precision](https://www.ultralytics.com/glossary/precision) by detecting objects with an additional angle parameter. This feature is beneficial for applications requiring accurate localization of rotated objects, such as aerial imagery analysis and warehouse automation.
 - **Increased Precision:** The angle component reduces false positives for rotated objects.
 - **Versatile Applications:** Useful for tasks in geospatial analysis, robotics, etc.

docs/en/tasks/obb.md

@@ -5,7 +5,7 @@ keywords: Oriented Bounding Boxes, OBB, Object Detection, YOLOv8, Ultralytics, D
 model_name: yolov8n-obb
 ---
-# Oriented Bounding Boxes Object Detection
+# Oriented Bounding Boxes [Object Detection](https://www.ultralytics.com/glossary/object-detection)
 <!-- obb task poster -->
@@ -55,7 +55,7 @@ YOLOv8 pretrained OBB models are shown here, which are pretrained on the [DOTAv1
 ## Train
-Train YOLOv8n-obb on the `dota8.yaml` dataset for 100 epochs at image size 640. For a full list of available arguments see the [Configuration](../usage/cfg.md) page.
+Train YOLOv8n-obb on the `dota8.yaml` dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) at image size 640. For a full list of available arguments see the [Configuration](../usage/cfg.md) page.
 !!! example
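
A corresponding OBB training sketch, assuming `dota8.yaml` is available as a bundled dataset config:

```python
from ultralytics import YOLO

# Pretrained YOLOv8 nano oriented-bounding-box model
model = YOLO("yolov8n-obb.pt")

# Train on the DOTA8 sample dataset for 100 epochs at image size 640
results = model.train(data="dota8.yaml", epochs=100, imgsz=640)
```
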
@@ -103,7 +103,7 @@ OBB dataset format can be found in detail in the [Dataset Guide](../datasets/obb
 ## Val
-Validate trained YOLOv8n-obb model accuracy on the DOTA8 dataset. No arguments are needed as the `model` retains its training `data` and arguments as model attributes.
+Validate trained YOLOv8n-obb model [accuracy](https://www.ultralytics.com/glossary/accuracy) on the DOTA8 dataset. No arguments are needed as the `model` retains its training `data` and arguments as model attributes.
 !!! example
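
And the matching validation call; no dataset argument is needed for a model trained as above:

```python
from ultralytics import YOLO

model = YOLO("path/to/best.pt")  # trained OBB weights

metrics = model.val()  # DOTA8 data and settings are reused from training
```
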

docs/en/tasks/pose.md

@@ -117,7 +117,7 @@ YOLO pose dataset format can be found in detail in the [Dataset Guide](../datase
 ## Val
-Validate trained YOLOv8n-pose model accuracy on the COCO128-pose dataset. No arguments are needed as the `model` retains its training `data` and arguments as model attributes.
+Validate trained YOLOv8n-pose model [accuracy](https://www.ultralytics.com/glossary/accuracy) on the COCO128-pose dataset. No arguments are needed as the `model` retains its training `data` and arguments as model attributes.
 !!! example
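
The pose variant follows the same validation pattern; a minimal sketch:

```python
from ultralytics import YOLO

model = YOLO("path/to/best.pt")  # trained pose weights

metrics = model.val()  # COCO128-pose data and settings are reused from training
```
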

docs/en/tasks/segment.md

@@ -9,7 +9,7 @@ model_name: yolov8n-seg
 <img width="1024" src="https://github.com/ultralytics/docs/releases/download/0/instance-segmentation-examples.avif" alt="Instance segmentation examples">
-Instance segmentation goes a step further than object detection and involves identifying individual objects in an image and segmenting them from the rest of the image.
+[Instance segmentation](https://www.ultralytics.com/glossary/instance-segmentation) goes a step further than object detection and involves identifying individual objects in an image and segmenting them from the rest of the image.
 The output of an instance segmentation model is a set of masks or contours that outline each object in the image, along with class labels and confidence scores for each object. Instance segmentation is useful when you need to know not only where objects are in an image, but also what their exact shape is.
@@ -47,7 +47,7 @@ YOLOv8 pretrained Segment models are shown here. Detect, Segment and Pose models
 ## Train
-Train YOLOv8n-seg on the COCO128-seg dataset for 100 epochs at image size 640. For a full list of available arguments see the [Configuration](../usage/cfg.md) page.
+Train YOLOv8n-seg on the COCO128-seg dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) at image size 640. For a full list of available arguments see the [Configuration](../usage/cfg.md) page.
 !!! example
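
A segmentation training sketch matching the line above, assuming `coco128-seg.yaml` is a bundled dataset config:

```python
from ultralytics import YOLO

# Pretrained YOLOv8 nano segmentation model
model = YOLO("yolov8n-seg.pt")

# Train on COCO128-seg for 100 epochs at image size 640
results = model.train(data="coco128-seg.yaml", epochs=100, imgsz=640)
```
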
@@ -84,7 +84,7 @@ YOLO segmentation dataset format can be found in detail in the [Dataset Guide](.
 ## Val
-Validate trained YOLOv8n-seg model accuracy on the COCO128-seg dataset. No arguments are needed as the `model` retains its training `data` and arguments as model attributes.
+Validate trained YOLOv8n-seg model [accuracy](https://www.ultralytics.com/glossary/accuracy) on the COCO128-seg dataset. No arguments are needed as the `model` retains its training `data` and arguments as model attributes.
 !!! example
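
Validation again needs no arguments; the `metrics.box` / `metrics.seg` attribute names are assumptions about the segmentation metrics object:

```python
from ultralytics import YOLO

model = YOLO("path/to/best.pt")  # trained segmentation weights

metrics = model.val()   # COCO128-seg data and settings are reused from training
print(metrics.box.map)  # bounding-box mAP50-95
print(metrics.seg.map)  # mask mAP50-95
```
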
@@ -204,7 +204,7 @@ To train a YOLOv8 segmentation model on a custom dataset, you first need to prep
 Check the [Configuration](../usage/cfg.md) page for more available arguments.
-### What is the difference between object detection and instance segmentation in YOLOv8?
+### What is the difference between [object detection](https://www.ultralytics.com/glossary/object-detection) and instance segmentation in YOLOv8?
 Object detection identifies and localizes objects within an image by drawing bounding boxes around them, whereas instance segmentation not only identifies the bounding boxes but also delineates the exact shape of each object. YOLOv8 instance segmentation models provide masks or contours that outline each detected object, which is particularly useful for tasks where knowing the precise shape of objects is important, such as medical imaging or autonomous driving.
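
The difference in outputs described in that answer can be seen directly from the results objects; a rough sketch (the `boxes`/`masks` attribute names and the sample image URL are assumptions):

```python
from ultralytics import YOLO

img = "https://ultralytics.com/images/bus.jpg"

# Detection: bounding boxes only
det = YOLO("yolov8n.pt")(img)[0]
print(det.boxes.xyxy.shape)  # (N, 4) box corners
print(det.masks)             # None for a detection model

# Instance segmentation: boxes plus a mask/contour per object
seg = YOLO("yolov8n-seg.pt")(img)[0]
print(seg.masks.xy[0].shape)  # polygon outline of the first object
```
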
@@ -238,7 +238,7 @@ Loading and validating a pretrained YOLOv8 segmentation model is straightforward
 yolo segment val model=yolov8n-seg.pt
 ```
-These steps will provide you with validation metrics like Mean Average Precision (mAP), crucial for assessing model performance.
+These steps will provide you with validation metrics like [Mean Average Precision](https://www.ultralytics.com/glossary/mean-average-precision-map) (mAP), crucial for assessing model performance.
 ### How can I export a YOLOv8 segmentation model to ONNX format?