Update YOLO11 Actions and Docs (#16596)

Signed-off-by: UltralyticsAssistant <web@ultralytics.com>
Authored by Ultralytics Assistant on 2024-10-01 16:58:12 +02:00, committed by GitHub
parent 51e93d6111
commit 97f38409fb
124 changed files with 1948 additions and 1948 deletions

View file

@@ -45,7 +45,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
## Usage
To train Ultralytics YOLOv8n model on the Carparts Segmentation dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
To train Ultralytics YOLO11n model on the Carparts Segmentation dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! example "Train Example"
@@ -55,7 +55,7 @@ To train Ultralytics YOLOv8n model on the Carparts Segmentation dataset for 100
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-seg.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="carparts-seg.yaml", epochs=100, imgsz=640)
@@ -65,7 +65,7 @@ To train Ultralytics YOLOv8n model on the Carparts Segmentation dataset for 100
```bash
# Start training from a pretrained *.pt model
yolo segment train data=carparts-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
yolo segment train data=carparts-seg.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
```
## Sample Data and Annotations
@@ -108,9 +108,9 @@ We extend our thanks to the Roboflow team for their dedication in developing and
The [Roboflow Carparts Segmentation Dataset](https://universe.roboflow.com/gianmarco-russo-vt9xr/car-seg-un1pm?ref=ultralytics) is a curated collection of images and videos specifically designed for car part segmentation tasks in computer vision. This dataset includes a diverse range of visuals captured from multiple perspectives, making it an invaluable resource for training and testing segmentation models for automotive applications.
### How can I use the Carparts Segmentation Dataset with Ultralytics YOLOv8?
### How can I use the Carparts Segmentation Dataset with Ultralytics YOLO11?
To train a YOLOv8 model on the Carparts Segmentation dataset, you can follow these steps:
To train a YOLO11 model on the Carparts Segmentation dataset, you can follow these steps:
!!! example "Train Example"
@@ -120,7 +120,7 @@ To train a YOLOv8 model on the Carparts Segmentation dataset, you can follow the
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-seg.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="carparts-seg.yaml", epochs=100, imgsz=640)
@@ -130,7 +130,7 @@ To train a YOLOv8 model on the Carparts Segmentation dataset, you can follow the
```bash
# Start training from a pretrained *.pt model
yolo segment train data=carparts-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
yolo segment train data=carparts-seg.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
```
For more details, refer to the [Training](../../modes/train.md) documentation.

View file

@@ -1,7 +1,7 @@
---
comments: true
description: Explore the COCO-Seg dataset, an extension of COCO, with detailed segmentation annotations. Learn how to train YOLO models with COCO-Seg.
keywords: COCO-Seg, dataset, YOLO models, instance segmentation, object detection, COCO dataset, YOLOv8, computer vision, Ultralytics, machine learning
keywords: COCO-Seg, dataset, YOLO models, instance segmentation, object detection, COCO dataset, YOLO11, computer vision, Ultralytics, machine learning
---
# COCO-Seg Dataset
@@ -12,11 +12,11 @@ The [COCO-Seg](https://cocodataset.org/#home) dataset, an extension of the COCO
| Model | size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| -------------------------------------------------------------------------------------------- | --------------------- | -------------------- | --------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLOv8n-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n-seg.pt) | 640 | 36.7 | 30.5 | 96.1 | 1.21 | 3.4 | 12.6 |
| [YOLOv8s-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s-seg.pt) | 640 | 44.6 | 36.8 | 155.7 | 1.47 | 11.8 | 42.6 |
| [YOLOv8m-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m-seg.pt) | 640 | 49.9 | 40.8 | 317.0 | 2.18 | 27.3 | 110.2 |
| [YOLOv8l-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8l-seg.pt) | 640 | 52.3 | 42.6 | 572.4 | 2.79 | 46.0 | 220.5 |
| [YOLOv8x-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x-seg.pt) | 640 | 53.4 | 43.4 | 712.1 | 4.02 | 71.8 | 344.1 |
| [YOLO11n-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n-seg.pt) | 640 | 36.7 | 30.5 | 96.1 | 1.21 | 3.4 | 12.6 |
| [YOLO11s-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s-seg.pt) | 640 | 44.6 | 36.8 | 155.7 | 1.47 | 11.8 | 42.6 |
| [YOLO11m-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11m-seg.pt) | 640 | 49.9 | 40.8 | 317.0 | 2.18 | 27.3 | 110.2 |
| [YOLO11l-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11l-seg.pt) | 640 | 52.3 | 42.6 | 572.4 | 2.79 | 46.0 | 220.5 |
| [YOLO11x-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11x-seg.pt) | 640 | 53.4 | 43.4 | 712.1 | 4.02 | 71.8 | 344.1 |
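The params and FLOPs columns above can be sanity-checked directly from a loaded checkpoint; a minimal sketch, assuming the `ultralytics` package is installed and using its `YOLO.info()` summary helper:

```python
from ultralytics import YOLO

# Load one of the pretrained segmentation checkpoints listed above
model = YOLO("yolo11n-seg.pt")

# Print a model summary; the parameter count and GFLOPs should roughly
# match the params (M) and FLOPs (B) columns of the table
model.info()
```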
## Key Features
@@ -49,7 +49,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
## Usage
To train a YOLOv8n-seg model on the COCO-Seg dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
To train a YOLO11n-seg model on the COCO-Seg dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! example "Train Example"
@@ -59,7 +59,7 @@ To train a YOLOv8n-seg model on the COCO-Seg dataset for 100 [epochs](https://ww
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-seg.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="coco-seg.yaml", epochs=100, imgsz=640)
@@ -69,7 +69,7 @@ To train a YOLOv8n-seg model on the COCO-Seg dataset for 100 [epochs](https://ww
```bash
# Start training from a pretrained *.pt model
yolo segment train data=coco-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
yolo segment train data=coco-seg.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
```
## Sample Images and Annotations
@@ -109,9 +109,9 @@ We extend our thanks to the COCO Consortium for creating and maintaining this in
The [COCO-Seg](https://cocodataset.org/#home) dataset is an extension of the original COCO (Common Objects in Context) dataset, specifically designed for instance segmentation tasks. While it uses the same images as the COCO dataset, COCO-Seg includes more detailed segmentation annotations, making it a powerful resource for researchers and developers focusing on object instance segmentation.
### How can I train a YOLOv8 model using the COCO-Seg dataset?
### How can I train a YOLO11 model using the COCO-Seg dataset?
To train a YOLOv8n-seg model on the COCO-Seg dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a detailed list of available arguments, refer to the model [Training](../../modes/train.md) page.
To train a YOLO11n-seg model on the COCO-Seg dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a detailed list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! example "Train Example"
@@ -121,7 +121,7 @@ To train a YOLOv8n-seg model on the COCO-Seg dataset for 100 epochs with an imag
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-seg.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="coco-seg.yaml", epochs=100, imgsz=640)
@@ -131,7 +131,7 @@ To train a YOLOv8n-seg model on the COCO-Seg dataset for 100 epochs with an imag
```bash
# Start training from a pretrained *.pt model
yolo segment train data=coco-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
yolo segment train data=coco-seg.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
```
### What are the key features of the COCO-Seg dataset?
@@ -145,15 +145,15 @@ The COCO-Seg dataset includes several key features:
### What pretrained models are available for COCO-Seg, and what are their performance metrics?
The COCO-Seg dataset supports multiple pretrained YOLOv8 segmentation models with varying performance metrics. Here's a summary of the available models and their key metrics:
The COCO-Seg dataset supports multiple pretrained YOLO11 segmentation models with varying performance metrics. Here's a summary of the available models and their key metrics:
| Model | size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| -------------------------------------------------------------------------------------------- | --------------------- | -------------------- | --------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLOv8n-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n-seg.pt) | 640 | 36.7 | 30.5 | 96.1 | 1.21 | 3.4 | 12.6 |
| [YOLOv8s-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s-seg.pt) | 640 | 44.6 | 36.8 | 155.7 | 1.47 | 11.8 | 42.6 |
| [YOLOv8m-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m-seg.pt) | 640 | 49.9 | 40.8 | 317.0 | 2.18 | 27.3 | 110.2 |
| [YOLOv8l-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8l-seg.pt) | 640 | 52.3 | 42.6 | 572.4 | 2.79 | 46.0 | 220.5 |
| [YOLOv8x-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x-seg.pt) | 640 | 53.4 | 43.4 | 712.1 | 4.02 | 71.8 | 344.1 |
| [YOLO11n-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n-seg.pt) | 640 | 36.7 | 30.5 | 96.1 | 1.21 | 3.4 | 12.6 |
| [YOLO11s-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s-seg.pt) | 640 | 44.6 | 36.8 | 155.7 | 1.47 | 11.8 | 42.6 |
| [YOLO11m-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11m-seg.pt) | 640 | 49.9 | 40.8 | 317.0 | 2.18 | 27.3 | 110.2 |
| [YOLO11l-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11l-seg.pt) | 640 | 52.3 | 42.6 | 572.4 | 2.79 | 46.0 | 220.5 |
| [YOLO11x-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11x-seg.pt) | 640 | 53.4 | 43.4 | 712.1 | 4.02 | 71.8 | 344.1 |
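A minimal sketch of reproducing the box and mask mAP values in this table, assuming the data described by `coco-seg.yaml` is available locally; `model.val()`, `metrics.box.map`, and `metrics.seg.map` are the standard Ultralytics validation entry points assumed here:

```python
from ultralytics import YOLO

# Load a pretrained segmentation checkpoint
model = YOLO("yolo11n-seg.pt")

# Validate on COCO-Seg; box and mask mAP50-95 correspond to the table values
metrics = model.val(data="coco-seg.yaml")
print(metrics.box.map, metrics.seg.map)
```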
### How is the COCO-Seg dataset structured and what subsets does it contain?

View file

@@ -1,7 +1,7 @@
---
comments: true
description: Discover the versatile and manageable COCO8-Seg dataset by Ultralytics, ideal for testing and debugging segmentation models or new detection approaches.
keywords: COCO8-Seg, Ultralytics, segmentation dataset, YOLOv8, COCO 2017, model training, computer vision, dataset configuration
keywords: COCO8-Seg, Ultralytics, segmentation dataset, YOLO11, COCO 2017, model training, computer vision, dataset configuration
---
# COCO8-Seg Dataset
@@ -10,7 +10,7 @@ keywords: COCO8-Seg, Ultralytics, segmentation dataset, YOLOv8, COCO 2017, model
[Ultralytics](https://www.ultralytics.com/) COCO8-Seg is a small, but versatile [instance segmentation](https://www.ultralytics.com/glossary/instance-segmentation) dataset composed of the first 8 images of the COCO train 2017 set, 4 for training and 4 for validation. This dataset is ideal for testing and debugging segmentation models, or for experimenting with new detection approaches. With 8 images, it is small enough to be easily manageable, yet diverse enough to test training pipelines for errors and act as a sanity check before training larger datasets.
This dataset is intended for use with Ultralytics [HUB](https://hub.ultralytics.com/) and [YOLOv8](https://github.com/ultralytics/ultralytics).
This dataset is intended for use with Ultralytics [HUB](https://hub.ultralytics.com/) and [YOLO11](https://github.com/ultralytics/ultralytics).
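Because the dataset contains only 8 images, a very short run is enough to confirm that a segmentation training pipeline works end to end; a minimal sketch, where `epochs=3` is an arbitrary small value chosen only to keep the check fast:

```python
from ultralytics import YOLO

# Quick end-to-end sanity check on the 8-image COCO8-Seg dataset
model = YOLO("yolo11n-seg.pt")
results = model.train(data="coco8-seg.yaml", epochs=3, imgsz=640)
```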
## Dataset YAML
@@ -24,7 +24,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
## Usage
To train a YOLOv8n-seg model on the COCO8-Seg dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
To train a YOLO11n-seg model on the COCO8-Seg dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! example "Train Example"
@@ -34,7 +34,7 @@ To train a YOLOv8n-seg model on the COCO8-Seg dataset for 100 [epochs](https://w
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-seg.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="coco8-seg.yaml", epochs=100, imgsz=640)
@@ -44,7 +44,7 @@ To train a YOLOv8n-seg model on the COCO8-Seg dataset for 100 [epochs](https://w
```bash
# Start training from a pretrained *.pt model
yolo segment train data=coco8-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
yolo segment train data=coco8-seg.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
```
## Sample Images and Annotations
@@ -80,13 +80,13 @@ We would like to acknowledge the COCO Consortium for creating and maintaining th
## FAQ
### What is the COCO8-Seg dataset, and how is it used in Ultralytics YOLOv8?
### What is the COCO8-Seg dataset, and how is it used in Ultralytics YOLO11?
The **COCO8-Seg dataset** is a compact instance segmentation dataset by Ultralytics, consisting of the first 8 images from the COCO train 2017 set—4 images for training and 4 for validation. This dataset is tailored for testing and debugging segmentation models or experimenting with new detection methods. It is particularly useful with Ultralytics [YOLOv8](https://github.com/ultralytics/ultralytics) and [HUB](https://hub.ultralytics.com/) for rapid iteration and pipeline error-checking before scaling to larger datasets. For detailed usage, refer to the model [Training](../../modes/train.md) page.
The **COCO8-Seg dataset** is a compact instance segmentation dataset by Ultralytics, consisting of the first 8 images from the COCO train 2017 set—4 images for training and 4 for validation. This dataset is tailored for testing and debugging segmentation models or experimenting with new detection methods. It is particularly useful with Ultralytics [YOLO11](https://github.com/ultralytics/ultralytics) and [HUB](https://hub.ultralytics.com/) for rapid iteration and pipeline error-checking before scaling to larger datasets. For detailed usage, refer to the model [Training](../../modes/train.md) page.
### How can I train a YOLOv8n-seg model using the COCO8-Seg dataset?
### How can I train a YOLO11n-seg model using the COCO8-Seg dataset?
To train a **YOLOv8n-seg** model on the COCO8-Seg dataset for 100 epochs with an image size of 640, you can use Python or CLI commands. Here's a quick example:
To train a **YOLO11n-seg** model on the COCO8-Seg dataset for 100 epochs with an image size of 640, you can use Python or CLI commands. Here's a quick example:
!!! example "Train Example"
@@ -96,7 +96,7 @@ To train a **YOLOv8n-seg** model on the COCO8-Seg dataset for 100 epochs with an
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.pt") # Load a pretrained model (recommended for training)
model = YOLO("yolo11n-seg.pt") # Load a pretrained model (recommended for training)
# Train the model
results = model.train(data="coco8-seg.yaml", epochs=100, imgsz=640)
@@ -106,7 +106,7 @@ To train a **YOLOv8n-seg** model on the COCO8-Seg dataset for 100 epochs with an
```bash
# Start training from a pretrained *.pt model
yolo segment train data=coco8-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
yolo segment train data=coco8-seg.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
```
For a thorough explanation of available arguments and configuration options, you can check the [Training](../../modes/train.md) documentation.

View file

@@ -34,7 +34,7 @@ A YAML (Yet Another Markup Language) file is employed to outline the configurati
## Usage
To train Ultralytics YOLOv8n model on the Crack Segmentation dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
To train Ultralytics YOLO11n model on the Crack Segmentation dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! example "Train Example"
@@ -44,7 +44,7 @@ To train Ultralytics YOLOv8n model on the Crack Segmentation dataset for 100 [ep
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-seg.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="crack-seg.yaml", epochs=100, imgsz=640)
@@ -54,7 +54,7 @@ To train Ultralytics YOLOv8n model on the Crack Segmentation dataset for 100 [ep
```bash
# Start training from a pretrained *.pt model
yolo segment train data=crack-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
yolo segment train data=crack-seg.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
```
## Sample Data and Annotations
@@ -98,9 +98,9 @@ We would like to acknowledge the Roboflow team for creating and maintaining the
The [Roboflow Crack Segmentation Dataset](https://universe.roboflow.com/university-bswxt/crack-bphdr?ref=ultralytics) is a comprehensive collection of 4029 static images designed specifically for transportation and public safety studies. It is ideal for tasks such as self-driving car model development and infrastructure maintenance. The dataset includes training, testing, and validation sets, aiding in accurate crack detection and segmentation.
### How do I train a model using the Crack Segmentation Dataset with Ultralytics YOLOv8?
### How do I train a model using the Crack Segmentation Dataset with Ultralytics YOLO11?
To train an Ultralytics YOLOv8 model on the Crack Segmentation dataset, use the following code snippets. Detailed instructions and further parameters can be found on the model [Training](../../modes/train.md) page.
To train an Ultralytics YOLO11 model on the Crack Segmentation dataset, use the following code snippets. Detailed instructions and further parameters can be found on the model [Training](../../modes/train.md) page.
!!! example "Train Example"
@@ -110,7 +110,7 @@ To train an Ultralytics YOLOv8 model on the Crack Segmentation dataset, use the
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-seg.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="crack-seg.yaml", epochs=100, imgsz=640)
@@ -120,7 +120,7 @@ To train an Ultralytics YOLOv8 model on the Crack Segmentation dataset, use the
```bash
# Start training from a pretrained *.pt model
yolo segment train data=crack-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
yolo segment train data=crack-seg.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
```
### Why should I use the Crack Segmentation Dataset for my self-driving car project?

View file

@@ -74,7 +74,7 @@ The `train` and `val` fields specify the paths to the directories containing the
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-seg.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="coco8-seg.yaml", epochs=100, imgsz=640)
@@ -84,7 +84,7 @@ The `train` and `val` fields specify the paths to the directories containing the
```bash
# Start training from a pretrained *.pt model
yolo segment train data=coco8-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
yolo segment train data=coco8-seg.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
```
## Supported Datasets
@@ -137,13 +137,13 @@ To auto-annotate your dataset using the Ultralytics framework, you can use the `
```python
from ultralytics.data.annotator import auto_annotate
auto_annotate(data="path/to/images", det_model="yolov8x.pt", sam_model="sam_b.pt")
auto_annotate(data="path/to/images", det_model="yolo11x.pt", sam_model="sam_b.pt")
```
| Argument | Type | Description | Default |
| ------------ | ----------------------- | ----------------------------------------------------------------------------------------------------------- | -------------- |
| `data` | `str` | Path to a folder containing images to be annotated. | `None` |
| `det_model` | `str, optional` | Pre-trained YOLO detection model. Defaults to `'yolov8x.pt'`. | `'yolov8x.pt'` |
| `det_model` | `str, optional` | Pre-trained YOLO detection model. Defaults to `'yolo11x.pt'`. | `'yolo11x.pt'` |
| `sam_model` | `str, optional` | Pre-trained SAM segmentation model. Defaults to `'sam_b.pt'`. | `'sam_b.pt'` |
| `device` | `str, optional` | Device to run the models on. Defaults to an empty string (CPU or GPU, if available). | `''` |
| `output_dir` | `str or None, optional` | Directory to save the annotated results. Defaults to a `'labels'` folder in the same directory as `'data'`. | `None` |
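As a rough sketch of how the optional arguments in the table can be combined in a single call (the `cuda:0` device and the `path/to/...` locations are placeholders, not requirements):

```python
from ultralytics.data.annotator import auto_annotate

auto_annotate(
    data="path/to/images",  # folder of images to annotate
    det_model="yolo11x.pt",  # detection model that proposes boxes
    sam_model="sam_b.pt",  # SAM model that converts boxes into masks
    device="cuda:0",  # placeholder; use "" to auto-select CPU/GPU
    output_dir="path/to/labels",  # custom folder for the generated labels
)
```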
@@ -195,7 +195,7 @@ Auto-annotation in Ultralytics YOLO allows you to generate segmentation annotati
```python
from ultralytics.data.annotator import auto_annotate
auto_annotate(data="path/to/images", det_model="yolov8x.pt", sam_model="sam_b.pt")
auto_annotate(data="path/to/images", det_model="yolo11x.pt", sam_model="sam_b.pt")
```
This function automates the annotation process, making it faster and more efficient. For more details, explore the [Auto-Annotation](#auto-annotation) section.

View file

@@ -34,7 +34,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
## Usage
To train Ultralytics YOLOv8n model on the Package Segmentation dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
To train Ultralytics YOLO11n model on the Package Segmentation dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! example "Train Example"
@@ -44,7 +44,7 @@ To train Ultralytics YOLOv8n model on the Package Segmentation dataset for 100 [
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-seg.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="package-seg.yaml", epochs=100, imgsz=640)
@@ -54,7 +54,7 @@ To train Ultralytics YOLOv8n model on the Package Segmentation dataset for 100 [
```bash
# Start training from a pretrained *.pt model
yolo segment train data=package-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
yolo segment train data=package-seg.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
```
## Sample Data and Annotations
@@ -97,9 +97,9 @@ We express our gratitude to the Roboflow team for their efforts in creating and
The [Roboflow Package Segmentation Dataset](https://universe.roboflow.com/factorypackage/factory_package?ref=ultralytics) is a curated collection of images tailored for tasks involving package segmentation. It includes diverse images of packages in various contexts, making it invaluable for training and evaluating segmentation models. This dataset is particularly useful for applications in logistics, warehouse automation, and any project requiring precise package analysis. It helps optimize logistics and enhance vision models for accurate package identification and sorting.
### How do I train an Ultralytics YOLOv8 model on the Package Segmentation Dataset?
### How do I train an Ultralytics YOLO11 model on the Package Segmentation Dataset?
You can train an Ultralytics YOLOv8n model using both Python and CLI methods. Use the snippets below:
You can train an Ultralytics YOLO11n model using both Python and CLI methods. Use the snippets below:
!!! example "Train Example"
@@ -109,7 +109,7 @@ You can train an Ultralytics YOLOv8n model using both Python and CLI methods. Us
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.pt") # load a pretrained model
model = YOLO("yolo11n-seg.pt") # load a pretrained model
# Train the model
results = model.train(data="package-seg.yaml", epochs=100, imgsz=640)
@@ -119,7 +119,7 @@ You can train an Ultralytics YOLOv8n model using both Python and CLI methods. Us
```bash
# Start training from a pretrained *.pt model
yolo segment train data=package-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
yolo segment train data=package-seg.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
```
Refer to the model [Training](../../modes/train.md) page for more details.
@@ -134,9 +134,9 @@ The dataset is structured into three main components:
This structure ensures a balanced dataset for thorough model training, validation, and testing, enhancing the performance of segmentation algorithms.
### Why should I use Ultralytics YOLOv8 with the Package Segmentation Dataset?
### Why should I use Ultralytics YOLO11 with the Package Segmentation Dataset?
Ultralytics YOLOv8 provides state-of-the-art [accuracy](https://www.ultralytics.com/glossary/accuracy) and speed for real-time object detection and segmentation tasks. Using it with the Package Segmentation Dataset allows you to leverage YOLOv8's capabilities for precise package segmentation. This combination is especially beneficial for industries like logistics and warehouse automation, where accurate package identification is critical. For more information, check out our [page on YOLOv8 segmentation](https://docs.ultralytics.com/models/yolov8/).
Ultralytics YOLO11 provides state-of-the-art [accuracy](https://www.ultralytics.com/glossary/accuracy) and speed for real-time object detection and segmentation tasks. Using it with the Package Segmentation Dataset allows you to leverage YOLO11's capabilities for precise package segmentation. This combination is especially beneficial for industries like logistics and warehouse automation, where accurate package identification is critical. For more information, check out our [page on YOLO11 segmentation](https://docs.ultralytics.com/models/yolo11/).
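Once a model has been fine-tuned on `package-seg.yaml`, it can be applied to new images in a few lines; a minimal sketch, where the weights path and image path are placeholders for your own files:

```python
from ultralytics import YOLO

# Load weights fine-tuned on the Package Segmentation dataset (placeholder path)
model = YOLO("runs/segment/train/weights/best.pt")

# Run segmentation on a new package image and visualize the predicted masks
results = model("path/to/package_image.jpg")
results[0].show()
```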
### How can I access and use the package-seg.yaml file for the Package Segmentation Dataset?