ultralytics 8.1.42 add YOLOv9 Segment models (#9296)
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: Laughing <61612323+Laughing-q@users.noreply.github.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
parent 1e547e60a0
commit 3208eb72ef
25 changed files with 236 additions and 93 deletions
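Since the headline change of this release adds YOLOv9 Segment models, a minimal usage sketch follows. The checkpoint name `yolov9c-seg.pt` is an assumption based on the existing `yolov9c`/`yolov9e` detect naming scheme and is not confirmed by this diff:

```python
from ultralytics import YOLO

# Load a YOLOv9 segmentation model. The checkpoint name "yolov9c-seg.pt"
# is an assumption following the yolov9c/yolov9e detect naming scheme.
model = YOLO("yolov9c-seg.pt")

# Run segmentation inference on a sample image
results = model("https://ultralytics.com/images/bus.jpg")

# Each Results object carries boxes and segmentation masks
for r in results:
    print(r.boxes.xyxy)  # bounding boxes, one row per detection
    print(r.masks.xy)    # segment polygons in pixel coordinates
```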
@@ -75,7 +75,6 @@ To train a YOLOv8n model on the African wildlife dataset for 100 epochs with an
# Start prediction with a finetuned *.pt model
yolo detect predict model='path/to/best.pt' imgsz=640 source="https://ultralytics.com/assets/african-wildlife-sample.jpg"
```
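For reference, a Python-API sketch equivalent to the CLI call above; `path/to/best.pt` is the same placeholder as in the command, not a real file:

```python
from ultralytics import YOLO

# Load the fine-tuned weights ("path/to/best.pt" is the placeholder
# carried over from the CLI example above)
model = YOLO("path/to/best.pt")

# Mirror the CLI call: same source URL and imgsz=640
results = model.predict(
    source="https://ultralytics.com/assets/african-wildlife-sample.jpg",
    imgsz=640,
)
```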
## Sample Images and Annotations
@@ -89,4 +88,4 @@ This example illustrates the variety and complexity of images in the African wil
## Citations and Acknowledgments
The dataset has been released under the [AGPL-3.0 License](https://github.com/ultralytics/ultralytics/blob/main/LICENSE).
@@ -74,7 +74,6 @@ To train a YOLOv8n model on the brain tumor dataset for 100 epochs with an image
# Start prediction with a finetuned *.pt model
yolo detect predict model='path/to/best.pt' imgsz=640 source="https://ultralytics.com/assets/brain-tumor-sample.jpg"
```
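Beyond running the command, the returned results can be inspected programmatically; a minimal sketch assuming the same placeholder weights:

```python
from ultralytics import YOLO

model = YOLO("path/to/best.pt")  # placeholder path from the CLI example
results = model.predict(
    source="https://ultralytics.com/assets/brain-tumor-sample.jpg",
    imgsz=640,
)

# Print class name, confidence, and box corners for each detection
for r in results:
    for box in r.boxes:
        cls_id = int(box.cls)  # predicted class index
        conf = float(box.conf)  # confidence score
        x1, y1, x2, y2 = box.xyxy[0].tolist()  # pixel coordinates
        print(r.names[cls_id], conf, (x1, y1, x2, y2))
```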
## Sample Images and Annotations
@@ -88,4 +87,4 @@ This example highlights the diversity and intricacy of images within the brain t
## Citations and Acknowledgments
The dataset has been released under the [AGPL-3.0 License](https://github.com/ultralytics/ultralytics/blob/main/LICENSE).
@@ -29,7 +29,6 @@ The LVIS dataset is split into four subsets:
3. **Minival**: This subset is exactly the same as the COCO val2017 set, which has 5k images used for validation during model training.
4. **Test**: This subset consists of 20k images used for testing and benchmarking the trained models. Ground truth annotations for this subset are not publicly available, and the results are submitted to the [LVIS evaluation server](https://eval.ai/web/challenges/challenge-page/675/overview) for performance evaluation.
## Applications
The LVIS dataset is widely used for training and evaluating deep learning models in object detection (such as YOLO, Faster R-CNN, and SSD) and instance segmentation (such as Mask R-CNN). The dataset's diverse set of object categories, large number of annotated images, and standardized evaluation metrics make it an essential resource for computer vision researchers and practitioners.
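As a concrete example of that evaluation workflow, here is a sketch of validating a pretrained detector on LVIS with the ultralytics Python API; it assumes the package resolves an `lvis.yaml` dataset config, which would trigger a dataset download on first use:

```python
from ultralytics import YOLO

# Validate a pretrained detector on LVIS. Assumes the ultralytics
# package ships an "lvis.yaml" dataset config it can resolve by name.
model = YOLO("yolov8n.pt")
metrics = model.val(data="lvis.yaml", imgsz=640)
print(metrics.box.map)  # COCO-style mAP50-95
```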
@@ -36,7 +36,7 @@ Bounding box object detection is a computer vision technique that involves detec
- [Argoverse](detect/argoverse.md): A dataset containing 3D tracking and motion forecasting data from urban environments with rich annotations.
- [COCO](detect/coco.md): A large-scale dataset designed for object detection, segmentation, and captioning with over 200K labeled images.
-- [LVIS](lvis.md): A large-scale object detection, segmentation, and captioning dataset with 1203 object categories.
+- [LVIS](detect/lvis.md): A large-scale object detection, segmentation, and captioning dataset with 1203 object categories.
- [COCO8](detect/coco8.md): Contains the first 4 images from COCO train and COCO val, suitable for quick tests.
- [Global Wheat 2020](detect/globalwheat2020.md): A dataset of wheat head images collected from around the world for object detection and localization tasks.
- [Objects365](detect/objects365.md): A high-quality, large-scale dataset for object detection with 365 object categories and over 600K annotated images.