From e35cd0b490714b743b8a4fd5b5f8b0ec05e9dcf2 Mon Sep 17 00:00:00 2001
From: Glenn Jocher
Date: Tue, 7 Jan 2025 10:12:58 +0100
Subject: [PATCH] Ultralytics Refactor https://ultralytics.com/actions (#18555)

Signed-off-by: Glenn Jocher
Co-authored-by: UltralyticsAssistant
---
 .github/workflows/format.yml                 | 2 +-
 docs/en/index.md                             | 2 +-
 docs/en/models/index.md                      | 2 +-
 docs/en/models/yolov6.md                     | 2 +-
 ultralytics/solutions/streamlit_inference.py | 6 +-----
 5 files changed, 5 insertions(+), 9 deletions(-)

diff --git a/.github/workflows/format.yml b/.github/workflows/format.yml
index c98b8762..27aba101 100644
--- a/.github/workflows/format.yml
+++ b/.github/workflows/format.yml
@@ -20,7 +20,7 @@ jobs:
       - name: Run Ultralytics Formatting
         uses: ultralytics/actions@main
         with:
-          token: ${{ secrets._GITHUB_TOKEN || secrets.GITHUB_TOKEN}}
+          token: ${{ secrets._GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
           labels: true # autolabel issues and PRs
           python: true # format Python code and docstrings
           prettier: true # format YAML, JSON, Markdown and CSS
diff --git a/docs/en/index.md b/docs/en/index.md
index 63c400cb..22e01994 100644
--- a/docs/en/index.md
+++ b/docs/en/index.md
@@ -138,7 +138,7 @@ Explore the Ultralytics Docs, a comprehensive resource designed to help you unde
 - [YOLOv3](https://pjreddie.com/media/files/papers/YOLOv3.pdf), launched in 2018, further enhanced the model's performance using a more efficient backbone network, multiple anchors and spatial pyramid pooling.
 - [YOLOv4](https://arxiv.org/abs/2004.10934) was released in 2020, introducing innovations like Mosaic [data augmentation](https://www.ultralytics.com/glossary/data-augmentation), a new anchor-free detection head, and a new [loss function](https://www.ultralytics.com/glossary/loss-function).
 - [YOLOv5](https://github.com/ultralytics/yolov5) further improved the model's performance and added new features such as hyperparameter optimization, integrated experiment tracking and automatic export to popular export formats.
-- [YOLOv6](https://github.com/meituan/YOLOv6) was open-sourced by [Meituan](https://about.meituan.com/) in 2022 and is in use in many of the company's autonomous delivery robots.
+- [YOLOv6](https://github.com/meituan/YOLOv6) was open-sourced by [Meituan](https://www.meituan.com/) in 2022 and is in use in many of the company's autonomous delivery robots.
 - [YOLOv7](https://github.com/WongKinYiu/yolov7) added additional tasks such as pose estimation on the COCO keypoints dataset.
 - [YOLOv8](https://github.com/ultralytics/ultralytics) released in 2023 by Ultralytics. YOLOv8 introduced new features and improvements for enhanced performance, flexibility, and efficiency, supporting a full range of vision AI tasks,
 - [YOLOv9](models/yolov9.md) introduces innovative methods like Programmable Gradient Information (PGI) and the Generalized Efficient Layer Aggregation Network (GELAN).
diff --git a/docs/en/models/index.md b/docs/en/models/index.md
index c0f4fd33..8300c520 100644
--- a/docs/en/models/index.md
+++ b/docs/en/models/index.md
@@ -17,7 +17,7 @@ Here are some of the key models supported:
 1. **[YOLOv3](yolov3.md)**: The third iteration of the YOLO model family, originally by Joseph Redmon, known for its efficient real-time object detection capabilities.
 2. **[YOLOv4](yolov4.md)**: A darknet-native update to YOLOv3, released by Alexey Bochkovskiy in 2020.
 3. **[YOLOv5](yolov5.md)**: An improved version of the YOLO architecture by Ultralytics, offering better performance and speed trade-offs compared to previous versions.
-4. **[YOLOv6](yolov6.md)**: Released by [Meituan](https://about.meituan.com/) in 2022, and in use in many of the company's autonomous delivery robots.
+4. **[YOLOv6](yolov6.md)**: Released by [Meituan](https://www.meituan.com/) in 2022, and in use in many of the company's autonomous delivery robots.
 5. **[YOLOv7](yolov7.md)**: Updated YOLO models released in 2022 by the authors of YOLOv4.
 6. **[YOLOv8](yolov8.md)**: The latest version of the YOLO family, featuring enhanced capabilities such as [instance segmentation](https://www.ultralytics.com/glossary/instance-segmentation), pose/keypoints estimation, and classification.
 7. **[YOLOv9](yolov9.md)**: An experimental model trained on the Ultralytics [YOLOv5](yolov5.md) codebase implementing Programmable Gradient Information (PGI).
diff --git a/docs/en/models/yolov6.md b/docs/en/models/yolov6.md
index 7200da21..6fb732ac 100644
--- a/docs/en/models/yolov6.md
+++ b/docs/en/models/yolov6.md
@@ -8,7 +8,7 @@ keywords: Meituan YOLOv6, object detection, real-time applications, BiC module,
 
 ## Overview
 
-[Meituan](https://about.meituan.com/) YOLOv6 is a cutting-edge object detector that offers remarkable balance between speed and accuracy, making it a popular choice for real-time applications. This model introduces several notable enhancements on its architecture and training scheme, including the implementation of a Bi-directional Concatenation (BiC) module, an anchor-aided training (AAT) strategy, and an improved backbone and neck design for state-of-the-art accuracy on the COCO dataset.
+[Meituan](https://www.meituan.com/) YOLOv6 is a cutting-edge object detector that offers remarkable balance between speed and accuracy, making it a popular choice for real-time applications. This model introduces several notable enhancements on its architecture and training scheme, including the implementation of a Bi-directional Concatenation (BiC) module, an anchor-aided training (AAT) strategy, and an improved backbone and neck design for state-of-the-art accuracy on the COCO dataset.
 
 ![Meituan YOLOv6](https://github.com/ultralytics/docs/releases/download/0/meituan-yolov6.avif)
 ![Model example image](https://github.com/ultralytics/docs/releases/download/0/yolov6-architecture-diagram.avif) **Overview of YOLOv6.** Model architecture diagram showing the redesigned network components and training strategies that have led to significant performance improvements. (a) The neck of YOLOv6 (N and S are shown). Note for M/L, RepBlocks is replaced with CSPStackRep. (b) The structure of a BiC module. (c) A SimCSPSPPF block. ([source](https://arxiv.org/pdf/2301.05586.pdf)).
diff --git a/ultralytics/solutions/streamlit_inference.py b/ultralytics/solutions/streamlit_inference.py
index 8a2d615e..31a88bae 100644
--- a/ultralytics/solutions/streamlit_inference.py
+++ b/ultralytics/solutions/streamlit_inference.py
@@ -184,12 +184,8 @@ class Inference:
 if __name__ == "__main__":
     import sys  # Import the sys module for accessing command-line arguments
 
-    model = None  # Initialize the model variable as None
-
     # Check if a model name is provided as a command-line argument
     args = len(sys.argv)
-    if args > 1:
-        model = sys.argv[1]  # Assign the first argument as the model name
-
+    model = sys.argv[1] if args > 1 else None  # assign first argument as the model name
     # Create an instance of the Inference class and run inference
     Inference(model=model).inference()
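Note on the streamlit_inference.py hunk above: the refactor collapses the optional command-line argument handling into a single conditional expression. Below is a minimal standalone sketch of the same pattern; the script name and model filename are illustrative only and are not part of this patch.

    # resolve_model.py: hypothetical standalone example, not part of the Ultralytics codebase
    import sys

    # Use the first CLI argument as the model name when one is supplied,
    # otherwise fall back to None so the caller can apply its own default model.
    model = sys.argv[1] if len(sys.argv) > 1 else None
    print(f"model argument resolved to: {model}")

Running "python resolve_model.py yolo11n.pt" prints the supplied name, while running the script with no arguments prints None.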