py-cpuinfo Exception context manager fix (#14814)
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
parent f955fedb7f
commit 7ecab94b29
5 changed files with 77 additions and 72 deletions
.github/workflows/publish.yml (vendored)
```diff
@@ -168,6 +168,7 @@ jobs:
           PERSONAL_ACCESS_TOKEN: ${{ secrets.PERSONAL_ACCESS_TOKEN }}
+          INDEXNOW_KEY: ${{ secrets.INDEXNOW_KEY_DOCS }}
         run: |
           pip install black
           export JUPYTER_PLATFORM_DIRS=1
           python docs/build_docs.py
           git clone https://github.com/ultralytics/docs.git docs-repo
```
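The new `INDEXNOW_KEY_DOCS` secret suggests the docs workflow notifies the IndexNow API after publishing so search engines re-crawl the updated pages. As an illustration only (the helper name and the example key below are assumptions, not part of this workflow), a single-URL IndexNow submission is just an HTTP GET built from the page URL and the site's key:

```python
from urllib.parse import urlencode

# Public IndexNow endpoint (see indexnow.org); the key must match a key file hosted on the site
INDEXNOW_ENDPOINT = "https://api.indexnow.org/indexnow"


def indexnow_url(page_url: str, key: str) -> str:
    """Build the GET request URL that notifies IndexNow about one changed page."""
    return f"{INDEXNOW_ENDPOINT}?{urlencode({'url': page_url, 'key': key})}"


# Example: notify search engines that a docs page changed (key value is illustrative)
print(indexnow_url("https://docs.ultralytics.com/models/sam-2/", "0123456789abcdef"))
```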
```diff
@@ -21,7 +21,7 @@ Here are some of the key models supported:
 7. **[YOLOv9](yolov9.md)**: An experimental model trained on the Ultralytics [YOLOv5](yolov5.md) codebase implementing Programmable Gradient Information (PGI).
 8. **[YOLOv10](yolov10.md)**: By Tsinghua University, featuring NMS-free training and efficiency-accuracy driven architecture, delivering state-of-the-art performance and latency.
 9. **[Segment Anything Model (SAM)](sam.md)**: Meta's original Segment Anything Model (SAM).
-10. **[Segment Anything Model 2 (SAM2)](sam2.md)**: The next generation of Meta's Segment Anything Model (SAM) for videos and images.
+10. **[Segment Anything Model 2 (SAM2)](sam-2.md)**: The next generation of Meta's Segment Anything Model (SAM) for videos and images.
 11. **[Mobile Segment Anything Model (MobileSAM)](mobile-sam.md)**: MobileSAM for mobile applications, by Kyung Hee University.
 12. **[Fast Segment Anything Model (FastSAM)](fast-sam.md)**: FastSAM by Image & Video Analysis Group, Institute of Automation, Chinese Academy of Sciences.
 13. **[YOLO-NAS](yolo-nas.md)**: YOLO Neural Architecture Search (NAS) Models.
```
```diff
@@ -112,7 +112,7 @@ pip install ultralytics
 The following table details the available SAM 2 models, their pre-trained weights, supported tasks, and compatibility with different operating modes like [Inference](../modes/predict.md), [Validation](../modes/val.md), [Training](../modes/train.md), and [Export](../modes/export.md).

 | Model Type  | Pre-trained Weights                                                                   | Tasks Supported                              | Inference | Validation | Training | Export |
-| ---------- | ------------------------------------------------------------------------------------- | -------------------------------------------- | --------- | ---------- | -------- | ------ |
+| ----------- | ------------------------------------------------------------------------------------- | -------------------------------------------- | --------- | ---------- | -------- | ------ |
 | SAM 2 base  | [sam2_b.pt](https://github.com/ultralytics/assets/releases/download/v8.2.0/sam2_b.pt) | [Instance Segmentation](../tasks/segment.md) | ✅        | ❌         | ❌       | ❌     |
 | SAM 2 large | [sam2_l.pt](https://github.com/ultralytics/assets/releases/download/v8.2.0/sam2_l.pt) | [Instance Segmentation](../tasks/segment.md) | ✅        | ❌         | ❌       | ❌     |
```
````diff
@@ -129,10 +129,10 @@ SAM2 can be utilized across a broad spectrum of tasks, including real-time video
     === "Python"

         ```python
-        from ultralytics import SAM2
+        from ultralytics import SAM

         # Load a model
-        model = SAM2("sam2_b.pt")
+        model = SAM("sam2_b.pt")

         # Display model information (optional)
         model.info()
````
````diff
@@ -153,10 +153,10 @@ SAM2 can be utilized across a broad spectrum of tasks, including real-time video
     === "Python"

         ```python
-        from ultralytics import SAM2
+        from ultralytics import SAM

         # Load a model
-        model = SAM2("sam2_b.pt")
+        model = SAM("sam2_b.pt")

         # Display model information (optional)
         model.info()
````
````diff
@@ -261,10 +261,10 @@ If SAM2 is a crucial part of your research or development work, please cite it u
     === "BibTeX"

         ```bibtex
-        @article{kirillov2024sam2,
-          title={SAM2: Segment Anything Model 2},
-          author={Alexander Kirillov and others},
-          journal={arXiv preprint arXiv:2401.12741},
+        @article{ravi2024sam2,
+          title={SAM 2: Segment Anything in Images and Videos},
+          author={Ravi, Nikhila and Gabeur, Valentin and Hu, Yuan-Ting and Hu, Ronghang and Ryali, Chaitanya and Ma, Tengyu and Khedr, Haitham and R{\"a}dle, Roman and Rolland, Chloe and Gustafson, Laura and Mintun, Eric and Pan, Junting and Alwala, Kalyan Vasudev and Carion, Nicolas and Wu, Chao-Yuan and Girshick, Ross and Doll{\'a}r, Piotr and Feichtenhofer, Christoph},
+          journal={arXiv preprint},
           year={2024}
         }
         ```
````
````diff
@@ -296,10 +296,10 @@ SAM2 can be utilized for real-time video segmentation by leveraging its promptab
     === "Python"

         ```python
-        from ultralytics import SAM2
+        from ultralytics import SAM

         # Load a model
-        model = SAM2("sam2_b.pt")
+        model = SAM("sam2_b.pt")

         # Display model information (optional)
         model.info()
````
````diff
@@ -311,7 +311,7 @@ SAM2 can be utilized for real-time video segmentation by leveraging its promptab
         results = model("path/to/image.jpg", points=[150, 150], labels=[1])
         ```

-For more comprehensive usage, refer to the [How to Use SAM2](#how-to-use-sam2-versatility-in-image-and-video-segmentation) section.
+For more comprehensive usage, refer to the [How to Use SAM 2](#how-to-use-sam-2-versatility-in-image-and-video-segmentation) section.

 ### What datasets are used to train SAM 2, and how do they enhance its performance?
````
```diff
@@ -239,7 +239,7 @@ nav:
       - YOLOv9: models/yolov9.md
       - YOLOv10: models/yolov10.md
       - SAM (Segment Anything Model): models/sam.md
-      - SAM2 (Segment Anything Model 2): models/sam2.md
+      - SAM2 (Segment Anything Model 2): models/sam-2.md
       - MobileSAM (Mobile Segment Anything Model): models/mobile-sam.md
       - FastSAM (Fast Segment Anything Model): models/fast-sam.md
       - YOLO-NAS (Neural Architecture Search): models/yolo-nas.md
```
```diff
@@ -659,6 +659,7 @@ plugins:
         sdk.md: index.md
         hub/inference_api.md: hub/inference-api.md
         usage/hyperparameter_tuning.md: integrations/ray-tune.md
+        models/sam2.md: models/sam-2.md
         reference/base_pred.md: reference/engine/predictor.md
         reference/base_trainer.md: reference/engine/trainer.md
         reference/exporter.md: reference/engine/exporter.md
```
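The `models/sam2.md: models/sam-2.md` entry added here is a redirect mapping, so existing links to the old `sam2.md` URL keep resolving after the page was renamed to `sam-2.md`. In a standard `mkdocs-redirects` setup such a mapping sits under the plugin's `redirect_maps` key (a minimal sketch of the surrounding structure, assuming that plugin):

```yaml
plugins:
  - redirects:
      redirect_maps:
        # old path -> new path; the plugin emits an HTML redirect stub at the old URL
        models/sam2.md: models/sam-2.md
```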
```diff
@@ -1,5 +1,5 @@
 # Ultralytics YOLO 🚀, AGPL-3.0 license

 import contextlib
 import gc
 import math
 import os
```
```diff
@@ -101,13 +101,16 @@ def autocast(enabled: bool, device: str = "cuda"):

 def get_cpu_info():
     """Return a string with system CPU information, i.e. 'Apple M2'."""
     with contextlib.suppress(Exception):
         import cpuinfo  # pip install py-cpuinfo

-        k = "brand_raw", "hardware_raw", "arch_string_raw"  # info keys sorted by preference (not all keys always available)
+        k = "brand_raw", "hardware_raw", "arch_string_raw"  # keys sorted by preference (not all keys always available)
         info = cpuinfo.get_cpu_info()  # info dict
         string = info.get(k[0] if k[0] in info else k[1] if k[1] in info else k[2], "unknown")
         return string.replace("(R)", "").replace("CPU ", "").replace("@ ", "")
+
+    return "unknown"


 def select_device(device="", batch=0, newline=False, verbose=True):
     """
```
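The `get_cpu_info` fix relies on two behaviors of `contextlib.suppress(Exception)`: execution leaves the `with` block silently when anything inside raises, and the function then falls through to the `return "unknown"` fallback. A minimal, dependency-free sketch of the same pattern (the `probe` helper and the sample dicts are illustrative, not from the commit):

```python
import contextlib


def probe(info: dict) -> str:
    """Mimic get_cpu_info's lookup: first preferred key present wins, else 'unknown'."""
    with contextlib.suppress(Exception):
        k = "brand_raw", "hardware_raw", "arch_string_raw"  # keys sorted by preference
        # Chained fallback: index with the first key found; missing keys raise KeyError
        return info[k[0] if k[0] in info else k[1] if k[1] in info else k[2]]
    return "unknown"  # reached only if the suppressed block raised


print(probe({"brand_raw": "Apple M2"}))  # preferred key present
print(probe({"arch_string_raw": "arm64"}))  # falls back to a later key
print(probe({}))  # KeyError suppressed, so the fallback returns "unknown"
```

Without the trailing `return "unknown"`, a suppressed exception would make the function return `None`, which is the failure mode the commit addresses.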