Docs Prettier reformat (#13483)

Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: UltralyticsAssistant <web@ultralytics.com>

Parent: 2f2e81614f · Commit: e5185ccf63
90 changed files with 763 additions and 742 deletions
@@ -48,9 +48,9 @@ FastSAM is designed to address the limitations of the [Segment Anything Model (S

This table presents the available models with their specific pre-trained weights, the tasks they support, and their compatibility with different operating modes like [Inference](../modes/predict.md), [Validation](../modes/val.md), [Training](../modes/train.md), and [Export](../modes/export.md), indicated by ✅ emojis for supported modes and ❌ emojis for unsupported modes.

| Model Type | Pre-trained Weights                                                                          | Tasks Supported                              | Inference | Validation | Training | Export |
| ---------- | -------------------------------------------------------------------------------------------- | --------------------------------------------- | --------- | ---------- | -------- | ------ |
| FastSAM-s  | [FastSAM-s.pt](https://github.com/ultralytics/assets/releases/download/v8.2.0/FastSAM-s.pt) | [Instance Segmentation](../tasks/segment.md) | ✅        | ❌         | ❌       | ✅     |
| FastSAM-x  | [FastSAM-x.pt](https://github.com/ultralytics/assets/releases/download/v8.2.0/FastSAM-x.pt) | [Instance Segmentation](../tasks/segment.md) | ✅        | ❌         | ❌       | ✅     |
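Read as a quick reference, the table boils down to a simple lookup of which operating modes each weights file supports. A minimal pure-Python sketch (the `FASTSAM_MODES` dict and `supported_modes` helper are illustrative, transcribed from the table above, and not part of the Ultralytics API):

```python
# Mode support transcribed from the table above (✅ -> True, ❌ -> False)
FASTSAM_MODES = {
    "FastSAM-s.pt": {"Inference": True, "Validation": False, "Training": False, "Export": True},
    "FastSAM-x.pt": {"Inference": True, "Validation": False, "Training": False, "Export": True},
}


def supported_modes(weights):
    """Return the names of the operating modes a given pre-trained weights file supports."""
    return [mode for mode, ok in FASTSAM_MODES[weights].items() if ok]


print(supported_modes("FastSAM-s.pt"))  # → ['Inference', 'Export']
```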
## Usage Examples
@@ -139,7 +139,7 @@ To perform object tracking on an image, use the `track` method as shown below:

!!! Example

    === "Python"

        ```python
        from ultralytics import FastSAM

@@ -149,7 +149,7 @@ To perform object tracking on an image, use the `track` method as shown below:

        # Track with a FastSAM model on a video
        results = model.track(source="path/to/video.mp4", imgsz=640)
        ```

    === "CLI"

        ```bash
@@ -164,28 +164,28 @@ FastSAM is also available directly from the [https://github.com/CASIA-IVA-Lab/Fa

1. Clone the FastSAM repository:

    ```shell
    git clone https://github.com/CASIA-IVA-Lab/FastSAM.git
    ```

2. Create and activate a Conda environment with Python 3.9:

    ```shell
    conda create -n FastSAM python=3.9
    conda activate FastSAM
    ```

3. Navigate to the cloned repository and install the required packages:

    ```shell
    cd FastSAM
    pip install -r requirements.txt
    ```

4. Install the CLIP model:

    ```shell
    pip install git+https://github.com/ultralytics/CLIP.git
    ```

### Example Usage
|
||||
|
||||
|
|
@ -195,26 +195,26 @@ FastSAM is also available directly from the [https://github.com/CASIA-IVA-Lab/Fa
|
|||
|
||||
- Segment everything in an image:
|
||||
|
||||
```shell
|
||||
python Inference.py --model_path ./weights/FastSAM.pt --img_path ./images/dogs.jpg
|
||||
```
|
||||
```shell
|
||||
python Inference.py --model_path ./weights/FastSAM.pt --img_path ./images/dogs.jpg
|
||||
```
|
||||
|
||||
- Segment specific objects using text prompt:
|
||||
|
||||
```shell
|
||||
python Inference.py --model_path ./weights/FastSAM.pt --img_path ./images/dogs.jpg --text_prompt "the yellow dog"
|
||||
```
|
||||
```shell
|
||||
python Inference.py --model_path ./weights/FastSAM.pt --img_path ./images/dogs.jpg --text_prompt "the yellow dog"
|
||||
```
|
||||
|
||||
- Segment objects within a bounding box (provide box coordinates in xywh format):
|
||||
|
||||
```shell
|
||||
python Inference.py --model_path ./weights/FastSAM.pt --img_path ./images/dogs.jpg --box_prompt "[570,200,230,400]"
|
||||
```
|
||||
```shell
|
||||
python Inference.py --model_path ./weights/FastSAM.pt --img_path ./images/dogs.jpg --box_prompt "[570,200,230,400]"
|
||||
```
|
||||
|
||||
- Segment objects near specific points:
|
||||
```shell
|
||||
python Inference.py --model_path ./weights/FastSAM.pt --img_path ./images/dogs.jpg --point_prompt "[[520,360],[620,300]]" --point_label "[1,0]"
|
||||
```
|
||||
```shell
|
||||
python Inference.py --model_path ./weights/FastSAM.pt --img_path ./images/dogs.jpg --point_prompt "[[520,360],[620,300]]" --point_label "[1,0]"
|
||||
```
|
||||
|
||||
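Two conventions in the prompt flags above are worth spelling out: `--box_prompt` takes `[x, y, w, h]` (assumed here to mean top-left corner plus width and height), and each entry of `--point_label` pairs with the corresponding point in `--point_prompt`, with 1 marking a foreground point (include the object there) and 0 a background point (exclude it). A minimal pure-Python sketch of both conventions (the `xywh_to_xyxy` helper is illustrative, not part of FastSAM):

```python
def xywh_to_xyxy(box):
    """Convert an [x, y, w, h] box (assumed top-left corner + size) to [x1, y1, x2, y2] corners."""
    x, y, w, h = box
    return [x, y, x + w, y + h]


# The box prompt from the command above: a 230x400 box with its top-left at (570, 200)
print(xywh_to_xyxy([570, 200, 230, 400]))  # → [570, 200, 800, 600]

# Point prompts pair coordinates with labels: 1 = foreground (include), 0 = background (exclude)
points = [[520, 360], [620, 300]]
labels = [1, 0]
foreground = [p for p, lbl in zip(points, labels) if lbl == 1]
print(foreground)  # → [[520, 360]]
```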
Additionally, you can try FastSAM through a [Colab demo](https://colab.research.google.com/drive/1oX14f6IneGGw612WgVlAiy91UHwFAvr9?usp=sharing) or on the [HuggingFace web demo](https://huggingface.co/spaces/An-619/FastSAM) for a visual experience.