Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Glenn Jocher 2023-11-22 20:45:46 +01:00 committed by GitHub
parent 0c4e97443b
commit 16a13a1ce0

@@ -30,17 +30,24 @@ FastSAM is designed to address the limitations of the [Segment Anything Model (S
7. **Model Compression Feasibility:** FastSAM demonstrates the feasibility of a path that can significantly reduce computational effort by introducing an artificial prior into the structure, thus opening new possibilities for large model architectures in general vision tasks.
## Usage
## Available Models, Supported Tasks, and Operating Modes
### Python API
This table presents the available models with their specific pre-trained weights, the tasks they support, and their compatibility with different operating modes such as [Inference](../modes/predict.md), [Validation](../modes/val.md), [Training](../modes/train.md), and [Export](../modes/export.md), indicated by ✅ for supported modes and ❌ for unsupported modes; a brief usage sketch follows the table.
The FastSAM models are easy to integrate into your Python applications. Ultralytics provides a user-friendly Python API to streamline the process.
| Model Type | Pre-trained Weights | Tasks Supported | Inference | Validation | Training | Export |
|------------|---------------------|----------------------------------------------|-----------|------------|----------|--------|
| FastSAM-s | `FastSAM-s.pt` | [Instance Segmentation](../tasks/segment.md) | ✅ | ❌ | ❌ | ✅ |
| FastSAM-x | `FastSAM-x.pt` | [Instance Segmentation](../tasks/segment.md) | ✅ | ❌ | ❌ | ✅ |
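For a quick sense of how these modes map to code, a minimal sketch is shown below, assuming the `FastSAM` class exposed by the `ultralytics` Python package; the weight file names and image path are placeholders:
```python
from ultralytics import FastSAM

# Load either of the pre-trained weights listed in the table
model = FastSAM("FastSAM-s.pt")  # or "FastSAM-x.pt"

# Inference (✅): segment everything in an image
results = model("path/to/image.jpg")

# Export (✅): convert the model to another format such as ONNX
model.export(format="onnx")
```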
#### Predict Usage
## Usage Examples
The FastSAM models are easy to integrate into your Python applications. Ultralytics provides a user-friendly Python API and CLI commands to streamline development.
### Predict Usage
To perform object detection on an image, use the `predict` method as shown below:
!!! Example ""
!!! Example
=== "Python"
```python
@@ -83,11 +90,11 @@ To perform object detection on an image, use the `predict` method as shown below
This snippet demonstrates the simplicity of loading a pre-trained model and running a prediction on an image.
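A minimal sketch of such a prediction, assuming the `FastSAM` class from the `ultralytics` package; the image path is a placeholder and the keyword arguments are typical segmentation settings rather than required values:
```python
from ultralytics import FastSAM

# Load a pre-trained FastSAM model (weight file name is a placeholder)
model = FastSAM("FastSAM-s.pt")

# Run inference; retina_masks, imgsz, conf and iou are common segmentation settings
results = model("path/to/bus.jpg", device="cpu", retina_masks=True, imgsz=1024, conf=0.4, iou=0.9)

# Each result carries the predicted masks as a (num_objects, H, W) tensor
for result in results:
    if result.masks is not None:
        print(result.masks.data.shape)
```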
#### Val Usage
### Val Usage
Validation of the model on a dataset can be done as follows:
!!! Example ""
!!! Example
=== "Python"
```python
@@ -108,11 +115,11 @@ Validation of the model on a dataset can be done as follows:
Please note that FastSAM only supports detection and segmentation of a single class of object. This means it will recognize and segment all objects as the same class. Therefore, when preparing the dataset, you need to convert all object category IDs to 0.
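A minimal sketch of that dataset-preparation step, assuming YOLO-style text label files (class ID first on each line) in a hypothetical `labels/` directory:
```python
from pathlib import Path

# Force every object category ID to 0, since FastSAM segments all objects as a single class
for label_file in Path("labels").glob("*.txt"):  # hypothetical labels directory
    remapped = []
    for line in label_file.read_text().splitlines():
        parts = line.split()
        if parts:
            parts[0] = "0"  # the class ID is the first field in YOLO-format labels
            remapped.append(" ".join(parts))
    label_file.write_text("\n".join(remapped) + "\n")
```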
### FastSAM Official Usage
## FastSAM Official Usage
FastSAM is also available directly from the [https://github.com/CASIA-IVA-Lab/FastSAM](https://github.com/CASIA-IVA-Lab/FastSAM) repository. Here is a brief overview of the typical steps you might take to use FastSAM:
#### Installation
### Installation
1. Clone the FastSAM repository:
```shell
@@ -136,7 +143,7 @@ FastSAM is also available directly from the [https://github.com/CASIA-IVA-Lab/Fa
pip install git+https://github.com/openai/CLIP.git
```
#### Example Usage
### Example Usage
1. Download a [model checkpoint](https://drive.google.com/file/d/1m1sjY4ihXBU1fZXdQ-Xdj-mDltW-2Rqv/view?usp=sharing).
@@ -168,7 +175,7 @@ Additionally, you can try FastSAM through a [Colab demo](https://colab.research.
We would like to acknowledge the FastSAM authors for their significant contributions to the field of real-time instance segmentation:
!!! Note ""
!!! Quote ""
=== "BibTeX"