diff --git a/docs/en/models/mobile-sam.md b/docs/en/models/mobile-sam.md
index 26c92f68..4800fe86 100644
--- a/docs/en/models/mobile-sam.md
+++ b/docs/en/models/mobile-sam.md
@@ -12,6 +12,17 @@ The MobileSAM paper is now available on [arXiv](https://arxiv.org/pdf/2306.14289
 
 A demonstration of MobileSAM running on a CPU can be accessed at this [demo link](https://huggingface.co/spaces/dhkim2810/MobileSAM). The performance on a Mac i5 CPU takes approximately 3 seconds. On the Hugging Face demo, the interface and lower-performance CPUs contribute to a slower response, but it continues to function effectively.
 
+<p align="center">
+  <br>
+  <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/..."
+    title="YouTube video player" frameborder="0"
+    allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
+    allowfullscreen>
+  </iframe>
+  <br>
+  <strong>Watch:</strong> How to Run Inference with MobileSAM using Ultralytics | Step-by-Step Guide 🎉
+</p>
+
 MobileSAM is implemented in various projects including [Grounding-SAM](https://github.com/IDEA-Research/Grounded-Segment-Anything), [AnyLabeling](https://github.com/vietanhdev/anylabeling), and [Segment Anything in 3D](https://github.com/Jumpat/SegmentAnythingin3D).
 
 MobileSAM is trained on a single GPU with a 100k dataset (1% of the original images) in less than a day. The code for this training will be made available in the future.
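Since the video added above covers running MobileSAM inference with Ultralytics, a minimal sketch of that workflow may help reviewers check the doc against the API; it assumes the Ultralytics `SAM` class accepts the `mobile_sam.pt` checkpoint and point prompts, and the image path is a placeholder.

```python
from ultralytics import SAM

# Load the MobileSAM checkpoint (assumed weight name: mobile_sam.pt)
model = SAM("mobile_sam.pt")

# Segment the object at a single point prompt; label 1 marks a foreground point
results = model.predict("path/to/image.jpg", points=[900, 370], labels=[1])
```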
diff --git a/docs/en/models/sam-2.md b/docs/en/models/sam-2.md
index 025c18d2..ad3ccea4 100644
--- a/docs/en/models/sam-2.md
+++ b/docs/en/models/sam-2.md
@@ -12,6 +12,17 @@ SAM 2, the successor to Meta's [Segment Anything Model (SAM)](sam.md), is a cutt
 
 ## Key Features
 
+<p align="center">
+  <br>
+  <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/..."
+    title="YouTube video player" frameborder="0"
+    allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
+    allowfullscreen>
+  </iframe>
+  <br>
+  <strong>Watch:</strong> How to Run Inference with Meta's SAM2 using Ultralytics | Step-by-Step Guide 🎉
+</p>
+
 ### Unified Model Architecture
 
 SAM 2 combines the capabilities of image and video segmentation in a single model. This unification simplifies deployment and allows for consistent performance across different media types. It leverages a flexible prompt-based interface, enabling users to specify objects of interest through various prompt types, such as points, bounding boxes, or masks.
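To make the prompt-based interface described in that context paragraph concrete, here is a minimal sketch of prompted SAM 2 inference via the Ultralytics `SAM` class; the `sam2_b.pt` weight name, the `bboxes`/`points` arguments, and the image path are illustrative assumptions rather than content from this patch.

```python
from ultralytics import SAM

# Load a SAM 2 checkpoint (assumed weight name: sam2_b.pt)
model = SAM("sam2_b.pt")

# Prompt with a bounding box given as [x1, y1, x2, y2]
results = model("path/to/image.jpg", bboxes=[100, 100, 400, 400])

# Prompt with a single point instead; label 1 marks a foreground point
results = model("path/to/image.jpg", points=[250, 250], labels=[1])
```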