ultralytics 8.3.12 SAM and SAM2 multi-point prompts (#16643)
Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
Co-authored-by: Ultralytics Assistant <135830346+UltralyticsAssistant@users.noreply.github.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
parent 1c4d788aa1
commit b89d6f4070
6 changed files with 79 additions and 13 deletions
@@ -90,8 +90,17 @@ You can download the model [here](https://github.com/ChaoningZhang/MobileSAM/blo
# Load the model
model = SAM("mobile_sam.pt")

-# Predict a segment based on a point prompt
+# Predict a segment based on a single point prompt
model.predict("ultralytics/assets/zidane.jpg", points=[900, 370], labels=[1])
+
+# Predict multiple segments based on multiple points prompt
+model.predict("ultralytics/assets/zidane.jpg", points=[[400, 370], [900, 370]], labels=[1, 1])
+
+# Predict a segment based on multiple points prompt per object
+model.predict("ultralytics/assets/zidane.jpg", points=[[[400, 370], [900, 370]]], labels=[[1, 1]])
+
+# Predict a segment using both positive and negative prompts.
+model.predict("ultralytics/assets/zidane.jpg", points=[[[400, 370], [900, 370]]], labels=[[1, 0]])
```

### Box Prompt
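For reference, the point-prompt formats added in this hunk can be exercised end to end roughly as follows. This is a sketch based only on the snippet above, assuming the `mobile_sam.pt` weights and the bundled `zidane.jpg` asset it uses, plus the standard Ultralytics `Results` API for reading masks.

```python
from ultralytics import SAM

# Load the MobileSAM weights (downloaded automatically if not already present)
model = SAM("mobile_sam.pt")

# One object from a single positive point: points=[x, y], labels=[1]
results = model.predict("ultralytics/assets/zidane.jpg", points=[900, 370], labels=[1])

# Two objects, one point each: points=[[x1, y1], [x2, y2]], labels=[1, 1]
results = model.predict("ultralytics/assets/zidane.jpg", points=[[400, 370], [900, 370]], labels=[1, 1])

# One object described by several points: the extra nesting level groups points per object,
# with label 1 for foreground points and 0 for background points to exclude
results = model.predict("ultralytics/assets/zidane.jpg", points=[[[400, 370], [900, 370]]], labels=[[1, 0]])

# Inspect the predicted masks (assumes the usual Results API)
if results and results[0].masks is not None:
    print(results[0].masks.data.shape)  # (num_masks, H, W) tensor of binary masks
```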
@@ -106,8 +115,17 @@ You can download the model [here](https://github.com/ChaoningZhang/MobileSAM/blo
# Load the model
model = SAM("mobile_sam.pt")

-# Predict a segment based on a box prompt
-model.predict("ultralytics/assets/zidane.jpg", bboxes=[439, 437, 524, 709])
+# Predict a segment based on a single point prompt
+model.predict("ultralytics/assets/zidane.jpg", points=[900, 370], labels=[1])
+
+# Predict multiple segments based on multiple points prompt
+model.predict("ultralytics/assets/zidane.jpg", points=[[400, 370], [900, 370]], labels=[1, 1])
+
+# Predict a segment based on multiple points prompt per object
+model.predict("ultralytics/assets/zidane.jpg", points=[[[400, 370], [900, 370]]], labels=[[1, 1]])
+
+# Predict a segment using both positive and negative prompts.
+model.predict("ultralytics/assets/zidane.jpg", points=[[[400, 370], [900, 370]]], labels=[[1, 0]])
```

We have implemented `MobileSAM` and `SAM` using the same API. For more usage information, please see the [SAM page](sam.md).
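Since the trailing context notes that `MobileSAM` and `SAM` share one API, and the commit title covers SAM and SAM2 as well, the same point prompts should carry over by swapping the weights file. A minimal sketch; the `sam_b.pt` and `sam2_b.pt` checkpoint names are assumed from the usual Ultralytics model zoo rather than taken from this diff.

```python
from ultralytics import SAM

# The same multi-point prompt call, run against different SAM-family checkpoints
for weights in ("mobile_sam.pt", "sam_b.pt", "sam2_b.pt"):
    model = SAM(weights)
    # One object defined by a positive point plus a negative (background) point
    model.predict(
        "ultralytics/assets/zidane.jpg",
        points=[[[400, 370], [900, 370]]],
        labels=[[1, 0]],
    )
```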
@@ -58,8 +58,17 @@ The Segment Anything Model can be employed for a multitude of downstream tasks t
# Run inference with bboxes prompt
results = model("ultralytics/assets/zidane.jpg", bboxes=[439, 437, 524, 709])

-# Run inference with points prompt
-results = model("ultralytics/assets/zidane.jpg", points=[900, 370], labels=[1])
+# Run inference with single point
+results = predictor(points=[900, 370], labels=[1])
+
+# Run inference with multiple points
+results = predictor(points=[[400, 370], [900, 370]], labels=[1, 1])
+
+# Run inference with multiple points prompt per object
+results = predictor(points=[[[400, 370], [900, 370]]], labels=[[1, 1]])
+
+# Run inference with negative points prompt
+results = predictor(points=[[[400, 370], [900, 370]]], labels=[[1, 0]])
```

!!! example "Segment everything"
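The added lines in this hunk prompt through `predictor(...)`, while the retained `bboxes` line goes through `model(...)`. For a self-contained reading, a sketch that drives the same prompts entirely through the loaded `SAM` model (assuming the `sam_b.pt` checkpoint used elsewhere on this page, which is not shown in the hunk) could look like this:

```python
from ultralytics import SAM

# Load a SAM model (sam_b.pt is an assumption, not text from this hunk)
model = SAM("sam_b.pt")

# Run inference with a bboxes prompt
results = model("ultralytics/assets/zidane.jpg", bboxes=[439, 437, 524, 709])

# Run inference with a single positive point
results = model("ultralytics/assets/zidane.jpg", points=[900, 370], labels=[1])

# Run inference with one point per object (two segments expected)
results = model("ultralytics/assets/zidane.jpg", points=[[400, 370], [900, 370]], labels=[1, 1])

# Run inference with several points describing one object, including a negative point
results = model("ultralytics/assets/zidane.jpg", points=[[[400, 370], [900, 370]]], labels=[[1, 0]])
```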
@@ -107,8 +116,16 @@ The Segment Anything Model can be employed for a multitude of downstream tasks t
predictor.set_image("ultralytics/assets/zidane.jpg")  # set with image file
predictor.set_image(cv2.imread("ultralytics/assets/zidane.jpg"))  # set with np.ndarray
results = predictor(bboxes=[439, 437, 524, 709])
+
+# Run inference with single point prompt
results = predictor(points=[900, 370], labels=[1])
+
+# Run inference with multiple points prompt
+results = predictor(points=[[400, 370], [900, 370]], labels=[[1, 1]])
+
+# Run inference with negative points prompt
+results = predictor(points=[[[400, 370], [900, 370]]], labels=[[1, 0]])

# Reset image
predictor.reset_image()
```
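This hunk picks up mid-example, so the `predictor` object is not constructed here. It is presumably a `SAMPredictor`; a short sketch of the surrounding setup, assuming the usual Ultralytics predictor interface (the override values are illustrative, not taken from this diff):

```python
import cv2

from ultralytics.models.sam import Predictor as SAMPredictor

# Build a SAM predictor with explicit overrides (values here are illustrative)
overrides = dict(conf=0.25, task="segment", mode="predict", imgsz=1024, model="mobile_sam.pt")
predictor = SAMPredictor(overrides=overrides)

# Set the image once, then issue several prompts against the cached embedding
predictor.set_image(cv2.imread("ultralytics/assets/zidane.jpg"))
results = predictor(points=[[400, 370], [900, 370]], labels=[1, 1])

# Clear the cached image when done
predictor.reset_image()
```

Setting the image once and reusing it across prompts avoids recomputing the image embedding for every call, which is the main reason to use the predictor interface rather than repeated `model(...)` calls.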
@@ -245,6 +262,15 @@ model("ultralytics/assets/zidane.jpg", bboxes=[439, 437, 524, 709])

# Segment with points prompt
model("ultralytics/assets/zidane.jpg", points=[900, 370], labels=[1])
+
+# Segment with multiple points prompt
+model("ultralytics/assets/zidane.jpg", points=[[400, 370], [900, 370]], labels=[[1, 1]])
+
+# Segment with multiple points prompt per object
+model("ultralytics/assets/zidane.jpg", points=[[[400, 370], [900, 370]]], labels=[[1, 1]])
+
+# Segment with negative points prompt.
+model("ultralytics/assets/zidane.jpg", points=[[[400, 370], [900, 370]]], labels=[[1, 0]])
```

Alternatively, you can run inference with SAM in the command line interface (CLI):
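The hunk ends before the CLI example it references. Staying in Python, none of the hunks show what comes back from these calls, so here is a short sketch of consuming the results from one of the new point-prompt calls, assuming the standard Ultralytics `Results` API and the `sam_b.pt` checkpoint:

```python
import cv2

from ultralytics import SAM

model = SAM("sam_b.pt")

# Multi-point prompt per object: a positive point plus a negative (background) point
results = model("ultralytics/assets/zidane.jpg", points=[[[400, 370], [900, 370]]], labels=[[1, 0]])

# Visualize and save the annotated prediction (Results.plot() returns a BGR numpy array)
annotated = results[0].plot()
cv2.imwrite("zidane_sam_points.jpg", annotated)
```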