Reference for ultralytics/models/fastsam/model.py

Note

This file is available at https://github.com/ultralytics/ultralytics/blob/main/ultralytics/models/fastsam/model.py. If you spot a problem, please help fix it by contributing a Pull Request 🛠️. Thank you 🙏!


ultralytics.models.fastsam.model.FastSAM

FastSAM(model='FastSAM-x.pt')

Bases: Model

Interface for FastSAM (Fast Segment Anything Model), built on the Ultralytics Model base class for the segmentation task.

Example
from ultralytics import FastSAM

model = FastSAM("last.pt")
results = model.predict("ultralytics/assets/bus.jpg")
Source code in ultralytics/models/fastsam/model.py
def __init__(self, model="FastSAM-x.pt"):
    """Call the __init__ method of the parent class (YOLO) with the updated default model."""
    if str(model) == "FastSAM.pt":
        model = "FastSAM-x.pt"
    assert Path(model).suffix not in {".yaml", ".yml"}, "FastSAM models only support pre-trained models."
    super().__init__(model=model, task="segment")
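The constructor's guard can be exercised on its own. A minimal sketch of that logic follows; the helper name `check_fastsam_weights` is hypothetical and not part of the library:

```python
from pathlib import Path

def check_fastsam_weights(model: str) -> str:
    """Mirror the guard in FastSAM.__init__ (illustrative helper, not a library API)."""
    # The generic "FastSAM.pt" name is remapped to the default "FastSAM-x.pt" weights.
    if str(model) == "FastSAM.pt":
        model = "FastSAM-x.pt"
    # YAML configs are rejected: FastSAM only supports pre-trained weight files.
    assert Path(model).suffix not in {".yaml", ".yml"}, "FastSAM models only support pre-trained models."
    return model

print(check_fastsam_weights("FastSAM.pt"))  # FastSAM-x.pt
```

Passing a `.yaml` or `.yml` path raises an AssertionError, since FastSAM cannot be built from a model config.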

task_map property

task_map

Returns a dictionary mapping the segment task to its corresponding predictor and validator classes.

predict

predict(
    source,
    stream=False,
    bboxes=None,
    points=None,
    labels=None,
    texts=None,
    **kwargs
)

Perform segmentation prediction on image or video source.

Supports prompted segmentation with bounding boxes, points, labels, and texts.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `source` | `str \| PIL.Image \| numpy.ndarray` | Input source. | *required* |
| `stream` | `bool` | Enable real-time streaming. | `False` |
| `bboxes` | `list` | Bounding box coordinates for prompted segmentation. | `None` |
| `points` | `list` | Points for prompted segmentation. | `None` |
| `labels` | `list` | Labels for prompted segmentation. | `None` |
| `texts` | `list` | Texts for prompted segmentation. | `None` |
| `**kwargs` | `Any` | Additional keyword arguments. | `{}` |

Returns:

| Type | Description |
| --- | --- |
| `list` | Model predictions. |

Source code in ultralytics/models/fastsam/model.py
def predict(self, source, stream=False, bboxes=None, points=None, labels=None, texts=None, **kwargs):
    """
    Perform segmentation prediction on image or video source.

    Supports prompted segmentation with bounding boxes, points, labels, and texts.

    Args:
        source (str | PIL.Image | numpy.ndarray): Input source.
        stream (bool): Enable real-time streaming.
        bboxes (list): Bounding box coordinates for prompted segmentation.
        points (list): Points for prompted segmentation.
        labels (list): Labels for prompted segmentation.
        texts (list): Texts for prompted segmentation.
        **kwargs (Any): Additional keyword arguments.

    Returns:
        (list): Model predictions.
    """
    prompts = dict(bboxes=bboxes, points=points, labels=labels, texts=texts)
    return super().predict(source, stream, prompts=prompts, **kwargs)
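As the source shows, predict simply packs the prompt arguments into a prompts dict and forwards it to the parent Model.predict. A minimal sketch of that bundling follows; the coordinates are illustrative, and the foreground/background meaning of labels follows the usual SAM-style point-prompt convention rather than anything stated in this file:

```python
# Sketch of how FastSAM.predict bundles its prompt arguments before
# delegating to the parent Model.predict (mirrors the source code above).
bboxes = [[200, 200, 500, 500]]  # one xyxy box prompt in pixel coordinates
points = [[350, 400]]            # one point prompt in pixel coordinates
labels = [1]                     # per-point label; conventionally 1 = foreground, 0 = background
texts = None                     # optional text prompt, e.g. "the black dog"

prompts = dict(bboxes=bboxes, points=points, labels=labels, texts=texts)
print(sorted(prompts))  # ['bboxes', 'labels', 'points', 'texts']
```

Any prompt left as `None` is simply passed through; the downstream predictor decides how unset prompts are handled.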
