Mobile Segment Anything (MobileSAM)

MobileSAM is a compact, efficient image segmentation model purpose-built for mobile and edge devices. Designed to bring the power of Meta's Segment Anything Model (SAM) to environments with limited compute, MobileSAM delivers near-instant segmentation while maintaining compatibility with the original SAM pipeline. Whether you're developing real-time applications or lightweight deployments, MobileSAM provides impressive segmentation results with a fraction of the size and speed requirements of its predecessors.



Watch: How to Run Inference with MobileSAM using Ultralytics | Step-by-Step Guide 🎉

MobileSAM has been adopted in a variety of projects, including Grounding-SAM, AnyLabeling, and Segment Anything in 3D.

MobileSAM was trained on a single GPU using a 100k image dataset (1% of the original images) in less than a day. The training code will be released in the future.

Available Models, Supported Tasks, and Operating Modes

The table below outlines the available MobileSAM model, its pre-trained weights, supported tasks, and compatibility with different operating modes such as Inference, Validation, Training, and Export. Supported modes are indicated by ✅ and unsupported modes by ❌.

| Model Type | Pre-trained Weights | Tasks Supported       | Inference | Validation | Training | Export |
|------------|---------------------|-----------------------|-----------|------------|----------|--------|
| MobileSAM  | mobile_sam.pt       | Instance Segmentation | ✅        | ❌         | ❌       | ❌     |

MobileSAM Comparison vs YOLO

The following comparison highlights the differences between Meta's SAM variants, MobileSAM, and Ultralytics' smallest segmentation models, including YOLO11n-seg:

| Model                          | Size (MB)           | Parameters (M)     | Speed (CPU) (ms/im)  |
|--------------------------------|---------------------|--------------------|----------------------|
| Meta SAM-b                     | 375                 | 93.7               | 49401                |
| Meta SAM2-b                    | 162                 | 80.8               | 31901                |
| Meta SAM2-t                    | 78.1                | 38.9               | 25997                |
| MobileSAM                      | 40.7                | 10.1               | 25381                |
| FastSAM-s with YOLOv8 backbone | 23.7                | 11.8               | 55.9                 |
| Ultralytics YOLOv8n-seg        | 6.7 (11.7x smaller) | 3.4 (11.4x less)   | 24.5 (1061x faster)  |
| Ultralytics YOLO11n-seg        | 5.9 (13.2x smaller) | 2.9 (13.4x less)   | 30.1 (864x faster)   |

This comparison demonstrates the substantial differences in model size and speed between SAM variants and YOLO segmentation models. While SAM models offer unique automatic segmentation capabilities, YOLO models—especially YOLOv8n-seg and YOLO11n-seg—are significantly smaller, faster, and more computationally efficient.

Tests were conducted on a 2025 Apple M4 Pro with 24GB RAM using torch==2.6.0 and ultralytics==8.3.90. To reproduce these results:

Example

from ultralytics import ASSETS, SAM, YOLO, FastSAM

# Profile SAM-b, SAM2-b, SAM2-t, MobileSAM
for file in ["sam_b.pt", "sam2_b.pt", "sam2_t.pt", "mobile_sam.pt"]:
    model = SAM(file)
    model.info()
    model(ASSETS)

# Profile FastSAM-s
model = FastSAM("FastSAM-s.pt")
model.info()
model(ASSETS)

# Profile YOLO models
for file_name in ["yolov8n-seg.pt", "yolo11n-seg.pt"]:
    model = YOLO(file_name)
    model.info()
    model(ASSETS)

Adapting from SAM to MobileSAM

MobileSAM retains the same pipeline as the original SAM, including pre-processing, post-processing, and all interfaces. This means you can transition from SAM to MobileSAM with minimal changes to your workflow.

The key difference is the image encoder: MobileSAM replaces the original ViT-H encoder (632M parameters) with a much smaller Tiny-ViT encoder (5M parameters). On a single GPU, MobileSAM processes an image in about 12ms (8ms for the encoder, 4ms for the mask decoder).
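
Because the pipeline and interfaces are unchanged, migrating typically amounts to swapping the weights file. A minimal sketch:

from ultralytics import SAM

# Original SAM workflow:
# model = SAM("sam_b.pt")

# MobileSAM is a drop-in replacement with a much smaller encoder
model = SAM("mobile_sam.pt")

# The same prompt-based call works unchanged
model.predict("ultralytics/assets/zidane.jpg", points=[900, 370], labels=[1])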

ViT-Based Image Encoder Comparison

| Image Encoder | Original SAM | MobileSAM |
|---------------|--------------|-----------|
| Parameters    | 611M         | 5M        |
| Speed         | 452ms        | 8ms       |

Prompt-Guided Mask Decoder

| Mask Decoder | Original SAM | MobileSAM |
|--------------|--------------|-----------|
| Parameters   | 3.876M       | 3.876M    |
| Speed        | 4ms          | 4ms       |

Whole Pipeline Comparison

| Whole Pipeline (Enc+Dec) | Original SAM | MobileSAM |
|--------------------------|--------------|-----------|
| Parameters               | 615M         | 9.66M     |
| Speed                    | 456ms        | 12ms      |

The performance of MobileSAM and the original SAM is illustrated below using both point and box prompts.

Image with Point as Prompt

Image with Box as Prompt

MobileSAM is approximately 5 times smaller and 7 times faster than FastSAM. For further details, visit the MobileSAM project page.

Testing MobileSAM in Ultralytics

Just like the original SAM, Ultralytics provides a simple interface for testing MobileSAM, supporting both Point and Box prompts.

Model Download

Download the MobileSAM pretrained weights from Ultralytics assets.
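
When using the Ultralytics package, referencing the weights by filename is usually sufficient; the library typically downloads official weights automatically on first use (a convenience sketch, assuming network access):

from ultralytics import SAM

# Loading by filename fetches the official weights if not already present locally
model = SAM("mobile_sam.pt")
model.info()  # print a summary of layers, parameters, and GFLOPs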

Point Prompt

Example

from ultralytics import SAM

# Load the model
model = SAM("mobile_sam.pt")

# Predict a segment based on a single point prompt
model.predict("ultralytics/assets/zidane.jpg", points=[900, 370], labels=[1])

# Predict multiple segments based on multiple points prompt
model.predict("ultralytics/assets/zidane.jpg", points=[[400, 370], [900, 370]], labels=[1, 1])

# Predict a segment based on multiple points prompt per object
model.predict("ultralytics/assets/zidane.jpg", points=[[[400, 370], [900, 370]]], labels=[[1, 1]])

# Predict a segment using both positive and negative prompts.
model.predict("ultralytics/assets/zidane.jpg", points=[[[400, 370], [900, 370]]], labels=[[1, 0]])

Box Prompt

Example

from ultralytics import SAM

# Load the model
model = SAM("mobile_sam.pt")

# Predict a segment based on a single box prompt
model.predict("ultralytics/assets/zidane.jpg", bboxes=[439, 437, 524, 709])

# Predict multiple segments based on multiple box prompts
model.predict("ultralytics/assets/zidane.jpg", bboxes=[[439, 437, 524, 709], [382, 0, 682, 682]])

Both MobileSAM and SAM share the same API. For more usage details, see the SAM documentation.

Automatically Build Segmentation Datasets Using a Detection Model

To automatically annotate your dataset with the Ultralytics framework, use the auto_annotate function as shown below:

Example

from ultralytics.data.annotator import auto_annotate

auto_annotate(data="path/to/images", det_model="yolo11x.pt", sam_model="mobile_sam.pt")

| Argument   | Type      | Default      | Description                                                                     |
|------------|-----------|--------------|---------------------------------------------------------------------------------|
| data       | str       | required     | Path to directory containing target images for annotation or segmentation.       |
| det_model  | str       | 'yolo11x.pt' | YOLO detection model path for initial object detection.                          |
| sam_model  | str       | 'sam_b.pt'   | SAM model path for segmentation (supports SAM, SAM2 variants, and mobile_sam models). |
| device     | str       | ''           | Computation device (e.g., 'cuda:0', 'cpu', or '' for automatic device detection). |
| conf       | float     | 0.25         | YOLO detection confidence threshold for filtering weak detections.               |
| iou        | float     | 0.45         | IoU threshold for Non-Maximum Suppression to filter overlapping boxes.           |
| imgsz      | int       | 640          | Input size for resizing images (must be a multiple of 32).                       |
| max_det    | int       | 300          | Maximum number of detections per image for memory efficiency.                    |
| classes    | list[int] | None         | List of class indices to detect (e.g., [0, 1] for person & bicycle).             |
| output_dir | str       | None         | Save directory for annotations (defaults to './labels' relative to the data path). |
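
The optional arguments can be combined to tailor an annotation run; for example (the argument values here are illustrative, not recommendations):

from ultralytics.data.annotator import auto_annotate

# Annotate only people (class 0) with a stricter confidence threshold,
# running on the first GPU and writing labels to a custom directory
auto_annotate(
    data="path/to/images",
    det_model="yolo11x.pt",
    sam_model="mobile_sam.pt",
    device="cuda:0",
    conf=0.5,
    classes=[0],
    output_dir="path/to/labels",
)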

Citations and Acknowledgements

If MobileSAM is helpful in your research or development, please consider citing the following paper:

@article{mobile_sam,
  title={Faster Segment Anything: Towards Lightweight SAM for Mobile Applications},
  author={Zhang, Chaoning and Han, Dongshen and Qiao, Yu and Kim, Jung Uk and Bae, Sung Ho and Lee, Seungkyu and Hong, Choong Seon},
  journal={arXiv preprint arXiv:2306.14289},
  year={2023}
}

Read the full MobileSAM paper on arXiv.

FAQ

What Is MobileSAM and How Does It Differ from the Original SAM Model?

MobileSAM is a lightweight, fast image segmentation model optimized for mobile and edge applications. It maintains the same pipeline as the original SAM but replaces the large ViT-H encoder (632M parameters) with a compact Tiny-ViT encoder (5M parameters). As a result, the whole MobileSAM pipeline runs at roughly 12ms per image versus SAM's 456ms, and it is about 5 times smaller and 7 times faster than FastSAM. Explore more about MobileSAM's implementation on the MobileSAM GitHub repository.

How Can I Test MobileSAM Using Ultralytics?

Testing MobileSAM in Ultralytics is straightforward. You can use Point and Box prompts to predict segments. For example, using a Point prompt:

from ultralytics import SAM

# Load the model
model = SAM("mobile_sam.pt")

# Predict a segment based on a point prompt
model.predict("ultralytics/assets/zidane.jpg", points=[900, 370], labels=[1])

For more details, see the Testing MobileSAM in Ultralytics section.

Why Should I Use MobileSAM for My Mobile Application?

MobileSAM is ideal for mobile and edge applications due to its lightweight design and rapid inference speed. Compared to the original SAM, its whole pipeline uses roughly 64 times fewer parameters (9.66M versus 615M) and runs about 38 times faster (12ms versus 456ms per image), making it suitable for real-time segmentation on devices with limited computational resources. Its efficiency enables mobile devices to perform real-time image segmentation without significant latency. Additionally, MobileSAM supports Inference mode optimized for mobile performance.

How Was MobileSAM Trained, and Is the Training Code Available?

MobileSAM was trained on a single GPU with a 100k image dataset (1% of the original images) in under a day. While the training code will be released in the future, you can currently access pre-trained weights and implementation details from the MobileSAM GitHub repository.

What Are the Primary Use Cases for MobileSAM?

MobileSAM is designed for fast, efficient image segmentation in mobile and edge environments. Primary use cases include:

  • Real-time object detection and segmentation for mobile apps
  • Low-latency image processing on devices with limited compute
  • Integration in AI-powered mobile applications for augmented reality (AR), analytics, and more

For more details on use cases and performance, see Adapting from SAM to MobileSAM and the Ultralytics blog on MobileSAM applications.


