
Dedicated Endpoints

Ultralytics Platform enables deployment of YOLO models to dedicated endpoints in 43 global regions. Each endpoint is a single-tenant service with auto-scaling, a unique endpoint URL, and independent monitoring.

Ultralytics Platform Model Deploy Tab With Region Map And Table

Create Endpoint

From the Deploy Tab

Deploy a model from its Deploy tab:

  1. Navigate to your model
  2. Click the Deploy tab
  3. Select a region from the region table (sorted by latency from your location)
  4. Click Deploy on the region row

The deployment name is auto-generated from the model name and region city (e.g., yolo11n-iowa).

From the Deployments Page

Create a deployment from the global Deploy page in the sidebar:

  1. Click New Deployment
  2. Select a model from the model selector
  3. Select a region from the map or table
  4. Optionally customize the deployment name and resources
  5. Click Deploy Model

Ultralytics Platform New Deployment Dialog With Model Selector And Region Map

Deployment Lifecycle

```mermaid
stateDiagram-v2
    [*] --> Creating: Deploy
    Creating --> Deploying: Container starting
    Deploying --> Ready: Health check passed
    Ready --> Stopping: Stop
    Stopping --> Stopped: Stopped
    Stopped --> Ready: Start
    Ready --> [*]: Delete
    Stopped --> [*]: Delete
    Creating --> Failed: Error
    Deploying --> Failed: Error
    Failed --> [*]: Delete
```

Region Selection

Choose from 43 regions worldwide. The interactive region map and table show:

  • Region pins: Color-coded by latency (green < 100 ms, yellow 100-200 ms, red > 200 ms)
  • Deployed regions: Highlighted with a "Deployed" badge
  • Deploying regions: Animated pulse indicator
  • Bidirectional highlighting: Hover on the map highlights the table row, and vice versa

Ultralytics Platform Deploy Tab Region Latency Table Sorted By Latency

The region table on the model Deploy tab includes:

| Column   | Description                                 |
| -------- | ------------------------------------------- |
| Location | City and country with flag icon             |
| Zone     | Region identifier                           |
| Latency  | Measured ping time (median of 3 pings)      |
| Distance | Distance from your location in km           |
| Actions  | Deploy button or "Deployed" status badge    |

New Deployment Dialog

The New Deployment dialog (from the global Deploy page) shows a simpler region table with only Location, Latency, and Select columns.

Choose Wisely

Select the region closest to your users for lowest latency. Use the Rescan button to re-measure latency from your current location.
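The latency column is described as the median of three pings; the same idea can be reproduced client-side. This is a sketch, not the Platform's implementation: the probe here times a TCP connect instead of an ICMP ping, and the function names (`probe_latency_ms`, `nearest_region`) are my own.

```python
import socket
import time
from statistics import median


def probe_latency_ms(host, port=443, attempts=3, timeout=2.0):
    """Median TCP connect time to a host in milliseconds (rough ping proxy)."""
    samples = []
    for _ in range(attempts):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                pass
        except OSError:
            # Unreachable region sorts last
            samples.append(float("inf"))
            continue
        samples.append((time.perf_counter() - start) * 1000)
    return median(samples)


def nearest_region(latencies):
    """Given {zone: latency_ms}, return the zone with the lowest latency."""
    return min(latencies, key=latencies.get)
```

`nearest_region` applied to a dict of measured latencies gives the same answer as the latency-sorted table's top row.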

Available Regions

**Americas**

| Zone | Location |
| --- | --- |
| us-central1 | Iowa, USA |
| us-east1 | South Carolina, USA |
| us-east4 | Northern Virginia, USA |
| us-east5 | Columbus, USA |
| us-south1 | Dallas, USA |
| us-west1 | Oregon, USA |
| us-west2 | Los Angeles, USA |
| us-west3 | Salt Lake City, USA |
| us-west4 | Las Vegas, USA |
| northamerica-northeast1 | Montreal, Canada |
| northamerica-northeast2 | Toronto, Canada |
| northamerica-south1 | Queretaro, Mexico |
| southamerica-east1 | Sao Paulo, Brazil |
| southamerica-west1 | Santiago, Chile |

**Europe**

| Zone | Location |
| --- | --- |
| europe-west1 | St. Ghislain, Belgium |
| europe-west2 | London, UK |
| europe-west3 | Frankfurt, Germany |
| europe-west4 | Eemshaven, Netherlands |
| europe-west6 | Zurich, Switzerland |
| europe-west8 | Milan, Italy |
| europe-west9 | Paris, France |
| europe-west10 | Berlin, Germany |
| europe-west12 | Turin, Italy |
| europe-north1 | Hamina, Finland |
| europe-north2 | Stockholm, Sweden |
| europe-central2 | Warsaw, Poland |
| europe-southwest1 | Madrid, Spain |

**Asia-Pacific**

| Zone | Location |
| --- | --- |
| asia-east1 | Changhua, Taiwan |
| asia-east2 | Kowloon, Hong Kong |
| asia-northeast1 | Tokyo, Japan |
| asia-northeast2 | Osaka, Japan |
| asia-northeast3 | Seoul, South Korea |
| asia-south1 | Mumbai, India |
| asia-south2 | Delhi, India |
| asia-southeast1 | Jurong West, Singapore |
| asia-southeast2 | Jakarta, Indonesia |
| asia-southeast3 | Bangkok, Thailand |
| australia-southeast1 | Sydney, Australia |
| australia-southeast2 | Melbourne, Australia |

**Middle East & Africa**

| Zone | Location |
| --- | --- |
| africa-south1 | Johannesburg, South Africa |
| me-central1 | Doha, Qatar |
| me-central2 | Dammam, Saudi Arabia |
| me-west1 | Tel Aviv, Israel |

Endpoint Configuration

New Deployment Dialog

The New Deployment dialog provides:

| Setting         | Description                  | Default |
| --------------- | ---------------------------- | ------- |
| Model           | Select from completed models | -       |
| Region          | Deployment region            | -       |
| Deployment Name | Auto-generated, editable     | -       |
| CPU Cores       | CPU allocation (1-8)         | 1       |
| Memory (GB)     | Memory allocation (1-32 GB)  | 2       |

Ultralytics Platform New Deployment Dialog Resources Panel Expanded

Resource settings are available under the collapsible Resources section. Deployments use scale-to-zero by default (min instances = 0, max instances = 1), so you only pay for active inference time.

Auto-Generated Names

The deployment name is automatically generated from the model name and region city (e.g., yolo11n-iowa). If you deploy the same model to the same region again, a numeric suffix is added (e.g., yolo11n-iowa-2).
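The naming rule above can be sketched in a few lines. This is an illustration of the described behavior, not the Platform's actual code; the `slugify` and `deployment_name` helpers are my own names.

```python
import re


def slugify(text):
    """Lowercase and replace runs of non-alphanumerics with hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")


def deployment_name(model, city, existing):
    """Model name + region city, with a numeric suffix on collisions."""
    base = f"{slugify(model)}-{slugify(city)}"
    if base not in existing:
        return base
    n = 2
    while f"{base}-{n}" in existing:
        n += 1
    return f"{base}-{n}"
```

For example, deploying `YOLO11n` to Iowa twice would yield `yolo11n-iowa` and then `yolo11n-iowa-2`.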

Deploy Tab (Quick Deploy)

When deploying from the model's Deploy tab, endpoints are created with default resources (1 CPU, 2 GB memory) with scale-to-zero enabled. The deployment name is auto-generated.

Manage Endpoints

View Modes

The deployments list supports three view modes:

| Mode    | Description                                            |
| ------- | ------------------------------------------------------ |
| Cards   | Full detail cards with logs, code examples, and predict panel |
| Compact | Grid of smaller cards with key metrics                 |
| Table   | DataTable with sortable columns and search             |

Ultralytics Platform Deploy Tab Active Deployments Cards View

Deployment Card (Cards View)

Each deployment card in the cards view shows:

  • Header: Name, region flag, status badge, start/stop/delete buttons
  • Endpoint URL: Copyable URL with link to API docs
  • Metrics: Request count (24h), P95 latency, error rate
  • Health check: Live health indicator with latency and manual refresh
  • Tabs: Logs, Code, and Predict

The Logs tab shows recent log entries with severity filtering (All / Errors). The Code tab shows ready-to-use code examples in Python, JavaScript, and cURL with your actual endpoint URL and API key. The Predict tab provides an inline predict panel for testing directly on the deployment.

Deployment Statuses

| Status   | Description                             |
| -------- | --------------------------------------- |
| Creating | Deployment is being set up              |
| Deploying | Container is starting                  |
| Ready    | Endpoint is live and accepting requests |
| Stopping | Endpoint is shutting down               |
| Stopped  | Endpoint is paused (no billing)         |
| Failed   | Deployment failed (see error message)   |

Endpoint URL

Each endpoint has a unique URL, for example:

https://predict-abc123.run.app

Ultralytics Platform Deployment Card Endpoint Url With Copy Button

Click the copy button to copy the URL. Click the docs icon to view the auto-generated API documentation for the endpoint.

Lifecycle Management

Control your endpoint state:

```mermaid
graph LR
    R[Ready] -->|Stop| S[Stopped]
    S -->|Start| R
    R -->|Delete| D[Deleted]
    S -->|Delete| D

    style R fill:#4CAF50,color:#fff
    style S fill:#9E9E9E,color:#fff
    style D fill:#F44336,color:#fff
```

| Action | Description                    |
| ------ | ------------------------------ |
| Start  | Resume a stopped endpoint      |
| Stop   | Pause the endpoint (no billing) |
| Delete | Permanently remove the endpoint |
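The simplified graph above can be encoded client-side to validate an action before calling the API. This is a sketch under the stated transitions only; the table and function names are mine, not part of the Platform SDK.

```python
# Valid (status, action) -> next status, mirroring the simplified lifecycle graph
TRANSITIONS = {
    ("Ready", "stop"): "Stopped",
    ("Stopped", "start"): "Ready",
    ("Ready", "delete"): "Deleted",
    ("Stopped", "delete"): "Deleted",
}


def apply_action(status, action):
    """Return the next status, or raise if the action is invalid in this status."""
    try:
        return TRANSITIONS[(status, action)]
    except KeyError:
        raise ValueError(f"Cannot {action} an endpoint in status {status!r}")
```

Checking transitions locally gives a clearer error than a rejected API call, e.g. stopping an already-stopped endpoint.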

Stop Endpoint

Stop an endpoint to pause billing:

  1. Click the pause icon on the deployment card
  2. Endpoint status changes to "Stopping" then "Stopped"

Stopped endpoints:

  • Don't accept requests
  • Don't incur charges
  • Can be restarted anytime

Delete Endpoint

Permanently remove an endpoint:

  1. Click the delete (trash) icon on the deployment card
  2. Confirm deletion in the dialog

Permanent Action

Deletion is immediate and permanent. You can always create a new endpoint.

Using Endpoints

Authentication

Each deployment is created with an API key from your account. Include it in requests:

Authorization: Bearer YOUR_API_KEY

The API key prefix is displayed in the deployment card footer for identification. Manage your keys on the API Keys page.

No Rate Limits

Dedicated endpoints are not subject to the Platform API rate limits. Requests go directly to your dedicated service, so throughput is limited only by your endpoint's CPU, memory, and scaling configuration. This is a key advantage over shared inference, which is rate-limited to 20 requests/min per API key.
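Because throughput is bounded by your endpoint's resources rather than a shared quota, you can fan requests out client-side. A minimal sketch using a thread pool; the `send` callable is injected here (it would typically wrap `requests.post` with your endpoint URL and auth header), so the pattern is shown without assuming a live endpoint.

```python
from concurrent.futures import ThreadPoolExecutor


def run_batch(items, send, max_workers=8):
    """Send many inference requests concurrently.

    `send` takes one item (e.g. an image path) and returns the parsed result.
    Results come back in the same order as `items`.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(send, items))
```

Keep `max_workers` in line with your endpoint's CPU and max-instance settings; more client threads than the endpoint can serve just queues requests.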

Request Example

Python

```python
import requests

# Deployment endpoint
url = "https://predict-abc123.run.app/predict"

# Headers with your deployment API key
headers = {"Authorization": "Bearer YOUR_API_KEY"}

# Inference parameters
data = {"conf": 0.25, "iou": 0.7, "imgsz": 640}

# Send image for inference
with open("image.jpg", "rb") as f:
    response = requests.post(url, headers=headers, data=data, files={"file": f})

print(response.json())
```

JavaScript

```javascript
// Build form data with image and parameters
const formData = new FormData();
formData.append("file", fileInput.files[0]);
formData.append("conf", "0.25");
formData.append("iou", "0.7");
formData.append("imgsz", "640");

// Send image for inference
const response = await fetch("https://predict-abc123.run.app/predict", {
  method: "POST",
  headers: { Authorization: "Bearer YOUR_API_KEY" },
  body: formData,
});

const result = await response.json();
console.log(result);
```

cURL

```bash
curl -X POST \
  "https://predict-abc123.run.app/predict" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "file=@image.jpg" \
  -F "conf=0.25" \
  -F "iou=0.7" \
  -F "imgsz=640"
```

Request Parameters

| Parameter | Type   | Default | Description                    |
| --------- | ------ | ------- | ------------------------------ |
| file      | file   | -       | Image file (required)          |
| conf      | float  | 0.25    | Minimum confidence threshold   |
| iou       | float  | 0.7     | NMS IoU threshold              |
| imgsz     | int    | 640     | Input image size               |
| normalize | string | -       | Return normalized coordinates  |

Response Format

The response format is the same as shared inference, with task-specific fields for each model type.
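As a sketch of consuming a detection response, assuming the shared-inference shape of an `images` list whose entries carry `results` with `name`, `confidence`, and `box` fields. These field names are an assumption on my part; verify them against your endpoint's auto-generated API docs before relying on them.

```python
def boxes_above(response, conf=0.5):
    """Yield (name, confidence, box) for detections above a confidence threshold.

    Assumes the shared-inference response layout; adjust field names
    to match your endpoint's actual schema.
    """
    for image in response.get("images", []):
        for det in image.get("results", []):
            if det["confidence"] >= conf:
                yield det["name"], det["confidence"], det["box"]
```

This would typically be applied to `response.json()` from the request example above.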

Pricing

Dedicated endpoints bill based on:

| Component | Rate                 |
| --------- | -------------------- |
| CPU       | Per vCPU-second      |
| Memory    | Per GB-second        |
| Requests  | Per million requests |
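The three components combine additively; a back-of-the-envelope estimator follows. The rates passed in are deliberately left as parameters because actual Ultralytics pricing is not stated here; any numbers you plug in are your own assumptions.

```python
def monthly_cost(cpu_seconds, gb_seconds, requests, cpu_rate, mem_rate, req_rate):
    """Estimate monthly cost from the three billing components.

    cpu_rate: $ per vCPU-second (placeholder, supply real pricing)
    mem_rate: $ per GB-second (placeholder)
    req_rate: $ per million requests (placeholder)
    """
    return cpu_seconds * cpu_rate + gb_seconds * mem_rate + (requests / 1e6) * req_rate
```

With scale-to-zero, `cpu_seconds` and `gb_seconds` accrue only while instances are active, which is why it is the default for development endpoints.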

Cost Optimization

  • Use scale-to-zero for development endpoints
  • Set appropriate max instances
  • Monitor usage in the Monitoring dashboard
  • Review costs in Settings > Billing

FAQ

How many endpoints can I create?

Endpoint limits depend on plan:

  • Free: Up to 3 deployments
  • Pro: Up to 10 deployments
  • Enterprise: Unlimited deployments

Each model can still be deployed to multiple regions within your plan quota.

Can I change the region after deployment?

No, regions are fixed. To change regions:

  1. Delete the existing endpoint
  2. Create a new endpoint in the desired region

How do I handle multi-region deployment?

For global coverage:

  1. Deploy to multiple regions
  2. Use a load balancer or DNS routing
  3. Route users to the nearest endpoint
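Step 3 can be sketched as a nearest-first failover: try endpoints in latency order and use the first healthy one. The health-check callable is injected so the pattern does not assume any particular health URL scheme; `route_request` is my own name, not a Platform API.

```python
def route_request(endpoints, is_healthy):
    """Return the first healthy endpoint URL from a nearest-first list.

    endpoints: URLs ordered by measured latency, nearest first.
    is_healthy: callable(url) -> bool, e.g. a quick HTTP health probe.
    """
    for url in endpoints:
        if is_healthy(url):
            return url
    raise RuntimeError("No healthy endpoint available")
```

In production you would more likely put a load balancer or latency-based DNS in front, as the list above suggests; this client-side fallback is useful for simple scripts.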

What's the cold start time?

Cold start time depends on model size and whether the container is already cached in the region. Typical ranges:

| Scenario            | Cold Start     |
| ------------------- | -------------- |
| Cached container    | ~5-15 seconds  |
| First deploy/region | ~15-45 seconds |

The health check uses a 55-second timeout to accommodate worst-case cold starts.
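A client waiting for an endpoint to come up can poll with the same 55-second budget. This is a sketch; the polling interval and function names are assumptions, and the clock/sleep hooks exist only so the loop can be exercised without real waiting.

```python
import time


def wait_until_ready(check, timeout=55.0, interval=2.0,
                     clock=time.monotonic, sleep=time.sleep):
    """Poll `check()` until it returns True or `timeout` seconds elapse.

    `check` would typically hit the endpoint's health URL and return
    True on a healthy response. Returns False on timeout.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        if check():
            return True
        sleep(interval)
    return False
```

The 55-second default mirrors the worst-case cold start described above, so a `False` return genuinely suggests a failed deployment rather than a slow start.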

Can I use custom domains?

Custom domains are coming soon. Currently, endpoints use platform-generated URLs.



Created 1 month ago · Updated 4 days ago · Authors: glenn-jocher, sergiuwaxmann