
Azure OpenAI in Microsoft Foundry Models quotas and limits

This article contains a quick reference and a detailed description of the quotas and limits for Azure OpenAI.

Scope of quota

Quotas and limits aren’t enforced at the tenant level. Instead, the highest level of quota restrictions is scoped at the Azure subscription level.

Regional quota allocation

Tokens per minute (TPM) and requests per minute (RPM) limits are defined per region, per subscription, and per model or deployment type. For example, if the gpt-4.1 Global Standard model is listed with a quota of 5 million TPM and 5,000 RPM, then each region where that model or deployment type is available has its own dedicated quota pool of that amount for each of your Azure subscriptions. Within a single Azure subscription, it’s possible to use a larger quantity of total TPM and RPM quota for a given model and deployment type, as long as you have resources and model deployments spread across multiple regions.
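
As a back-of-the-envelope illustration of how per-region pools combine, the following sketch multiplies a per-region quota by the number of regions where you've placed deployments. The region list and the TPM figure are illustrative placeholders, not guaranteed values for your subscription:

```python
# Illustrative only: each region grants its own quota pool for a given
# model and deployment type within a single Azure subscription.
PER_REGION_TPM = 5_000_000  # e.g., the gpt-4.1 Global Standard figure above
regions_with_deployments = ["eastus2", "swedencentral", "westus3"]  # placeholder regions

total_available_tpm = PER_REGION_TPM * len(regions_with_deployments)
print(f"Aggregate TPM across regions: {total_available_tpm:,}")  # 15,000,000
```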

Quotas and limits reference

The following section provides you with a quick guide to the default quotas and limits that apply to Azure OpenAI:
| Limit name | Limit value |
| --- | --- |
| Azure OpenAI resources per region, per Azure subscription | 30 |
| Default DALL-E 2 quota limits | 2 concurrent requests |
| Default DALL-E 3 quota limits | 6 requests per minute |
| Default GPT-image-1 quota limits | 9 requests per minute |
| Default GPT-image-1-mini quota limits | 12 requests per minute |
| Default GPT-image-1.5 quota limits | 9 requests per minute |
| Default Sora quota limits | 60 requests per minute |
| Default Sora 2 quota limits | 2 job requests¹ per minute |
| Default speech-to-text audio API quota limits | 3 requests per minute |
| Maximum prompt tokens per request | Varies per model. For more information, see Azure OpenAI models. |
| Maximum standard deployments per resource | 32 |
| Maximum fine-tuned model deployments | 10 |
| Total number of training jobs per resource | 100 |
| Maximum simultaneously running training jobs per resource | Standard and global training: 3; developer training: 5 |
| Maximum training jobs queued | 20 |
| Maximum files per resource (fine-tuning) | 100 |
| Total size of all files per resource (fine-tuning) | 1 GB |
| Maximum training job time (job fails if exceeded) | 720 hours |
| Maximum training job size (tokens in training file × number of epochs) | 2 billion |
| Maximum size of all files per upload (Azure OpenAI on your data) | 16 MB |
| Maximum number of inputs in array with /embeddings | 2,048 |
| Maximum number of /chat/completions messages | 2,048 |
| Maximum number of /chat/completions functions | 128 |
| Maximum number of /chat/completions tools | 128 |
| Maximum number of provisioned throughput units per deployment | 100,000 |
| Maximum files per assistant or thread | 10,000 when using the API or the Microsoft Foundry portal |
| Maximum file size for assistants and fine-tuning | 512 MB via the API; 200 MB via the Foundry portal |
| Maximum file upload requests per resource | 30 requests per second |
| Maximum size for all uploaded files for assistants | 200 GB |
| Assistants token limit | 2,000,000 tokens |
| GPT-4o and GPT-4.1 maximum images per request (number of images in the messages array or conversation history) | 50 |
| GPT-4 vision-preview and GPT-4 turbo-2024-04-09 default maximum tokens | 16. Increase the max_tokens parameter value to avoid truncated responses. GPT-4o maximum tokens defaults to 4,096. |
| Maximum number of custom headers in API requests² | 10 |
| Message character limit | 1,048,576 |
| Message size for audio files | 20 MB |
¹ The Sora 2 RPM quota only counts video job requests. Other types of requests aren't rate limited.

² Our current APIs allow up to 10 custom headers, which are passed through the pipeline and returned. Some customers now exceed this header count, which results in HTTP 431 errors. There's no solution for this error, other than to reduce header volume. In future API versions, we won't pass through custom headers. We recommend that customers don't depend on custom headers in future system architectures.
Quota limits are subject to change.

GPT-5.2 series

| Model | Deployment type | Default RPM | Default TPM | Enterprise and MCA-E RPM | Enterprise and MCA-E TPM |
| --- | --- | --- | --- | --- | --- |
| gpt-5.2 | Data Zone Standard | 3,000 | 300,000 | 30,000 | 3,000,000 |
| gpt-5.2 | Global Standard | 10,000 | 1,000,000 | 100,000 | 10,000,000 |
| gpt-5.2-chat | Global Standard | 10,000 | 1,000,000 | 50,000 | 5,000,000 |
| gpt-5.2-codex | Global Standard | 1,000 | 1,000,000 | 10,000 | 10,000,000 |

GPT-5.1 series

| Model | Deployment type | Default RPM | Default TPM | Enterprise and MCA-E RPM | Enterprise and MCA-E TPM |
| --- | --- | --- | --- | --- | --- |
| gpt-5.1 | Data Zone Standard | 3,000 | 300,000 | 30,000 | 3,000,000 |
| gpt-5.1 | Global Standard | 10,000 | 1,000,000 | 100,000 | 10,000,000 |
| gpt-5.1-chat | Global Standard | 10,000 | 1,000,000 | 50,000 | 5,000,000 |
| gpt-5.1-codex | Global Standard | 1,000 | 1,000,000 | 10,000 | 10,000,000 |
| gpt-5.1-codex-mini | Global Standard | 1,000 | 1,000,000 | 10,000 | 10,000,000 |
| gpt-5.1-codex-max | Global Standard | 10,000 | 1,000,000 | 100,000 | 10,000,000 |

GPT-5 series

| Model | Deployment type | Default RPM | Default TPM | Enterprise and MCA-E RPM | Enterprise and MCA-E TPM |
| --- | --- | --- | --- | --- | --- |
| gpt-5 | Data Zone Standard | 3,000 | 300,000 | 30,000 | 3,000,000 |
| gpt-5 | Global Standard | 10,000 | 1,000,000 | 100,000 | 10,000,000 |
| gpt-5-chat | Global Standard | 1,000 | 1,000,000 | 5,000 | 5,000,000 |
| gpt-5-mini | Data Zone Standard | 300 | 300,000 | 3,000 | 3,000,000 |
| gpt-5-mini | Global Standard | 1,000 | 1,000,000 | 10,000 | 10,000,000 |
| gpt-5-nano | Data Zone Standard | 2,000 | 2,000,000 | 50,000 | 50,000,000 |
| gpt-5-nano | Global Standard | 5,000 | 5,000,000 | 150,000 | 150,000,000 |
| gpt-5-codex | Global Standard | 1,000 | 1,000,000 | 10,000 | 10,000,000 |
| gpt-5-pro | Global Standard | 1,600 | 160,000 | 16,000 | 1,600,000 |

model-router rate limits

| Model | Deployment type | Default RPM | Default TPM | Enterprise and MCA-E RPM | Enterprise and MCA-E TPM |
| --- | --- | --- | --- | --- | --- |
| model-router (2025-11-18) | Data Zone Standard | 150 | 150,000 | 300 | 300,000 |
| model-router (2025-11-18) | Global Standard | 250 | 250,000 | 400 | 400,000 |

Batch limits

| Limit name | Limit value |
| --- | --- |
| Maximum Batch input files (no expiration) | 500 |
| Maximum Batch input files (expiration set) | 10,000 |
| Maximum input file size | 200 MB |
| Maximum input file size, bring your own storage (BYOS) | 1 GB |
| Maximum requests per file | 100,000 |
Batch file limits don’t apply to output files (for example, result.jsonl and error.jsonl). To remove batch input file limits, use Batch with Azure Blob Storage.

Batch quota

The following tables show the batch quota limits. Quota values for global batch are represented in terms of enqueued tokens. When you submit a file for batch processing, the number of tokens in the file is counted. Until the batch job reaches a terminal state, those tokens count against your total enqueued token limit.
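
Because tokens count against the enqueued limit as soon as a file is submitted, it can help to estimate a batch file's token count up front. The following is a minimal sketch, assuming a JSONL input file in the standard batch request format with chat messages, and using the tiktoken library for a rough count; the service's own token accounting is authoritative:

```python
import json
import tiktoken

# Approximate the tokens a batch input file will enqueue.
# Assumes each JSONL line holds a batch request with a chat "messages" array.
enc = tiktoken.get_encoding("o200k_base")  # encoding used by gpt-4o-class models

def estimate_enqueued_tokens(path: str) -> int:
    total = 0
    with open(path) as f:
        for line in f:
            request = json.loads(line)
            for message in request["body"]["messages"]:
                content = message.get("content", "")
                if isinstance(content, str):  # skip non-text (e.g., image) parts
                    total += len(enc.encode(content))
    return total

print(estimate_enqueued_tokens("batch_input.jsonl"))  # placeholder file name
```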

Global batch

| Model | Enterprise and MCA-E | Default | Monthly credit card-based subscriptions | MSDN subscriptions | Azure for Students, free trials |
| --- | --- | --- | --- | --- | --- |
| gpt-4.1 | 5B | 200M | 50M | 90K | N/A |
| gpt-4.1-mini | 15B | 1B | 50M | 90K | N/A |
| gpt-4.1-nano | 15B | 1B | 50M | 90K | N/A |
| gpt-4o | 5B | 200M | 50M | 90K | N/A |
| gpt-4o-mini | 15B | 1B | 50M | 90K | N/A |
| gpt-4-turbo | 300M | 80M | 40M | 90K | N/A |
| gpt-4 | 150M | 30M | 5M | 100K | N/A |
| o3-mini | 15B | 1B | 50M | 90K | N/A |
| o4-mini | 15B | 1B | 50M | 90K | N/A |
| gpt-5 | 5B | 200M | 50M | 90K | N/A |
| gpt-5.1 | 5B | 200M | 50M | 90K | N/A |
B = billion | M = million | K = thousand

Data zone batch

| Model | Enterprise and MCA-E | Default | Monthly credit card-based subscriptions | MSDN subscriptions | Azure for Students, free trials |
| --- | --- | --- | --- | --- | --- |
| gpt-4.1 | 500M | 30M | 30M | 90K | N/A |
| gpt-4.1-mini | 1.5B | 100M | 50M | 90K | N/A |
| gpt-4o | 500M | 30M | 30M | 90K | N/A |
| gpt-4o-mini | 1.5B | 100M | 50M | 90K | N/A |
| o3-mini | 1.5B | 100M | 50M | 90K | N/A |
| gpt-5 | 5B | 200M | 50M | 90K | N/A |
| gpt-5.1 | 5B | 200M | 50M | 90K | N/A |

gpt-oss

| Model | Tokens per minute (TPM) | Requests per minute (RPM) |
| --- | --- | --- |
| gpt-oss-120b | 5M | 5K |

GPT-4 rate limits

GPT-4.5 preview Global Standard

| Model | Tier | Quota limit in tokens per minute | Requests per minute |
| --- | --- | --- | --- |
| gpt-4.5 | Enterprise and MCA-E | 200K | 200 |
| gpt-4.5 | Default | 150K | 150 |

GPT-4.1 series Global Standard

| Model | Tier | Quota limit in tokens per minute (TPM) | Requests per minute |
| --- | --- | --- | --- |
| gpt-4.1 (2025-04-14) | Enterprise and MCA-E | 5M | 5K |
| gpt-4.1 (2025-04-14) | Default | 1M | 1K |
| gpt-4.1-nano (2025-04-14) | Enterprise and MCA-E | 150M | 150K |
| gpt-4.1-nano (2025-04-14) | Default | 5M | 5K |
| gpt-4.1-mini (2025-04-14) | Enterprise and MCA-E | 150M | 150K |
| gpt-4.1-mini (2025-04-14) | Default | 5M | 5K |

GPT-4.1 series Data Zone Standard

| Model | Tier | Quota limit in tokens per minute (TPM) | Requests per minute |
| --- | --- | --- | --- |
| gpt-4.1 (2025-04-14) | Enterprise and MCA-E | 2M | 2K |
| gpt-4.1 (2025-04-14) | Default | 300K | 300 |
| gpt-4.1-nano (2025-04-14) | Enterprise and MCA-E | 50M | 50K |
| gpt-4.1-nano (2025-04-14) | Default | 2M | 2K |
| gpt-4.1-mini (2025-04-14) | Enterprise and MCA-E | 50M | 50K |
| gpt-4.1-mini (2025-04-14) | Default | 2M | 2K |

GPT-4 Turbo

gpt-4 (turbo-2024-04-09) has rate limit tiers with higher limits for certain customer types.
| Model | Tier | Quota limit in tokens per minute | Requests per minute |
| --- | --- | --- | --- |
| gpt-4 (turbo-2024-04-09) | Enterprise and MCA-E | 2M | 12K |
| gpt-4 (turbo-2024-04-09) | Default | 450K | 2.7K |

computer-use-preview Global Standard rate limits

| Model | Tier | Quota limit in tokens per minute | Requests per minute |
| --- | --- | --- | --- |
| computer-use-preview | Enterprise and MCA-E | 30M | 300K |
| computer-use-preview | Default | 450K | 4.5K |

o-series rate limits

The ratio of requests per minute to tokens per minute for quota can vary by model. When you deploy a model programmatically or request a quota increase, you don’t have granular control over tokens per minute and requests per minute as independent values. Quota is allocated in terms of units of capacity, which have corresponding amounts of requests per minute and tokens per minute.
| Model | Capacity | Requests per minute (RPM) | Tokens per minute (TPM) |
| --- | --- | --- | --- |
| Older chat models | 1 unit | 6 RPM | 1,000 TPM |
| o1 and o1-preview | 1 unit | 1 RPM | 6,000 TPM |
| o3 | 1 unit | 1 RPM | 1,000 TPM |
| o4-mini | 1 unit | 1 RPM | 1,000 TPM |
| o3-mini | 1 unit | 1 RPM | 10,000 TPM |
| o1-mini | 1 unit | 1 RPM | 10,000 TPM |
| o3-pro | 1 unit | 1 RPM | 10,000 TPM |
This concept is important for programmatic model deployment, because changes in the RPM to TPM ratio can result in accidental misallocation of quota.
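
To see how the RPM-to-TPM ratio can bite, the following sketch converts a target TPM into capacity units, using per-unit values from the preceding table, and reports the RPM that those units actually grant:

```python
import math

# RPM and TPM granted per capacity unit, from the preceding table.
PER_UNIT = {
    "o1": {"rpm": 1, "tpm": 6_000},
    "o3-mini": {"rpm": 1, "tpm": 10_000},
    "older-chat": {"rpm": 6, "tpm": 1_000},
}

def allocate(model: str, target_tpm: int) -> dict:
    """Round a target TPM up to whole capacity units and report the result."""
    unit = PER_UNIT[model]
    units = math.ceil(target_tpm / unit["tpm"])
    return {"units": units, "tpm": units * unit["tpm"], "rpm": units * unit["rpm"]}

print(allocate("o1", 60_000))       # {'units': 10, 'tpm': 60000, 'rpm': 10}
print(allocate("o3-mini", 60_000))  # {'units': 6, 'tpm': 60000, 'rpm': 6}
```

The same 60,000 TPM buys 10 RPM of o1 but only 6 RPM of o3-mini, so sizing quota by TPM alone can leave a deployment short on requests per minute.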

o-series Global Standard

| Model | Tier | Quota limit in tokens per minute | Requests per minute |
| --- | --- | --- | --- |
| codex-mini | Enterprise and MCA-E | 10M | 10K |
| o3-pro | Enterprise and MCA-E | 16M | 1.6K |
| o4-mini | Enterprise and MCA-E | 10M | 10K |
| o3 | Enterprise and MCA-E | 10M | 10K |
| o3-mini | Enterprise and MCA-E | 50M | 5K |
| o1 and o1-preview | Enterprise and MCA-E | 30M | 5K |
| o1-mini | Enterprise and MCA-E | 50M | 5K |
| codex-mini | Default | 1M | 1K |
| o3-pro | Default | 1.6M | 160 |
| o4-mini | Default | 1M | 1K |
| o3 | Default | 1M | 1K |
| o3-mini | Default | 5M | 500 |
| o1 and o1-preview | Default | 3M | 500 |
| o1-mini | Default | 5M | 500 |

o-series Data Zone Standard

| Model | Tier | Quota limit in tokens per minute | Requests per minute |
| --- | --- | --- | --- |
| o3 | Default | 10M | 10K |
| o4-mini | Default | 10M | 10K |
| o3-mini | Enterprise and MCA-E | 20M | 2K |
| o3-mini | Default | 2M | 200 |
| o1 | Enterprise and MCA-E | 6M | 1K |
| o1 | Default | 600K | 100 |

o1-preview and o1-mini Standard

| Model | Tier | Quota limit in tokens per minute | Requests per minute |
| --- | --- | --- | --- |
| o1-preview | Enterprise and MCA-E | 600K | 100 |
| o1-mini | Enterprise and MCA-E | 1M | 100 |
| o1-preview | Default | 300K | 50 |
| o1-mini | Default | 500K | 50 |

gpt-4o rate limits

gpt-4o and gpt-4o-mini have rate limit tiers with higher limits for certain customer types.

gpt-4o Global Standard

| Model | Tier | Quota limit in tokens per minute | Requests per minute |
| --- | --- | --- | --- |
| gpt-4o | Enterprise and MCA-E | 30M | 180K |
| gpt-4o-mini | Enterprise and MCA-E | 150M | 1.5M |
| gpt-4o | Default | 450K | 2.7K |
| gpt-4o-mini | Default | 2M | 12K |

gpt-4o Data Zone Standard

| Model | Tier | Quota limit in tokens per minute | Requests per minute |
| --- | --- | --- | --- |
| gpt-4o | Enterprise and MCA-E | 10M | 60K |
| gpt-4o-mini | Enterprise and MCA-E | 20M | 120K |
| gpt-4o | Default | 300K | 1.8K |
| gpt-4o-mini | Default | 1M | 6K |

gpt-4o Standard

| Model | Tier | Quota limit in tokens per minute | Requests per minute |
| --- | --- | --- | --- |
| gpt-4o | Enterprise and MCA-E | 1M | 6K |
| gpt-4o-mini | Enterprise and MCA-E | 2M | 12K |
| gpt-4o | Default | 150K | 900 |
| gpt-4o-mini | Default | 450K | 2.7K |

gpt-4o audio

| Model | Tier | Quota limit in tokens per minute | Requests per minute |
| --- | --- | --- | --- |
| gpt-4o-audio-preview | Default | 450K | 1K |
| gpt-4o-realtime-preview | Default | 800K | 1K |
| gpt-4o-mini-audio-preview | Default | 2M | 1K |
| gpt-4o-mini-realtime-preview | Default | 800K | 1K |
| gpt-audio | Default | 100K | 30 |
| gpt-audio-mini | Default | 100K | 30 |
| gpt-realtime | Default | 100K | 100 |
| gpt-realtime-mini | Default | 100K | 100 |
| gpt-realtime-mini-2025-12-15 | Default | 100K | 100 |

GPT-image-1 series rate limits

GPT-image-1 Global Standard

| Model | Tier | Quota limit in tokens per minute | Requests per minute |
| --- | --- | --- | --- |
| gpt-image-1 | Enterprise and MCA-E | N/A | 60 |
| gpt-image-1 | Medium | N/A | 36 |
| gpt-image-1 | Low | N/A | 9 |
| gpt-image-1-mini | Low | N/A | 12 |
| gpt-image-1-mini | Medium | N/A | 36 |
| gpt-image-1-mini | High | N/A | 120 |
| gpt-image-1 | Medium | N/A | 18 |
| gpt-image-1 | High | N/A | 60 |

Usage tiers

Global Standard deployments use Azure's global infrastructure to dynamically route customer traffic to the data center with the best availability for each inference request. Similarly, Data Zone Standard deployments route traffic to the data center with the best availability within the Microsoft-defined data zone for each request. This routing enables more consistent latency for customers with low to medium levels of traffic; customers with high sustained levels of usage might see greater variability in response latency.

Usage tiers are designed to make that variability predictable. Each usage tier defines the maximum throughput (tokens per minute) you can expect with predictable latency for a given model. When your usage stays within your assigned tier, latency remains stable and response times are consistent.

What happens if you exceed your usage tier?

  • If your request throughput exceeds your usage tier—especially during periods of high demand—your response latency may increase significantly.
  • Latency can vary and, in some cases, may be more than two times higher than when operating within your usage tier.
  • This variability is most noticeable for customers with high sustained usage or bursty traffic patterns.
If you encounter 429 errors or notice increased latency variability, here’s what you should do:
  • Request a quota increase: visit the Azure portal to request a higher quota for your subscription.
  • Consider upgrading to a premium offer (PTU): for latency-critical or high-volume workloads, upgrade to Provisioned Throughput Units (PTU). PTU provides dedicated resources, guaranteed capacity, and predictable latency—even at scale. This is the best choice for mission-critical applications that require consistent performance.
  • Monitor your usage: regularly review your usage metrics in the Azure portal to ensure you are operating within your tier limits. Adjust your workload or deployment strategy as needed.
The usage limit determines the level of usage above which customers might see larger variability in response latency. A customer’s usage is defined per model. It’s the total number of tokens consumed across all deployments in all subscriptions in all regions for a given tenant.
Usage tiers apply only to Standard, Data Zone Standard, and Global Standard deployment types. Usage tiers don’t apply to global batch and provisioned throughput deployments.

Global Standard, Data Zone Standard, and Standard

| Model | Usage tier per month |
| --- | --- |
| gpt-5 | 32 billion tokens |
| gpt-5-mini | 160 billion tokens |
| gpt-5-nano | 800 billion tokens |
| gpt-5-chat | 32 billion tokens |
| gpt-4 + gpt-4-32k (all versions) | 6 billion tokens |
| gpt-4o | 12 billion tokens |
| gpt-4o-mini | 85 billion tokens |
| o3-mini | 50 billion tokens |
| o1 | 4 billion tokens |
| o4-mini | 50 billion tokens |
| o3 | 5 billion tokens |
| gpt-4.1 | 30 billion tokens |
| gpt-4.1-mini | 150 billion tokens |
| gpt-4.1-nano | 550 billion tokens |
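
For a rough sense of scale, you can convert a monthly tier into the average sustained throughput it implies. This is back-of-the-envelope arithmetic assuming a 30-day month, not how the service evaluates tiers:

```python
# Convert a monthly usage tier into the average TPM it implies.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 (assuming a 30-day month)

def average_tpm(monthly_tokens: float) -> float:
    return monthly_tokens / MINUTES_PER_MONTH

# gpt-5's 32-billion-token tier corresponds to roughly 740K TPM sustained.
print(f"{average_tpm(32e9):,.0f} TPM")  # ~740,741 TPM
```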

Other offer types

If your Azure subscription is linked to certain offer types, your maximum quota values are lower than the values indicated in the previous tables.
  • GPT-5-pro quota is only available to MCA-E and default quota subscriptions. All other offer types have zero quota for this model by default.
  • GPT-5 reasoning model quota is 20K TPM and 200 RPM for all offer types that don't have access to MCA-E or default quota. GPT-5-chat quota is 50K TPM and 50 RPM.
  • Some offer types are restricted to only Global Standard deployments in the East US2 and Sweden Central regions.
| Tier | Quota limit in tokens per minute |
| --- | --- |
| Azure for Students | 1K (all models); exception: o-series, GPT-4.1, and GPT-4.5 Preview: 0 |
| MSDN | GPT-4o-mini: 200K; computer-use-preview: 8K; gpt-4o-realtime-preview: 1K; o-series: 0; GPT-4.5 Preview: 0; GPT-4.1: 50K; GPT-4.1-nano: 200K |
| Standard and pay-as-you-go | GPT-4o-mini: 200K; computer-use-preview: 30K; o-series: 0; GPT-4.5 Preview: 0; GPT-4.1: 50K; GPT-4.1-nano: 200K |
| Azure_MS-AZR-0111P, Azure_MS-AZR-0035P, Azure_MS-AZR-0025P, Azure_MS-AZR-0052P | GPT-4o-mini: 200K |
| CSP Integration Sandbox* | All models: 0 |
| Lightweight trial, free trials, Azure Pass | All models: 0 |
*This limit applies to only a small number of legacy CSP sandbox subscriptions. To determine the offer type associated with your subscription, run the following query and check the quotaId value it returns. If your quotaId value isn't listed in the table after the output, your subscription qualifies for the default quota.
See the API reference.
```bash
# Sign in and get a bearer token for the Azure Resource Manager API.
az login
access_token=$(az account get-access-token --query accessToken -o tsv)

# Return the subscription's details, including subscriptionPolicies.quotaId.
curl -X GET "https://management.azure.com/subscriptions/{subscriptionId}?api-version=2020-01-01" \
  -H "Authorization: Bearer $access_token" \
  -H "Content-Type: application/json"
```

Output

```json
{
  "authorizationSource": "Legacy",
  "displayName": "Pay-As-You-Go",
  "id": "/subscriptions/aaaaaa-bbbbb-cccc-ddddd-eeeeee",
  "state": "Enabled",
  "subscriptionId": "aaaaaa-bbbbb-cccc-ddddd-eeeeee",
  "subscriptionPolicies": {
    "locationPlacementId": "Public_2014-09-01",
    "quotaId": "PayAsYouGo_2014-09-01",
    "spendingLimit": "Off"
  }
}
```
| Quota allocation/Offer type | Subscription quota ID |
| --- | --- |
| Enterprise and MCA-E | EnterpriseAgreement_2014-09-01 |
| Pay-as-you-go | PayAsYouGo_2014-09-01 |
| MSDN | MSDN_2014-09-01 |
| CSP Integration Sandbox | CSPDEVTEST_2018-05-01 |
| Azure for Students | AzureForStudents_2018-01-01 |
| Free trial | FreeTrial_2014-09-01 |
| Azure Pass | AzurePass_2014-09-01 |
| Azure_MS-AZR-0111P | AzureInOpen_2014-09-01 |
| Azure_MS-AZR-0150P | LightweightTrial_2016-09-01 |
| Azure_MS-AZR-0035P, Azure_MS-AZR-0025P, Azure_MS-AZR-0052P | MPN_2014-09-01 |
| Azure_MS-AZR-0023P, Azure_MS-AZR-0060P, Azure_MS-AZR-0148P, Azure_MS-AZR-0148G | MSDNDevTest_2014-09-01 |
| Default | Any quota ID not listed in this table |
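
If you prefer to check in code, here's a minimal Python sketch that mirrors the curl query above and maps the returned quotaId to an offer type by using the preceding table:

```python
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<your-subscription-id>"  # placeholder

# Map subscription quota IDs to offer types, per the preceding table.
QUOTA_ID_TO_OFFER = {
    "EnterpriseAgreement_2014-09-01": "Enterprise and MCA-E",
    "PayAsYouGo_2014-09-01": "Pay-as-you-go",
    "MSDN_2014-09-01": "MSDN",
    "CSPDEVTEST_2018-05-01": "CSP Integration Sandbox",
    "AzureForStudents_2018-01-01": "Azure for Students",
    "FreeTrial_2014-09-01": "Free trial",
    "AzurePass_2014-09-01": "Azure Pass",
    "AzureInOpen_2014-09-01": "Azure_MS-AZR-0111P",
    "LightweightTrial_2016-09-01": "Azure_MS-AZR-0150P",
    "MPN_2014-09-01": "Azure_MS-AZR-0035P/0025P/0052P",
    "MSDNDevTest_2014-09-01": "Azure_MS-AZR-0023P/0060P/0148P/0148G",
}

# Acquire a bearer token and fetch the subscription's details.
credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default")
resp = requests.get(
    f"https://management.azure.com/subscriptions/{subscription_id}",
    params={"api-version": "2020-01-01"},
    headers={"Authorization": f"Bearer {token.token}"},
)
quota_id = resp.json()["subscriptionPolicies"]["quotaId"]
print(quota_id, "->", QUOTA_ID_TO_OFFER.get(quota_id, "Default quota"))
```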

General best practices to remain within rate limits

To minimize issues related to rate limits, it’s a good idea to use the following techniques:
  • Implement retry logic in your application (a minimal sketch follows this list).
  • Avoid sharp changes in the workload. Increase the workload gradually.
  • Test different load increase patterns.
  • Increase the quota assigned to your deployment. Move quota from another deployment, if necessary.
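
For the retry-logic item, here's a minimal sketch using the requests library with exponential backoff that honors the Retry-After header on HTTP 429 responses. The resource name, deployment name, API version, and key are placeholders to replace with your own values:

```python
import time
import requests

# Placeholders: substitute your own resource, deployment, API version, and key.
ENDPOINT = "https://<resource>.openai.azure.com/openai/deployments/<deployment>/chat/completions"
API_VERSION = "2024-06-01"
HEADERS = {"api-key": "<your-api-key>", "Content-Type": "application/json"}

def chat_with_retries(payload: dict, max_retries: int = 5) -> dict:
    """POST with exponential backoff, honoring Retry-After on HTTP 429."""
    delay = 1.0
    for attempt in range(max_retries):
        response = requests.post(
            ENDPOINT, params={"api-version": API_VERSION},
            headers=HEADERS, json=payload,
        )
        if response.status_code != 429:
            response.raise_for_status()
            return response.json()
        # Prefer the service-suggested wait; otherwise back off exponentially.
        wait = float(response.headers.get("Retry-After", delay))
        time.sleep(wait)
        delay *= 2
    raise RuntimeError("Exhausted retries while rate limited (HTTP 429)")

result = chat_with_retries({"messages": [{"role": "user", "content": "Hello"}]})
print(result["choices"][0]["message"]["content"])
```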

Request quota increases

You can request quota increases for Foundry Models sold directly by Azure, including Azure OpenAI models. Quota increases aren’t generally available for Models from partners and community. Anthropic models are an exception. Submit the quota increase request form to request a quota increase. Requests are processed in the order received. Priority goes to customers who actively consume their existing quota allocation. Requests that don’t meet this condition might be denied. For other rate limit increases, submit a service request.

Regional quota capacity limits

You can view quota availability by region for your subscription in the Foundry portal. To view quota capacity by region for a specific model or version, you can query the capacity API for your subscription. Provide a subscriptionId, model_name, and model_version and the API returns the available capacity for that model across all regions and deployment types for your subscription.
Currently, both the Foundry portal and the capacity API return quota/capacity information for models that are retired and no longer available.
See the API reference.
```python
import requests
import json
from azure.identity import DefaultAzureCredential

subscriptionId = "<your-subscription-id>"  # Replace with your subscription ID
model_name = "gpt-4o"          # Example value; replace with your model name
model_version = "2024-08-06"   # Example value; replace with your model version

# Acquire a bearer token for the Azure Resource Manager API.
token_credential = DefaultAzureCredential()
token = token_credential.get_token("https://management.azure.com/.default")
headers = {"Authorization": "Bearer " + token.token}

# Query available capacity for the model across all regions and deployment types.
url = f"https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.CognitiveServices/modelCapacities"
params = {
    "api-version": "2024-06-01-preview",
    "modelFormat": "OpenAI",
    "modelName": model_name,
    "modelVersion": model_version
}

response = requests.get(url, params=params, headers=headers)
model_capacity = response.json()

print(json.dumps(model_capacity, indent=2))
```