
Azure OpenAI in Microsoft Foundry Models v1 API

This article shows you how to use the v1 Azure OpenAI API. The v1 API simplifies authentication, removes the need for dated api-version parameters, and supports cross-provider model calls.
New objects might be added to API responses at any time. We recommend that you parse only the response objects you require.

Prerequisites

API evolution

Previously, Azure OpenAI received monthly updates of new API versions. Taking advantage of new features required constantly updating code and environment variables with each new API release. Azure OpenAI also required the extra step of using Azure-specific clients, which created overhead when migrating code between OpenAI and Azure OpenAI. Starting in August 2025, you can opt in to the next-generation v1 Azure OpenAI APIs, which add support for:
  • Ongoing access to the latest features with no need to specify a new api-version each month.
  • A faster API release cycle, with new features launching more frequently.
  • OpenAI client support with minimal code changes to swap between OpenAI and Azure OpenAI when using key-based authentication.
  • OpenAI client support for token-based authentication and automatic token refresh, without a dependency on a separate Azure OpenAI client.
  • Chat completions calls with models from other providers, such as DeepSeek and Grok, that support the v1 chat completions syntax.
Access to new API calls that are still in preview is controlled by passing feature-specific preview headers, letting you opt in to the features you want without swapping API versions. Alternatively, some features indicate preview status through their API path and don't require an additional header. Examples (a client sketch in Python follows this list):
  • /openai/v1/evals is in preview and requires passing an "aoai-evals":"preview" header.
  • /openai/v1/fine_tuning/alpha/graders/ is in preview and requires no custom header due to the presence of alpha in the API path.
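For example, a minimal sketch of opting in to the evals preview by attaching the header to every request the client makes; the resource URL and environment variable are placeholders used throughout this article:

import os
from openai import OpenAI

# default_headers attaches the opt-in header to every request from this client.
# "aoai-evals":"preview" enables the /openai/v1/evals preview surface.
client = OpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    base_url="https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/",
    default_headers={"aoai-evals": "preview"},
)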
For the initial v1 Generally Available (GA) API launch, only a subset of the inference and authoring API capabilities are supported. All GA features are supported for use in production. Support for more capabilities is being added rapidly.

Code changes

v1 API

Python v1 examples

API key:
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    base_url="https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/"
)

response = client.responses.create(   
  model="gpt-4.1-nano", # Replace with your model deployment name 
  input="This is a test.",
)

print(response.model_dump_json(indent=2)) 
Key differences from the previous API:
  • OpenAI() client is used instead of AzureOpenAI().
  • base_url passes the Azure OpenAI endpoint and /openai/v1 is appended to the endpoint address.
  • api-version is no longer a required parameter with the v1 GA API.
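For comparison, a rough sketch of the previous pattern, which required the Azure-specific client and a dated api-version (the version string here is only an example):

import os
from openai import AzureOpenAI

# Previous pattern: Azure-specific client plus a dated api-version parameter.
client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2025-04-01-preview",  # example dated api-version; not needed with v1
    azure_endpoint="https://YOUR-RESOURCE-NAME.openai.azure.com/",
)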
API key with environment variables:

Set the following environment variables before running the code:

Variable | Value
OPENAI_BASE_URL | https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/
OPENAI_API_KEY | Your Azure OpenAI API key
Then create the client without parameters:
client = OpenAI()
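With those variables set, the full call from the earlier example works without passing any parameters to the client; gpt-4.1-nano is a placeholder for your own deployment name:

from openai import OpenAI

# OPENAI_BASE_URL and OPENAI_API_KEY are read from the environment.
client = OpenAI()

response = client.responses.create(
    model="gpt-4.1-nano",  # Replace with your model deployment name
    input="This is a test.",
)

print(response.model_dump_json(indent=2))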
Microsoft Entra ID:
Automatic token refresh was previously handled by the AzureOpenAI() client. The v1 API removes this dependency by adding automatic token refresh support to the OpenAI() client.
from openai import OpenAI
from azure.identity import DefaultAzureCredential, get_bearer_token_provider

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = OpenAI(
    base_url="https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/",
    api_key=token_provider,
)

response = client.responses.create(
    model="gpt-4.1-nano",  # Replace with your model deployment name
    input="This is a test",
)

print(response.model_dump_json(indent=2)) 
  • base_url passes the Azure OpenAI endpoint and /openai/v1 is appended to the endpoint address.
  • api_key parameter is set to token_provider, enabling automatic retrieval and refresh of an authentication token instead of using a static API key.

Model support

For Azure OpenAI models, we recommend using the Responses API. However, the v1 API also lets you make chat completions calls with models from other providers, such as DeepSeek and Grok, that support the OpenAI v1 chat completions syntax. base_url accepts both the https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/ and https://YOUR-RESOURCE-NAME.services.ai.azure.com/openai/v1/ formats.
The Responses API also works with Foundry Models sold directly by Azure, such as Microsoft AI, DeepSeek, and Grok models. To learn how to use the Responses API with these models, see How to generate text responses with Microsoft Foundry Models.
from openai import OpenAI
from azure.identity import DefaultAzureCredential, get_bearer_token_provider

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = OpenAI(
    base_url="https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/",
    api_key=token_provider,
)

completion = client.chat.completions.create(
    model="MAI-DS-R1",  # Replace with your model deployment name.
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Tell me about the attention is all you need paper"}
    ]
)

# print(completion.choices[0].message)
print(completion.model_dump_json(indent=2))

v1 API support

Status

Generally Available features are supported for use in production.
API Path | Status
/openai/v1/chat/completions | Generally Available
/openai/v1/embeddings | Generally Available
/openai/v1/evals | Preview
/openai/v1/files | Generally Available
/openai/v1/fine_tuning/jobs/{fine_tuning_job_id}/checkpoints/{fine_tuning_checkpoint_id}/copy | Preview
/openai/v1/fine_tuning/alpha/graders/ | Preview
/openai/v1/fine_tuning/ | Generally Available
/openai/v1/models | Generally Available
/openai/v1/responses | Generally Available
/openai/v1/vector_stores | Generally Available
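As a quick way to confirm connectivity against one of the Generally Available paths, listing models with the client calls /openai/v1/models. This is a minimal sketch that reuses the key-based client from the earlier example:

import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    base_url="https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/",
)

# GET /openai/v1/models: verifies the endpoint format and credentials.
for model in client.models.list():
    print(model.id)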

Preview headers

API Path | Header
/openai/v1/evals | "aoai-evals":"preview"
/openai/v1/fine_tuning/jobs/{fine_tuning_job_id}/checkpoints/{fine_tuning_checkpoint_id}/copy | "aoai-copy-ft-checkpoints":"preview"
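The header can also be sent per request instead of on the client. A minimal sketch using the per-request extra_headers option, assuming an openai package version recent enough to include the evals resource:

import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    base_url="https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/",
)

# Per-request opt-in: send the preview header only on the calls that need it.
evals_page = client.evals.list(extra_headers={"aoai-evals": "preview"})
print(evals_page.model_dump_json(indent=2))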

API version changelog

The following sections summarize changes between API versions.

Changes between v1 preview release and 2025-04-01-preview

  • v1 preview API
  • Video generation support
  • NEW Responses API features:
    • Remote Model Context Protocol (MCP) servers tool integration
    • Support for asynchronous background tasks
    • Encrypted reasoning items
    • Image generation

Changes between 2025-04-01-preview and 2025-03-01-preview

Changes between 2025-03-01-preview and 2025-02-01-preview

Changes between 2025-02-01-preview and 2025-01-01-preview

  • Stored completions (distillation API support).

Changes between 2025-01-01-preview and 2024-12-01-preview

Changes between 2024-12-01-preview and 2024-10-01-preview

Changes between 2024-09-01-preview and 2024-08-01-preview

  • max_completion_tokens added to support o1-preview and o1-mini models. max_tokens doesn’t work with the o1 series models.
  • parallel_tool_calls added.
  • completion_tokens_details & reasoning_tokens added.
  • stream_options & include_usage added.

Changes between 2024-07-01-preview and 2024-08-01-preview API specification

  • Structured outputs support.
  • Large file upload API added.
  • On your data changes:
    • MongoDB integration.
    • role_information parameter removed.
    • rerank_score added to citation object.
    • AML datasource removed.
    • AI Search vectorization integration improvements.

Changes between 2024-05-01-preview and 2024-07-01-preview API specification

Changes between 2024-04-01-preview and 2024-05-01-preview API specification

Changes between 2024-03-01-preview and 2024-04-01-preview API specification

Troubleshooting

Issue | Cause | Solution
404 Not Found when calling the v1 API | Incorrect base_url format | Verify the URL ends with /openai/v1/. Both https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/ and https://YOUR-RESOURCE-NAME.services.ai.azure.com/openai/v1/ are valid.
401 Unauthorized with Entra ID | Missing or incorrect role assignment | Assign the Cognitive Services OpenAI User role to your identity. Role assignments can take up to 5 minutes to propagate.
AzureOpenAI() client doesn't work with v1 | v1 API uses the OpenAI() client | Replace AzureOpenAI() with OpenAI() and set base_url to your Azure endpoint with /openai/v1/ appended.
api-version parameter rejected | v1 API doesn't use api-version | Remove any api-version query parameters from your requests. The v1 API doesn't require or accept them.
Preview features not available | Missing preview header | For preview APIs like /openai/v1/evals, pass the required preview header (for example, "aoai-evals":"preview"). See Preview headers.

Known issues

  • The 2025-04-01-preview Azure OpenAI spec uses OpenAPI 3.1. It’s a known issue that this version isn’t fully supported by Azure API Management.

Next steps