Use vision-enabled chat models
This article refers to the Microsoft Foundry (new) portal.
Prerequisites
- An Azure subscription - Create one for free
- An Azure OpenAI resource with a vision-enabled model deployed (GPT-4o, GPT-4.5, GPT-5, or o-series). See Create and deploy an Azure OpenAI Service resource.
- For Python: the `openai` Python package version 1.0 or later. Install with `pip install openai`.
Call the Chat Completion APIs
The following command shows the most basic way to use a vision-enabled chat model with code. If this is your first time using these models programmatically, we recommend starting with our Chat with images quickstart.
Send a POST request to `https://{RESOURCE_NAME}.openai.azure.com/openai/v1/chat/completions`, where:

- RESOURCE_NAME is the name of your Azure OpenAI resource

Required headers:

- `Content-Type`: application/json
- `api-key`: {API_KEY}
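As a sketch, the request described above can be assembled in Python. The resource name, API key, deployment name, and image URL below are all placeholders you would substitute with your own values:

```python
import json

# Placeholders -- substitute your own resource name and API key.
RESOURCE_NAME = "my-resource"
API_KEY = "my-api-key"

url = f"https://{RESOURCE_NAME}.openai.azure.com/openai/v1/chat/completions"
headers = {
    "Content-Type": "application/json",
    "api-key": API_KEY,
}
payload = {
    "model": "gpt-4o",  # name of your vision-enabled deployment
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this picture:"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/image.jpg"}},
            ],
        }
    ],
    "max_tokens": 300,
}
body = json.dumps(payload)
# Send with any HTTP client, for example:
# requests.post(url, headers=headers, data=body)
```

The `content` field of a user message is a list, so text and one or more image parts can be mixed in a single turn.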
Remember to set a `max_tokens` or `max_completion_tokens` value, or the return output will be cut off. For o-series reasoning models, use `max_completion_tokens` instead of `max_tokens`.

When uploading images, there is a limit of 10 images per chat request.
Supported image formats include JPEG, PNG, GIF (first frame only), and WEBP.
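Images can be supplied either as publicly reachable URLs or inline as base64 data URLs. A minimal sketch of building a data URL from raw image bytes (the helper name and MIME type are illustrative, not part of any SDK):

```python
import base64

def image_to_data_url(image_bytes: bytes, mime_type: str = "image/png") -> str:
    """Encode raw image bytes as a data URL usable in an image_url field."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime_type};base64,{b64}"

# In practice, read the bytes from a file:
# with open("photo.jpg", "rb") as f:
#     data_url = image_to_data_url(f.read(), "image/jpeg")
data_url = image_to_data_url(b"\x89PNG\r\n\x1a\n")  # PNG magic bytes as a stand-in
```

The resulting string goes in the same `"url"` slot as an ordinary HTTPS URL.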
Configure image detail level
You can optionally define a `detail` parameter in the `image_url` field. Choose one of three values, `low`, `high`, or `auto`, to adjust the way the model interprets and processes images.

- `auto`: The default setting. The model decides between `low` or `high` based on the size of the image input.
- `low`: The model does not activate "high res" mode and instead processes a lower-resolution 512x512 version of the image, resulting in quicker responses and reduced token consumption for scenarios where fine detail isn't crucial.
- `high`: The model activates "high res" mode. Here, the model first views the low-resolution image and then generates detailed 512x512 segments from the input image. Each segment uses double the token budget, allowing for a more detailed interpretation of the image.
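For example, the `detail` value sits inside the `image_url` object of a user message, alongside the URL (the URL here is a placeholder):

```python
# A user message requesting low-detail image processing.
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What's in this image?"},
        {
            "type": "image_url",
            "image_url": {
                "url": "https://example.com/receipt.jpg",  # placeholder
                "detail": "low",  # or "high" / "auto" (the default)
            },
        },
    ],
}
```

Use `low` for thumbnails and simple classification; use `high` when the answer depends on small text or fine structure in the image.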
Output
When you send an image to a vision-enabled model, the API returns a chat completion response with the model's analysis. The response includes content filter results specific to Azure OpenAI, and each choice contains a `finish_reason` field with the following possible values:

- `stop`: API returned complete model output.
- `length`: Incomplete model output due to the `max_tokens` input parameter or the model's token limit.
- `content_filter`: Omitted content due to a flag from our content filters.
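A small sketch of checking `finish_reason` in a parsed response; the `response` dict here is a hand-made stand-in for a real API response body:

```python
# Stand-in for a parsed chat completion response body.
response = {
    "choices": [
        {
            "message": {"role": "assistant", "content": "A photo of a red bicycle."},
            "finish_reason": "length",
        }
    ]
}

choice = response["choices"][0]
if choice["finish_reason"] == "length":
    warning = "Output was truncated; raise max_tokens / max_completion_tokens."
elif choice["finish_reason"] == "content_filter":
    warning = "Content was omitted by the content filter."
else:
    warning = None  # "stop": complete model output
```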
Troubleshooting
| Issue | Resolution |
|---|---|
| Output truncated | Increase the `max_tokens` or `max_completion_tokens` value |
| Image not processed | Verify URL is publicly accessible or base64 encoding is correct |
| Rate limit exceeded | Implement retry logic with exponential backoff |
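The retry guidance in the last row can be sketched as a generic helper (this is not part of the Azure OpenAI SDK; the delays and attempt count are illustrative):

```python
import random
import time

def with_retries(call, max_attempts=5, base_delay=1.0):
    """Retry `call` on failure, sleeping base_delay * 2**attempt plus jitter."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))
```

In real client code, catch only rate-limit errors (HTTP 429, or the SDK's rate-limit exception) rather than a bare `Exception`, so genuine bugs still surface immediately.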