Learn how to use JSON mode
JSON mode allows you to set the model's response format so that a chat completion returns a valid JSON object. While generating valid JSON was possible previously, issues with response consistency could lead to invalid JSON objects being generated. JSON mode guarantees valid JSON output, but it doesn't guarantee that the output matches a specific schema.

While JSON mode is still supported, we recommend using structured outputs when possible. Like JSON mode, structured outputs generate valid JSON, with the added benefit that you can constrain the model to use a specific JSON schema.
Structured outputs aren't currently supported in bring-your-own-data scenarios.
JSON mode support
JSON mode is currently only supported with the following models:

API support
Support for JSON mode was first added in API version `2023-12-01-preview`.
Example
Before you run the examples:

- Replace `YOUR-RESOURCE-NAME` with your Azure OpenAI resource name.
- Replace `YOUR-MODEL_DEPLOYMENT_NAME` with the name of your model deployment.
- We set `response_format={ "type": "json_object" }`.
- We told the model to output JSON as part of the system message.
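The two settings above can be sketched in Python as follows. This is a minimal sketch, not the official sample: it assumes the `openai` Python package (v1+), the placeholder names from earlier in this article, and a hypothetical `build_json_mode_request` helper that simply gathers the request arguments.

```python
def build_json_mode_request(deployment: str) -> dict:
    """Assemble keyword arguments for a JSON-mode chat completion.

    `deployment` is the name of your Azure OpenAI model deployment.
    """
    return {
        "model": deployment,
        # Key setting: ask the service to return a valid JSON object.
        "response_format": {"type": "json_object"},
        "messages": [
            # JSON mode also requires instructing the model to produce JSON
            # somewhere in the conversation -- typically the system message.
            {
                "role": "system",
                "content": "You are a helpful assistant designed to output JSON.",
            },
            {"role": "user", "content": "Who won the world series in 2020?"},
        ],
    }

# With the openai package installed, the request would be sent like this
# (not executed here; it needs real credentials):
#
# import os
# from openai import AzureOpenAI
#
# client = AzureOpenAI(
#     azure_endpoint="https://YOUR-RESOURCE-NAME.openai.azure.com/",
#     api_key=os.environ["AZURE_OPENAI_API_KEY"],
#     api_version="2023-12-01-preview",
# )
# response = client.chat.completions.create(
#     **build_json_mode_request("YOUR-MODEL_DEPLOYMENT_NAME")
# )
# print(response.choices[0].message.content)
```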
Output
Other considerations
You should check `finish_reason` for the value `length` before parsing the response. The model might generate partial JSON, which means the output from the model was larger than the `max_tokens` set as part of the request, or the conversation itself exceeded the token limit.
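For instance, a guard like the following refuses to parse truncated output. It's a sketch against the REST response shape, with a hypothetical `safe_parse_choice` helper:

```python
import json

def safe_parse_choice(choice: dict) -> dict:
    """Parse the JSON content of one chat-completion choice.

    Raises ValueError instead of parsing when the model hit the token
    limit, since the JSON is then likely truncated.
    """
    if choice.get("finish_reason") == "length":
        raise ValueError("Truncated response (finish_reason == 'length'); not parsing.")
    return json.loads(choice["message"]["content"])

# Example with a complete (finish_reason == "stop") response:
choice = {
    "finish_reason": "stop",
    "message": {"content": '{"winner": "Los Angeles Dodgers"}'},
}
print(safe_parse_choice(choice))  # {'winner': 'Los Angeles Dodgers'}
```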
JSON mode produces JSON that is valid and parses without error. However, there's no guarantee that the output matches a specific schema, even if one is requested in the prompt.
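If your application depends on particular fields, validate them after parsing. A minimal sketch, using a hypothetical required-keys check (for anything nontrivial, use a real JSON Schema validator instead):

```python
import json

def parse_with_required_keys(content: str, required: set) -> dict:
    """Parse model output and verify the expected top-level keys exist.

    JSON mode guarantees `content` parses; it does not guarantee that
    the keys requested in the prompt are actually present.
    """
    data = json.loads(content)
    missing = required - data.keys()
    if missing:
        raise ValueError(f"Model omitted expected keys: {sorted(missing)}")
    return data

print(parse_with_required_keys('{"winner": "Dodgers", "year": 2020}', {"winner", "year"}))
```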
Troubleshooting
- If `finish_reason` is `length`, increase `max_tokens` (or reduce prompt length) and retry. Don't parse partial JSON.
- If you need schema guarantees, switch to Structured Outputs.